

Robust stability of uncertain Markovian jump neural networks with mode-dependent time-varying delays and nonlinear perturbations

Abstract

In this paper, the problem of delay-dependent stability is investigated for uncertain Markovian jump neural networks with leakage delay, two additive time-varying delay components, and nonlinear perturbations. The Markovian jumping parameters in the connection weight matrices and in the two additive time-varying delay components are assumed to be different in the system model, and the Markovian jumping parameters in each of the two additive time-varying delay components are also different. The relationship between the time-varying delays and their upper delay bounds is efficiently utilized to study the suggested system in two cases, with known or unknown parameters, so that more information about the lower and upper bounds of the time-varying delays can be used. By constructing a newly augmented Lyapunov-Krasovskii functional and using the extended Wirtinger inequality and a reciprocally convex method, several sufficient criteria are derived to guarantee the stability of the proposed model. Numerical examples and their simulations are given to show the effectiveness and advantage of the proposed method.

1 Introduction

Over the last decades, considerable attention has been devoted to the study of neural networks because they have been extensively applied in many areas, such as signal processing, optimization problems, static image treatment, and so on [1–4]. However, significant differences between an ideal and a practical neural network are often encountered due to the limitations of hardware. These differences can cause unpredictable problems such as time delays, uncertainties, etc. [5–9]. A special type of time delay, namely leakage delay, is a time delay that exists in the negative feedback terms of the system and has a tendency to destabilize a system [10–15]. In [11], Peng discusses globally attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Very recently, the stability problem for a class of dynamical systems with leakage delay and nonlinear perturbations was investigated in [12]. Further, Zhao et al. [13] deal with the passivity problem for a class of stochastic neural networks with time-varying delays, leakage delay, and generalized activation functions by the free-weighting matrix method and a stochastic analysis technique. In addition, it is well known that nonlinear perturbations exist widely in practice and may cause instability, oscillation, and poor performance of real systems. In this regard, much attention has been paid to the problem of nonlinearly perturbed systems with time delays [12, 16–19]. However, it is rare to see a study of the stability problem for Markovian jump neural networks with leakage delay, two additive time-varying delays, and nonlinear perturbations.

In applications, there will be some parameter variations in the structures of neural networks. These variations may be abrupt or continuous. Abrupt variations can be described by switched or Markovian jump systems [20–26]. For Markovian jump systems with one time-varying delay component, the finite-time boundedness of delayed Markovian jumping neural networks is studied in [27]. However, in [27], the Markovian jumping parameters in the connection weight matrices and in the discrete delays are the same. Furthermore, the state estimation problem of delayed Markovian jump neural networks is investigated in [28], where the Markovian jumping parameters in the connection weight matrices and in the delays are assumed to be different. For Markovian jump systems with two additive time-varying delay components, Chen et al. [29] discuss the problem of delay-dependent stability and dissipativity analysis of generalized Markovian jump neural networks with two delay components, where the two delay components are not related to the Markovian jumping parameters. A Markovian jump neural network is again investigated in [30]; in the considered system, the two additive time-delay components are mode-dependent time-varying delays that share the same Markovian jumping parameters as the connection weight matrices. Motivated by [28], it is natural to consider the case where the Markovian jumping parameters in the connection weight matrices and in each of the two additive time-varying delay components are different. In fact, when the modes in the connection weight matrices are fixed, the two additive time-varying delay components may also have finite modes, because dynamic systems are frequently subject to abrupt variations in their structures, and the switching between different modes can also be governed by a Markov chain. Hence the Markovian jumping parameters in the connection weight matrices and in the two additive time-varying delay components may be different. Similarly, when the modes in the connection weight matrices and in one of the two additive time-varying delay components are fixed, the other of the two additive time-varying delay components may have different finite modes as well. So the Markovian jumping parameters in the connection weight matrices and in each of the two additive time-varying delay components may be different. To the best of the authors' knowledge, there are no results on the stability of delayed neural networks with three different Markovian jumping parameters.

Due to the complexity of neural networks, parameter uncertainties, which often destroy the stability of systems, are commonly encountered. Fortunately, one can obtain the ranges of some fundamental coefficients by engineering experience, even from incomplete information. Therefore, to meet practical applications, it is of great importance and significance to study the robustness of delayed neural networks [31–35]. In the field of robust analysis, how to estimate more accurately the derivatives of the constructed Lyapunov-Krasovskii functional is a crucial step in reducing conservatism. Many methods exist in the literature, such as Jensen's inequality [36], the reciprocally convex approach [37], the integral inequality technique [38], and so on. It is worth noting that there is still room for improvement. First, both sides of Jensen's inequality in [36] are integrals of the state. In this paper, the extended Wirtinger inequality is introduced, which relates the state to the derivative of the state. Second, none of the above mentioned works considers the relationship between time-varying delays and their upper bounds. In [39], the relationship between the time-varying delay and its upper bound is taken into account when estimating the upper bound of the derivative of the Lyapunov functional: \(d_{1}(t)\) is not simply enlarged to \(h_{1}\); instead, the relationship \(d_{1}(t)+(h_{1}-d_{1}(t))=h_{1}\) is exploited. Recently, the relationship between time-varying delays and their upper bounds was further considered in [40]. According to the relationships \(0\leq d_{1}(t)\leq d_{1}\) and \(d_{1}(t)\leq d(t)\leq d\), the authors consider two cases while calculating the derivative of the Lyapunov functional: \(d(t)\in[d_{1}(t),d_{1})\) and \(d(t)\in[d_{1},d]\). So far, however, this method has not been fully used to investigate the robust stability of Markovian jump neural networks with two additive time-varying delay components. Third, since the relationship between time-varying delays and their upper bounds is fully considered, the extended reciprocally convex approach of [40] will be used to deal with the robust stability problem of Markovian jump neural networks with two additive time-varying delay components.

Enlightened by the above discussion, the problem of robust stability for neural networks with mode-dependent time-varying delays and nonlinear perturbations is studied in this paper. The Markovian jumping parameters in the connection weight matrices and in each of the two additive time-varying delay components are assumed to be different in the system model. Accordingly, a new weak infinitesimal operator is first proposed to act on a Lyapunov-Krasovskii functional with three different Markovian jumping parameters. The relationship between the time-varying delays and their upper delay bounds is efficiently utilized: according to which interval the time-varying delay \(h(t)\) belongs to, different methods are used to estimate the derivatives of the constructed Lyapunov-Krasovskii functional. By constructing a newly augmented Lyapunov-Krasovskii functional and using the extended Wirtinger inequality and the extended reciprocally convex method, several sufficient conditions are derived to guarantee the stability of the proposed model for all admissible parameter uncertainties. Numerical examples and their simulations are given to show the reduced conservatism and the effectiveness of the proposed method.

Notations

Throughout this paper, the superscripts −1 and T stand for the inverse and transpose of a matrix, respectively; \(P>0\) means that the matrix P is symmetric positive definite; \(R^{n}\) denotes n-dimensional Euclidean space; \(R^{{m}\times{n}}\) is the set of \(m\times n\) real matrices; ∗ denotes the symmetric block in a symmetric matrix; \(\|\cdot\|\) refers to the induced matrix 2-norm; \(\operatorname{Sym}\{M\} \) means \(M+M^{T}\); \(\mathcal{C}_{\tau}^{1}=C^{1}([-\tau,0],R^{n})=\{\phi:[-\tau ,0]\rightarrow R^{n}\mbox{ is continuously differentiable}\}\); \(\lambda_{\max }(Q)\) and \(\lambda_{\min}(Q)\) denote, respectively, the maximal and minimal eigenvalues of the matrix Q; the space of functions \(\varphi:[a,b]\rightarrow R^{n}\) which are absolutely continuous on \([a, b)\), have a finite \(\lim_{\theta\rightarrow b^{-}}\varphi(\theta)\), and have square integrable first-order derivatives is denoted by \(W_{n}[a, b)\).

2 Problem statement and preliminaries

Let \(\{r_{t},t\geq0\}\), \(\{\delta_{t},t\geq0\}\), and \(\{\ell_{t},t\geq0\}\) be three right-continuous Markov chains on a complete probability space \((\Omega, F, \mathbf{P})\) taking values in finite state spaces \(\varsigma_{1}=\{1,2,\ldots,N_{1}\}\), \(\varsigma_{2}=\{1,2,\ldots,N_{2}\}\), and \(\varsigma_{3}=\{1,2,\ldots,N_{3}\}\), respectively. The transition probability matrices \(\Pi=(\pi_{ij})_{N_{1}\times N_{1}}\), \(P=(p_{qk_{1}})_{N_{2}\times N_{2}}\) and \(L=(l_{rk_{2}})_{N_{3}\times N_{3}}\) are given by

$$\begin{aligned}& \operatorname{pr}(r_{t+\triangle}=j|r_{t}=i)=\left \{ \textstyle\begin{array}{l@{\quad}l} \pi_{ij}\triangle+o(\triangle), &j\neq i, \\ 1+\pi_{ii}\triangle+o(\triangle), &j=i, \end{array}\displaystyle \right . \\& \operatorname{pr}(\delta_{t+\triangle}=k_{1}|\delta_{t}=q)= \left \{ \textstyle\begin{array}{l@{\quad}l} p_{qk_{1}}\triangle+o(\triangle), &k_{1}\neq q, \\ 1+p_{qq}\triangle+o(\triangle), &k_{1}=q, \end{array}\displaystyle \right . \\& \operatorname{pr}(\ell_{t+\triangle}=k_{2}|\ell_{t}=r)= \left \{ \textstyle\begin{array}{l@{\quad}l} l_{rk_{2}}\triangle+o(\triangle), &k_{2}\neq r, \\ 1+l_{rr}\triangle+o(\triangle), &k_{2}=r, \end{array}\displaystyle \right . \end{aligned}$$

where \(\triangle>0\), \(\lim_{\triangle\rightarrow0}\frac{o(\triangle)}{\triangle}=0\), \(\pi _{ij}\geq0\), \(\forall j\neq i\), \(p_{qk_{1}}\geq0\), \(\forall k_{1}\neq q\) and \(l_{rk_{2}}\geq0\), \(\forall k_{2}\neq r\) are, respectively, the transition rate from mode i at time t to mode j at time \(t+\triangle\), mode q at time t to mode \(k_{1}\) at time \(t+\triangle\) and mode r at time t to mode \(k_{2}\) at time \(t+\triangle\). Moreover, \(\pi_{ii}=-\sum_{j=1,j\neq i}^{j=N_{1}}\pi_{ij}\), \(p_{qq}=-\sum_{k_{1}=1,k_{1}\neq q}^{k_{1}=N_{2}}p_{qk_{1}}\) and \(l_{rr}=-\sum_{k_{2}=1,k_{2}\neq r}^{k_{2}=N_{3}}l_{rk_{2}}\).
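For intuition, such a right-continuous Markov chain can be simulated directly from its transition rate matrix via exponential holding times. The following minimal Python sketch does this for an illustrative two-mode generator (an assumption chosen for demonstration, not a matrix from this paper).

```python
import numpy as np

def simulate_ctmc(Pi, i0, T, rng=None):
    """Simulate a right-continuous Markov chain on {0, ..., N-1} with
    transition rate matrix Pi (off-diagonal rates >= 0, rows summing
    to zero), starting from mode i0, up to time T."""
    rng = rng or np.random.default_rng()
    t, i = 0.0, i0
    times, modes = [0.0], [i0]
    while True:
        rate = -Pi[i, i]                    # total exit rate of mode i
        if rate <= 0:                       # absorbing mode: stay forever
            break
        t += rng.exponential(1.0 / rate)    # exponential holding time
        if t >= T:
            break
        p = Pi[i].copy(); p[i] = 0.0; p /= rate   # jump distribution over j != i
        i = rng.choice(len(p), p=p)
        times.append(t); modes.append(i)
    return np.array(times), np.array(modes)

# e.g. a two-mode chain (N1 = 2) with generator rows summing to zero
Pi = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])
jump_times, visited_modes = simulate_ctmc(Pi, 0, 10.0)
```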

In this paper, we consider the following dynamical system:

$$ \begin{aligned} &\dot{x}(t) = -C(r_{t})x(t- \sigma)+A(r_{t})x\bigl(t-h_{1}(t,\delta_{t})-h_{2}(t, \ell _{t})\bigr) \\ &\hphantom{\dot{x}(t) ={}}{}+f\bigl(t,x(t-\sigma),x\bigl(t-h_{1}(t, \delta_{t})-h_{2}(t,\ell_{t})\bigr)\bigr),\quad t>0, \\ &x(s)= \phi(s), \quad s\in\bigl[-\max\{\sigma,h\},0\bigr], \end{aligned} $$
(1)

where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{T}\in{R}^{n}\) represents the neuron state vector; \(C(r_{t})=\operatorname{diag}\{c_{1}(r_{t}),c_{2}(r_{t}),\ldots,c_{n}(r_{t})\}\) is a diagonal matrix with positive entries. The matrices \(A(r_{t})\) represent the discretely delayed connection weight matrices; \(\sigma\geq0\) is the leakage delay, \(h_{1}(t,\delta_{t})\) and \(h_{2}(t,\ell_{t})\) are continuous mode-dependent time-varying functions that represent the two delay components in the state which satisfy

$$ \begin{aligned} &0\leq h_{1}(t,\delta_{t})\leq h_{1}(t)\leq h_{1},\qquad 0 \leq h_{2}(t, \ell_{t})\leq h_{2}(t)\leq h_{2}, \\ &\dot {h}_{1}(t)\leq\mu_{1}, \qquad \dot{h}_{2}(t)\leq \mu_{2}, \end{aligned} $$
(2)

where \(h_{1}\), \(h_{2}\), \(\mu_{1}\), and \(\mu_{2}\) are constant scalars, and we denote \(h_{qr}(t)=h_{1}(t,\delta_{t})+h_{2}(t,\ell_{t})\), \(h(t)=h_{1}(t)+h_{2}(t)\), \(h=h_{1}+h_{2}\), and \(\mu=\mu_{1}+\mu_{2}\).

Moreover, \(\phi(s)\in\mathcal{C}_{\tau}^{1}\), and \(f(t,x(t-\sigma),x(t-h_{qr}(t)))\) represents the nonlinear term of system (1), which satisfies \(f(t,0,0)=0\) and

$$ \bigl\Vert f\bigl(t,x(t-\sigma),x\bigl(t-h_{qr}(t) \bigr)\bigr)\bigr\Vert \leq\alpha\bigl\Vert E_{\alpha}x(t-\sigma)\bigr\Vert +\beta\bigl\Vert E_{\beta}x\bigl(t-h(t)\bigr)\bigr\Vert , $$
(3)

where \(\alpha\geq0\) and \(\beta\geq0\) are two real constants, \(E_{\alpha}\) and \(E_{\beta}\) are two known real matrices.
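To make the setup concrete, the following minimal Euler sketch integrates a system of the form (1) with the modes held fixed. All numerical values below (the matrices, the delay functions \(h_{1}(t)\), \(h_{2}(t)\), and the choice of f) are illustrative assumptions that merely respect the structure (2)-(3) and echo the data of Example 4.2 below; this is not a simulation from the paper.

```python
import numpy as np

dt, T, sigma = 1e-3, 10.0, 0.1
C = np.diag([2.0, 3.0])                    # C(r_t): diagonal, positive entries
A = np.array([[-1.0, 0.0], [-1.0, -1.0]])  # A(r_t)
beta = 0.2                                 # (3) with alpha = 0 and E_beta = I
h1 = lambda t: 0.3 + 0.1*np.sin(t)         # assumed delay components,
h2 = lambda t: 0.2 + 0.1*np.cos(t)         # bounded as in (2)

steps = int(T/dt)
hist = int(1.0/dt)                         # history buffer covering max{sigma, h}
x = np.zeros((steps + hist, 2))
x[:hist + 1] = np.array([0.5, 0.7])        # constant initial function phi

for k in range(hist, steps + hist - 1):
    t = (k - hist)*dt
    x_sigma = x[k - int(sigma/dt)]               # x(t - sigma)
    x_delay = x[k - int((h1(t) + h2(t))/dt)]     # x(t - h1(t) - h2(t))
    f = beta*x_delay                             # satisfies ||f|| <= 0.2||x(t - h(t))||
    x[k + 1] = x[k] + dt*(-C @ x_sigma + A @ x_delay + f)
```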

Remark 1

For Markovian jump systems with two additive time-varying delay components, the two additive time-varying delay components may be unrelated to the Markovian jumping parameters [29], or the Markovian jumping parameters in the two additive time-varying delay components may be the same as the one in the connection weight matrices [30]. In fact, when the modes in the connection weight matrices are fixed, the two additive time-varying delay components may also have finite modes, and the switching between different modes can also be governed by a Markov chain. So the Markovian jumping parameters in the connection weight matrices and in the two additive time-varying delay components may be different. Similarly, when the modes in the connection weight matrices and in one of the two additive time-varying delay components are fixed, the other of the two additive time-varying delay components may have different finite modes as well. So the Markovian jumping parameters in the connection weight matrices and in each of the two additive time-varying delay components may be different. Therefore, the considered model (1), with three different Markovian jumping parameters, needs to be introduced.

Moreover, the system (1) has an equivalent form as follows:

$$ \begin{aligned} &\frac{d}{dt}\biggl[x(t)-C(r_{t}) \int_{t-\sigma}^{t}x(s)\, ds\biggr] = -C(r_{t})x(t)+A(r_{t})x \bigl(t-h_{1}(t,\delta_{t})-h_{2}(t, \ell_{t})\bigr) \\ &\hphantom{\frac{d}{dt}\biggl[x(t)-C(r_{t}) \int_{t-\sigma}^{t}x(s)\, ds\biggr] ={}}{} +f\bigl(t,x(t-\sigma),x\bigl(t-h_{qr}(t)\bigr) \bigr), \quad t>0, \\ &x(s)=\phi(s),\quad s\in\bigl[-\max\{\sigma,h\},0\bigr]. \end{aligned} $$
(4)

Before proceeding, the following definition and lemmas are introduced.

Definition 1

Let \(x_{t}=x(t+s)\), \(-\max\{\sigma,h\}\leq s\leq0\); then \(\{x_{t}, r_{t}, \delta_{t}, \ell_{t}\}_{t\geq0}\) is a \(\mathcal {C}([-\max\{\sigma,h\},0]; R^{n})\times\varsigma_{1}\times\varsigma_{2}\times \varsigma_{3}\)-valued Markov process. The weak infinitesimal operator acting on a Lyapunov-Krasovskii functional (LKF) \(V:\mathcal{C}([-\max\{\sigma,h\},0];R^{n})\times\varsigma _{1}\times\varsigma_{2}\times\varsigma_{3}\times R^{+}\rightarrow R\) is defined by

$$\begin{aligned} \mathcal{L} {V}(x_{t},r_{t},\delta_{t}, \ell_{t},t) =& {\lim_{\triangle\rightarrow0^{+}}\frac{1}{\triangle}}\bigl\{ { \mathbf{E}\bigl[V(x_{t+ \triangle},r_{t+\triangle},\delta_{t+\triangle}, \ell_{t+ \triangle},t+\triangle)\bigr]-V(x_{t},i,q,r,t)}\bigr\} \\ =& {\lim_{\triangle\rightarrow0^{+}}\frac{1}{\triangle }}\Biggl\{ {\mathbf{E} \bigl[V(x_{t+\triangle},i,\delta_{t+\triangle},\ell _{t+\triangle},t+\triangle) \bigr]-V(x_{t},i,q,r,t)} \\ &{}+\Biggl( {\sum_{j=1}^{N_{1}} \pi_{ij}}\triangle+o(\triangle )\Biggr)V(x_{t+\triangle},j, \delta_{t+\triangle},\ell_{t+\triangle },t+\triangle)\Biggr\} \\ =& {\lim_{\triangle\rightarrow0^{+}}\frac{1}{\triangle}}\Biggl\{ {\mathbf{E} \bigl[V(x_{t+\triangle},i,q,\ell_{t+\triangle},t+\triangle ) \bigr]-V(x_{t},i,q,r,t)} \\ &{}+\Biggl( {\sum_{j=1}^{N_{1}} \pi_{ij}}\triangle+o(\triangle )\Biggr)V(x_{t+\triangle},j, \delta_{t+\triangle},\ell_{t+\triangle },t+\triangle) \\ &{}+\Biggl( {\sum_{k_{1}=1}^{N_{2}}p_{qk_{1}}} \triangle+o(\triangle )\Biggr)V(x_{t+\triangle},i,k_{1}, \ell_{t+\triangle},t+\triangle)\Biggr\} \\ =& {\lim_{\triangle\rightarrow0^{+}}\frac{1}{\triangle}}\Biggl\{ {V(x_{t+\triangle},i,q,r,t+ \triangle)-V(x_{t},i,q,r,t)} \\ &{}+\Biggl( {\sum_{j=1}^{N_{1}} \pi_{ij}}\triangle+o(\triangle )\Biggr)V(x_{t+\triangle},j, \delta_{t+\triangle},\ell_{t+\triangle},t+ \triangle) \\ &{}+\Biggl( {\sum_{k_{1}=1}^{N_{2}}p_{qk_{1}}} \triangle+o(\triangle )\Biggr)V(x_{t+\triangle},i,k_{1}, \ell_{t+\triangle},t+\triangle) \\ &{}+\Biggl( {\sum_{k_{2}=1}^{N_{3}}l_{rk_{2}}} \triangle+o(\triangle )\Biggr)V(x_{t+\triangle},i,q,k_{2},t+\triangle) \Biggr\} \\ =&\dot{V}(x_{t},i,q,r,t)+ {\sum_{j=1}^{N_{1}} \pi _{ij}}V(x_{t},j,q,r,t) \\ &{}+ {\sum_{k_{1}=1}^{N_{2}}p_{qk_{1}}}V(x_{t},i,k_{1},r,t)+ {\sum_{k_{2}=1}^{N_{3}}l_{rk_{2}}}V(x_{t},i,q,k_{2},t). \end{aligned}$$

Remark 2

Because three different Markovian jumping parameters are introduced in the considered model, a weak infinitesimal operator acting on a Lyapunov-Krasovskii functional with three different Markovian jumping parameters is first proposed in Definition 1.

Lemma 2.1

[12]

Given any real matrix \(M>0\) of appropriate dimension and a vector function \(\omega(\cdot):[a,b]\rightarrow R^{n}\) such that the integrations concerned are well defined, we have

$$ \biggl[ \int_{a}^{b}\omega(s)\,ds\biggr]^{T}M \biggl[ \int_{a}^{b}\omega(s)\,ds\biggr]\leq(b-a) \int_{a}^{b}\omega ^{T}(s)M\omega(s)\,ds. $$

Lemma 2.2

[40]

For \(k_{i}(t)\in[0,1]\) with \({\sum_{i=1}^{N}}k_{i}(t)=1\), vectors \(\eta_{i}\) which satisfy \(\eta_{i}=0\) whenever \(k_{i}(t)=0\), and matrices \(R_{i}>0\), there exist matrices \(S_{ij}\) (\(i=1,\ldots,N-1\), \(j=i+1,\ldots ,N\)) satisfying \(\bigl [ {\scriptsize\begin{matrix}{} {R_{i}} & {S_{ij}} \cr {\ast} & {R_{j}}\end{matrix}} \bigr ]\geq0\) such that the following inequality holds:

$$\sum_{i=1}^{N}\frac{1}{k_{i}(t)} \eta_{i}^{T}R_{i}\eta_{i}\geq \left [ \textstyle\begin{array}{@{}c@{}} \eta_{1}\\ \vdots\\ \eta_{N} \end{array}\displaystyle \right ]^{T}\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} R_{1} &\cdots& S_{1,N}\\ \ast & \ddots&\vdots\\ \ast&\ast&R_{N} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} \eta_{1}\\ \vdots\\ \eta_{N} \end{array}\displaystyle \right ]. $$

Lemma 2.3

[6]

For any positive semi-definite matrix

$$X=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} X_{11} &X_{12}& X_{13}\\ X_{12}^{T}& X_{22}&X_{23}\\ X_{13}^{T}&X_{23}^{T}&X_{33} \end{array}\displaystyle \right ]\geq0, $$

the following integral inequality holds:

$$- \int_{t-h}^{t}\dot{x}^{T}(s)X_{33} \dot{x}(s)\,ds\leq \int_{t-h}^{t} \bigl[x^{T}(t) x^{T}(t-h) \dot{x}^{T}(s)\bigr]\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} X_{11} &X_{12}& X_{13}\\ X_{12}^{T}& X_{22}&X_{23}\\ X_{13}^{T}&X_{23}^{T}&0 \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} x(t)\\ x(t-h)\\ \dot{x}(s) \end{array}\displaystyle \right ]\,ds. $$

Lemma 2.4

[7]

Let \(x(t)\in W_{n}[a,b)\). For any matrix \(R>0\), the following inequality holds:

$$\int_{a}^{b}\bigl(x^{T}(s)-x^{T}(a) \bigr)R\bigl(x(s)-x(a)\bigr)\,ds\leq\frac{4(b-a)^{2}}{\pi^{2}} \int _{a}^{b}\dot{x}^{T}(s)R\dot{x}(s) \,ds. $$
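As a quick numerical sanity check (not part of the proofs), the inequality of Lemma 2.4 can be verified for a sample trajectory. The trajectory and the choice \(R=I\) below are arbitrary illustrative assumptions.

```python
import numpy as np

# Check Lemma 2.4 for x(s) = [sin(s), s^2] on [a, b] = [0, 1] with R = I.
a, b, N = 0.0, 1.0, 200_000
s, ds = np.linspace(a, b, N, retstep=True)
x = np.stack([np.sin(s), s**2], axis=1)
xdot = np.stack([np.cos(s), 2*s], axis=1)   # exact derivative of x(s)

lhs = np.sum((x - x[0])**2) * ds            # int ||x(s) - x(a)||^2 ds
rhs = 4*(b - a)**2/np.pi**2 * np.sum(xdot**2) * ds
assert lhs <= rhs                           # approx 0.47 <= 0.84: holds
```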

In the sequel, for simplicity, when \(r_{t}=i\), \(\delta_{t}=q\), and \(\ell _{t}=r\), \(C(r_{t})\), \(A(r_{t})\), \(h_{1}(t,\delta_{t})\) and \(h_{2}(t,\ell_{t})\) will be written as \(C_{i}\), \(A_{i}\), \(h_{1q}(t)\) and \(h_{2r}(t)\), respectively.

3 Main results

For the sake of the simplicity of the matrix representation, \(e_{i}\) (\(i=1,\ldots,25\)) are defined as block entry matrices. (For example, \({e_{i}}^{T}=[\underbrace{0,\ldots,0}_{i-1},I,0,\ldots,0]\).) The notations for some matrices and vectors are defined below (see the Appendix).
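For concreteness, a block entry matrix can be constructed as in the following small helper (an illustrative sketch using the paper's 1-indexing).

```python
import numpy as np

def block_entry(i, N, n):
    """e_i: the (N*n) x n matrix whose i-th n x n block is I and whose
    other blocks are 0, so that e_i^T xi(t) extracts the i-th block of
    the stacked vector xi(t) (1-indexed; N = 25 in this paper)."""
    e = np.zeros((N*n, n))
    e[(i - 1)*n : i*n, :] = np.eye(n)
    return e

# e.g. with n = 2: block_entry(3, 25, 2).T @ xi picks the third block of xi
```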

Now, we have the following result.

Theorem 3.1

For given scalars \(\sigma\geq0\), \(h_{1}\geq0\), \(h_{2}\geq0\), \(\mu_{1}\geq0\), and \(\mu_{2}\geq0\) satisfying (2), and \(\alpha\geq 0\), \(\beta\geq0\), \(E_{\alpha}\), and \(E_{\beta}\) satisfying (3), system (1) is globally asymptotically stable if there exist a constant \(\varepsilon \geq0\), positive definite matrices \(P_{iqr}\in R^{5n\times5n}\), \(R_{1}\in R^{4n\times4n}\), \(R_{2}\in R^{3n\times3n}\), \(R_{3}\in R^{n\times n}\), \(R_{4}\in R^{n\times n}\), \(Q_{i}\in R^{4n\times4n}\) (\(i=1,2,3\)), \(Q_{j}\in R^{3n\times3n}\) (\(j=4,5\)), \(Q_{k}\in R^{2n\times2n}\) (\(k=6,7,8\)), \(S_{q}\in R^{2n\times2n}\) (\(q=1,2,3\)), \(S_{m}\in R^{n\times n}\) (\(m=4,5,\ldots,9\)), and

$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} X_{11} & X_{12}&X_{13}&X_{14}\\ \ast & X_{22}&X_{23}&X_{24} \\ \ast&\ast&X_{33}&X_{34}\\ \ast&\ast&\ast&R_{3} \end{array}\displaystyle \right ]\geq0, $$

any appropriately dimensioned matrices \(U_{i}\) (\(i=1,2,3\)), \(T_{j}\) (\(j=1,2,\ldots,5\)), and \(G_{k}\) (\(k=1,2,\ldots,7\)), such that the following LMIs hold for \(i=1,2,\ldots,N_{1}\), \(q=1,2,\ldots,N_{2}\), and \(r=1,2,\ldots,N_{3}\):

$$ \left \{ \textstyle\begin{array}{l} \Omega_{miqr}(t)|_{\substack{h_{1}(t)=0 \\h_{2}(t)=0}}< 0, \\ \Omega_{miqr}(t)|_{\substack{h_{1}(t)=h_{1} \\h_{2}(t)=0}}< 0, \\ \Omega_{miqr}(t)|_{\substack{h_{1}(t)=0 \\h_{2}(t)=h_{2}}}< 0, \\ \Omega_{miqr}(t)|_{\substack{h_{1}(t)=h_{1} \\ h_{2}(t)=h_{2}}}< 0 \end{array}\displaystyle \right .\quad (m=1,2) $$
(5)

and

$$ \mathcal{P}_{i}>0 \quad (i=1,2,3,4),\qquad \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{5}& T_{4}\\ \ast&S_{5} \end{array}\displaystyle \right ]>0,\qquad \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{6}& T_{5}\\ \ast&S_{6} \end{array}\displaystyle \right ]>0. $$
(6)

Proof

Consider the new augmented Lyapunov-Krasovskii functional as follows:

$$ V(t,x_{t},r_{t},\delta_{t}, \ell_{t})=\sum_{j=1}^{5}V_{j}(t,x_{t},r_{t}, \delta _{t},\ell_{t}), $$
(7)

where

$$\begin{aligned}& V_{1}= \alpha_{1}^{T}(t)P(r_{t}, \delta_{t},\ell_{t})\alpha_{1}(t), \\& V_{2}= \int_{t-\sigma}^{t}f_{1}^{T}(t,s)R_{1}f_{1}(t,s) \,ds+\sigma \int_{t-\sigma }^{t} \int_{\lambda}^{t}f_{2}^{T}(t,s)R_{2}f_{2}(t,s) \,ds\,d\lambda \\& \hphantom{V_{2}= {}}{}+ \int_{t-\sigma}^{t} \int_{\lambda}^{t}\dot{x}^{T}(s)R_{3} \dot {x}(s)\,ds\,d\lambda+\frac{\sigma^{2}}{2} \int_{t-\sigma}^{t} \int_{\lambda }^{t} \int_{s}^{t}x^{T}(u)R_{4}x(u) \,du\,ds\,d\lambda, \\& V_{3}= \int_{t-h_{1}}^{t}f_{1}^{T}(t,s)Q_{1}f_{1}(t,s) \,ds+ \int _{t-h_{2}}^{t}f_{1}^{T}(t,s)Q_{2}f_{1}(t,s) \,ds+ \int_{t-h}^{t} f_{1}^{T}(t,s)Q_{3}f_{1}(t,s) \,ds \\& \hphantom{V_{3}={}}{}+ \int_{t-h}^{t-h_{1}}f_{3}^{T}(t,s)Q_{4}f_{3}(t,s) \,ds+ \int_{t-h}^{t-h_{2}}f_{4}^{T}(t,s)Q_{5}f_{4}(t,s) \,ds \\& \hphantom{V_{3}={}}{}+ \int _{t-h_{1}(t)}^{t}f_{5}^{T}(t,s)Q_{6}f_{5}(t,s) \,ds+ \int_{t-h_{2}(t)}^{t}f_{5}^{T}(t,s)Q_{7}f_{5}(t,s) \,ds \\& \hphantom{V_{3}={}}{}+ \int _{t-h(t)}^{t}f_{5}^{T}(t,s)Q_{8}f_{5}(t,s) \,ds, \\& V_{4}= h_{1} \int_{t-h_{1}}^{t} \int_{\lambda}^{t}\alpha _{2}^{T}(s)S_{1} \alpha_{2}(s)\,ds\,d\lambda+h_{2} \int_{t-h_{2}}^{t} \int _{\lambda}^{t}\alpha_{2}^{T}(s)S_{2} \alpha_{2}(s)\,ds\,d\lambda \\& \hphantom{V_{4}=}{}+h \int_{t-h}^{t} \int_{\lambda}^{t}\alpha_{2}^{T}(s)S_{3} \alpha_{2}(s)\,ds\,d\lambda +h \int_{t-h}^{t} \int_{\lambda}^{t}\dot{x}^{T}(s)S_{4} \dot {x}(s)\,ds\,d\lambda \\& \hphantom{V_{4}=}{}+h_{1} \int_{t-h_{1}}^{t} \int_{\lambda}^{t}\dot {x}^{T}(s)S_{5} \dot{x}(s)\,ds\,d\lambda+h_{2} \int_{t-h}^{t-h_{1}} \int_{\lambda }^{t-h_{1}}\dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds\,d\lambda, \\& V_{5}= \frac{h_{1}^{2}}{2} \int_{t-h_{1}}^{t} \int_{\lambda}^{t} \int _{s}^{t}x^{T}(u)S_{7}x(u) \,du\,ds\,d\lambda \\& \hphantom{V_{5}={}}{}+\frac{h_{2}^{2}}{2} \int_{t-h_{2}}^{t} \int _{\lambda}^{t} \int_{s}^{t}x^{T}(u)S_{8}x(u) \,du\,ds\,d\lambda \\& \hphantom{V_{5}={}}{}+\frac{h^{2}}{2} \int_{t-h}^{t} \int_{\lambda}^{t} \int _{s}^{t}x^{T}(u)S_{9}x(u) \,du\,ds\,d\lambda. \end{aligned}$$

When \(r_{t}=i\), \(\delta_{t}=q\), and \(\ell_{t}=r\), the weak infinitesimal operator \(\mathcal{L}\) of the stochastic process \(\{x_{t},r_{t},\delta _{t},\ell_{t}\}\), \(t\geq0\) along system (4) is

$$\begin{aligned} \mathcal{L} {V}_{1} =&2\alpha_{1i}^{T}(t)P_{iqr} \dot{\alpha}_{1i}(t)+\sum_{j=1}^{N_{1}} \pi_{ij}\alpha_{1j}^{T}(t)P_{jqr}{ \alpha}_{1j}(t)+\sum_{k_{1}=1}^{N_{2}}p_{qk_{1}} \alpha_{1i}^{T}(t)P_{ik_{1}r}{\alpha}_{1i}(t) \\ &{}+\sum_{k_{2}=1}^{N_{3}}l_{rk_{2}} \alpha_{1i}^{T}(t)P_{iqk_{2}}{\alpha}_{1i}(t). \end{aligned}$$
(8)

Here, it should be noted that

$$ \alpha_{1i}(t)=\bigl[e_{1}-e_{14}C_{i}^{T},e_{14},e_{3},e_{17},e_{9} \bigr]^{T}\xi(t) $$
(9)

and

$$ \dot{\alpha }_{1i}(t)=\bigl[-e_{1}C_{i}^{T}+e_{13}A_{i}^{T}+e_{25},e_{1}-e_{3},e_{4},e_{1}-e_{9},e_{10} \bigr]^{T}\xi (t). $$
(10)

Thus, \(\mathcal{L}{V}_{1}\) can be represented as

$$\begin{aligned} \mathcal{L} {V}_{1} =&\xi^{T}(t) \Biggl( \Pi_{1i}P_{iqr}\Pi_{2i}^{T}+ \Pi_{2i}P_{iqr}\Pi _{1i}^{T}+\sum _{j=1}^{N_{1}}\pi_{ij} \Pi_{1j}P_{jqr}\Pi_{1j}^{T} \\ &{}+\sum_{k_{1}=1}^{N_{2}}p_{qk_{1}} \Pi_{1i}P_{ik_{1}r}\Pi_{1i}^{T}+\sum _{k_{2}=1}^{N_{3}}l_{rk_{2}} \Pi_{1i}P_{iqk_{2}}\Pi_{1i}^{T}\Biggr) \xi(t). \end{aligned}$$
(11)

By calculation of \(\mathcal{L}{V}_{2}\), we have

$$\begin{aligned} \mathcal{L} {V}_{2} \leq&2\left [ \textstyle\begin{array}{@{}c@{}} \int_{t-\sigma}^{t}x(s)\,ds\\ x(t)-x(t-\sigma)\\ \int_{t-\sigma}^{t}\int_{s}^{t}x(u)\,du\,ds\\ \sigma x(t)-\int_{t-\sigma}^{t}x(s)\,ds \end{array}\displaystyle \right ] ^{T}R_{1}\left [ \textstyle\begin{array}{@{}c@{}} 0\\ 0\\ x(t)\\ \dot{x}(t) \end{array}\displaystyle \right ] -\left [ \textstyle\begin{array}{@{}c@{}} x(t-\sigma)\\ \dot{x}(t-\sigma)\\ \int_{t-\sigma}^{t}x(s)\,ds\\ x(t)-x(t-\sigma) \end{array}\displaystyle \right ] ^{T}R_{1}\left [ \textstyle\begin{array}{@{}c@{}} x(t-\sigma)\\ \dot{x}(t-\sigma)\\ \int_{t-\sigma}^{t}x(s)\,ds\\ x(t)-x(t-\sigma) \end{array}\displaystyle \right ] \\ &{}+\sigma^{2}\left [ \textstyle\begin{array}{@{}c@{}} x(t)\\ \dot{x}(t)\\ 0 \end{array}\displaystyle \right ] ^{T}R_{2}\left [ \textstyle\begin{array}{@{}c@{}} x(t)\\ \dot{x}(t)\\ 0 \end{array}\displaystyle \right ]+2\sigma \left [ \textstyle\begin{array}{@{}c@{}} \int_{t-\sigma}^{t}\int_{s}^{t}x(u)\,du\,ds\\ \sigma x(t)-\int_{t-\sigma}^{t}x(s)\,ds\\ \frac{\sigma^{2}}{2}x(t)-\int_{t-\sigma}^{t}\int_{s}^{t}x(u)\,du\,ds \end{array}\displaystyle \right ] ^{T}R_{2}\left [ \textstyle\begin{array}{@{}c@{}} 0\\ 0\\ \dot{x}(t) \end{array}\displaystyle \right ] \\ &{}-\left [ \textstyle\begin{array}{@{}c@{}} \int_{t-\sigma}^{t}x(s)\,ds\\ x(t)-x(t-\sigma)\\ \sigma x(t)-\int_{t-\sigma}^{t}x(s)\,ds \end{array}\displaystyle \right ] ^{T}R_{2} \left [ \textstyle\begin{array}{@{}c@{}} \int_{t-\sigma}^{t}x(s)\,ds\\ x(t)-x(t-\sigma)\\ \sigma x(t)-\int_{t-\sigma}^{t}x(s)\,ds \end{array}\displaystyle \right ] \\ &{}+\zeta_{1}^{T}(t)R_{1} \zeta_{1}(t)+\sigma\dot {x}^{T}(t)R_{3}\dot{x}(t) - \int_{t-\sigma}^{t}\dot{x}^{T}(s)R_{3} \dot{x}(s)\,ds \\ &{}+\frac{\sigma ^{4}}{4}x^{T}(t)R_{4}x(t)- \frac{\sigma^{2}}{2} \int_{t-\sigma}^{t} \int _{s}^{t}x^{T}(u)R_{4}x(u) \,du\,ds, \end{aligned}$$
(12)

where \(\zeta_{1}(t)=[x^{T}(t),\dot{x}^{T}(t),0,0]^{T}\).

Using Lemma 2.1 and Lemma 2.3 yields

$$\begin{aligned}& - \int_{t-\sigma}^{t}\dot{x}^{T}(s)R_{3}\dot{x}(s) \,ds\leq \int_{t -\sigma}^{t}\left [ \textstyle\begin{array}{@{}c@{}} x(t)\\ x(t-\sigma)\\ \dot{x}(t-\sigma)\\ \dot{x}(s) \end{array}\displaystyle \right ] ^{T}\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} X_{11}&X_{12}&X_{13}&X_{14}\\ \ast&X_{22}&X_{23}&X_{24}\\ \ast&\ast&X_{33}&X_{34}\\ \ast&\ast&\ast&0 \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} x(t)\\ x(t-\sigma)\\ \dot{x}(t-\sigma)\\ \dot{x}(s) \end{array}\displaystyle \right ]\,ds, \end{aligned}$$
(13)
$$\begin{aligned}& -\frac{\sigma^{2}}{2} \int_{t-\sigma}^{t} \int_{s}^{t}x^{T}(u)R_{4}x(u) \,du\,ds\leq - \int_{t-\sigma}^{t} \int_{s}^{t}x^{T}(u)\, du\, dsR_{4} \int_{t-\sigma}^{t} \int _{s}^{t}x(u)\,du\,ds. \end{aligned}$$
(14)

From (12)-(14), an upper bound of \(\mathcal{L}{V}_{2}\) is obtained:

$$ \mathcal{L} {V}_{2}\leq\xi^{T}(t) ( \Pi_{3}+\Pi_{4})\xi(t). $$
(15)

With the condition of \(\dot{h}_{i}(t)\leq \mu_{i}\) (\(i=1,2\)), an upper bound of \(\mathcal{L}{V}_{3}\) is obtained:

$$\begin{aligned} \mathcal{L} {V}_{3} \leq&\zeta_{1}^{T}(t) (Q_{1}+Q_{2}+Q_{3})\zeta_{1}(t)+2\bigl( \zeta _{2h_{1}}^{T}(t)Q_{1}+\zeta_{2h_{2}}^{T}(t)Q_{2}+ \zeta_{2h}^{T}(t)Q_{3}\bigr)\zeta _{3}(t) \\ &{}-\zeta_{4h_{1}}^{T}(t)Q_{1}\zeta_{4h_{1}}(t)- \zeta_{4h_{2}}^{T}(t)Q_{2}\zeta _{4h_{2}}(t)- \zeta_{4h}^{T}(t)Q_{3}\zeta_{4h}(t)+ \zeta_{5h_{1}}^{T} (t)Q_{4}\zeta_{5h_{1}}(t) \\ &{}+2\zeta_{6h_{1}h_{2}}^{T}(t)Q_{4}\zeta_{7h_{1}}(t) -\zeta_{8h_{1}}^{T}(t)Q_{4}\zeta_{8h_{1}}(t)+ \zeta_{5h_{2}}^{T} (t)Q_{5}\zeta_{5h_{2}}(t)+2 \zeta_{6h_{2}h_{1}}^{T}(t)Q_{5}\zeta _{7h_{2}}(t) \\ &{}+\zeta_{9}^{T}(t) (Q_{6}+Q_{7}+Q_{8}) \zeta_{9}(t)-\zeta _{8h_{2}}^{T}(t)Q_{5} \zeta_{8h_{2}}(t)+2\bigl(\zeta_{10h_{1}(t)}^{T}(t)Q_{6}+ \zeta _{10h_{2}(t)}^{T}(t)Q_{7} \\ &{}+\zeta_{10h(t)}^{T}(t)Q_{8}\bigr) \zeta_{11}(t)-(1-\mu_{1})\zeta _{12h_{1}(t)}^{T}(t)Q_{6} \zeta_{12h_{1}(t)}(t) \\ &{}-(1-\mu_{2})\zeta_{12h_{2}(t)}^{T}(t)Q_{7} \zeta_{12h_{2}(t)}(t) -(1-\mu)\zeta_{12h(t)}^{T}(t)Q_{8} \zeta_{12h(t)}(t) \\ =&\xi^{T}(t) \Biggl(\sum_{k=5}^{8} \Pi_{k}+h_{1}(t)\Xi_{1}+h_{2}(t) \Xi_{2}\Biggr)\xi(t), \end{aligned}$$
(16)

where \(\zeta_{2\mathfrak{h}}(t)=[ \int_{t-\mathfrak {h}}^{t}x^{T}(s)\,ds,x^{T}(t)-x^{T}(t-\mathfrak{h}),\int_{t-\mathfrak {h}}^{t}\int_{s}^{t}x^{T}(u)\,du\,ds,\mathfrak{h}x^{T}(t)-\int_{t- \mathfrak{h}}^{t}x^{T}(s)\,ds]^{T}\), \(\zeta_{3}(t)=[0,0,x^{T}(t),\dot{x}^{T}(t)]^{T}\), and \(\zeta_{4\mathfrak {h}}(t)=[x^{T}(t-\mathfrak{h}),\dot{x}^{T}(t-\mathfrak{h}),\int_{t-\mathfrak {h}}^{t}x^{T}(s)\,ds,x^{T}(t)-x^{T}(t-\mathfrak{h})]^{T}\), where \(\mathfrak{h}\) stands for \(h_{1}\), \(h_{2}\), and h, respectively; \(\zeta_{5\mathcal{h}}(t)=[x^{T}(t-\mathcal{h}),\dot{x}^{T}(t-\mathcal {h}),0]^{T}\), where \(\mathcal{h}\) stands for \(h_{1}\) and \(h_{2}\), respectively; \(\zeta_{6\mathbb{h}\imath}(t)=[\int_{t-h}^{t-\mathbb {h}}x^{T}(s)\,ds,x^{T}(t-\mathbb{h})-x^{T}(t-h),\imath x^{T}(t-\mathbb{h})-\int _{t-h}^{t-\mathbb{h}}x^{T}(s)\,ds]^{T}\), where the pair \((\mathbb{h},\imath)\) stands for \((h_{1},h_{2})\) and \((h_{2},h_{1})\), respectively; \(\zeta_{7\mathcal{h}}(t)=[0,0,\dot {x}^{T}(t-\mathcal{h})]^{T}\), \(\zeta_{8\mathcal{h}}(t)=[x^{T}(t-h),\dot {x}^{T}(t-h),x^{T}(t-\mathcal{h})-x^{T}(t-h)]^{T}\), \(\zeta_{9}(t)=[x^{T}(t),0]^{T}\), and \(\zeta_{10\mathfrak{h}(t)}(t)=[\int _{t-\mathfrak{h}(t)}^{t}x^{T}(s)\,ds,\mathfrak{h}(t)x^{T}(t)-\int_{t-\mathfrak {h}(t)}^{t}x^{T}(s)\,ds]^{T}\), where \(\mathfrak{h}(t)\) stands for \(h_{1}(t)\), \(h_{2}(t)\), and \(h(t)\), respectively; \(\zeta_{11}(t)=[0,\dot{x}^{T}(t)]^{T}\) and \(\zeta_{12\mathfrak {h}(t)}(t)=[x^{T}(t-\mathfrak{h}(t)),x^{T}(t)-x^{T}(t-\mathfrak{h}(t))]^{T}\).

Calculation of \(\mathcal{L}{V}_{4}\) and \(\mathcal{L}{V}_{5}\) leads to

$$\begin{aligned}& \mathcal{L} {V}_{4}= \alpha_{2}^{T}(t) \bigl(h_{1}^{2}S_{1}+h_{2}^{2}S_{2}+h^{2}S_{3} \bigr)\alpha _{2}(t)-h_{1} \int_{t-h_{1}}^{t}\alpha_{2}^{T}(s)S_{1} \alpha_{2}(s)\,ds \\& \hphantom{\mathcal{L}{V}_{4}={}}{}- h_{2} \int_{t-h_{2}}^{t}\alpha_{2}^{T}(s)S_{2} \alpha_{2}(s)\,ds-h \int_{t-h}^{t}\alpha _{2}^{T}(s)S_{3} \alpha_{2}(s)\,ds \\& \hphantom{\mathcal{L}{V}_{4}={}}{} +\dot{x}^{T}(t) \bigl(h^{2}S_{4}+h_{1}^{2}S_{5} \bigr)\dot{x}(t)+h_{2}^{2}\dot{x}^{T}(t-h_{1})S_{6} \dot{x}(t-h_{1}) \\& \hphantom{\mathcal{L}{V}_{4}={}}{}-h \int_{t-h}^{t}\dot{x}^{T}(s)S_{4} \dot{x}(s)\,ds -h_{1} \int_{t-h_{1}}^{t}\dot{x}^{T}(s)S_{5} \dot{x}(s)\,ds \\& \hphantom{\mathcal{L}{V}_{4}={}}{}-h_{2} \int _{t-h}^{t-h_{1}}\dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds, \end{aligned}$$
(17)
$$\begin{aligned}& \mathcal{L} {V}_{5}= x^{T}(t) \biggl(\frac{h_{1}^{4}}{4}S_{7}+ \frac{h_{2}^{4}}{4}S_{8}+\frac {h^{4}}{4}S_{9}\biggr)x(t)- \frac{h_{1}^{2}}{2} \int_{t-h_{1}}^{t} \int_{s}^{t}x^{T}(u)S_{7}x(u) \,du\,ds \\& \hphantom{\mathcal{L}{V}_{5}=}{}-\frac{h_{2}^{2}}{2} \int_{t-h_{2}}^{t} \int_{s}^{t}x^{T}(u)S_{8}x(u) \,du\,ds-\frac {h^{2}}{2} \int_{t-h}^{t} \int_{s}^{t}x^{T}(u)S_{9}x(u) \,du\,ds. \end{aligned}$$
(18)

By Lemmas 2.1 and 2.2, one can obtain

$$\begin{aligned}& -h_{1} \int_{t-h_{1}}^{t}\alpha_{2}^{T}(s)S_{1} \alpha_{2}(s)\,ds\leq-\zeta _{13h_{1}(t),h_{1}}^{T}(t) \mathcal{P}_{1}\zeta_{13h_{1}(t),h_{1}}(t), \end{aligned}$$
(19)
$$\begin{aligned}& -h_{2} \int_{t-h_{2}}^{t}\alpha_{2}^{T}(s)S_{2} \alpha_{2}(s)\,ds\leq-\zeta _{13h_{2}(t),h_{2}}^{T}(t) \mathcal{P}_{2}\zeta_{13h_{2}(t),h_{2}}(t), \end{aligned}$$
(20)
$$\begin{aligned}& -h \int_{t-h}^{t}\alpha_{2}^{T}(s)S_{3} \alpha_{2}(s)\,ds\leq-\zeta _{13h(t),h}^{T}(t) \mathcal{P}_{3}\zeta_{13h(t),h}(t), \end{aligned}$$
(21)
$$\begin{aligned}& -\frac{h_{1}^{2}}{2} \int_{t-h_{1}}^{t} \int_{s}^{t}x^{T} (u)S_{7}x(u) \,du\,ds \\& \quad \leq- \int_{t-h_{1}}^{t} \int_{s}^{t} x^{T}(u)\, du\, dsS_{7} \int_{t-h_{1}}^{t} \int_{s}^{t}x(u)\,du\,ds, \end{aligned}$$
(22)
$$\begin{aligned}& -\frac{h_{2}^{2}}{2} \int_{t-h_{2}}^{t} \int_{s}^{t}x^{T} (u)S_{8}x(u) \,du\,ds \\& \quad \leq- \int_{t-h_{2}}^{t} \int_{s}^{t} x^{T}(u)\, du\, dsS_{8} \int_{t-h_{2}}^{t} \int_{s}^{t}x(u)\,du\,ds, \end{aligned}$$
(23)
$$\begin{aligned}& -\frac{h^{2}}{2} \int_{t-h}^{t} \int_{s}^{t}x^{T} (u)S_{9}x(u) \,du\,ds \\& \quad \leq- \int_{t-h}^{t} \int_{s}^{t} x^{T}(u)\, du\, dsS_{9} \int_{t-h}^{t} \int_{s}^{t}x(u)\,du\,ds, \end{aligned}$$
(24)

where \(\zeta_{13\mathfrak{h}(t),\mathfrak{h}}(t)=[\int_{t-\mathfrak {h}(t)}^{t}x^{T}(s)\,ds,x^{T}(t)-x^{T}(t-\mathfrak{h}(t)),\int_{t-\mathfrak {h}}^{t-\mathfrak{h}(t)}x^{T}(s)\,ds,x^{T}(t-\mathfrak{h}(t))-x^{T}(t-\mathfrak {h})]^{T}\), \(\mathfrak{h}(t)\) and \(\mathfrak{h}\) represent \(h_{1}(t)\) and \(h_{1}\), \(h_{2}(t)\) and \(h_{2}\), \(h(t)\) and h, respectively.

Utilizing Lemma 2.4 yields

$$\begin{aligned}& -h \int_{t-h}^{t}\dot{x}^{T}(s)S_{4} \dot{x}(s)\,ds \\& \quad \leq-\frac{\pi ^{2}}{4h} \int_{t-h}^{t}\bigl(x(s)-x(t-h)\bigr)^{T}S_{4} \bigl(x(s)-x(t-h)\bigr)\,ds \\& \quad \leq-\frac{\pi^{2}}{4h^{2}} \int_{t-h}^{t}\bigl(x^{T}(s)-x^{T}(t -h)\bigr)\, dsS_{4} \int_{t-h}^{t}\bigl(x(s)-x(t-h)\bigr) \,ds. \end{aligned}$$
(25)

For the time-varying delays and their upper delay bounds we have the following relationship:

$$ 0\leq h_{1}(t)\leq h_{1}, \qquad h_{1}(t)\leq h(t)\leq h. $$
(26)

We consider two cases: \(h(t)\in[h_{1}(t),h_{1})\) and \(h(t)\in[h_{1},h]\).

Case 1: when \(h(t)\in[h_{1}(t),h_{1})\), by some calculation and using Lemma 2.2 and Lemma 2.4, we have

$$\begin{aligned}& -h_{1} \int_{t-h_{1}}^{t}\dot{x}^{T}(s)S_{5} \dot{x}(s)\,ds \\& \quad = -h_{1} \int_{t-h_{1}(t)}^{t}\dot{x}^{T}(s)S_{5} \dot{x}(s)\,ds- h_{1} \int_{t-h(t)}^{t-h_{1}(t)}\dot{x}^{T}(s)S_{5} \dot {x}(s)\,ds-h_{1} \int_{t-h_{1}}^{t-h(t)}\dot {x}^{T}(s)S_{5} \dot{x}(s)\,ds \\& \quad \leq-\left [ \textstyle\begin{array}{@{}c@{}} x(t)-x(t-h_{1}(t))\\ x(t-h_{1}(t))-x(t-h(t))\\ x(t-h(t))-x(t-h_{1}) \end{array}\displaystyle \right ]^{T} \mathcal{P}_{4}\left [ \textstyle\begin{array}{@{}c@{}} x(t)-x(t-h_{1}(t))\\ x(t-h_{1}(t))-x(t-h(t))\\ x(t-h(t))-x(t-h_{1}) \end{array}\displaystyle \right ], \end{aligned}$$
(27)
$$\begin{aligned}& -h_{2} \int_{t-h}^{t-h_{1}}\dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds \\& \quad \leq -\frac{\pi^{2}}{4h_{2}} \int_{t-h}^{t-h_{1}}\bigl(x^{T}(s)- x^{T}(t-h)\bigr)S_{6}\bigl(x(s)-x(t-h)\bigr)\,ds \\& \quad \leq-\frac{\pi^{2}}{4h_{2}^{2}} \int_{t-h}^{t-h_{1}}\bigl(x^{T}(s)-x^{T}(t-h) \bigr)\, dsS_{6} \int _{t-h}^{t-h_{1}}\bigl(x(s)-x(t-h)\bigr) \,ds. \end{aligned}$$
(28)

From (17), (19)-(21), (25)-(28), we can obtain

$$ \mathcal{L} {V}_{4}\leq\xi^{T}(t) ( \Pi_{9}+\Pi_{10}+\Sigma_{1})\xi(t). $$
(29)

Case 2: when \(h(t)\in[h_{1},h]\), by using Lemma 2.2, we have

$$\begin{aligned}& -h_{1} \int_{t-h_{1}}^{t}\dot {x}^{T}(s)S_{5} \dot{x}(s)\,ds \\& \quad =-h_{1} \int_{t-h_{1}(t)}^{t}\dot {x}^{T}(s)S_{5} \dot{x}(s)\,ds-h_{1} \int_{t-h_{1}}^{t-h_{1}(t)} \dot{x}^{T}(s)S_{5} \dot{x}(s)\,ds \\& \quad \leq-\left [ \textstyle\begin{array}{@{}c@{}} x(t)-x(t-h_{1}(t))\\ x(t-h_{1}(t))-x(t-h_{1}) \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{5}&T_{4}\\ \ast&S_{5} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} x(t)-x(t-h_{1}(t))\\ x(t-h_{1}(t))-x(t-h_{1}) \end{array}\displaystyle \right ], \end{aligned}$$
(30)
$$\begin{aligned}& -h_{2} \int_{t-h}^{t-h_{1}} \dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds \\& \quad =-h_{2} \int_{t-h(t)}^{t-h_{1}} \dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds-h_{2} \int_{t-h}^{t-h(t)} \dot{x}^{T}(s)S_{6} \dot{x}(s)\,ds \\& \quad \leq-\left [ \textstyle\begin{array}{@{}c@{}} x(t-h_{1})-x(t-h(t))\\ x(t-h(t))-x(t-h) \end{array}\displaystyle \right ]^{T} \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{6}&T_{5}\\ \ast&S_{6} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{@{}c@{}} x(t-h_{1})-x(t-h(t))\\ x(t-h(t))-x(t-h) \end{array}\displaystyle \right ]. \end{aligned}$$
(31)

From (17), (19)-(21), (25), (26), (30), (31), we have

$$ \mathcal{L} {V}_{4}\leq\xi^{T}(t) ( \Pi_{9}+\Pi_{10}+\Sigma_{2})\xi(t). $$
(32)

Then through (18), (22)-(24), one can obtain

$$ \mathcal{L} {V}_{5}\leq\xi^{T}(t) \Pi_{11}\xi(t). $$
(33)

It follows from (3) that, for any \(\varepsilon>0\),

$$\begin{aligned} 0&\leq2\varepsilon\alpha^{2}x^{T}(t- \sigma)E_{\alpha}^{T}E_{\alpha}x(t-\sigma)+2\varepsilon \beta^{2}x^{T}\bigl(t-h(t)\bigr)E_{\beta}^{T} E_{\beta}x\bigl(t-h(t)\bigr)-\varepsilon\mathcal{F}^{T} \mathcal{F} \\ &=\xi^{T}(t)\Pi_{12}\xi(t). \end{aligned}$$
(34)

For any appropriately dimensioned matrices \(G_{i}\) (\(i=1,2,\ldots,7\)), the following zero equality holds:

$$\begin{aligned} 0 =&2\bigl[x^{T}(t)G_{1}+\dot{x}^{T}(t)G_{2}+x^{T}(t- \sigma)G_{3}+\dot{x}^{T}(t-\sigma )G_{4}+x^{T} \bigl(t-h(t)\bigr)G_{5} \\ &{}+\dot{x}^{T}(t-h)G_{6}+\mathcal{F}^{T}G_{7} \bigr] \bigl[-\dot{x}(t)- C_{i}x(t-\sigma)+A_{i}x \bigl(t-h(t)\bigr)+\mathcal{F}\bigr] \\ =&\xi^{T} (t)\Pi_{13} \xi(t). \end{aligned}$$
(35)

Therefore, from equations (7)-(35), an upper bound of \(\mathcal{L}{V}\) can be written as

$$ \mathcal{L} {V}\leq\xi^{T}(t)\Omega_{miqr}(t) \xi(t)\quad (m=1,2). $$
(36)

From (36), it is clear that \(\Omega_{miqr}(t)\) depends affinely on \(h_{1}(t)\) and \(h_{2}(t)\); hence, by the convex polyhedron method, the LMIs described by (5) guarantee that \(\Omega_{miqr}(t)<0\) holds.
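As an aside, the vertex argument can be checked numerically on a toy affine matrix function; the random symmetric matrices below are illustrative stand-ins, not the actual \(\Omega_{miqr}(t)\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
sym = lambda M: (M + M.T) / 2

# Omega(h1, h2) = Omega0 + h1*Xi1 + h2*Xi2 is affine in (h1, h2).
Omega0 = -10*np.eye(n) + sym(rng.standard_normal((n, n)))
Xi1 = sym(rng.standard_normal((n, n)))
Xi2 = sym(rng.standard_normal((n, n)))
Omega = lambda h1, h2: Omega0 + h1*Xi1 + h2*Xi2
lam_max = lambda M: np.linalg.eigvalsh(M)[-1]

h1_bar, h2_bar = 1.0, 1.5
if all(lam_max(Omega(u, v)) < 0 for u in (0, h1_bar) for v in (0, h2_bar)):
    # lambda_max of an affine matrix function is convex, so its maximum
    # over the rectangle is attained at a vertex; spot-check the interior.
    assert max(lam_max(Omega(rng.uniform(0, h1_bar), rng.uniform(0, h2_bar)))
               for _ in range(1000)) < 0
```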

Thus, using Dynkin’s formula, when \(t\geq0\), it can be deduced that

$$ \mathbf{E}\bigl\{ V(t,x_{t},r_{t}, \delta_{t},\ell_{t})\bigr\} -\mathbf{E}\biggl\{ \int_{0}^{t} \xi^{T}(s) \Omega_{miqr}(s)\xi(s)\,ds\biggr\} \leq\mathbf{E}\bigl\{ V(0,x_{0},r_{0},\delta_{0},\ell_{0}) \bigr\} < \infty, $$
(37)

where

$$\begin{aligned}& \mathbf{E}\bigl\{ V(0,x_{0},r_{0},\delta_{0}, \ell_{0})\bigr\} \\& \quad \leq \biggl\{ \Bigl(4+2\sigma^{2} {\max _{i\in1,\ldots,n} }c_{i0}^{2}+\sigma^{2}+h^{2} \Bigr)\lambda_{\max}(P_{0})+\biggl( 2\sigma+\frac{2\sigma^{3}}{3} \biggr)\lambda_{\max}(R_{1}) \\& \qquad {}+\biggl(\sigma^{3}+\frac{\sigma^{5}}{12}\biggr)\lambda_{\max}(R_{2})+ \frac{\sigma ^{2}}{2}\lambda_{\max}(R_{3})+\frac{\sigma^{5}}{12} \lambda_{\max}(R_{4})+ \biggl(2h_{1}+ \frac{2h_{1}^{3}}{3}\biggr)\lambda_{\max}(Q_{1}) \\& \qquad {}+\biggl(2h_{2}+\frac{2h_{2}^{3}}{3}\biggr)\lambda_{\max}(Q_{2})+ \biggl(2h+\frac{2h^{3}}{3}\biggr)\lambda _{\max}(Q_{3}) \\& \qquad {}+\biggl(2h_{2}+\frac{h^{2}h_{2}+h_{2}h_{1}^{2}+2h_{1}h_{2}h}{3} \biggr)\lambda_{\max}(Q_{4}) \\& \qquad {}+ \biggl(2h_{1}+\frac{h^{2}h_{1}+h_{1}h_{2}^{2} +2h_{1}h_{2}h}{3}\biggr)\lambda_{\max}(Q_{5}) \\& \qquad {}+\biggl(h_{1}+\frac{h_{1}^{3}}{3}\biggr)\lambda_{\max}(Q_{6})+ \biggl(h_{2}+\frac {h_{2}^{3}}{3}\biggr)\lambda_{\max}(Q_{7}) \\& \qquad {}+ \biggl(h+\frac{h^{3}}{3}\biggr)\lambda _{\max}(Q_{8})+h_{1}^{3} \lambda_{\max}(S_{1}) \\& \qquad {}+h_{2}^{3}\lambda_{\max}(S_{2})+h^{3} \lambda_{\max}(S_{3})+\frac{h^{3}}{2}\lambda _{\max}(S_{4})+\frac{h_{1}^{3}}{2}\lambda_{\max}(S_{5})+ \frac{h_{2}^{3}}{2}\lambda _{\max}(S_{6}) \\& \qquad {}+\frac{h_{1}^{5}}{12}\lambda_{\max}(S_{7})+ \frac{h_{2}^{5}}{12}\lambda _{\max}(S_{8})+\frac{h^{5}}{12} \lambda_{\max}(S_{9})\biggr\} \|\psi\|_{\tau}^{2}< \infty, \end{aligned}$$

and \(\Vert \psi \Vert _{\tau}=\max\{ {\sup_{-\tau\leq {s}\leq0}}\Vert x(s) \Vert , {\sup_{-\tau\leq{s}\leq 0}}\Vert \dot{x}(s) \Vert \}\).

From the definition of \(\xi(t)\), we have

$$\begin{aligned}& -\xi^{T}(t)\Omega_{miqr}(t)\xi(t)\geq\lambda_{\min}(- \Omega_{miqr})\xi ^{T}(t)\xi(t)\geq\lambda_{\min}(- \Omega_{miqr}){x}^{T}(t){x}(t), \end{aligned}$$
(38)
$$\begin{aligned}& -\xi^{T}(t)\Omega_{miqr}(t)\xi(t)\geq\lambda_{\min}(- \Omega_{miqr})\xi ^{T}(t)\xi(t) \geq\lambda_{\min}(- \Omega_{miqr})\dot{x}^{T}(t)\dot{x}(t), \end{aligned}$$
(39)

where

$$\begin{aligned} \lambda_{\min}(-\Omega_{miqr}) =&\min\bigl\{ \lambda_{\min}\bigl(-\Omega _{miqr}(t)|_{\substack{h_{1}(t)=0\\h_{2}(t)=0}}\bigr), \lambda_{\min}\bigl(-\Omega _{miqr}(t)|_{\substack{h_{1}(t)=0\\h_{2}(t)=h_{2}}}\bigr), \\ &\lambda_{\min}\bigl(-\Omega_{miqr}(t)|_{\substack{h_{1}(t)=h_{1}\\ h_{2}(t)=0}}\bigr), \lambda_{\min}\bigl(-\Omega_{miqr}(t)|_{\substack{h_{1}(t)=h_{1}\\ h_{2}(t)=h_{2}}}\bigr)\bigr\} . \end{aligned}$$

Applying the integral mean value theorem, there exists \(\eta\in [t,t+1]\), \(\eta=t+\theta\), \(\theta\in[0,1]\), such that

$$ \int_{t}^{t+1}x(s)\,ds=x(\eta)=x(t+\theta). $$
(40)

Using the Newton-Leibniz formula, we know

$$\begin{aligned} \bigl\Vert x(t) \bigr\Vert &=\bigl\Vert x(t)-x(t+ \theta)+x(t+\theta) \bigr\Vert \leq\bigl\Vert x(t+\theta)-x(t) \bigr\Vert + \bigl\Vert x(t+\theta) \bigr\Vert \\ &=\biggl\Vert \int_{t}^{t+\theta}\dot{x}(s)\,ds\biggr\Vert +\biggl\Vert \int_{t}^{t+1}{x}(s)\,ds\biggr\Vert \\ &\leq \int_{t}^{t+\theta}\bigl\Vert \dot{x}(s) \bigr\Vert \,ds+ \int _{t}^{t+1}\bigl\Vert {x}(s) \bigr\Vert \,ds \leq \int_{t}^{t+1}\bigl(\bigl\Vert \dot {x}(s)\bigr\Vert +\bigl\Vert {x}(s) \bigr\Vert \bigr)\,ds \\ &\leq\frac{2}{\sqrt{\lambda_{\min}(-\Omega_{miqr})}} \int _{t}^{t+1}\xi^{T}(s) \bigl(- \Omega_{miqr}(s)\bigr)\xi(s)\,ds\rightarrow0,\quad \mbox{as } t \rightarrow\infty. \end{aligned}$$
(41)

Therefore, the model (1) or (4) has a unique equilibrium point which is globally asymptotically stable. □

Remark 3

In [8], with respect to the discrete time-varying delay \(h(t)\), \(\int_{s}^{t}x(u)\,du\) and \(\int_{s}^{t}\dot{x}(u)\,du\) are included as elements of the augmented vector in the integrands. Motivated by this method, in this paper \(\int_{t-\sigma}^{t}x(s)\,ds\) and \(x(t-\sigma)\) are considered as elements of the augmented vector in \(V_{1}\); in addition, \(\int_{s}^{t}x(u)\,du\) and \(\int_{s}^{t}\dot{x}(u)\,du\) are included in \(V_{2}\).

Remark 4

Different from [33, 34, 38], this article fully considers the relationship between time-varying delays and their upper bounds: different methods are used to bound the time derivative of the Lyapunov-Krasovskii functional appropriately according to the value of the time delay \(h(t)\). Therefore, this method may lead to less conservative results.

Remark 5

It should be noted that \(V_{3}\) contains the new integral terms \(\int_{s}^{t-h_{1}}\dot{x}(u)\,du\) and \(\int_{s}^{t-h_{2}}\dot {x}(u)\,du\) in the integrands, and \(h_{2}\int_{t-h}^{t-h_{1}}\int_{\lambda }^{t-h_{1}}\dot{x}^{T}(s)S_{6}\dot{x}(s)\,ds\,d\lambda\) is included in \(V_{4}\). The upper limits of the single integrals are \(t-h_{1}\) and \(t-h_{2}\), respectively, rather than t, and the inner upper limit of the double integral is \(t-h_{1}\) rather than t. In this way, more information about the lower bounds of \(h_{1}(t)\) and \(h_{2}(t)\) is used in the Lyapunov functional (7).

Remark 6

In [7, 20, 29, 37], the reciprocally convex method is usually employed to deal with the case \(N=2\) in Lemma 2.2. In this paper, since the relationship between time-varying delays and their upper bounds is fully considered, in order to deal with the case \(N>2\), Lemma 2.2 is introduced, which extends the reciprocally convex method in [7, 20, 29, 37].

Remark 7

In this paper, the inequality in Lemma 2.4 relates the state to the derivative of the state, which is different from the inequalities in [7, 8, 12, 13, 20, 24–29, 36–38], where both sides of the inequalities are functions of the state only.

Remark 8

In [12], Barbalat’s lemma is used to prove the main stability theorem. Different from [12], the integral mean value theorem is adopted in this paper to prove that the considered system is globally asymptotically stable.

In the following, we will investigate the stability of delayed Markovian jump neural networks with nonlinear perturbations and unknown parameters. We have

$$ \begin{aligned} &\dot{x}(t) = -\bigl[C_{i}+\triangle C(t) \bigr]x(t-\sigma)+\bigl[A_{i}+\triangle A(t)\bigr]x \bigl(t-h_{1}(t,\delta_{t})-h_{2}(t, \ell_{t})\bigr) \\ &\hphantom{\dot{x}(t) ={}}{}+f\bigl(t,x(t-\sigma),x\bigl(t-h_{qr}(t)\bigr) \bigr), \quad t>0, \\ &x(s)=\phi(s), \quad s\in\bigl[-\max\{\sigma,h\},0\bigr], \end{aligned} $$
(42)

where \(\triangle C(t)\) and \(\triangle A(t)\) are unknown matrices denoting time-varying parameter uncertainties such that the following condition holds:

$$ \bigl[\textstyle\begin{array}{@{}c@{\quad}c@{}} \triangle C(t)& \triangle A(t) \end{array}\displaystyle \bigr] =MF(t)[\textstyle\begin{array}{@{}c@{\quad}c@{}} V_{1} & V_{2} \end{array}\displaystyle ], $$
(43)

where M, \(V_{1}\), and \(V_{2}\) are known constant matrices and \(F(t)\) is an unknown time-varying matrix-valued function satisfying

$$ F^{T}(t)F(t)\leq I,\quad \forall t\geq0. $$
(44)
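For illustration, an admissible uncertainty at a given time instant can be sampled by drawing any F whose largest singular value is at most one, so that (44) holds by construction; the structure matrices M, \(V_{1}\), \(V_{2}\) below are assumed toy values, not data from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_contraction(m, k):
    F = rng.standard_normal((m, k))
    return F / max(np.linalg.norm(F, 2), 1.0)   # scale so sigma_max(F) <= 1

M  = np.array([[0.1], [0.2]])   # assumed structure matrices
V1 = np.array([[0.1, 0.0]])
V2 = np.array([[0.0, 0.1]])
F  = random_contraction(1, 1)   # F^T F <= I holds by construction
dC = M @ F @ V1                 # Delta C(t) = M F(t) V1
dA = M @ F @ V2                 # Delta A(t) = M F(t) V2
```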

Definition 2

The trivial solution of system (42) is said to be robustly globally asymptotically stable if it is globally asymptotically stable for all admissible unknown parameters.

Theorem 3.2

For given scalars \(\sigma\geq0\), \(h_{1}\geq0\), \(h_{2}\geq0\), \(\mu_{1}\geq0\), and \(\mu_{2}\geq0\) satisfying (2), and \(\alpha\geq0\), \(\beta\geq0\), \(E_{\alpha}\), and \(E_{\beta}\) satisfying (3), system (42) is robustly globally asymptotically stable if there exist a constant \(\varepsilon\geq0\), positive definite matrices \(P_{iqr}\in R^{5n\times 5n}\), \(R_{1}\in R^{4n\times4n}\), \(R_{2}\in R^{3n\times3n}\), \(R_{3}\in R^{n\times n}\), \(R_{4}\in R^{n\times n}\), \(Q_{i}\in R^{4n\times4n}\) (\(i=1,2,3\)), \(Q_{j}\in R^{3n\times3n}\) (\(j=4,5\)), \(Q_{k}\in R^{2n\times 2n}\) (\(k=6,7,8\)), \(S_{q}\in R^{2n\times2n}\) (\(q=1,2,3\)), \(S_{m}\in R^{n\times n}\) (\(m=4,5,\ldots,9\)), and

$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} X_{11} & X_{12}&X_{13}&X_{14}\\ \ast & X_{22}&X_{23}&X_{24} \\ \ast&\ast&X_{33}&X_{34}\\ \ast&\ast&\ast&R_{3} \end{array}\displaystyle \right ]\geq0, $$

any appropriately dimensioned matrices \(U_{i}\) (\(i=1,2,3\)), \(T_{j}\) (\(j=1,2,\ldots,5\)) and \(G_{k}\) (\(k=1,2,\ldots,7\)), such that the following LMIs hold for \(i=1,2,\ldots,N_{1}\), \(q=1,2,\ldots ,N_{2}\), and \(r=1,2,\ldots,N_{3}\):

$$ \left \{ \textstyle\begin{array}{l} \Sigma_{miqr}(t)|_{\substack{h_{1}(t)=0\\h_{2}(t)=0}}< 0, \\ \Sigma_{miqr}(t)|_{\substack{h_{1}(t)=h_{1}\\h_{2}(t)=0}}< 0, \\ \Sigma_{miqr}(t)|_{\substack{h_{1}(t)=0\\h_{2}(t)=h_{2}}}< 0, \\ \Sigma_{miqr}(t)|_{\substack{h_{1}(t)=h_{1}\\h_{2}(t)=h_{2}}}< 0 \end{array}\displaystyle \right .\quad (m=1,2) $$
(45)

and

$$ \mathcal{P}_{i}>0\quad (i=1,2,3,4),\qquad \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{5}& T_{4}\\ \ast&S_{5} \end{array}\displaystyle \right ]>0, \qquad \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{6}& T_{5}\\ \ast&S_{6} \end{array}\displaystyle \right ]>0, $$
(46)

where

$$\begin{aligned}& \hat{\Pi}_{13}=\Pi _{13}+7e_{3}V_{1}V_{1}^{T}e_{3}^{T}+7e_{13}V_{2}V_{2}^{T}e_{13}^{T}, \qquad \hat{\Omega }_{miqr}(t)=\Omega_{miqr}(t)- \Pi_{13}+\hat{\Pi}_{13}, \\& \Sigma_{miqr}(t)= \left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \hat{\Omega}_{miqr}(t)& G_{1}Me_{1}& G_{1}Me_{2} & G_{1}Me_{3}& G_{1}Me_{4}& G_{1}Me_{13}& G_{1}Me_{10}& G_{1}Me_{25}\\ \ast&-\frac{1}{2}I&0&0&0&0&0&0\\ \ast&\ast&-\frac{1}{2}I&0&0&0&0&0\\ \ast&\ast&\ast&-\frac{1}{2}I&0&0&0&0\\ \ast&\ast&\ast&\ast&-\frac{1}{2}I&0&0&0\\ \ast&\ast&\ast&\ast&\ast&-\frac{1}{2}I&0&0\\ \ast&\ast&\ast&\ast&\ast&\ast&-\frac{1}{2}I&0\\ \ast&\ast&\ast&\ast&\ast&\ast&\ast&-\frac{1}{2}I \end{array}\displaystyle \right ], \end{aligned}$$

with the other elements the same as in Theorem  3.1.

Proof

Consider the same Lyapunov-Krasovskii functional as in Theorem 3.1. Now replacing \(C_{i}\) and \(A_{i}\) in (35) with \(C_{i}+\triangle C(t)\) and \(A_{i}+\triangle A(t)\), we can have

$$\begin{aligned} 0 =&2\bigl[x^{T}(t)G_{1}+\dot{x}^{T}(t)G_{2}+x^{T}(t- \sigma)G_{3}+\dot{x}^{T}(t-\sigma )G_{4}+x^{T} \bigl(t-h(t)\bigr)G_{5} \\ &{}+\dot{x}^{T}(t-h)G_{6}+\mathcal{F}^{T}G_{7} \bigr] \bigl[-\dot{x}(t)-\bigl(C_{i}+ \triangle C(t)\bigr)x(t-\sigma) \\ &{}+ \bigl(A_{i}+\triangle A(t)\bigr)x\bigl(t- h(t)\bigr)+\mathcal{F}\bigr] \\ =&\xi^{T}(t)\Pi_{13}\xi(t)+2\bigl[x^{T}(t)G_{1}+ \dot {x}^{T}(t)G_{2}+x^{T}(t-\sigma)G_{3}+ \dot{x}^{T}(t-\sigma)G_{4} \\ &{}+x^{T}\bigl(t-h(t)\bigr)G_{5} +\dot{x}^{T}(t-h)G_{6}+ \mathcal{F}^{T}G_{7}\bigr] \\ &{}\times\bigl[-\triangle C(t)x(t-\sigma )+ \triangle A(t)x\bigl(t-h(t)\bigr)\bigr]. \end{aligned}$$
(47)

Then we have

$$\begin{aligned}& -2x^{T} (t)G_{1}\triangle C(t)x(t-\sigma) \\& \quad =-2x^{T}(t)G_{1}MF(t)V_{1}x(t- \sigma) \\& \quad \leq x^{T}(t)G_{1}MM^{T}G_{1}x(t)+x^{T}(t- \sigma)V_{1}^{T}V_{1}x(t-\sigma), \\& -2\dot{x}^{T} (t)G_{2}\triangle C(t)x(t-\sigma) \\& \quad =-2\dot {x}^{T}(t)G_{2}MF(t)V_{1}x(t-\sigma) \\& \quad \leq\dot{x}^{T}(t)G_{2}MM^{T}G_{2} \dot{x}(t)+x^{T}(t-\sigma)V_{1}^{T}V_{1}x(t- \sigma ), \\& -2x^{T} (t-\sigma)G_{3}\triangle C(t)x(t- \sigma) \\& \quad =-2x^{T}(t -\sigma)G_{3}MF(t)V_{1}x(t- \sigma) \\& \quad \leq x^{T}(t-\sigma)G_{3}MM^{T}G_{3}x(t- \sigma)+x^{T}(t-\sigma )V_{1}^{T}V_{1}x(t- \sigma), \\& -2\dot{x}^{T} (t-\sigma)G_{4}\triangle C(t)x(t-\sigma) \\& \quad =-2 \dot {x}^{T}(t-\sigma)G_{4}MF(t)V_{1}x(t-\sigma) \\& \quad \leq\dot{x}^{T}(t-\sigma)G_{4}MM^{T}G_{4} \dot{x}(t-\sigma)+x^{T} (t-\sigma)V_{1}^{T}V_{1}x(t- \sigma), \\& -2x^{T} \bigl(t-h(t)\bigr)G_{5}\triangle C(t)x(t-\sigma) \\& \quad =- 2x^{T}\bigl(t-h(t)\bigr)G_{5}MF(t)V_{1}x(t-\sigma) \\& \quad \leq x^{T}\bigl(t-h(t)\bigr)G_{5}MM^{T}G_{5}x \bigl(t-h(t)\bigr)+x^{T}(t- \sigma)V_{1}^{T}V_{1}x(t- \sigma), \\& -2\dot{x}^{T} (t-h)G_{6}\triangle C(t)x(t-\sigma) \\& \quad =-2\dot {x}^{T}(t-h)G_{6}MF(t)V_{1}x(t-\sigma) \\& \quad \leq\dot{x}^{T}(t-h)G_{6}MM^{T}G_{6} \dot{x}(t-h)+x^{T}(t-\sigma )V_{1}^{T}V_{1}x(t- \sigma), \\& -2\mathcal{F}^{T} G_{7}\triangle C(t)x(t-\sigma) \\& \quad =-2\mathcal {F}^{T}G_{7}MF(t)V_{1}x(t-\sigma) \\& \quad \leq\mathcal{F}^{T}G_{7}MM^{T}G_{7} \mathcal {F}+x^{T}(t-\sigma)V_{1}^{T}V_{1}x(t- \sigma). \end{aligned}$$

Similarly, one can show that

$$\begin{aligned}& -2x^{T} (t)G_{1}\triangle A(t)x\bigl(t-h(t)\bigr) \\& \quad \leq x^{T}(t)G_{1}MM^{T}G_{1}x(t)+x^{T} \bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x\bigl(t-h(t)\bigr), \\& -2\dot{x}^{T} (t)G_{2}\triangle A(t)x\bigl(t-h(t)\bigr) \\& \quad \leq\dot{x}^{T}(t)G_{2}MM^{T}G_{2} \dot{x}(t)+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr), \\& -2x^{T} (t-\sigma)G_{3}\triangle A(t)x\bigl(t-h(t)\bigr) \\& \quad \leq x^{T}(t-\sigma)G_{3}MM^{T}G_{3}x(t- \sigma)+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr), \\& -2\dot{x}^{T} (t-\sigma)G_{4}\triangle A(t)x\bigl(t-h(t) \bigr) \\& \quad \leq\dot{x}^{T}(t-\sigma)G_{4}MM^{T}G_{4} \dot{x}(t-\sigma )+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr), \\& -2x^{T} \bigl(t-h(t)\bigr)G_{5}\triangle A(t)x\bigl(t-h(t) \bigr) \\& \quad \leq x^{T}\bigl(t-h(t)\bigr)G_{5}MM^{T}G_{5}x \bigl(t-h(t)\bigr)+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr), \\& -2\dot{x}^{T} (t-h)G_{6}\triangle A(t)x\bigl(t-h(t)\bigr) \\& \quad \leq\dot{x}^{T}(t-h)G_{6}MM^{T}G_{6} \dot {x}(t-h)+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr), \\& -2\mathcal{F}^{T} G_{7}\triangle A(t)x\bigl(t-h(t)\bigr) \leq \mathcal {F}^{T}G_{7}MM^{T}G_{7} \mathcal{F}+x^{T}\bigl(t-h(t)\bigr)V_{2}^{T}V_{2}x \bigl(t-h(t)\bigr). \end{aligned}$$

Then along the same line as for Theorem 3.1, we can obtain the desired result by applying the Schur complement lemma. This completes the proof of Theorem 3.2. □

Next, we consider the case of system (1) without Markovian jumping parameters and with \(h(t)=h\); then system (1) can be rewritten as

$$ \begin{aligned} &\dot{x}(t) = -Cx(t-\sigma)+Ax(t-h)+f\bigl(t,x(t- \sigma),x(t-h)\bigr),\quad t>0, \\ &x(s)=\phi(s), \quad s\in\bigl[-\max\{\sigma,h\},0\bigr]. \end{aligned} $$
(48)

For system (48), we have the following result.

Corollary 3.1

For given scalars \(\sigma\geq0\), \(h\geq0\), \(\alpha\geq0\), \(\beta\geq0\), \(E_{\alpha}\), and \(E_{\beta}\), system (48) is globally asymptotically stable if there exist a constant \(\varepsilon \geq0\), positive definite matrices \(P\in R^{5n\times5n}\), \(R_{1}\in R^{4n\times4n}\), \(R_{2}\in R^{3n\times3n}\), \(R_{3}\in R^{n\times n}\), \(R_{4}\in R^{n\times n}\), \(Q_{1}\in R^{4n\times4n}\), \(Q_{2}\in R^{2n\times 2n}\), \(S_{1}\in R^{2n\times2n}\), \(S_{2}\in R^{n\times n}\), \(S_{3}\in R^{n\times n}\), and

$$\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} X_{11} & X_{12}&X_{13}&X_{14}\\ \ast & X_{22}&X_{23}&X_{24} \\ \ast&\ast&X_{33}&X_{34}\\ \ast&\ast&\ast&R_{3} \end{array}\displaystyle \right ]\geq0, $$

any appropriately dimensioned matrices \(G_{k}\) (\(k=1,2,\ldots,7\)), such that the following LMIs hold:

$$ \Pi_{1}P\Pi_{2}^{T}+ \Pi_{2}P\Pi_{1}^{T}+\sum _{k=3}^{10}\Pi_{k}< 0, $$
(49)

where

$$\begin{aligned}& \Pi_{1}=\bigl[e_{1}-e_{7}C^{T},e_{7},e_{3},e_{8},e_{5} \bigr],\qquad \Pi_{2}= \bigl[-e_{1}C^{T}+e_{5}A^{T}+e_{11},e_{1}-e_{3},e_{4},e_{1}-e_{5},e_{6} \bigr], \\& \Pi_{3}=[e_{1},e_{2},0_{11n\times n},0_{11n\times n}]R_{1}[e_{1},e_{2},0_{11n\times n},0_{11n\times n}]^{T}- [e_{3},e_{4},e_{7},e_{1}-e_{3}]R_{1} \\& \hphantom{\Pi_{3}={}}{}\times[e_{3},e_{4},e_{7},e_{1}-e_{3}]^{T}+ \sigma^{2}[e_{1},e_{2}, 0_{11n\times n}]R_{2}[e_{1},e_{2},0_{11n\times n}]^{T}-[e_{7},e_{1} -e_{3},\sigma e_{1}-e_{7}] \\& \hphantom{\Pi_{3}={}}{}\times R_{2}[e_{7},e_{1}-e_{3}, \sigma e_{1}-e_{7}]^{T}+\sigma e_{2}R_{3}e_{2}^{T}+e_{1} \biggl(\sigma X_{11}+X_{14}+X_{14}^{T}+ \frac{\sigma^{2}}{4}R_{4}\biggr)e_{1}^{T} \\& \hphantom{\Pi_{3}={}}{}+e_{3}\bigl(\sigma X_{22}-X_{24}-X_{24}^{T} \bigr)e_{3}^{T}+\sigma e_{4}X_{33}e_{4}^{T}-e_{9}R_{4}e_{9}^{T}, \\& \Pi_{4}=\operatorname{Sym}\biggl\{ [e_{7},e_{1}-e_{3},e_{9}, \sigma e_{1}-e_{7}]R_{1}[0_{11n\times n},0_{11n\times n},e_{1},e_{2}]^{T} \\& \hphantom{\Pi_{4}={}}{}+\sigma\biggl[e_{9},\sigma e_{1}-e_{7}, \frac{\sigma^{2}}{2} e_{1}-e_{9}\biggr]R_{2}[0_{11n\times n},0_{11n\times n},e_{2}]^{T} \\& \hphantom{\Pi_{4}={}}{}+e_{1}\bigl(\sigma X_{12}-X_{14}^{T}+X_{24}^{T} \bigr)e_{3}^{T}+e_{1}\bigl(\sigma X_{13}+X_{34}^{T}\bigr)e_{4}^{T}+e_{3} \bigl(\sigma X_{23}-X_{34}^{T}\bigr)e_{4}^{T} \biggr\} , \\& \Pi_{5}=[e_{1},e_{2},0_{11n\times n},0_{11n\times n}]Q_{1}[e_{1},e_{2},0_{11n\times n},0_{11n\times n}]^{T}- [e_{5},e_{6},e_{8},e_{1}-e_{5}]Q_{1} \\& \hphantom{\Pi_{5}={}}{}\times [e_{5},e_{6},e_{8},e_{1}-e_{5}]^{T}+[e_{1},0_{11n\times n}]Q_{2}[e_{1},0_{11n\times n}]^{T}-[e_{5},e_{1}-e_{5}]Q_{2}[e_{5},e_{1}- e_{5}]^{T}, \\& \Pi_{6}=\operatorname{Sym}\bigl\{ [e_{8},e_{1}-e_{5},e_{10},h e_{1}-e_{8}]Q_{1}[0_{11n\times n},0_{11n\times n},e_{1},e_{2}]^{T} \\& \hphantom{\Pi_{6}={}}{}+[e_{8},h e_{1}-e_{8}]Q_{2}[0_{11n\times n},e_{2}]^{T} \bigr\} , \\& \Pi_{7}=h^{2}[e_{1},e_{2}]S_{1}[e_{1},e_{2}]^{T}+h^{2}e_{2}S_{2}e_{2}^{T}-[e_{8},e_{1}-e_{5}]S_{1}[e_{8},e_{1}-e_{5}]^{T} \\& \hphantom{\Pi_{7}={}}{}-\frac{\pi ^{2}}{4h^{2}}e_{8}S_{2}e_{8}^{T}- \frac{\pi^{2}}{4}e_{5}S_{2}e_{5}^{T}+ \operatorname{Sym}\biggl\{ \frac{\pi ^{2}}{4h}e_{8}S_{2}e_{5}^{T} \biggr\} , \\& \Pi_{8}=\frac{h^{4}}{4}e_{1}S_{3}e_{1}^{T}-e_{10}S_{3}e_{10}^{T}, \qquad \Pi _{9}=2\varepsilon\alpha^{2}e_{3}E_{\alpha}^{T}E_{\alpha}e_{3}^{T}+2\varepsilon\beta ^{2}e_{5}E_{\beta}^{T}E_{\beta}e_{5}^{T}-\varepsilon e_{11}e_{11}^{T}, \\& \Pi_{10}=\operatorname{Sym}\bigl\{ [e_{1}G_{1}+e_{2}G_{2}+e_{3}G_{3}+e_{4}G_{4}+e_{5}G_{5}+e_{6}G_{6}+e_{11}G_{7}] \\& \hphantom{\Pi_{10}={}}{}\times\bigl[-e_{2}^{T}-Ce_{3}^{T}+Ae_{5}^{T}+e_{11}^{T} \bigr]\bigr\} . \end{aligned}$$

Remark 9

Compared to [12], the augmented vector \(\xi(t)\) in this paper contains the integral terms \(\int_{t-h}^{t}x(s)\,ds\), \(\int _{t-\sigma}^{t}\int_{s}^{t}x(u)\,du\,ds\), and \(\int_{t-h}^{t}\int _{s}^{t}x(u)\,du\,ds\). Through these terms, more information on the states is utilized in the criterion presented in Theorem 3.1, which may lead to a less conservative result.

4 Numerical examples

In this section, three numerical examples are presented (using MATLAB) to show the effectiveness and reduced conservatism of our results.
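In practice, feasibility conditions such as (49) are verified with a semidefinite programming solver. The following is a minimal MATLAB sketch of such a test, assuming the YALMIP toolbox and an SDP solver are installed; the matrices Pi1, Pi2, and Theta are small illustrative placeholders, not the actual matrices of (49).

% Minimal sketch: testing feasibility of an LMI of the form
%   Pi1*P*Pi2' + Pi2*P*Pi1' + Theta < 0,  P > 0,
% with YALMIP (assumed installed together with an SDP solver).
% Pi1, Pi2, Theta are illustrative placeholders only.
n   = 2;
Pi1 = [eye(n); zeros(n)];          % placeholder selector matrices
Pi2 = [zeros(n); eye(n)];
Theta = -eye(2*n);                 % placeholder constant term

P = sdpvar(n, n, 'symmetric');     % decision variable
LMI = Pi1*P*Pi2' + Pi2*P*Pi1' + Theta;

eps0 = 1e-6;                       % margin enforcing strict inequalities
F = [P >= eps0*eye(n), LMI <= -eps0*eye(2*n)];

diagnostics = optimize(F);         % solve the feasibility problem
if diagnostics.problem == 0
    disp('LMI feasible: stability condition satisfied.');
else
    disp('LMI infeasible or solver error.');
end

The same pattern scales to (49) by declaring one sdpvar per matrix variable (\(P\), \(R_{1}\), ..., \(S_{3}\), \(G_{k}\)) and assembling the block matrices accordingly.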

Example 4.1

Consider the neural networks (48) with the parameters

$$ C=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2 & 0\\ 0 & \lambda \end{array}\displaystyle \right ],\qquad A=0, \qquad \bigl\Vert f\bigl(t,x(t-\sigma),x(t-h)\bigr)\bigr\Vert \leq0.2\bigl\Vert x(t-h) \bigr\Vert , $$

where \(h\geq0\), \(\sigma\geq0\), and \(\lambda>0\) are real constants.

When \(h=0\), the maximum leakage delay bounds guaranteeing the global stability of system (48) for different λ are listed in Table 1, together with the results of [10, 12]. In addition, for given σ and λ (or h and λ), the upper bounds of h (or σ) are listed in Tables 2 and 3, where comparisons with the criterion of [12] are also given. It follows from Corollary 3.1 that system (48) is globally asymptotically stable. Let \(f=0.2[x_{1}(t-h),x_{2}(t-h)]^{T}\); the simulated state responses of system (48) for different h, σ, and λ are depicted in Figure 1.
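The state responses in Figure 1 can be reproduced with a simple forward Euler scheme. Below is a minimal sketch, assuming that for \(A=0\) system (48) reduces to \(\dot{x}(t)=-Cx(t-\sigma)+f(t,x(t-\sigma),x(t-h))\) with \(f=0.2[x_{1}(t-h),x_{2}(t-h)]^{T}\); the values of λ, σ, h, and the initial history are illustrative.

% Minimal simulation sketch for Example 4.1 (assuming system (48)
% reduces to  dx/dt = -C*x(t-sigma) + f,  f = 0.2*[x1(t-h); x2(t-h)],
% since A = 0).  Forward Euler with history buffers.
lambda = 1;  C = [2 0; 0 lambda];
sigma = 0.5;  h = 0.3;             % illustrative delay values
dt = 1e-3;  T = 10;
N  = round(T/dt);
ks = round(sigma/dt);  kh = round(h/dt);
K  = max(ks, kh);

x = zeros(2, N+K+1);
x(:, 1:K+1) = repmat([0.5; -0.7], 1, K+1);   % constant initial history

for k = K+1 : N+K
    xs = x(:, k-ks);               % x(t - sigma)
    xh = x(:, k-kh);               % x(t - h)
    f  = 0.2*xh;                   % perturbation attaining the norm bound
    x(:, k+1) = x(:, k) + dt*(-C*xs + f);
end

t = (0:N)*dt;
plot(t, x(:, K+1:end)); xlabel('t'); ylabel('x(t)');
legend('x_1','x_2'); title('State responses of system (48)');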

Figure 1

State trajectories of system (48) with different h, σ, and λ.

Table 1 Allowable upper bounds of σ with \(\pmb{h=0}\) and different values of λ
Table 2 Allowable upper bounds of h with different values of σ and λ
Table 3 Allowable upper bounds of σ with different values of h and λ

Example 4.2

Consider the neural networks (1) without Markovian jumping parameters and with the parameters

$$ C=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2 & 0\\ 0 & 3 \end{array}\displaystyle \right ],\qquad A=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -1 & 0\\ -1 & -1 \end{array}\displaystyle \right ],\qquad \bigl\Vert f\bigl(t,x(t- \sigma),x\bigl(t-h(t)\bigr)\bigr)\bigr\Vert \leq0.2\bigl\Vert x\bigl(t-h(t) \bigr)\bigr\Vert . $$

For this system, by solving the LMIs in Theorem 3.1, the maximum values of the upper delay bounds \(h_{1}\) and \(h_{2}\) can be obtained; they are listed in Tables 4 and 5. Figure 2 shows the state trajectories for different σ with initial states \([0.5, 0.7]^{T}\) and \([0.5, -0.7]^{T}\), respectively.
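Each table entry is the largest delay bound for which the LMIs of Theorem 3.1 remain feasible; such bounds are commonly located by bisection. A minimal sketch follows, in which isFeasible is a hypothetical wrapper that assembles and solves the LMIs of Theorem 3.1 for the given parameters (e.g., via YALMIP, as above) and returns true on feasibility.

% Sketch of how the entries of Tables 4 and 5 can be generated:
% bisection on h1 for fixed h2 and sigma.  isFeasible is a
% hypothetical function handle wrapping the LMI test of Theorem 3.1.
function h1max = maxDelayBound(h2, sigma, mu1, mu2, isFeasible)
    lo = 0;  hi = 1;                      % assumed initial bracket
    % grow hi until the LMIs become infeasible (capped for safety)
    while isFeasible(hi, h2, sigma, mu1, mu2) && hi < 1e3, hi = 2*hi; end
    for iter = 1:50                       % bisect the bracket [lo, hi]
        mid = (lo + hi)/2;
        if isFeasible(mid, h2, sigma, mu1, mu2)
            lo = mid;                     % feasible: move lower bound up
        else
            hi = mid;                     % infeasible: move upper bound down
        end
    end
    h1max = lo;
end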

Figure 2

State trajectories of system (1) with different σ: (a) \(\sigma=0.1\); (b) \(\sigma=0.2\).

Table 4 Allowable upper bounds of \(\pmb{h_{1}}\) for different \(\pmb{h_{2}}\) and σ with \(\pmb{\mu_{1}=0.6}\) and \(\pmb{\mu_{2}=0.2}\)
Table 5 Allowable upper bounds of \(\pmb{h_{2}}\) for different \(\pmb{h_{1}}\) and σ with \(\pmb{\mu_{1}=0.6}\) and \(\pmb{\mu_{2}=0.2}\)

Example 4.3

Consider the neural networks (1) with the parameters

$$\begin{aligned}& C_{1}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0\\ 0 & 2.5 \end{array}\displaystyle \right ],\qquad A_{1}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -1 & 0\\ -1 & -1 \end{array}\displaystyle \right ],\qquad C_{2}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} 2.3 & 0\\ 0 & 1.5 \end{array}\displaystyle \right ],\qquad A_{2}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} -2 & 0\\ 1.4 & -3.1 \end{array}\displaystyle \right ], \\& \bigl\Vert f\bigl(t,x(t-\sigma),x\bigl(t-h_{qr}(t)\bigr)\bigr)\bigr\Vert \leq0.2\bigl\Vert x\bigl(t-h_{qr}(t) \bigr)\bigr\Vert . \end{aligned}$$

Here, the Markov chains are generated by \(\Pi=\bigl [{\scriptsize\begin{matrix}{} -6 & 6\cr 6 & -6\end{matrix}} \bigr ]\), \(P=\bigl [ {\scriptsize\begin{matrix}{} -4 & 4\cr 5 & -5\end{matrix}} \bigr ]\), and \(L=\bigl [ {\scriptsize\begin{matrix}{} -7 & 7\cr 7 & -7\end{matrix}} \bigr ]\) with step size \(\triangle=0.01\); a sample path is shown in Figure 3. Let \(f=0.2[x_{1}(t-h_{qr}(t)),x_{2}(t-h_{qr}(t))]^{T}\), and take the mode-dependent delays \(h_{1}(t,\delta_{t}=1)=1+\sin t\), \(h_{1}(t,\delta_{t}=2)=0.5+0.5\cos t\), \(h_{2}(t,\ell_{t}=1)=0.2+0.2\sin t\), and \(h_{2}(t,\ell_{t}=2)=0.25+0.25\cos t\). We can then take \(h_{1}(t)=1.5+0.5\sin t\) and \(h_{2}(t)=0.55+0.25\cos t\), so that \(h_{1}=2\), \(h_{2}=0.8\), \(\mu_{1}=0.5\), and \(\mu_{2}=0.25\). Figures 4-6 depict the state trajectories of system (1) for different values of σ, which indicates the sensitivity of the neural networks to the time delay in the leakage term.
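The mode paths in Figure 3 can be generated by discretizing the continuous-time Markov chains with step \(\triangle\): over one step, the transition probability matrix of a chain with generator \(G\) is \(e^{\triangle G}\) (approximately \(I+\triangle G\) for small \(\triangle\)). A minimal MATLAB sketch, assuming all three chains start in mode 1:

% Minimal sketch: sample paths of the three Markov chains of
% Example 4.3.  Over a step of length Delta, the one-step transition
% matrix of a chain with generator G is expm(Delta*G).
Pi = [-6 6; 6 -6];  P = [-4 4; 5 -5];  L = [-7 7; 7 -7];
Delta = 0.01;  N = 1000;             % N steps of length Delta

gens = {Pi, P, L};
modes = zeros(3, N+1);
modes(:, 1) = 1;                     % assumed: all chains start in mode 1
for c = 1:3
    T = expm(Delta*gens{c});         % one-step transition matrix
    for k = 1:N
        p = T(modes(c, k), :);       % transition row of the current mode
        modes(c, k+1) = find(rand <= cumsum(p), 1);   % sample next mode
    end
end
stairs((0:N)*Delta, modes'); ylim([0.5 2.5]);
xlabel('t'); ylabel('mode'); title('Sample paths of the Markov chains');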

Figure 3

The Markov chain generated by Π with \(\pmb{\triangle=0.01}\).

Figure 4

State trajectories of system (1) with \(\pmb{\sigma=0.1}\).

Figure 5

State trajectories of system (1) with \(\pmb{\sigma=0.4}\).

Figure 6

State trajectories of system (1) with \(\pmb{\sigma=0.8}\).

5 Conclusions

In this paper, based on the extended Wirtinger inequality and the reciprocally convex method, the robust stability problem for Markovian jump neural networks with leakage delay, two additive time-varying delays, and nonlinear perturbations has been investigated. The Markovian jumping parameters in the connection weight matrices and in each of the two additive discrete delays are assumed to be different in the system model. Accordingly, a weak infinitesimal operator acting on the Lyapunov-Krasovskii functional is first proposed. The relationship between the time-varying delays and their upper delay bounds is efficiently utilized to estimate the time derivative of the Lyapunov-Krasovskii functional, which shows that more information on the lower and upper bounds of the time-varying delays can be used. By constructing a newly augmented Lyapunov-Krasovskii functional and using the convex polyhedron method, several sufficient criteria are derived to guarantee the stability of the proposed model for all admissible parameter uncertainties. Numerical examples and their simulations are given to show the effectiveness and usefulness of the proposed method. In future work, we will study the state estimation, \(H_{\infty}\) performance, and passivity analysis of the proposed model.

References

  1. Liu, GP: Nonlinear Identification and Control: A Neural Network Approach. Springer, London (2001)

  2. Shatyrko, A, Khusainov, D: Absolute interval stability of indirect regulating systems of neutral type. J. Autom. Inf. Sci. 42, 43-54 (2010)

  3. Shatyrko, A, Khusainov, D: Investigation of absolute stability of nonlinear systems of special kind with aftereffect by Lyapunov functions method. J. Autom. Inf. Sci. 43, 61-75 (2011)

  4. Gu, K, Kharitonov, V, Chen, J: Stability of Time-Delay Systems. Birkhäuser, Basel (2003)

  5. Shatyrko, A, Diblik, J, Khusainov, D, Ruzickova, M: Stabilization of Lur’e-type nonlinear control systems by Lyapunov-Krasovskii functionals. Adv. Differ. Equ. 2012, 229 (2012)

  6. Liu, P: Improved delay-dependent robust stability criteria for recurrent neural networks with time-varying delays. ISA Trans. 52, 30-35 (2013)

  7. Lee, T, Park, J, Kwon, O, Lee, S: Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Netw. 46, 99-108 (2013)

  8. Kwon, O, Park, M, Park, J, Lee, S, Cha, E: On stability analysis for neural networks with interval time-varying delays via some new augmented Lyapunov-Krasovskii functional. Commun. Nonlinear Sci. Numer. Simul. 19, 3184-3201 (2014)

  9. Lin, C, Wang, Q, Lee, T: A less conservative robust stability test for linear uncertain time-delay systems. IEEE Trans. Autom. Control 51, 87-91 (2006)

  10. Li, C, Huang, T: On the stability of nonlinear systems with leakage delay. J. Franklin Inst. 346, 366-377 (2009)

  11. Peng, S: Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Nonlinear Anal., Real World Appl. 11, 2141-2151 (2010)

  12. Li, X, Fu, X, Rakkiyappan, R: Delay-dependent stability analysis for a class of dynamical systems with leakage delay and nonlinear perturbations. Appl. Math. Comput. 226, 10-19 (2014)

  13. Zhao, Z, Song, Q, He, S: Passivity analysis of stochastic neural networks with time-varying delays and leakage delay. Neurocomputing 125, 22-27 (2014)

  14. Nagamani, G, Ramasamy, S: Stochastic dissipativity and passivity analysis for discrete-time neural networks with probabilistic time-varying delays in the leakage term. Appl. Math. Comput. 289, 237-257 (2016)

  15. Nagamani, G, Ramasamy, S: Dissipativity and passivity analysis for discrete-time T-S fuzzy stochastic neural networks with leakage time-varying delays based on Abel lemma approach. J. Franklin Inst. 353, 3313-3342 (2016)

  16. Zhang, D, Yu, L: \(H_{\infty}\) filtering for linear neutral systems with mixed time-varying delays and nonlinear perturbations. J. Franklin Inst. 347, 1374-1390 (2010)

  17. Han, Q: Robust stability for a class of linear systems with time-varying delay and nonlinear perturbations. Comput. Math. Appl. 47, 1201-1209 (2004)

  18. Park, J: Novel robust stability criterion for a class of neutral systems with mixed delays and nonlinear perturbations. Appl. Math. Comput. 161, 413-421 (2005)

  19. Zhang, W, Cai, X, Han, Z: Robust stability criteria for systems with interval time-varying delay and nonlinear perturbations. J. Comput. Appl. Math. 234, 174-180 (2010)

  20. Wu, Z, Shi, P, Su, H, Chu, J: Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled data. IEEE Trans. Cybern. 43, 1796-1806 (2013)

  21. Wang, Z, Liu, Y, Yu, L, Liu, X: Exponential stability of delayed recurrent neural networks with Markovian jumping parameters. Phys. Lett. A 356, 346-352 (2006)

  22. Liu, Y, Wang, Z, Liang, J, Liu, X: Stability and synchronization of discrete-time Markovian jumping neural networks with mixed mode-dependent time delays. IEEE Trans. Neural Netw. 20, 1102-1116 (2009)

  23. Yang, X, Cao, J, Lu, J: Synchronization of randomly coupled neural networks with Markovian jumping and time-delay. IEEE Trans. Circuits Syst. I, Regul. Pap. 60, 363-376 (2013)

  24. Zhu, Q, Cao, J: Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays. IEEE Trans. Neural Netw. Learn. Syst. 23, 467-479 (2012)

  25. Ali, MS: Robust stability of stochastic uncertain recurrent neural networks with Markovian jumping parameters and time-varying delays. Int. J. Mach. Learn. Cybern. 5, 13-22 (2014)

  26. Nagamani, G, Radhika, T: Dissipativity and passivity analysis of Markovian jump neural networks with two additive time-varying delays. Neural Process. Lett. 44, 571-592 (2016)

  27. Cheng, J, Zhu, H, Ding, Y, Zhong, S, Zhong, Q: Stochastic finite-time boundedness for Markovian jumping neural networks with time-varying delays. Appl. Math. Comput. 242, 281-295 (2014)

  28. Ren, J, Zhu, H, Zhong, S, Zeng, Y, Zhang, Y: State estimation of recurrent neural networks with two Markovian jumping parameters and mixed delays. In: 34th Chinese Control Conference, Hangzhou Dianzi University, Hangzhou, 28-30 July (2015)

  29. Chen, G, Xia, J, Zhuang, G: Delay-dependent stability and dissipativity analysis of generalized neural networks with Markovian jump parameters and two delay components. J. Franklin Inst. 353, 2137-2158 (2016)

  30. Li, Q, Zhu, Q, Zhong, S, Wang, X, Cheng, J: State estimation for uncertain Markovian jump neural networks with mixed delays. Neurocomputing 182, 82-93 (2016)

  31. Shen, Y, Wang, J: Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 23, 87-96 (2012)

  32. Faydasicok, O, Arik, S: A new robust stability criterion for dynamical neural networks with multiple time delays. Neurocomputing 99, 290-297 (2013)

  33. Nagamani, G, Ramasamy, S: Dissipativity and passivity analysis for uncertain discrete-time stochastic Markovian jump neural networks with additive time-varying delays. Neurocomputing 174, 795-805 (2016)

  34. Liu, Y, Lee, S, Lee, H: Robust delay-dependent stability criteria for uncertain neural networks with two additive time-varying delay components. Neurocomputing 151, 770-775 (2015)

  35. Nagamani, G, Radhika, T: A quadratic convex combination approach on robust dissipativity and passivity analysis for Takagi-Sugeno fuzzy Cohen-Grossberg neural networks with time-varying delays. Math. Methods Appl. Sci. 39, 3880-3896 (2016)

  36. Kao, Y, Guo, J, Wang, C, Sun, X: Delay-dependent robust exponential stability of Markovian jumping reaction-diffusion Cohen-Grossberg neural networks with mixed delays. J. Franklin Inst. 349, 1972-1988 (2012)

  37. Wu, Z, Park, J, Su, H, Chu, J: Robust dissipativity analysis of neural networks with time-varying delay and randomly occurring uncertainties. Nonlinear Dyn. 69, 1323-1332 (2012)

  38. Samidurai, R, Manivannan, R: Robust passivity analysis for stochastic impulsive neural networks with leakage and additive time-varying delay components. Appl. Math. Comput. 268, 743-762 (2015)

  39. Wu, H, Liao, X, Feng, W, Guo, S, Zhang, W: Robust stability analysis of uncertain systems with two additive time-varying delay components. Appl. Math. Model. 33, 4345-4353 (2009)

  40. Xiao, N, Jia, Y: New approaches on stability criteria for neural networks with two additive time-varying delay components. Neurocomputing 118, 150-156 (2013)


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61273015), the Key Project of Anhui Province Universities and Colleges Natural Science Foundation (No. KJ2016A553), and the China Scholarship Council (CSC).

Author information

Correspondence to Jiaojiao Ren.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors drafted the manuscript, and they read and approved the final version.

Appendix

We have

$$\begin{aligned}& \alpha_{1i}(t)=\biggl[x^{T}(t)- \int_{t-\sigma}^{t} x^{T}(s)\,ds\,C_{i}^{T}, \int_{t-\sigma}^{t}x^{T}(s)\,ds,x^{T}(t-\sigma), \int_{t-h}^{t}x^{T}(s)\,ds,x^{T}(t-h) \biggr]^{T}, \\& \alpha_{2}(t)=\bigl[x^{T}(t),\dot{x}^{T}(t) \bigr]^{T},\qquad f_{1}(t,s)=\biggl[x^{T}(s), \dot{x}^{T}(s), \int _{s}^{t}x^{T}(u)\,du, \int_{s}^{t}\dot{x}^{T}(u)\,du \biggr]^{T}, \\& f_{2}(t,s)=\biggl[x^{T}(s),\dot{x}^{T}(s), \int_{s}^{t}\dot {x}^{T}(u)\,du \biggr]^{T},\qquad f_{3}(t,s)=\biggl[x^{T}(s), \dot{x}^{T}(s), \int_{s}^{t-h_{1}}\dot {x}^{T}(u)\,du \biggr]^{T}, \\& f_{4}(t,s)=\biggl[x^{T}(s),\dot{x}^{T}(s), \int_{s}^{t-h_{2}}\dot {x}^{T}(u)\,du \biggr]^{T},\qquad f_{5}(t,s)=\biggl[x^{T}(s), \int_{s}^{t}\dot{x}^{T}(u)\,du \biggr]^{T}, \\& \mathcal{F}=f\bigl(t,x(t-\sigma),x\bigl(t-h_{qr}(t)\bigr)\bigr), \qquad U_{i}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} U_{i1}& U_{i2}\\ U_{i3}&U_{i4} \end{array}\displaystyle \right ], \\& Q_{k}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} Q_{k1}&Q_{k2}&\cdots& Q_{km}\\ \ast& Q_{k(m+1)}&\cdots&Q_{k(2m-1)}\\ \vdots&\vdots&\ddots&\vdots\\ \ast&\ast&\cdots&Q_{k(\frac{m(m+1)}{2})} \end{array}\displaystyle \right ]\in R^{{mn}\times{mn}}\quad \bigl(k\in Z^{+}\bigr), \\& \mathcal{P}_{i}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{}} S_{i1}&S_{i2}&U_{i1}& U_{i2}\\ \ast& S_{i3}&U_{i3}&U_{i4}\\ \ast&\ast&S_{i1}&S_{i2}\\ \ast&\ast&\ast&S_{i3} \end{array}\displaystyle \right ] \quad (i=1,2,3), \qquad \mathcal{P}_{4}=\left [ \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} S_{5}&T_{1}&T_{2}\\ \ast& S_{5}&T_{3}\\ \ast&\ast&S_{5} \end{array}\displaystyle \right ], \\& \Pi_{1i}=\bigl[e_{1}-e_{14}C_{i}^{T},e_{14},e_{3},e_{17},e_{9} \bigr], \\& \Pi _{2i}=\bigl[-e_{1}C_{i}^{T}+e_{13}A_{i}^{T}+e_{25},e_{1}-e_{3},e_{4} ,e_{1}-e_{9},e_{10}\bigr], \\& \Pi_{3}=[e_{1},e_{2},0_{25n\times n},0_{25n\times n}]R_{1}[e_{1},e_{2},0_{25n\times n},0_{25n\times n}]^{T} \\& \hphantom{\Pi_{3}={}}{}- [e_{3},e_{4},e_{14},e_{1}-e_{3}]R_{1}[e_{3},e_{4},e_{14},e_{1}-e_{3}]^{T} +\sigma^{2}[e_{1},e_{2},0_{25n\times n}]R_{2}[e_{1},e_{2},0_{25n\times n}]^{T} \\& \hphantom{\Pi_{3}={}}{} -[e_{14},e_{1}-e_{3},\sigma e_{1}-e_{14}]R_{2}[e_{14},e_{1}- e_{3},\sigma e_{1}-e_{14}]^{T} \\& \hphantom{\Pi_{3}={}}{}+e_{1} \biggl(\sigma X_{11}+X_{14}+X_{14}^{T}+ \frac{\sigma^{4}}{4}R_{4}\biggr)e_{1}^{T} \\& \hphantom{\Pi_{3}={}}{} +\sigma e_{2}R_{3}e_{2}^{T}+ e_{3}\bigl(\sigma X_{22}-X_{24}-X_{24}^{T} \bigr)e_{3}^{T}+\sigma e_{4}X_{33}e_{4}^{T}-e_{21}R_{4}e_{21}^{T}, \\& \Pi_{4}=\operatorname{Sym}\biggl\{ [e_{14},e_{1}-e_{3},e_{21}, \sigma e_{1}-e_{14}]R_{1}[0_{25n\times n},0_{25n\times n},e_{1},e_{2}]^{T} \\& \hphantom{\Pi_{4}={}}{} +e_{1}\bigl(\sigma X_{12}-X_{14}+X_{24}^{T} \bigr)e_{3}^{T}+e_{1}\bigl(\sigma X_{13}+X_{34}^{T}\bigr)e_{4}^{T} \\& \hphantom{\Pi_{4}={}}{} +\sigma\biggl[e_{21},\sigma e_{1}-e_{14}, \frac{\sigma^{2}}{2}e_{1} -e_{21}\biggr]R_{2}[0_{25n\times n},0_{25n\times n},e_{2}]^{T}+e_{3} \bigl(\sigma X_{23}-X_{34}^{T}\bigr)e_{4}^{T} \biggr\} , \end{aligned}$$
$$\begin{aligned}& \Pi_{5}=[e_{1},e_{2},0_{25n\times n},0_{25n\times n} ](Q_{1}+Q_{2}+Q_{3})[e_{1},e_{2},0_{25n\times n},0_{25n \times n}]^{T} \\& \hphantom{\Pi_{5}={}}{}-[e_{5},e_{6},e_{15},e_{1}-e_{5}] Q_{1}[e_{5},e_{6},e_{15},e_{1}-e_{5}]^{T} \\& \hphantom{\Pi_{5}={}}{}-[e_{7},e_{8},e_{16},e_{1}-e_{7}]Q_{2}[e_{7},e_{8},e_{16},e_{1}-e_{7}]^{T} \\& \hphantom{\Pi_{5}={}}{} -[e_{9},e_{10},e_{17},e_{1}-e_{9}]Q_{3}[e_{9},e_{10},e_{17},e_{1}-e_{9}]^{T}, \\& \Pi_{6}=[e_{5},e_{6},0_{25n\times n}]Q_{4}[e_{5},e_{6},0_{25n\times n}]^{T}+[e_{7},e_{8},0_{25n\times n}]Q_{5}[e_{7},e_{8},0_{25n\times n}]^{T} \\& \hphantom{\Pi_{6}={}}{} +[e_{1},0_{25n\times n}](Q_{6}+Q_{7}+Q_{8})[e_{1}, 0_{25n\times n}]^{T} -[e_{9},e_{10},e_{5}-e_{9}]Q_{4}[e_{9},e_{10},e_{5}-e_{9}]^{T} \\& \hphantom{\Pi_{6}={}}{} -[e_{9},e_{10},e_{7}-e_{9}]Q_{5}[e_{9},e_{10},e_{7}- e_{9}]^{T}-(1-\mu_{1})[e_{11},e_{1}-e_{11}]Q_{6}[e_{11},e_{1}- e_{11}]^{T} \\& \hphantom{\Pi_{6}={}}{} -(1-\mu_{2})[e_{12},e_{1}-e_{12}]Q_{7}[e_{12},e_{1} -e_{12}]^{T}-(1-\mu)[e_{13},e_{1}-e_{13}]Q_{8}[e_{13}, e_{1}-e_{13}]^{T}, \\& \Pi_{7}=\operatorname{Sym}\bigl\{ \bigl([e_{15},e_{1}-e_{5},e_{22},h_{1}e_{1}-e_{15}]Q_{1}+[e_{16},e_{1}-e_{7},e_{23},h_{2}e_{1}-e_{16}]Q_{2} \\& \hphantom{\Pi_{7}={}}{} +[e_{17},e_{1}-e_{9},e_{24},he_{1}-e_{17}]Q_{3} \bigr)[0_{25n\times n},0_{25n\times n},e_{1},e_{2}]^{T} \\& \hphantom{\Pi_{7}={}}{} +[e_{17}-e_{15},e_{5}-e_{9},h_{2}e_{5}-e_{17}+e_{15}]Q_{4}[0_{25n\times n},0_{25n\times n},e_{6}]^{T} \\& \hphantom{\Pi_{7}={}}{} +[e_{17}-e_{16},e_{7}-e_{9},h_{1}e_{7}-e_{17}+e_{16}]Q_{5}[0_{25n\times n},0_{25n\times n},e_{8}]^{T} \bigr\} , \\& \Pi_{8}=\operatorname{Sym}\bigl\{ e_{18}(Q_{62}-Q_{63})e_{2}^{T}+e_{19}(Q_{72}-Q_{73})e_{2}^{T}+e_{20}(Q_{82}-Q_{83})e_{2}^{T} \bigr\} , \\& \Pi_{9}=[e_{1},e_{2}]\bigl(h_{1}^{2}S_{1}+h_{2}^{2}S_{2}+h^{2}S_{3} \bigr)[e_{1},e_{2}]^{T}+e_{2} \bigl(h^{2}S_{4}+h_{1}^{2}S_{5} \bigr)e_{2}^{T}+h_{2}^{2}e_{6}S_{6}e_{6}^{T} \\& \hphantom{\Pi_{9}={}}{} -\frac{\pi^{2}}{4h^{2}}e_{17}S_{4}e_{17}^{T}+ \operatorname{Sym}\biggl\{ \frac{\pi ^{2}}{4h}e_{17}S_{4}e_{9}^{T} \biggr\} -\frac{\pi^{2}}{4}e_{9}S_{4}e_{9}^{T}, \\& \Pi_{10}=-[e_{18},e_{1}-e_{11},e_{15}-e_{18},e_{11}-e_{5}] \mathcal {P}_{1}[e_{18},e_{1}-e_{11},e_{15}-e_{18},e_{11}-e_{5}]^{T} \\& \hphantom{\Pi_{10}={}}{} -[e_{19},e_{1}-e_{12},e_{16}-e_{19},e_{12}-e_{7}] \mathcal {P}_{2}[e_{19},e_{1}-e_{12},e_{16}-e_{19},e_{12}-e_{7}]^{T} \\& \hphantom{\Pi_{10}={}}{} -[e_{20},e_{1}-e_{13},e_{17}-e_{20},e_{13}-e_{9}] \mathcal {P}_{3}[e_{20},e_{1}-e_{13},e_{17}-e_{20},e_{13}-e_{9}]^{T}, \\& \Pi_{11}=e_{1}\biggl(\frac{h_{1}^{4}}{4}S_{7}+ \frac{h_{2}^{4}}{4}S_{8}+\frac {h^{4}}{4}S_{9} \biggr)e_{1}^{T}-e_{22}S_{7}e_{22}^{T}-e_{23}S_{8}e_{23}^{T}-e_{24}S_{9}e_{24}^{T}, \\& \Pi_{12}=2\varepsilon\alpha^{2}e_{3}E_{\alpha}^{T}E_{\alpha}e_{3}^{T}+2\varepsilon \beta^{2}e_{13}E_{\beta}^{T}E_{\beta}e_{13}^{T}-\varepsilon e_{25}e_{25}^{T}, \\& \Pi_{13}=\operatorname{Sym}\bigl\{ [e_{1}G_{1}+e_{2}G_{2}+e_{3}G_{3}+e_{4}G_{4}+e_{13}G_{5}+e_{10}G_{6}+e_{25}G_{7}] \\& \hphantom{\Pi_{13}={}}{} \times\bigl[-e_{2}^{T}-C_{i}e_{3}^{T}+A_{i}e_{13}^{T}+e_{25}^{T} \bigr]\bigr\} , \\& \Xi_{1}=\operatorname{Sym}\bigl\{ e_{1}(Q_{63}+Q_{83})e_{2}^{T} \bigr\} ,\qquad \Xi_{2}=\operatorname{Sym}\bigl\{ e_{1}(Q_{73}+Q_{83})e_{2}^{T} \bigr\} , \\& \Sigma_{1}=-[e_{1}-e_{11},e_{11}-e_{13},e_{13}- e_{5}]\mathcal{P}_{4}[e_{1}-e_{11},e_{11}-e_{13},e_{13}- e_{5}]^{T} \\& \hphantom{\Sigma_{1}={}}{}-\frac{\pi^{2}}{4h_{2}^{2}}[e_{17}-e_{15}] S_{6}[e_{17}-e_{15}]^{T} \\& \hphantom{\Sigma_{1}={}}{}+\operatorname{Sym}\biggl\{ \frac{\pi ^{2}}{4h_{2}}(e_{17}-e_{15})S_{6}e_{9}^{T} \biggr\} -\frac{\pi^{2}}{4}e_{9}S_{6}e_{9}^{T}, \\& \Sigma_{2}=-[e_{1}-e_{11},e_{11}-e_{5}] \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{5}& T_{4}\\ \ast&S_{5} \end{array}\displaystyle \right ] [e_{1}- e_{11},e_{11}-e_{5}]^{T}-[e_{5}-e_{13},e_{13}-e_{9}] \\& \hphantom{\Sigma_{2}={}}{}\times \left [ \textstyle\begin{array}{@{}c@{\quad}c@{}} S_{6}& T_{5}\\ \ast&S_{6} \end{array}\displaystyle \right ] [e_{5}-e_{13},e_{13}-e_{9}]^{T}, \\& \Omega_{miqr}(t)=\Pi_{1i}P_{iqr} \Pi_{2i}^{T}+\Pi_{2i}P_{iqr}\Pi _{1i}^{T}+\sum_{j=1}^{N_{1}} \pi_{ij}\Pi_{1j}P_{jqr}\Pi_{1j}^{T} \\& \hphantom{\Omega_{miqr}(t)={}}{}+ \sum_{k_{1}=1}^{N_{2}}p_{qk_{1}} \Pi_{1i}P_{ik_{1}r}\Pi_{1i}^{T} +\sum_{k_{2}=1}^{N_{3}}l_{rk_{2}} \Pi_{1i}P_{iqk_{2}}\Pi_{1i}^{T} \\& \hphantom{\Omega_{miqr}(t)={}}{}+ \sum _{k=3}^{13}\Pi_{k}+ \Sigma_{m}+h_{1}(t)\Xi_{1}+h_{2}(t) \Xi_{2}\quad (m=1,2), \\& \xi^{T}(t)=\biggl[x^{T}(t),\dot{x}^{T}(t),x^{T}(t- \sigma),\dot{x}^{T}(t-\sigma ),x^{T}(t-h_{1}), \dot{x}^{T}(t-h_{1}),x^{T}(t-h_{2}), \\& \hphantom{\xi^{T}(t)={}}\dot{x}^{T}(t-h_{2}),x^{T}(t-h), \dot{x}^{T}(t-h),x^{T}\bigl(t -h_{1}(t) \bigr),x^{T}\bigl(t-h_{2}(t)\bigr),x^{T}\bigl(t-h(t) \bigr), \\& \hphantom{\xi^{T}(t)={}} \int_{t-\sigma}^{t}x^{T}(s)\,ds, \int_{t- h_{1}}^{t}x^{T}(s)\,ds, \int_{t-h_{2}}^{t}x^{T}(s)\,ds, \int_{t-h}^{t}x^{T}(s)\,ds, \int_{t-h_{1}(t)}^{t}x^{T}(s)\,ds, \\& \hphantom{\xi^{T}(t)={}} \int_{t-h_{2}(t)}^{t}x^{T}(s)\,ds, \int_{t-h(t)}^{t}x^{T}(s)\,ds, \int_{t-\sigma}^{t} \int_{s}^{t} x^{T}(u)\,du\,ds, \int_{t-h_{1}}^{t} \int_{s}^{t}x^{T}(u)\,du\,ds, \\& \hphantom{\xi^{T}(t)={}} \int_{t-h_{2}}^{t} \int_{s}^{t}x^{T}(u)\,du\,ds, \int_{t-h}^{t} \int _{s}^{t}x^{T}(u)\,du\,ds, \mathcal{F}^{T}\biggr]. \end{aligned}$$

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Ren, J., Zhu, H., Zhong, S. et al. Robust stability of uncertain Markovian jump neural networks with mode-dependent time-varying delays and nonlinear perturbations. Adv Differ Equ 2016, 327 (2016). https://doi.org/10.1186/s13662-016-1021-1
