
Exponential passivity conditions for neutral stochastic neural networks with leakage delay under Markovian jump with partially unknown transition probabilities

Abstract

This paper studies the problem of exponential passivity for neutral stochastic neural networks (NSNN) with leakage delay and Markovian jump, where the Markovian jump has partially unknown transition probabilities (PUTPs). By utilizing the Itô differential rule, choosing a suitable Lyapunov–Krasovskii functional, and combining these with inequality techniques, sufficient delay-dependent exponential passivity criteria are obtained. These conditions are provided in the form of linear matrix inequalities (LMIs), which can be easily solved by the LMI toolbox in Matlab. Finally, two simulated numerical examples are discussed in detail to illustrate the effectiveness of the established results.

1 Introduction

During the past few decades, neural networks (NN) with time delays have found wide application in areas such as image processing, fixed-point computation, pattern recognition, associative memory, and so on. A lot of interesting results on the dynamical behaviors of different delayed neural networks have been published in the literature (see [1–9] and the references therein).

As a special kind of time delay, the neutral delay has been considered in various NN in recent years. Neutral-type neural networks (NTNN) are more difficult to deal with than traditional delayed neural networks. Park and Kwon, in [10], studied NTNN with interval time-varying delays and obtained a delay-dependent stability criterion by the Lyapunov stability theory and an inequality approach. In [11], the authors considered NTNN with both discrete and unbounded distributed delays, and some delay-dependent conditions were established. Mahmoud and Ismail [12] derived robust exponential stability conditions for NTNN by using the Lyapunov–Krasovskii functional and an integral inequality. In [13–15], the design of state estimators and the synchronization of NTNN were considered. Recently, another typical time delay, called leakage (or forgetting) delay, has received extensive attention; it exists widely in the negative feedback terms of neural networks and has a great impact on the dynamic behaviors of delayed neural networks, as discussed in detail in [16–19]. The synchronization problem for coupled NN with interval time-varying delays and leakage delay was considered in [19], and some novel delay-dependent conditions for the synchronization of the networks were established based on a suitably constructed Lyapunov–Krasovskii functional and Finsler’s lemma.

As an important technique, passivity theory has played a key role in analyzing the stability of time-delay systems [20, 21] and has been extensively applied in many practical areas such as signal processing [22], complexity [23], chaos and synchronization control [24], and fuzzy control [25]. Accordingly, the passivity issue of NN with time-varying delays has attracted the attention of many researchers, and a large number of passivity results on neural network systems with time-varying delays have been reported in the literature; see [26–35] and the references therein. In [30], the authors studied the passivity problem of NN with time-varying delays, and some improved criteria were obtained. [32] considered the passivity issue of uncertain NN with discrete and distributed time-varying delays, and some sufficient conditions were obtained via a novel Lyapunov functional together with an inequality approach. In [33], the passivity analysis of stochastic NN with time-varying delays and parametric uncertainties was performed, and both delay-independent and delay-dependent stochastic passivity conditions were presented. In [34], the problem of passivity analysis was studied for a class of discrete-time stochastic NN with time-varying delays. In [35], the passivity issue of stochastic NN with interval time-varying delay and norm-bounded parameter uncertainties was considered, and some delay-dependent passivity criteria were obtained by constructing appropriate Lyapunov–Krasovskii functionals and employing an improved inequality.

Additionally, NN with Markovian jumping parameters have become a subject of great importance in many practical processes and have been discussed by many researchers [36–47]. For instance, [37] considered the exponential passivity problem (EPP) of NN, and some sufficient conditions were obtained. Following this line, the authors of [38] investigated delay-dependent stability for Markovian jumping NTNN with time-varying delays. Furthermore, [39] studied the EPP of Markovian jumping stochastic NN with leakage and distributed delays, and some delay-dependent sufficient conditions were obtained by the Lyapunov stability theory and the free-weighting matrix approach. Generally, from a stabilization standpoint, it is significant and necessary to study more general jumping systems with PUTPs, rather than assuming that all the transition probabilities are known. Recently, a lot of stability and stabilization results on Markovian jumping systems with PUTPs have been obtained [48, 49]. For PUTPs, it should be pointed out that only the information on the known elements enters the computation, while the unknown elements need not be considered. To the best of our knowledge, the analysis of the EPP for neutral stochastic neural networks (NSNN) with Markovian jump and leakage delays has received very little attention, and the EPP for NSNN with leakage delays and Markovian jump accompanied by PUTPs has not yet been investigated; it is a challenging problem.

Motivated by the above discussion, we discuss the EPP for Markovian jumping stochastic neutral-type neural networks (SNTNN) with mixed and leakage delays and partially unknown transition probabilities. A Lyapunov–Krasovskii functional method combined with stochastic analysis techniques is developed to obtain sufficient conditions under which the system is globally exponentially passive. By exponential passivity theory, new delay-dependent exponential passivity criteria for Markovian jumping SNTNN with mixed and leakage delays and partially unknown transition probabilities are established in terms of LMIs, which can easily be used to calculate the upper bounds of the time delays and the maximum exponential convergence rate by the LMI toolbox. Finally, numerical examples are provided to show the effectiveness of the proposed methods.

Notations: Throughout this paper, the following notations will be used. \(\mathbb{R}^{n}\) and \(\mathbb{R}^{n \times n}\) denote, respectively, the n-dimensional Euclidean space and the set of all \(n \times n\) real matrices. The superscript T denotes transposition, and the notation \(X \ge Y\) (respectively, \(X > Y\)), where X and Y are symmetric matrices, means that \(X-Y\) is positive semi-definite (respectively, positive definite). \(\mathbf{E}\{\cdot\} \) is the mathematical expectation operator with respect to the given probability measure \(\mathcal{P}\). \(\operatorname{sym}(X)=X+X^{T}\). \(\operatorname{col} \{ \cdots \}\) denotes a column vector. \(I_{n}\) is the \(n \times n\) identity matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2 Model description and preliminaries

Consider the NSNN with mixed and leakage delays expressed by the following state equations:

$$ \begin{gathered} d\Biggl[{v_{i}}(t) - \sum_{j = 1}^{n} {{e_{ij}} {v_{j}}\bigl(t - h(t)\bigr)} \Biggr] \\\quad= \Biggl[ - {a_{i}} {v_{i}}(t - \delta) + \sum _{j = 1}^{n} {{b_{ij}} {f_{j}} \bigl({v_{j}}(t)\bigr)} \\ \quad \quad{} + \sum_{j = 1}^{n} {{c_{ij}} {f_{j}}\bigl({v_{j}}\bigl(t - \tau(t) \bigr)\bigr)} + \sum_{j = 1}^{n} {{d_{ij}} \int_{t - d(t)}^{t} {{f_{j}} \bigl({v_{j}}(s)} \bigr)\,ds} + {u_{i}}(t)\Biggr]\,dt \\ \quad\quad{} + \sum_{j = 1}^{n} {{ \sigma_{ij}}\biggl({v_{j}}(t),{v_{i}}(t - \delta ),{v_{j}}\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {{f_{j}} \bigl({v_{j}}(s)} \bigr)\,ds,{v_{j}}\bigl(t - h(t)\bigr),t \biggr)} \,d{\omega_{j}}(t), \\ {z_{i}}(t) = \sum_{j = 1}^{n} {{w_{1ij}} {f_{j}}\bigl({v_{j}}(t)\bigr)} + \sum _{j = 1}^{n} {{w_{2ij}} {f_{j}}\bigl({v_{j}}\bigl(t - \tau(t)\bigr)\bigr)} + {u_{i}}(t),\end{gathered} $$
(1)

where \(i=1,2,\ldots,n\) and n denotes the number of neurons, \({v_{i}}(t)\) denotes the state of the ith neuron at time t. \({f_{j}}({v_{j}}(t))\) and \({f_{j}}({v_{j}}(t - \tau(t)))\) denote the outputs of the jth unit at times t and \(t-\tau(t)\), respectively. \(b_{ij}\), \(c_{ij}\), \(d_{ij}\), \(e_{ij}\), \(w_{1ij}\), and \(w_{2ij}\) are the entries of the interconnection matrices representing the weight coefficients of the neurons, the scalar \({a_{i}}>0\) denotes the passive decay rate; \({f_{j}}\) is the neuron activation function; \(\tau (t)\), \(h(t)\), and \(d(t)\) denote respectively the discrete, neutral, and distributed time-varying delays with \(0 \le \tau(t) \le\tau\), \(0 \le h (t) \le h\), \(0 \le d(t) \le d\), \(\dot{\tau}(t) \le\mu<1\), and \(\dot{h}(t) \le\eta<1\) for \(t\ge0\), where τ, h, d, μ, and η are constants; \(\delta\ge0\) denotes the constant leakage delay. \(u(t)=[{u_{1}}(t), {u_{2}}(t), \ldots, {u_{n}}(t)]^{T}\) is an external input vector of neurons, \(z(t)=[{z_{1}}(t), {z_{2}}(t), \ldots, {z_{n}}(t)]^{T}\) is the output vector of the neural network. The noise perturbation \(\sigma=({\sigma_{ij}}(\cdot))_{n\times n}:\mathbb{R}^{n}\times\mathbb{R}^{n}\times \mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{+} \rightarrow\mathbb{R}^{n\times n}\) is a Borel measurable function. \(\omega(t)= ({\omega_{1}}(t),{\omega_{2}}(t), \ldots,{\omega_{n}}(t))^{T}\) is an n-dimensional Brownian motion defined on a complete probability space \((\Omega, \mathcal{F}, {\{\mathcal{F}_{t}\}_{t \ge0}}, \mathcal{P})\) with a natural filtration \({\{\mathcal{F}_{t}\}_{t \ge0}}\) generated by \(\{\omega(t) \}\), where we associate Ω with the canonical space generated by \(\omega(t)\) and denote by \({\mathcal {F}_{t}}\) the σ-algebra generated by \(\{\omega (s):0\le s\le t \}\) with the probability measure \(\mathcal{P}\).

For convenience, we rewrite (1) as follows:

$$ \begin{gathered} d\bigl[v(t) - Ev\bigl(t - h(t)\bigr)\bigr]\\\quad=\biggl[ - Av(t - \delta) + Bf\bigl(v(t)\bigr) + Cf\bigl(v\bigl(t - \tau (t)\bigr)\bigr) + D \int_{t - d(t)}^{t} {f\bigl(v(s)\bigr)} \,ds+ u(t)\biggr] \,dt \\ \quad\quad{} + \sigma\biggl(v(t),v(t - \delta),v\bigl(t - \tau(t)\bigr),\int_{t - d(t)}^{t} {f\bigl(v(s)\bigr)} \,ds,v\bigl(t - h(t)\bigr),t\biggr)\,d\omega (t), \\ z(t)={W_{1}}f\bigl(v(t)\bigr) + {W_{2}}f\bigl(v\bigl(t - \tau(t)\bigr)\bigr) + u(t),\end{gathered} $$
(2)

where

$$\begin{gathered} v(t)=\bigl[{v_{1}}(t), \ldots, {v_{n}}(t) \bigr]^{T},\qquad A=\operatorname{diag} \{a_{1},\ldots, a_{n} \},\qquad B=(b_{ij})_{n\times n}, \\ C=(c_{ij})_{n\times n},\qquad D=(d_{ij})_{n\times n}, \qquad E=(e_{ij})_{n\times n},\qquad W_{1}=(w_{1ij})_{n\times n}, \\ W_{2}=(w_{2ij})_{n\times n},\qquad \sigma=\bigl( \sigma_{ij}(\cdot)\bigr)_{n\times n},\qquad f\bigl(v(t)\bigr)= \bigl[{{f}_{1}}\bigl({v_{1}}(t)\bigr),\ldots,{{f}_{n}} \bigl({v_{n}}(t)\bigr)\bigr]^{T}, \\ f\bigl(v\bigl(t-\tau(t)\bigr)\bigr)= \bigl[{{f}_{1}} \bigl({v_{1}}\bigl(t-\tau(t)\bigr)\bigr),\ldots ,{{f}_{n}} \bigl({v_{n}}\bigl(t-\tau(t)\bigr)\bigr)\bigr]^{T}.\end{gathered} $$

Next, we will give the following assumptions.

Assumption 2.1

For any \(p \in\{1, 2, \ldots, n\} \), \({f_{p}}(\cdot)\) is continuous and bounded. Moreover, for each \({f_{p}}(\cdot)\), there exist constants \(l_{p}^{-}\) and \(l_{p}^{+}\) such that

$$ l_{p}^{-} \le\frac{{{f_{p}}({\alpha_{1}}) - {f_{p}}({\alpha_{2}})}}{{{\alpha_{1}} - {\alpha_{2}}}} \le l_{p}^{+} , \quad p = 1,2, \ldots,n, $$
(3)

for any \({\alpha_{1}},{\alpha_{2}} \in\mathbb{R}\), \({\alpha_{1}} \ne {\alpha_{2}}\).
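For example, the commonly used activation function \({f_{p}}(\cdot)=\tanh(\cdot)\) is continuous and bounded and satisfies (3) with \(l_{p}^{-} = 0\) and \(l_{p}^{+} = 1\).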

Assumption 2.2

All the eigenvalues of matrix E are inside the unit circle.

Note that Assumption 2.2 guarantees the stability of the difference system \(v(t) - Ev(t - h(t))=0\). Under the above assumptions, Mawhin’s continuation theorem [50] guarantees that an equilibrium point exists. For convenience, we transform an equilibrium point \({v^{\ast}} = {[v_{1}^{\ast}, \ldots,v_{n}^{\ast}]^{T}}\) to the origin by the translation \(x(t)=v(t)-{v^{\ast}}\), which yields the following system:

$$ \begin{gathered}d\bigl[x(t) - Ex\bigl(t - h(t)\bigr)\bigr]\\\quad= \biggl[ - Ax(t - \delta) + Bg\bigl(x(t)\bigr) + Cg\bigl(x\bigl(t - \tau (t)\bigr)\bigr) + D \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds+ u(t)\biggr] \,dt \\ \quad\quad{}+ \sigma\biggl(x(t),x(t - \delta),x\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t\biggr)\,d\omega(t), \\ z(t)={W_{1}}g\bigl(x(t)\bigr) +{W_{2}}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + u(t),\end{gathered} $$
(4)

where \(x(t)=[{x_{1}}(t), \ldots,{x_{n}}(t)]^{T}\), \(g(x(t))=[{g_{1}}({x_{1}}(t)),\ldots,{g_{n}}({x_{n}}(t))]^{T}\), and \({g_{p}}({x_{p}}(t)) = {f_{p}}({x_{p}}(t)+v_{p}^{\ast})-{f_{p}}(v_{p}^{\ast})\). Noting that \(g(0)=0\), the trivial solution of system (4) exists. Therefore, proving the stability of \({v^{\ast}}\) for system (2) is equivalent to proving the stability of the trivial solution of system (4). On the other hand, Assumption 2.1 can be rewritten as follows:

$$ l_{p}^{-} \le\frac{{{g_{p}}({\alpha_{1}}) - {g_{p}}({\alpha_{2}})}}{{{\alpha_{1}} - {\alpha_{2}}}} \le l_{p}^{+} , \quad p = 1,2, \ldots,n, $$
(5)

for any \({\alpha_{1}},{\alpha_{2}} \in\mathbb{R}\), \({\alpha_{1}} \ne {\alpha_{2}}\).

Let \(\{{r(t),t \ge0} \}\) be a right-continuous Markov process on the probability space \((\Omega, \mathcal{F}, {\{\mathcal {F}_{t}\}_{t \ge0}}, \mathcal{P})\) taking values in a finite state space \(S=\{1,2,\ldots,N\}\) with generator \(\Pi=(\pi_{ij})_{N \times N}\) given by

$$ \mathcal{P} \bigl\{ {{r(t + \Delta)} = j| {r(t) = i} } \bigr\} = \textstyle\begin{cases} {\pi_{ij}}\Delta + o ( \Delta ), &i \ne j, \\ 1 + {\pi_{ii}}\Delta + o ( \Delta ),& i = j, \end{cases} $$

where \(\Delta>0\) and \({\lim_{\Delta \to0}}\frac{{o ( \Delta )}}{\Delta} = 0\), \({\pi_{ij}} \ge0\) is the transition rate from i to j if \(i \ne j\), while \({\pi_{ii}} = -\sum_{j = 1,j \ne i}^{N} {{\pi_{ij}}}\) for each mode i.
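For instance (an illustrative generator constructed here for concreteness, not one of the matrices used in the examples below), for \(N = 3\) an admissible transition rate matrix is

$$ \Pi = \left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} { - 0.7} & {0.3} & {0.4} \\ {0.5} & { - 0.9} & {0.4} \\ {0.2} & {0.6} & { - 0.8} \end{array}\displaystyle \right ], $$

where every off-diagonal entry is nonnegative and each row sums to zero, e.g., \({\pi_{11}} = -({\pi_{12}} + {\pi_{13}}) = -0.7\).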

The transition rates of the continuous-time Markovian jumping systems are regarded as being partly accessible in this paper. For example, the transition rate matrix Π with N operation modes may be expressed as

$$ \Pi = \left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} {{\pi_{11}}} & ? & \cdots & {{\pi_{1N}}} \\ {{\pi_{21}}} & ? & \cdots & ? \\ \vdots & \vdots & \ddots & \vdots \\ ? & {{\pi_{N2}}} & \cdots & {{\pi_{NN}}} \end{array}\displaystyle \right ], $$
(6)

where ? stands for an unknown transition rate. For notational clarity, \(\forall i \in S\), the set \(U^{i}\) denotes \(U^{i}=U_{k}^{i}\cup U_{uk}^{i}\) with

$$\begin{gathered} {U_{k}^{i}} \triangleq\{j: {\pi_{ij}}\text{ is known for }j \in S\}, \\ {U_{uk}^{i}} \triangleq\{j: {\pi_{ij}}\text{ is unknown for }j \in S\}.\end{gathered} $$

Moreover, if \({U_{k}^{i}}\ne\emptyset\), it is further described as

$$ U_{k}^{i}=\bigl\{ k_{1}^{i},k_{2}^{i}, \ldots,k_{m}^{i}\bigr\} , $$
(7)

where m is a positive integer with \(1\le m \le N\), and \(k_{j}^{i}\in Z^{+}\) (\(1\le k_{j}^{i} \le N\), \(j=1,\ldots,m\)) represents the jth known element of the set \(U_{k}^{i}\) in the ith row of the transition rate matrix Π.
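For example, in the first row of the matrix Π in (6), \({\pi_{11}}\) and \({\pi_{1N}}\) are known while \({\pi_{12}}\) is unknown, so \(1, N \in U_{k}^{1}\) and \(2 \in U_{uk}^{1}\).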

Based on the above discussion, in this paper we consider the following Markovian jumping NSNN with mixed and leakage delays:

$$\begin{aligned}& d\bigl[x(t) - E\bigl(r(t)\bigr)x\bigl(t - h(t)\bigr)\bigr] \\& \quad= \biggl[ - A\bigl(r(t)\bigr)x(t - \delta) + B\bigl(r(t)\bigr)g\bigl(x(t)\bigr) \\& \quad\quad{}+ C\bigl(r(t)\bigr)g\bigl(x\bigl(t - \tau(t)\bigr)\bigr)+ D\bigl(r(t) \bigr) \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds + u(t)\biggr] \,dt \\& \quad\quad{}+ \sigma\biggl(x(t),x(t - \delta), x\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t,r(t)\biggr)\,d\omega(t), \\& z(t)={W_{1}}\bigl(r(t)\bigr)g\bigl(x(t)\bigr) + {W_{2}} \bigl(r(t)\bigr)g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + u(t), \end{aligned}$$
(8)

where \(x(t)=[{x_{1}}(t), {x_{2}}(t), \ldots, {x_{n}}(t)]^{T} \in{\mathbb {R}^{n}}\) is the state vector associated with the neurons, \(g(x(t))= [{g_{1}}({x_{1}}(t)), {g_{2}}({x_{2}}(t)), \ldots, {g_{n}}({x_{n}}(t))]^{T}\) is the vector of neuron activation functions, \(u(t)=[{u_{1}}(t), {u_{2}}(t), \ldots, {u_{n}}(t)]^{T}\) is an external input vector of neurons, \(z(t)\) is the output vector of the neural network, \(\{{r(t),t \ge0} \}\) is the Markov chain defined above, and \(\sigma:\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb {R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{+}\times S \rightarrow\mathbb{R}^{n\times n}\).

Remark 2.1

([48])

It is worth noting that if \(U_{k}^{i}=\emptyset\), then \(U^{i}=U_{uk}^{i}\), which implies that no information on transitions between the ith mode and the other \(N-1\) modes is accessible. In this case, a Markovian jumping system with N modes can be viewed as one with \(N-1\) modes. Clearly, when \(U_{uk}^{i}=\emptyset\) and \(U^{i}=U_{k}^{i}\), system (8) reduces to the usual case with completely known transition probabilities.

For convenience, each possible value of \(r(t)\) is denoted by i in system (8). Then, for all \(i \in S\), we have

$$\begin{gathered} A\bigl(r(t)\bigr)=A_{i},\qquad B\bigl(r(t)\bigr)=B_{i},\qquad C\bigl(r(t)\bigr)=C_{i},\qquad D\bigl(r(t)\bigr)=D_{i},\qquad E \bigl(r(t)\bigr)=E_{i}, \\ {W_{1}}\bigl(r(t)\bigr)=W_{1i},\qquad{W_{2}} \bigl(r(t)\bigr)=W_{2i}.\end{gathered} $$

Hence, system (8) can be rewritten as the following Markovian jumping NSNN:

$$\begin{gathered} d\bigl[x(t) - {E_{i}}x\bigl(t - h(t)\bigr)\bigr]\\\quad= \biggl[ - {A_{i}}x(t - \delta) + {B_{i}}g\bigl(x(t)\bigr) + {C_{i}}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + {D_{i}} \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds+ u(t)\biggr] \,dt \\ \quad\quad{} + \sigma\biggl(x(t),x(t - \delta),x\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t,i\biggr)\,d\omega(t), \\ z(t)= W_{1i}g\bigl(x(t)\bigr) + W_{2i}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + u(t).\end{gathered} $$
(9)

To conduct the stochastic analysis, the following assumptions are extensively used.

Assumption 2.3

Assume that \(\sigma:\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb {R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{+}\times S \rightarrow\mathbb{R}^{n\times n}\) is locally Lipschitz continuous and satisfies the linear growth condition [51]. Moreover, σ satisfies

$$ \begin{aligned}[b]&\operatorname{trace}\bigl[\sigma^{T}(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, t, i) \sigma(x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, t, i)\bigr] \\ &\quad\le x_{1}^{T}{T_{1i}}x_{1}+x_{2}^{T}{T_{2i}}x_{2}+x_{3}^{T}{T_{3i}}x_{3}+x_{4}^{T}{T_{4i}}x_{4}+x_{5}^{T}{T_{5i}}x_{5}\end{aligned} $$
(10)

for all \(x_{1}, x_{2}, x_{3}, x_{4}, x_{5} \in\mathbb{R}^{n}\), and \(r(t)=i\), \(i \in S\), where \(T_{1i}\), \(T_{2i}\), \(T_{3i}\), \(T_{4i}\), and \(T_{5i}\) are known positive constant matrices with appropriate dimensions.

Assumption 2.4

\(\sigma(0, 0, 0, 0, 0, t, r(t))\equiv0\).

Let \(x(t;\phi)\) denote the state trajectory from the initial data \({x(\theta) = \phi(\theta)}\) on \(\theta\in[ - \rho,0]\) (\(\rho= \max \{ h,\tau,d,\delta\}>0\)). Clearly, system (8) admits a trivial solution \(x(t;0)\equiv0\) corresponding to the initial data \(\phi=0\). For simplicity, we write \(x(t;\phi)=x(t)\). In addition, \(\phi\triangleq \{\phi(\theta): - \rho \le\theta\le0 \} \in{C}_{{\mathfrak{F}_{0}}}^{2}([ - \rho,0];{\mathbb{R}^{n}})\), where \({C}_{{\mathfrak{F}_{0}}}^{2}([ - \rho,0];{\mathbb{R}^{n}})\) is the family of bounded, continuous, \({\mathfrak{F}_{0}}\)-measurable \({\mathbb{R}^{n}}\)-valued functions on \([ - \rho,0]\).

To end this section, we will introduce a lemma, which is used to prove our main results.

Lemma 2.1

(Gu et al. [52])

For a given symmetric positive definite matrix \(R>0\) and any differentiable function x: \([ {a,b} ] \to{\mathbb{R}^{n}}\), the following inequality holds:

$$ \int_{a}^{b} {{x^{T}}(s)Rx(s)\,ds } \ge \frac{1}{{b - a}}{ \biggl( { \int_{a}^{b} {x(s)\,ds} } \biggr)^{T}}R \biggl( { \int_{a}^{b} {x(s)\,ds} } \biggr). $$
(11)
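Lemma 2.1 can be checked numerically on any sample trajectory. The following toy verification (not part of the proof; it assumes the NumPy package and uses an arbitrary trajectory \(x(s)\) and weight matrix R chosen here only for illustration) evaluates both sides of (11) with trapezoidal quadrature:

import numpy as np

a, b = 0.0, 1.0
s = np.linspace(a, b, 2001)
ds = (b - a) / (len(s) - 1)
w = np.full(s.shape, ds)                       # trapezoidal quadrature weights
w[0] = w[-1] = ds / 2

x = np.vstack([np.sin(3 * s), np.cos(2 * s)])  # sample trajectory x(s) in R^2
R = np.array([[2.0, 0.5], [0.5, 1.0]])         # symmetric positive definite weight

quad = np.einsum('is,ij,js->s', x, R, x)       # x(s)^T R x(s) at each sample point
lhs = (w * quad).sum()                         # approximates int_a^b x^T R x ds
v = (x * w).sum(axis=1)                        # approximates int_a^b x ds
rhs = (v @ R @ v) / (b - a)
print(lhs >= rhs, lhs, rhs)                    # Lemma 2.1 predicts True

For this choice the inequality holds with a comfortable margin, as Lemma 2.1 guarantees for any \(R>0\).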

Itô’s formula (or the generalized Itô formula) is very important for analyzing stochastic dynamical systems [51, 53]. Before discussing the exponential passivity problem for our system, we first introduce Itô’s formula for a general stochastic system with Markovian switching.

Consider a stochastic system \(dx(t)=f(x(t),t,r(t))\,dt+g(x(t),t,r(t))\, d\omega(t)\) on \(t\ge0\) with the initial value \(x(0)=x_{0}\in\mathbb {R}^{n}\), where \(f:\mathbb{R}^{n} \times\mathbb{R}^{+} \times S \rightarrow \mathbb{R}^{n}\), \(g:\mathbb{R}^{n} \times\mathbb{R}^{+} \times S \rightarrow \mathbb{R}^{n\times n}\), and \(r(t)\) is the Markov chain defined above. Let \(C_{1}^{2}(\mathbb{R}^{n} \times\mathbb{R}^{+} \times S; \mathbb{R}^{+})\) be the family of all nonnegative functions \(V(x,t,i)\) on \(\mathbb{R}^{n} \times \mathbb{R}^{+} \times S\) which are continuously twice differentiable in x and differentiable in t. If \(V \in C_{1}^{2}(\mathbb{R}^{n} \times \mathbb{R}^{+} \times S; \mathbb{R}^{+})\), an operator \(\mathfrak{L}V\) is defined from \(\mathbb{R}^{n} \times\mathbb{R}^{+} \times S\) to \(\mathbb {R}\) by

$$\begin{aligned} \mathfrak{L}V(x,t,i)&=V_{t}(x,t,i)+V_{x}(x,t,i)f(x,t,i)+ \frac {1}{2}\operatorname{trace}\bigl[g^{T}(x,t,i)V_{xx}(x,t,i)g(x,t,i) \bigr] \\ &\quad{}+\sum_{j = 1}^{N} {{ \pi_{ij}}V(x,t,j)},\end{aligned} $$

where

$$\begin{gathered} V_{t}(x,t,i)=\frac{\partial V(x,t,i)}{\partial t},\qquad V_{x}(x,t,i)= \biggl( \frac{\partial V(x,t,i)}{\partial x_{1}},\ldots,\frac {\partial V(x,t,i)}{\partial x_{n}} \biggr), \\ V_{xx}(x,t,i)= \biggl(\frac{\partial^{2} V(x,t,i)}{\partial x_{j}\partial x_{k}} \biggr)_{n \times n}.\end{gathered} $$

3 Main results

In this section, we derive delay-dependent exponential passivity criteria for the Markovian jumping NSNN (8) with mixed and leakage delays and partially unknown transition probabilities. By constructing an appropriate Lyapunov–Krasovskii functional and employing some stochastic analysis techniques, a new delay-dependent exponential passivity condition is obtained in the following theorem. For the sake of simplicity, we denote the matrices:

$$\begin{gathered} {L_{1}} =\operatorname{diag} \bigl\{ l_{1}^{-} l_{1}^{+}, l_{2}^{-} l_{2}^{+}, \ldots , l_{n}^{-} l_{n}^{+} \bigr\} ,\qquad {L_{2}} = \operatorname{diag} \biggl\{ \frac{l_{1}^{-} + l_{1}^{+}}{2}, \frac{l_{2}^{-} + l_{2}^{+}}{2}, \ldots, \frac{l_{n}^{-} + l_{n}^{+}}{2} \biggr\} , \\ {e_{m}} = [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} {{0_{n \times(m - 1)n}}} & {{I_{n}}} & {{0_{n \times(11 - m)n}}} \end{array}\displaystyle ]\quad(m = 1, 2, \ldots, 11).\end{gathered} $$
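In particular, \({e_{m}}\) selects the mth \(n\)-dimensional block of the augmented vector \(\xi(t)\) used in the proof of Theorem 3.1 below; for example, \({e_{2}}\xi(t) = x(t - h(t))\) and \({e_{11}}\xi(t) = u(t)\).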

By a simple model transformation, system (9) has the following equivalent form:

$$\begin{aligned}& d\biggl[x(t) - {E_{i}}x\bigl(t - h(t) \bigr)-{A_{i}} \int_{t - \delta}^{t} x(s) \,ds\biggr] \\& \quad= \biggl[ - {A_{i}}x(t) + {B_{i}}g\bigl(x(t)\bigr) + {C_{i}}g \bigl(x\bigl(t - \tau(t)\bigr)\bigr) + {D_{i}} \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds+ u(t)\biggr] \,dt \\& \quad\quad{}+ \sigma \biggl(x(t),x(t - \delta),x\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t,i\biggr)\,d\omega(t), \\& z(t)= W_{1i}g\bigl(x(t)\bigr) + W_{2i}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + u(t). \end{aligned}$$
(12)

Now, we obtain the following delay-dependent exponential passivity condition for system (8) with partially unknown transition probabilities.

Theorem 3.1

Assume that Assumptions 2.1 and 2.3 hold. Then, for given scalars γ, δ, τ, h, d, μ, and η, the Markovian jumping NSNN described by (8) with the partly unknown transition rate matrix (6) is exponentially passive if there exist positive scalars \(\lambda_{i}\), positive definite symmetric matrices \(P_{i}\), \(Q_{1i}\), \(Q_{2}\), \(Q_{3i}\), \(Q_{4i}\), \(Q_{5}\), \(R_{1}\), \(R_{2}\), \(R_{3}\), positive diagonal matrices \(K_{1}\), \(K_{2}\), and matrices \(W_{i}=W_{i}^{T}\), \(S_{i}=S_{i}^{T}\), \(T_{i}=T_{i}^{T}\), \(X_{i}=X_{i}^{T}\) with appropriate dimensions satisfying the following inequalities for all \(i\in S\):

$$\begin{aligned}& \begin{aligned}[b] {\Phi_{i}} &= \operatorname{sym}\bigl\{ \Pi_{1}^{T}{P_{i}} {\Pi_{2}}\bigr\} + \gamma\Pi_{1}^{T}{P_{i}} {\Pi _{1}} + \Pi_{3}^{T}\sum_{j \in U_{k}^{i}} {{ \pi_{ij}}({P_{j}-W_{i}})} {\Pi_{3}} \\ &\quad{}+ {\lambda_{i}}\bigl(e_{1}^{T}{T_{1i}} {e_{1}} + e_{3}^{T}{T_{2i}} {e_{3}} + e_{4}^{T}{T_{3i}} {e_{4}} + e_{8}^{T}{T_{4i}} {e_{8}} + e_{2}^{T}{T_{5i}} {e_{2}}\bigr) \\ &\quad{}+ e_{1}^{T}\bigl({Q_{1i}} + 2{ \delta^{2}} {Q_{2}}+{Q_{3i}}+{Q_{4i}}+2{ \tau ^{2}}R_{2}+2{h^{2}}R_{3} \bigr){e_{1}} \\ &\quad{}- {e^{-\gamma\delta}}e_{3}^{T}{Q_{1i}} {e_{3}} - {e^{-\gamma\delta}}e_{7}^{T}{Q_{2}} {e_{7}}- (1 - \eta) {e^{-\gamma h}}e_{2}^{T}{Q_{3i}} {e_{2}} - (1 - \mu){e^{-\gamma\tau}}e_{4}^{T}{Q_{4i}} {e_{4}} \\ &\quad{}+e_{5}^{T}\bigl({Q_{5}} +{d^{2}} {R_{1}}\bigr){e_{5}}- (1 - \mu){e^{-\gamma\tau}}e_{6}^{T}{Q_{5}} {e_{6}} - {e^{-\gamma d}}e_{8}^{T}{R_{1}} {e_{8}}- {e^{-\gamma\tau }}e_{9}^{T}{R_{2}} {e_{9}} \\ &\quad{}- {e^{-\gamma h}}e_{10}^{T}{R_{3}} {e_{10}}- e_{1}^{T}{L_{1}} {K_{1}} {e_{1}} + 2e_{1}^{T}{L_{2}} {K_{1}} {e_{5}} - e_{5}^{T}{K_{1}} {e_{5}} - e_{4}^{T}{L_{1}} {K_{2}} {e_{4}}+ 2e_{4}^{T}{L_{2}} {K_{2}} {e_{6}} \\ &\quad{}- e_{6}^{T}{K_{2}} {e_{6}}- {(W_{1i}{e_{5}} + W_{2i}{e_{6}}+{e_{11}})^{T}} {e_{11}} - {e_{11}^{T}} {(W_{1i}{e_{5}} + W_{2i}{e_{6}}+{e_{11}})}\\ &< 0,\end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& P_{i} \le\lambda_{i} I, \end{aligned}$$
(14)
$$\begin{aligned}& \sum_{j \in U_{k}^{i}} {{\pi_{ij}}({Q_{1j}} - {S_{i}})} \le\delta{Q_{2}}, \end{aligned}$$
(15)
$$\begin{aligned}& \sum_{j \in U_{k}^{i}} {{\pi_{ij}}({Q_{3j}} - {T_{i}})} \le h {R_{3}}, \end{aligned}$$
(16)
$$\begin{aligned}& \sum_{j \in U_{k}^{i}} {{\pi_{ij}}({Q_{4j}} - {X_{i}})} \le\tau{R_{2}}, \end{aligned}$$
(17)
$$\begin{aligned}& P_{j}-W_{i} \le0, \quad j\in U_{uk}^{i}, j\ne i, \end{aligned}$$
(18)
$$\begin{aligned}& Q_{1j}-S_{i} \le0,\quad j\in U_{uk}^{i}, j\ne i, \end{aligned}$$
(19)
$$\begin{aligned}& Q_{3j}-T_{i} \le0,\quad j\in U_{uk}^{i}, j\ne i, \end{aligned}$$
(20)
$$\begin{aligned}& Q_{4j}-X_{i} \le0,\quad j\in U_{uk}^{i}, j\ne i, \end{aligned}$$
(21)
$$\begin{aligned}& P_{j}-W_{i} \ge0,\quad j\in U_{uk}^{i}, j=i, \end{aligned}$$
(22)
$$\begin{aligned}& Q_{1j}-S_{i} \ge0,\quad j\in U_{uk}^{i}, j=i, \end{aligned}$$
(23)
$$\begin{aligned}& Q_{3j}-T_{i} \ge0,\quad j\in U_{uk}^{i}, j=i, \end{aligned}$$
(24)
$$\begin{aligned}& Q_{4j}-X_{i} \ge0,\quad j\in U_{uk}^{i}, j=i, \end{aligned}$$
(25)

where

$$\begin{gathered} {\Pi_{1}} = {e_{1}} - {E_{i}} {e_{2}} - {A_{i}} {e_{7}},\qquad {\Pi_{2}} = - {A_{i}} {e_{1}} + {B_{i}} {e_{5}} + {C_{i}} {e_{6}} + {D_{i}} {e_{8}} + {e_{11}}, \\ {\Pi_{3}} = {e_{1}} - {E_{j}} {e_{2}} - {A_{j}} {e_{7}}.\end{gathered} $$

Proof

For representation convenience, we denote

$$\begin{gathered} {y_{i}}(t)=x(t) - {E_{i}}x\bigl(t - h(t)\bigr), \\ {\alpha_{i}}(t)=- {A_{i}}x(t-\delta) + {B_{i}}g \bigl(x(t)\bigr) + {C_{i}}g\bigl(x\bigl(t - \tau (t)\bigr)\bigr)+ {D_{i}} \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds+ u(t), \\ {\beta_{i}}(t)=\sigma\biggl(x(t),x(t - \delta),x\bigl(t - \tau(t) \bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t,i\biggr),\end{gathered} $$

then system (9) can be rewritten as

$$ dy_{i}(t)=\alpha_{i}(t)\,dt+\beta_{i}(t)\,d \omega(t). $$

Construct a stochastic Lyapunov–Krasovskii functional candidate for system (8) as follows:

$$ V\bigl(x_{t}, t, r(t)\bigr)=\sum _{k = 1}^{4} {{V_{k}}\bigl(x_{t}, t, r(t)\bigr)}, $$
(26)

where

$$\begin{gathered} {V_{1}}\bigl({x_{t}},t,r(t)\bigr) =\zeta^{T}(t)P \bigl(r(t)\bigr)\zeta(t), \\ {V_{2}}\bigl({x_{t}},t,r(t)\bigr) = \int_{t - \delta}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){Q_{1}}\bigl(r(t)\bigr)x(s)} \,ds + 2\delta \int_{t - \delta}^{t} { \int_{\theta}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){Q_{2}}x(s)} \,ds} \,d\theta, \\ \begin{aligned}{V_{3}}\bigl({x_{t}},t,r(t)\bigr) &= \int_{t - h(t)}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){Q_{3}}\bigl(r(t)\bigr)x(s)}\,ds + \int_{t - \tau(t)}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){Q_{4}}\bigl(r(t)\bigr)x(s)} \,ds \\ &\quad{}+ \int_{t - \tau(t)}^{t} {e^{\gamma (s-t)}} {g^{T}} \bigl(x(s)\bigr){Q_{5}}g\bigl(x(s)\bigr)\,ds,\end{aligned} \\ \begin{aligned}{V_{4}}\bigl({x_{t}},t,r(t)\bigr) &= d \int_{t - d}^{t} { \int_{\theta}^{t} {{e^{\gamma (s-t)}} {g^{T}} \bigl(x(s)\bigr){R_{1}}g\bigl(x(s)\bigr)} \,ds} \,d\theta \\ &\quad{}+ 2\tau \int_{t - \tau}^{t} { \int_{\theta}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){R_{2}}x(s)} \,ds} \,d\theta \\ &\quad{}+ 2h \int_{t - h}^{t} { \int_{\theta}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){R_{3}}x(s)} \,ds} \,d\theta,\end{aligned}\end{gathered} $$

with \(\zeta(t)=x(t)- E(r(t))x(t - h(t))-A(r(t))\int_{t - \delta}^{t} x(s) \,ds\).

Then it follows from Itô’s formula that

$$ dV({x_{t}},t,i)=\mathfrak{L}V({x_{t}},t,i) \,dt+ V_{x} \beta_{i}(t)\,d\omega(t), $$
(27)

where

$$ \mathfrak{L} {V_{1}}({x_{t}},t,i)= \operatorname{sym}\bigl\{ \Pi_{1}^{T}{P_{i}} { \Pi_{2}}\bigr\} + \operatorname{trace}\bigl(\beta _{i}^{T}(t){P_{i}} {\beta_{i}}(t)\bigr)+\Pi_{3}^{T}\sum _{j = 1}^{N} {{\pi_{ij}} {P_{j}}} { \Pi_{3}}. $$
(28)

On the other hand, by Assumption 2.3 and condition (14), it follows that

$$\begin{aligned}& \begin{aligned}[b]&\operatorname{trace}\bigl[\beta_{i}^{T}(t){ \beta_{i}}(t)\bigr]\\&\quad\le{\lambda_{i}} \biggl[x^{T}(t){T_{1i}}x(t)+x^{T}(t - \delta){T_{2i}}x(t - \delta)+x^{T}\bigl(t - \tau (t) \bigr){T_{3i}}x\bigl(t - \tau(t)\bigr) \\ &\quad\quad{}+ \biggl( \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)\,ds} \biggr)^{T}{T_{4i}} \biggl( \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)\,ds} \biggr)+x^{T}\bigl(t - h(t)\bigr){T_{5i}}x\bigl(t - h(t)\bigr) \biggr],\end{aligned} \end{aligned}$$
(29)
$$\begin{aligned}& \begin{aligned}[b] \mathfrak{L} {V_{2}}({x_{t}},t,i) &\le-\gamma {V_{2}}({x_{t}},t,i)+{x^{T}}(t) \bigl({Q_{1i}} + 2{\delta^{2}} {Q_{2}}\bigr)x(t) - {e^{-\gamma\delta}} {x^{T}}(t - \delta){Q_{1i}}x(t - \delta) \\ &\quad{}+ \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j = 1}^{N} {{ \pi_{ij}} {Q_{1j}}}x(s)} \,ds-2\delta \int_{t - \delta}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s){Q_{2}}x(s)}\,ds,\end{aligned} \end{aligned}$$
(30)
$$\begin{aligned}& \begin{aligned}[b]&\mathfrak{L} {V_{3}}({x_{t}},t,i)\\&\quad\le- \gamma{V_{3}}({x_{t}},t,i)+ {x^{T}}(t) ({Q_{3i}} + {Q_{4i}})x(t) \\ &\quad\quad{}- (1 - \eta){e^{-\gamma h}} {x^{T}}\bigl(t - h(t) \bigr){Q_{3i}}x\bigl(t - h(t)\bigr) - (1 - \mu){e^{-\gamma\tau}} {x^{T}}\bigl(t - \tau(t)\bigr){Q_{4i}}x\bigl(t - \tau (t) \bigr) \\ &\quad\quad{}+ {g^{T}}\bigl(x(t)\bigr){Q_{5}}g\bigl(x(t)\bigr)- (1 - \mu){e^{-\gamma\tau }} {g^{T}}\bigl(x\bigl(t - \tau(t)\bigr) \bigr){Q_{5}}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) \\ &\quad\quad{}+ \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j = 1}^{N} {{ \pi_{ij}} {Q_{3j}}}x(s)}\,ds+ \int_{t - \tau(t)}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s)\sum_{j = 1}^{N} {{ \pi_{ij}} {Q_{4j}}}x(s)} \,ds.\end{aligned} \end{aligned}$$
(31)

Taking into account that the information on the transition probabilities is not completely accessible, and noting that \(\sum_{j = 1}^{N} {{\pi_{ij}}}=0\), the following zero equations hold for arbitrary matrices \(W_{i}=W_{i}^{T}\), \(S_{i}=S_{i}^{T}\), \(T_{i}=T_{i}^{T}\), \(X_{i}=X_{i}^{T}\):

$$\begin{aligned}& -\Pi_{3}^{T} \Biggl(\sum _{j = 1}^{N} {{\pi_{ij}} {W_{i}}} \Biggr){\Pi_{3}}=0, \quad\forall i\in S, \end{aligned}$$
(32)
$$\begin{aligned}& - \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s) \Biggl(\sum_{j = 1}^{N} {{\pi_{ij}} {S_{i}}} \Biggr)x(s)}\,ds=0, \quad\forall i\in S, \end{aligned}$$
(33)
$$\begin{aligned}& - \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s) \Biggl(\sum_{j = 1}^{N} {{\pi_{ij}} {T_{i}}} \Biggr)x(s)}\,ds=0, \quad\forall i\in S, \end{aligned}$$
(34)
$$\begin{aligned}& - \int_{t-\tau(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s) \Biggl(\sum_{j = 1}^{N} {{\pi_{ij}} {X_{i}}} \Biggr)x(s)}\,ds=0, \quad\forall i\in S. \end{aligned}$$
(35)

Applying Lemma 2.1, it is easy to obtain the following inequalities:

$$\begin{aligned}& \begin{aligned}[b] \mathfrak{L} {V_{1}}({x_{t}},t,i)&\le \operatorname{sym}\bigl\{ \Pi_{1}^{T}{P_{i}} { \Pi_{2}}\bigr\} +{\lambda _{i}}\bigl(e_{1}^{T}{T_{1i}} {e_{1}} + e_{3}^{T}{T_{2i}} {e_{3}} + e_{4}^{T}{T_{3i}} {e_{4}} + e_{8}^{T}{T_{4i}} {e_{8}}+ e_{2}^{T}{T_{5i}} {e_{2}}\bigr) \\ &\quad{}+ \Pi_{3}^{T}\sum_{j \in U_{k}^{i}} {{\pi _{ij}}({P_{j}-W_{i}})} {\Pi_{3}} + \Pi_{3}^{T}\sum_{j \in U_{uk}^{i}} {{ \pi_{ij}}({P_{j}-W_{i}})} {\Pi_{3}},\end{aligned} \end{aligned}$$
(36)
$$\begin{aligned}& \begin{aligned}[b] \mathfrak{L} {V_{2}}({x_{t}},t,i)&\le-\gamma {V_{2}}({x_{t}},t,i)+{x^{T}}(t) \bigl({Q_{1i}} + 2{\delta^{2}} {Q_{2}}\bigr)x(t) - {e^{-\gamma\delta}} {x^{T}}(t - \delta){Q_{1i}}x(t - \delta) \\ &\quad{}+ \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j \in U_{k}^{i}} {{ \pi_{ij}}({Q_{1j}}-S_{i})}x(s)} \,ds-\delta \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s){Q_{2}}x(s)}\,ds \\ &\quad{}+ \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j \in U_{uk}^{i}}{{\pi_{ij}}({Q_{1j}}-S_{i})}x(s)} \,ds \\ &\quad{}-{e^{-\gamma\delta}} { \biggl( { \int_{t - \delta}^{t} {x(s)} \,ds} \biggr)^{T}} {Q_{2}} \biggl( { \int_{t - \delta}^{t} {x(s)} \,ds} \biggr),\end{aligned} \end{aligned}$$
(37)
$$\begin{aligned}& \begin{aligned}[b] \mathfrak{L} {V_{3}}({x_{t}},t,i)&\le- \gamma{V_{3}}({x_{t}},t,i)+ {x^{T}}(t) ({Q_{3i}} + {Q_{4i}})x(t) \\ &\quad{}- (1 - \eta){e^{-\gamma h}} {x^{T}}\bigl(t - h(t) \bigr){Q_{3i}}x\bigl(t - h(t)\bigr) \\ &\quad{}- (1 - \mu){e^{-\gamma\tau}} {x^{T}}\bigl(t - \tau(t)\bigr){Q_{4i}}x\bigl(t - \tau (t) \bigr) \\ &\quad{}+ {g^{T}}\bigl(x(t)\bigr){Q_{5}}g\bigl(x(t)\bigr)- (1 - \mu){e^{-\gamma\tau }} {g^{T}}\bigl(x\bigl(t - \tau(t)\bigr) \bigr){Q_{5}}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) \\ &\quad{}+ \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j\in U_{k}^{i}}{{\pi_{ij}}({Q_{3j}}-T_{i})}x(s)} \,ds \\ &\quad{}+ \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j\in U_{uk}^{i}}{{\pi_{ij}}({Q_{3j}}-T_{i})}x(s)} \,ds \\ &\quad{}+ \int_{t - \tau(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j\in U_{k}^{i}}{{\pi_{ij}}({Q_{4j}}-X_{i})}x(s)} \,ds \\ &\quad{}+ \int_{t - \tau(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j\in U_{uk}^{i}}{{\pi_{ij}}({Q_{4j}}-X_{i})}x(s)} \,ds.\end{aligned} \end{aligned}$$
(38)

By using Lemma 2.1 and a direct computation, it can be seen that

$$ \begin{aligned}[b]\mathfrak{L} {V_{4}}({x_{t}},t,i) &\le- \gamma{V_{4}}({x_{t}},t,i)+ {d^{2}} {g^{T}}\bigl(x(t)\bigr){R_{1}}g\bigl(x(t)\bigr) + {x^{T}}(t) \bigl(2{\tau^{2}} {R_{2}} + 2{h^{2}} {R_{3}}\bigr)x(t) \\ &\quad{}-\tau \int_{t - \tau(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s){R_{2}}x(s)} \,ds-h \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s){R_{3}}x(s)}\,ds \\ &\quad{} - {e^{-\gamma d}} { \biggl( { \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds} \biggr)^{T}} {R_{1}} \biggl( { \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds} \biggr) \\ &\quad{}- {e^{-\gamma\tau}} { \biggl( { \int_{t - \tau(t)}^{t} {x(s)} \,ds} \biggr)^{T}} {R_{2}} \biggl( { \int_{t - \tau(t)}^{t} {x(s)} \,ds} \biggr) \\ &\quad{}- {e^{-\gamma h}} { \biggl( { \int_{t - h(t)}^{t} {x(s)} \,ds} \biggr)^{T}} {R_{3}} \biggl( { \int_{t - h(t)}^{t} {x(s)} \,ds} \biggr).\end{aligned} $$
(39)

For positive diagonal matrices \(K_{1}=\operatorname{diag} \{k_{11}, k_{12}, \ldots, k_{1n} \}\) and \(K_{2}=\operatorname{diag} \{k_{21}, k_{22}, \ldots, k_{2n}\}\), it follows from condition (5) that

$$\begin{aligned}& 0 \le{ \left [ \textstyle\begin{array}{c} {x(t)} \\ {g(x(t))} \end{array}\displaystyle \right ]^{T}}\left [ \textstyle\begin{array}{c@{\quad}c} { - {L_{1}}{K_{1}}} & {{L_{2}}{K_{1}}} \\ {{L_{2}}{K_{1}}} & { - {K_{1}}} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{c} {x(t)} \\ {g(x(t))} \end{array}\displaystyle \right ], \end{aligned}$$
(40)
$$\begin{aligned}& 0 \le{ \left [ \textstyle\begin{array}{c} {x(t - \tau(t))} \\ {g(x(t - \tau(t)))} \end{array}\displaystyle \right ]^{T}}\left [ \textstyle\begin{array}{c@{\quad}c} { - {L_{1}}{K_{2}}} & {{L_{2}}{K_{2}}} \\ {{L_{2}}{K_{2}}} & { - {K_{2}}} \end{array}\displaystyle \right ]\left [ \textstyle\begin{array}{c} {x(t - \tau(t))} \\ {g(x(t - \tau(t)))} \end{array}\displaystyle \right ]. \end{aligned}$$
(41)

Combining (14)–(17) and (36)–(41) and taking the mathematical expectation, one can get

$$ \begin{aligned}[b] &\mathbf{E} \bigl\{ \mathfrak{L}V({x_{t}},t,i)+\gamma V({x_{t}},t,i)-2z^{T}(t)u(t) \bigr\} \\ &\quad\le\mathbf{E} \biggl\{ \xi^{T}(t){\Phi_{i}}\xi(t)+ \Pi_{3}^{T}\sum_{j \in U_{uk}^{i}} {{ \pi_{ij}}({P_{j}-W_{i}})} {\Pi_{3}} \\ &\quad\quad{}+ \int_{t - \delta}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j \in U_{uk}^{i}}{{\pi_{ij}}({Q_{1j}}-S_{i})}x(s)} \,ds \\ &\quad\quad{} + \int_{t - h(t)}^{t} {{e^{\gamma(s-t)}} {x^{T}}(s)\sum_{j\in U_{uk}^{i}}{{\pi_{ij}}({Q_{3j}}-T_{i})}x(s)} \,ds \\ &\quad\quad{} + \int_{t - \tau(t)}^{t} {{e^{\gamma (s-t)}} {x^{T}}(s)\sum_{j\in U_{uk}^{i}}{{\pi_{ij}}({Q_{4j}}-X_{i})}x(s)} \, ds \biggr\} ,\end{aligned} $$
(42)

where

$$\begin{aligned} \xi(t) &= \operatorname{col} \biggl\{ x(t),x\bigl(t - h(t)\bigr),x(t - \delta ),x\bigl(t - \tau(t)\bigr),g\bigl(x(t)\bigr),g\bigl(x\bigl(t - \tau(t)\bigr) \bigr), \\ &\quad{} \int_{t - \delta}^{t} {x(s)} \,ds, \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds, \int_{t - \tau(t)}^{t} {x(s)} \,ds, \int_{t - h(t)}^{t} {x(s)} \,ds,u(t) \biggr\} .\end{aligned} $$

Note that \({\pi_{ii}} = -\sum_{j = 1,j \ne i}^{N} {{\pi_{ij}}}\) and \(\pi_{ij} \ge0\) for all \(j\ne i\), namely \(\pi_{ii}<0\) for all \(i\in S\). Therefore, it follows from Definition 1 in [37] and an easy calculation that, if \(i\in U_{k}^{i}\), inequalities (13)–(21) imply that

$$ \mathbf{E} \bigl\{ \mathfrak{L}V({x_{t}},t,i)+\gamma V({x_{t}},t,i)-2z^{T}(t)u(t) \bigr\} < 0. $$
(43)

On the other hand, if \(i\in U_{uk}^{i}\), inequalities (13)–(25) similarly imply that inequality (43) holds. Therefore, regardless of whether the transition probabilities are accessible, conditions (13)–(25) imply that system (8) is exponentially passive. This completes the proof. □

Remark 3.1

Recently, many researchers have developed passivity results in the literature (see [27–32]). In [29], the authors studied the passivity analysis of neural networks using the Lyapunov–Krasovskii functional method. In [31], the authors investigated the passivity analysis of neural networks with time-varying discrete and distributed delays by using a delay-decomposition approach. Moreover, the authors of [32] dealt with new passivity criteria for neural networks with time-varying delay. However, stochastic noise was not taken into account in these models. In [33–35], the authors investigated the passivity analysis of stochastic neural networks using the free-weighting matrix method. So far, those methods cannot be applied to neural networks with leakage delay due to the presence of the leakage term δ in those systems. In addition, Markovian jumping was not taken into account in these models. In [39], the authors studied the EPP of Markovian jumping stochastic NN with leakage and distributed delays, and some delay-dependent sufficient conditions were obtained by the Lyapunov stability theory and the free-weighting matrix approach. However, the neutral time delay and Markovian jump with partially unknown transition probabilities were not taken into account in that model. Theorem 3.1 considers not only NN with neutral time delay but also Markovian jump with partially unknown transition probabilities; to some extent, it is an extension of the results of [39]. It is worth pointing out that the passivity criteria in [39] cannot be proved directly, whereas our method proves the obtained condition directly. Therefore, the exponential passivity criteria we obtained are more general than some existing results. Moreover, the Lyapunov functionals \(2\delta\int_{t - \delta }^{t} {\int_{\theta}^{t} {{e^{\gamma(s-t)}}{x^{T}}(s){Q_{2}}x(s)} \,ds} \, d\theta\), \(2\tau\int_{t - \tau}^{t} {\int_{\theta}^{t} {{e^{\gamma (s-t)}}{x^{T}}(s){R_{2}}x(s)} \,ds} \,d\theta\), and \(2h\int_{t - h}^{t} {\int _{\theta}^{t} {{e^{\gamma(s-t)}}{x^{T}}(s){R_{3}}x(s)} \,ds} \,d\theta\) also play an important role in Theorem 3.1. What is more, the presented approach can also be applied to other exponential passivity problems, e.g., for stochastic fuzzy neutral-type BAM neural networks, stochastic fuzzy neutral-type Hopfield neural networks, and so on.

Remark 3.2

Similarly to [48] and [49], in order to obtain a less conservative exponential passivity criterion for Markovian jumping systems with partial information on the transition probabilities, free-connection weighting matrices are introduced by making full use of the relationship among the transition rates of the subsystems, i.e., \(\sum_{j = 1}^{N} {{\pi_{ij}}}=0\) for all \(i \in S\), which overcomes the conservativeness of employing fixed connection weighting matrices.

Remark 3.3

It should be pointed out that the more known elements there are in (6), the less conservative the condition becomes. Namely, if there are more unknown elements in (6), the maximum time delays and the maximum exponential convergence rate obtained from Theorem 3.1 will be smaller. This will be shown in the examples section. In fact, if all transition probabilities are known, the corresponding system can be considered as a general Markovian jumping stochastic neutral-type neural network system. Thus, the conditions obtained in Theorem 3.1 accordingly cover the results for Markovian jumping stochastic neutral-type neural network systems with mixed and leakage delays. Therefore, our results are more general than some existing results. On the other hand, one can employ the common Lyapunov functional method to analyze the exponential passivity of Markovian jumping systems under the assumption that all transition probabilities are unknown; this is omitted here.

For the exponential passivity analysis of the Markovian jumping NSNN systems (8) with all transition probabilities known, we can obtain the following corollary by following a similar approach as in the proof of Theorem 3.1.

Corollary 3.1

Suppose that Assumptions 2.1 and 2.3 hold. For given scalars γ, δ, τ, h, d, μ, and η, the Markovian jumping NSNN described by (9) is exponentially passive if there exist positive scalars \(\lambda_{i}\), positive definite symmetric matrices \(P_{i}\), \(Q_{1i}\), \(Q_{2}\), \(Q_{3i}\), \(Q_{4i}\), \(Q_{5}\), \(R_{1}\), \(R_{2}\), \(R_{3}\), and positive diagonal matrices \(K_{1}\), \(K_{2}\) such that the following inequalities hold for all \(i\in S\):

$$\begin{gathered} \begin{aligned}{\Phi_{i}} &= \operatorname{sym}\bigl\{ \Pi_{1}^{T}{P_{i}} {\Pi_{2}}\bigr\} + \gamma\Pi_{1}^{T}{P_{i}} {\Pi _{1}} + \Pi_{3}^{T}\sum _{j=1}^{N} {\pi_{ij}{P_{j}}} { \Pi_{3}} \\ &\quad{}+ {\lambda _{i}}\bigl(e_{1}^{T}{T_{1i}} {e_{1}} + e_{3}^{T}{T_{2i}} {e_{3}}+ e_{4}^{T}{T_{3i}} {e_{4}}+ e_{8}^{T}{T_{4i}} {e_{8}} + e_{2}^{T}{T_{5i}} {e_{2}}\bigr)\\ &\quad{}+ e_{1}^{T}\bigl({Q_{1i}} + 2{\delta^{2}} {Q_{2}}+{Q_{3i}}+{Q_{4i}}+2{\tau ^{2}}R_{2}\bigr){e_{1}} \\ &\quad{}+2{h^{2}}e_{1}^{T}R_{3}{e_{1}}- {e^{-\gamma\delta}}e_{3}^{T}{Q_{1i}} {e_{3}} - {e^{-\gamma\delta}}e_{7}^{T}{Q_{2}} {e_{7}}- (1 - \eta) {e^{-\gamma h}}e_{2}^{T}{Q_{3i}} {e_{2}} \\ &\quad{}- (1 - \mu){e^{-\gamma\tau }}e_{4}^{T}{Q_{4i}} {e_{4}}+e_{5}^{T}\bigl({Q_{5}}+{d^{2}} {R_{1}}\bigr){e_{5}} - (1 - \mu ){e^{-\gamma\tau}}e_{6}^{T}{Q_{5}} {e_{6}} \\ &\quad{}- {e^{-\gamma d}}e_{8}^{T}{R_{1}} {e_{8}}- {e^{-\gamma\tau }}e_{9}^{T}{R_{2}} {e_{9}}- {e^{-\gamma h}}e_{10}^{T}{R_{3}} {e_{10}}- e_{1}^{T}{L_{1}} {K_{1}} {e_{1}} \\ &\quad{}+ 2e_{1}^{T}{L_{2}} {K_{1}} {e_{5}} - e_{5}^{T}{K_{1}} {e_{5}} - e_{4}^{T}{L_{1}} {K_{2}} {e_{4}}+ 2e_{4}^{T}{L_{2}} {K_{2}} {e_{6}} - e_{6}^{T}{K_{2}} {e_{6}} \\ &\quad{}- {(W_{1i}{e_{5}} + W_{2i}{e_{6}}+{e_{11}})^{T}} {e_{11}}- {e_{11}^{T}} {(W_{1i}{e_{5}} + W_{2i}{e_{6}}+{e_{11}})}\\ &< 0,\end{aligned} \\ P_{i} \le\lambda_{i} I,\\ \sum _{j=1}^{N} {\pi_{ij}Q_{1j}}- \delta{Q_{2}} \le0,\qquad \sum_{j=1}^{N} {\pi_{ij}Q_{3j}}-h {R_{3}} \le0,\qquad \sum _{j=1}^{N} {\pi_{ij}Q_{4j}}- \tau{R_{2}} \le0,\end{gathered} $$

where

$$\begin{gathered} {\Pi_{1}} = {e_{1}} - {E_{i}} {e_{2}} - {A_{i}} {e_{7}},\qquad {\Pi_{2}} = - {A_{i}} {e_{1}} + {B_{i}} {e_{5}} + {C_{i}} {e_{6}} + {D_{i}} {e_{8}} + {e_{11}}, \\ {\Pi_{3}} = {e_{1}} - {E_{j}} {e_{2}} - {A_{j}} {e_{7}}.\end{gathered} $$

Remark 3.4

As a special case, when Markovian jumping parameters are not considered, the Markov chain \(\{{r(t),t \ge0} \}\) takes the unique value 1 (i.e., \(S= \{1 \}\)), and system (8) reduces to the following NSNN with mixed and leakage delays:

$$ \begin{gathered}d\bigl[x(t) - Ex\bigl(t - h(t)\bigr)\bigr]\\\quad= \biggl[ - Ax(t - \delta) + Bg\bigl(x(t)\bigr) + Cg\bigl(x\bigl(t - \tau (t)\bigr)\bigr) + D \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds+ u(t)\biggr] \,dt \\ \quad\quad{}+ \sigma\biggl(x(t),x(t - \delta),x\bigl(t - \tau(t)\bigr), \int_{t - d(t)}^{t} {g\bigl(x(s)\bigr)} \,ds,x\bigl(t - h(t)\bigr),t\biggr)\,d\omega (t), \\ z(t)={W_{1}}g\bigl(x(t)\bigr) + {W_{2}}g\bigl(x\bigl(t - \tau(t)\bigr)\bigr) + u(t).\end{gathered} $$
(44)

Now, the delay-dependent exponential passivity analysis for system (8) without Markovian jump parameters, i.e., for system (44), is described by the following corollary.

Corollary 3.2

Suppose that Assumptions 2.1 and 2.3 hold. For given scalars γ, δ, τ, h, d, μ, and η, the NSNN described by (44) is exponentially passive if there exist a positive scalar λ, positive definite symmetric matrices P, \(Q_{i}\) (\(i=1,\ldots,5\)), \(R_{i}\) (\(i=1,2,3\)), and positive diagonal matrices \(K_{1}\), \(K_{2}\) such that the following LMIs hold:

$$\begin{aligned}& P \le\lambda I, \end{aligned}$$
(45)
$$\begin{aligned}& \begin{aligned}[b] \Phi&= \operatorname{sym}\bigl\{ \Pi_{1}^{T}{P} {\Pi_{2}}\bigr\} + \gamma\Pi_{1}^{T}{P} { \Pi_{1}}+ {\lambda}\bigl(e_{1}^{T}{T_{1}} {e_{1}} + e_{3}^{T}{T_{2}} {e_{3}} + e_{4}^{T}{T_{3}} {e_{4}} + e_{8}^{T}{T_{4}} {e_{8}} + e_{2}^{T}{T_{5}} {e_{2}}\bigr) \\ &\quad{}+ e_{1}^{T}\bigl({Q_{1}} + 2{ \delta^{2}} {Q_{2}}\bigr){e_{1}}+e_{1}^{T}({Q_{3}}+{Q_{4}}){e_{1}}- {e^{-\gamma\delta}}e_{3}^{T}{Q_{1}} {e_{3}} - 2{e^{-\gamma\delta }}e_{7}^{T}{Q_{2}} {e_{7}} \\ &\quad{}- (1 - \eta) {e^{-\gamma h}}e_{2}^{T}{Q_{3}} {e_{2}}- (1 - \mu ){e^{-\gamma\tau}}e_{4}^{T}{Q_{4}} {e_{4}}+e_{5}^{T}{Q_{5}} {e_{5}}- (1 - \mu ){e^{-\gamma\tau}}e_{6}^{T}{Q_{5}} {e_{6}} \\ &\quad{}+ {d^{2}}e_{5}^{T}{R_{1}} {e_{5}} - {e^{-\gamma d}}e_{8}^{T}{R_{1}} {e_{8}} + e_{1}^{T}\bigl(2{\tau^{2}}R_{2}+2{h^{2}}R_{3} \bigr){e_{1}}- 2{e^{-\gamma\tau }}e_{9}^{T}{R_{2}} {e_{9}} \\ &\quad{}- 2{e^{-\gamma h}}e_{10}^{T}{R_{3}} {e_{10}}- e_{1}^{T}{L_{1}} {K_{1}} {e_{1}} + 2e_{1}^{T}{L_{2}} {K_{1}} {e_{5}} - e_{5}^{T}{K_{1}} {e_{5}} - e_{4}^{T}{L_{1}} {K_{2}} {e_{4}}- e_{6}^{T}{K_{2}} {e_{6}} \\ &\quad{}+ 2e_{4}^{T}{L_{2}} {K_{2}} {e_{6}}- {({W_{1}} {e_{5}} + {W_{2}} {e_{6}} + {e_{11}})^{T}} {e_{11}}- {e_{11}^{T}}({W_{1}} {e_{5}} + {W_{2}} {e_{6}} + {e_{11}})\\ &< 0,\end{aligned} \end{aligned}$$
(46)

where

$$\begin{gathered} {\Pi_{1}} = {e_{1}} - E{e_{2}} - A{e_{7}},\qquad {\Pi_{2}} = - {A} {e_{1}} + {B} {e_{5}} + {C} {e_{6}} + {D} {e_{8}} + {e_{11}}, \\ {e_{m}} = [ \textstyle\begin{array}{c@{\quad}c@{\quad}c} {{0_{n \times(m - 1)n}}} & {{I_{n}}} & {{0_{n \times(11 - m)n}}} \end{array}\displaystyle ]\quad(m = 1, 2, \ldots, 11).\end{gathered} $$

Proof

The proof is similar to that of Theorem 3.1 and is omitted here. □

Remark 3.5

In this paper, the relationships among the maximum upper bounds of the leakage delay δ, the discrete delay τ, the neutral delay h, the distributed delay d, the exponential convergence rate γ, and the upper bounds μ, η of the delay derivatives are implicitly obtained in Theorem 3.1. Such relationships, however, have not been fully considered in existing results on exponential passivity analysis.

4 Numerical examples

Two numerical examples are provided in this section to show the feasibility of the results presented in this paper.

Example 4.1

The parameters of system (8) with four operation modes are listed as follows:

$$\begin{aligned}& {A_{1}} = \left [ \textstyle\begin{array}{c@{\quad}c} 1.2 & 0 \\ 0 & 1.5 \end{array}\displaystyle \right ],\qquad {A_{2}} = \left [ \textstyle\begin{array}{c@{\quad}c} 1.4 & 0 \\ 0 & 0.9 \end{array}\displaystyle \right ],\\& {A_{3}} = \left [ \textstyle\begin{array}{c@{\quad}c} 2 & 0 \\ 0 & 1.1 \end{array}\displaystyle \right ],\qquad {A_{4}} = \left [ \textstyle\begin{array}{c@{\quad}c} 1.8 & 0 \\ 0 & 1.6 \end{array}\displaystyle \right ], \\& {B_{1}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.21} & {-0.19} \\ {-0.24} & {0.1} \end{array}\displaystyle \right ],\qquad {B_{2}} = \left [ \textstyle\begin{array}{c@{\quad}c} {0.9} & {-0.9} \\ {0.5} & {-0.8} \end{array}\displaystyle \right ],\qquad {B_{3}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.3} & {-0.2} \\ {-0.25} & {0.2} \end{array}\displaystyle \right ], \\& {B_{4}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.15} & {-0.1} \\ {-0.2} & {0.1} \end{array}\displaystyle \right ],\qquad {C_{1}}= \left [ \textstyle\begin{array}{c@{\quad}c} {-0.09} & {-0.2} \\ {0.2} & {0.1} \end{array}\displaystyle \right ],\qquad {C_{2}} = \left [ \textstyle\begin{array}{c@{\quad}c} {0.1} & {0.1} \\ {0.2} & {0.3} \end{array}\displaystyle \right ], \\& {C_{3}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.1} & {-0.3} \\ {0.3} & {0.2} \end{array}\displaystyle \right ],\qquad {C_{4}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.15} & {-0.35} \\ {0.35} & {0.25} \end{array}\displaystyle \right ],\qquad {D_{1}} = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.5} & {0} \\ {0} & {-0.5} \end{array}\displaystyle \right ], \\& {D_{2}} = \left [ \textstyle\begin{array}{c@{\quad}c} {0.1} & {-0.02} \\ {-0.2} & {0.07} \end{array}\displaystyle \right ],\qquad {D_{3}} = \left [ \textstyle\begin{array}{c@{\quad}c} {0.2} & {-0.03} \\ {-0.3} & {0.08} \end{array}\displaystyle \right ],\qquad {D_{4}} = \left [ \textstyle\begin{array}{c@{\quad}c} {0.25} & {-0.03} \\ {-0.35} & {0.08} \end{array}\displaystyle \right ], \\& {E_{1}}= \left [ \textstyle\begin{array}{c@{\quad}c} {-0.2} & {0} \\ {0.2} & {-0.09} \end{array}\displaystyle \right ],\qquad {E_{2}}= \left [ \textstyle\begin{array}{c@{\quad}c} {0.1} & {0} \\ {0.5} & {-0.1} \end{array}\displaystyle \right ],\qquad {E_{3}}= \left [ \textstyle\begin{array}{c@{\quad}c} {0.2} & {0} \\ {0.6} & {-0.2} \end{array}\displaystyle \right ], \\& {E_{4}}= \left [ \textstyle\begin{array}{c@{\quad}c} {0.15} & {0} \\ {0.55} & {-0.15} \end{array}\displaystyle \right ],\qquad {W_{11}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ],\qquad {W_{12}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ], \\& {W_{13}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ],\qquad {W_{14}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ],\qquad {W_{21}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ], \\& {W_{22}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ],\qquad {W_{23}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ],\qquad {W_{24}}= \left [ \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ 0 & 1 \end{array}\displaystyle \right ], \\& T_{11}=0.16I,\qquad T_{12}=T_{21}=0.17I,\qquad T_{22}=T_{31}=0.08I,\qquad T_{32}=0.09I, \\& T_{41}=T_{42}=0.01I,\qquad T_{51}=0.02I,\qquad T_{52}=0.03I,\qquad 
T_{13}=0.18I,\qquad T_{23}=0.09I, \\& T_{33}=0.1I, \qquad T_{43}=0.02I,\qquad T_{53}=0.04I, \qquad T_{14}=0.2I,\qquad T_{24}=0.1I, \\& T_{34}=0.16I,\qquad T_{44}=0.04I,\qquad T_{54}=0.06I. \end{aligned}$$

The partly unknown transition rate matrix Π is considered as the following two cases:

$$\begin{gathered} \text{Case I:}\quad \Pi= \left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} -1.3 & 0.2 & ? & ? \\ ? & ? & 0.3 & 0.3 \\ 0.6 & ? & -1.5 & ? \\ 0.4 & ? & ? & ? \end{array}\displaystyle \right ], \\ \text{Case II:}\quad \Pi= \left [ \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} -1.3 & 0.2 & 0.5 & 0.6 \\ 0.5 & -1.1 & 0.3 & 0.3 \\ 0.6 & 0.4 & -1.5 & 0.5 \\ 0.4 & 0.2 & 0.5 & -1.1 \end{array}\displaystyle \right ].\end{gathered} $$

In this example, we take the nonlinear activation function:

$$ g_{i}(x_{i}) = \textstyle\begin{cases} 0.01\tanh(x_{i}), & x_{i}\le0, \\ 0.02x_{i}, & x_{i}>0. \end{cases} $$

We take \(l_{1}^{-} =l_{2}^{-} =-0.01\), \(l_{1}^{+}=l_{2}^{+}=0.02\), so that \({L_{1}}=\operatorname{diag}\{-0.0002,-0.0002\}\) and \({L_{2}}= \operatorname {diag}\{0.005,0.005\}\). By solving inequalities (13)–(25) in Theorem 3.1 using the LMI Toolbox of Matlab, when \(\delta =0.1\), \(\eta=0.01\), and \(\gamma=0.3\), the maximum of the time delay \(\tau=h=d\) for the different cases and different values of μ can be computed as shown in Table 1. It is easily seen from Table 1 that the more knowledge of the transition probabilities we have, the larger the maximum delay ensuring stochastic exponential passivity. This shows the tradeoff between the cost of obtaining transition probabilities and the system performance. Furthermore, the maximum exponential convergence rates γ obtained from Theorem 3.1 for the different cases and different values of μ and η when \(\delta=0.1\), \(\tau=1\), \(h=0.1\), and \(d=0.2\) are listed in Table 2. Therefore, it follows from Theorem 3.1 that the neural network (8) is stochastically exponentially passive in the sense of Definition 1 in [37]. This example shows that the approach presented in this paper is feasible.

Table 1 Maximum allowable upper bounds \(\tau=h=d\) for different values of μ
Table 2 Maximum exponential convergence rate
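All feasibility computations above were carried out with Matlab's LMI toolbox. The same workflow can be reproduced with open-source semidefinite programming tools; the following minimal sketch (an illustration only, assuming the Python package cvxpy with the SCS solver; it checks a toy Lyapunov-type LMI for mode 1 of this example, not the actual conditions (13)–(25)) shows the pattern of declaring matrix variables, stacking LMI constraints, and reading off feasibility:

import cvxpy as cp
import numpy as np

A1 = np.array([[1.2, 0.0], [0.0, 1.5]])   # mode-1 decay matrix A_1 from Example 4.1
n = A1.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)   # candidate Lyapunov matrix P_1
lam = cp.Variable(nonneg=True)            # scalar lambda_1

lyap = (-A1).T @ P + P @ (-A1)            # Lyapunov expression for dx = -A1 x dt
constraints = [
    P >> eps * np.eye(n),                        # P_1 > 0
    0.5 * (lyap + lyap.T) << -eps * np.eye(n),   # symmetrized Lyapunov LMI
    cp.lambda_max(P) <= lam,                     # counterpart of condition (14): P_1 <= lambda_1 I
]
prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, lam.value)

A full implementation would instead assemble the block matrix \(\Phi_{i}\) in (13) from the selectors \(e_{m}\) and impose (13)–(25) for every mode \(i \in S\), but the solver workflow is identical.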

Example 4.2

Consider the two-dimensional stochastic neural network of neutral type with mixed and leakage delays (44) with the following parameters:

$$\begin{gathered} A = \left [ \textstyle\begin{array}{c@{\quad}c} 2 & 0 \\ 0 & 3 \end{array}\displaystyle \right ],\qquad B = \left [ \textstyle\begin{array}{c@{\quad}c} 0.3 & -0.2 \\ -0.2 & 0.3 \end{array}\displaystyle \right ],\qquad C = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.1} & { -0.2} \\ {-0.2} & {0.1} \end{array}\displaystyle \right ], \\ D = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.2} & {0.1} \\ {-0.1} & {-0.2} \end{array}\displaystyle \right ],\qquad E = \left [ \textstyle\begin{array}{c@{\quad}c} {-0.1} & {0.2} \\ {-0.2} & {-0.1} \end{array}\displaystyle \right ],\\ T_{1}=0.16I,\qquad T_{2}=0.17I,\qquad T_{3}=0.08I,\qquad T_{4}=0.01I,\qquad T_{5}=0.02I.\end{gathered} $$

In this example, the activation function is \(g(x)=\frac {1}{2}(|x+1|-|x-1|)\). It satisfies Assumption 2.1 with \(l_{1}^{-} =l_{2}^{-} =0.5\), \(l_{1}^{+}=l_{2}^{+}=0.55\), \({L_{1}}=\operatorname {diag}\{0.275,0.275\}\), \({L_{2}}=\operatorname{diag}\{0.525,0.525\}\). By solving LMIs (45) and (46) in Corollary 3.2, the maximum exponential convergence rates γ for different values of μ and η when \(\delta=0.1\), \(\tau=1\), \(h=0.2\), and \(d=0.3\) are listed in Table 3. On the other hand, for different values of μ and η, Corollary 3.2 yields the upper bounds τ, h, and d that guarantee that the neural network (44) is exponentially passive in the sense of Definition 1 in [37]. Employing the Matlab LMI toolbox, the upper bounds τ, h, and d are calculated and tabulated in Table 4 for different values of μ with \(\eta=0.5\), \(\gamma=0.5\), and \(\delta=0.1\). The numerical results in Table 4 illustrate the effectiveness of the delay-dependent exponential passivity condition proposed in Corollary 3.2.

Table 3 Maximum exponential convergence rate
Table 4 Maximum allowable upper bounds \(\tau=h=d\) for different values of μ

5 Conclusion

In this paper, we have studied the exponential passivity of Markovian jumping NSNN with mixed and leakage delays and partially unknown transition probabilities. By utilizing an appropriate Lyapunov–Krasovskii functional, sufficient conditions for exponential passivity are given in terms of linear matrix inequalities (LMIs), which can easily be computed by the LMI toolbox of Matlab. Finally, two examples are given to show the advantages of the presented results.

References

  1. Chen, W.H., Zheng, W.X.: Improved delay-dependent asymptotic stability criteria for delayed neural networks. IEEE Trans. Neural Netw. 19, 2154–2161 (2008)

  2. Haykin, S.: Neural Networks: A Comprehensive Foundation. Prentice Hall, New York (1994)

  3. Li, X., Fu, X.: Lag synchronization of chaotic delayed neural networks via impulsive control. IMA J. Math. Control Inf. 29, 133–145 (2012)

  4. Stamova, I., Stamov, T., Li, X.: Global exponential stability of a class of impulsive cellular neural networks with supremums. Int. J. Adapt. Control Signal Process. 28, 1227–1239 (2014)

  5. Zhang, X., Lv, X., Li, X.: Sampled-data based lag synchronization of chaotic delayed neural networks with impulsive control. Nonlinear Dyn. 90, 2199–2207 (2017)

  6. Zhang, X., Li, X., Cao, J.: Design of delay-dependent controllers for finite-time stabilization of delayed neural networks with uncertainty. J. Franklin Inst. 355, 5394–5413 (2018)

  7. Arik, S.: An analysis of exponential stability of delayed neural networks with time-varying delays. Neural Netw. 17, 1027–1031 (2004)

  8. Xiong, L.L., Cheng, J., Cao, J.D., Liu, Z.X.: Novel inequality with application to improve the stability criterion for dynamical systems with two additive time-varying delays. Appl. Math. Comput. 321, 672–688 (2018)

  9. Cao, J., Yuan, K., Li, H.X.: Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays. IEEE Trans. Neural Netw. 17, 1646–1651 (2006)

  10. Park, J.H., Kwon, O.M.: Global stability for neural networks of neutral-type with interval time-varying delays. Chaos Solitons Fractals 41, 1174–1181 (2009)

  11. Feng, J., Xu, S., Zou, Y.: Delay-dependent stability of neutral type neural networks with distributed delays. Neurocomputing 72, 2576–2580 (2009)

  12. Mahmoud, M.S., Ismail, A.: Improved results on robust exponential stability criteria for neutral-type delayed neural networks. Appl. Math. Comput. 217, 3011–3019 (2010)

  13. Park, J.H.: Synchronization of cellular neural networks of neutral-type via dynamic feedback controller. Chaos Solitons Fractals 42, 1299–1304 (2009)

  14. Rakkiyappan, R., Balasubramaniam, P.: New global exponential stability results for neutral-type neural networks with distributed time delays. Neurocomputing 71, 1039–1045 (2008)

  15. Lee, S.M., Kwon, O.M., Park, J.H.: A novel delay-dependent criterion for delayed neural networks of neutral type. Phys. Lett. A 374, 1843–1848 (2010)

  16. Gopalsamy, K.: Leakage delays in BAM. J. Math. Anal. Appl. 325, 1117–1132 (2007)

  17. Li, X., Fu, X.: Effect of leakage time-varying delay on stability of nonlinear differential systems. J. Franklin Inst. 350, 1335–1344 (2013)

  18. Li, X., Rakkiyappan, R.: Stability results for Takagi–Sugeno fuzzy uncertain BAM neural networks with time delays in the leakage term. Neural Comput. Appl. 22, 203–219 (2013)

  19. Park, M.J., Kwon, O.M., Park, J.H., Lee, S.M., Cha, E.J.: Synchronization criteria for coupled stochastic neural networks with time-varying delays and leakage delay. J. Franklin Inst. 349, 1699–1720 (2012)

  20. Hill, D.J., Moylan, P.J.: Stability results for nonlinear feedback systems. Automatica 13, 377–382 (1977)

  21. Lin, W., Byrnes, C.I.: Passivity and absolute stabilization of a class of discrete-time nonlinear systems. Automatica 31, 263–267 (1995)

  22. Xie, L.H., Fu, M.Y., Li, H.Z.: Passivity analysis and passification for uncertain signal processing systems. IEEE Trans. Signal Process. 46, 2394–2403 (1998)

  23. Chua, L.O.: Passivity and complexity. IEEE Trans. Circuits Syst. I 46, 71–82 (1999)

  24. Wu, C.W.: Synchronization in arrays of coupled nonlinear systems: passivity, circle criterion, and observer design. IEEE Trans. Circuits Syst. I 48, 1257–1261 (2001)

  25. Calcev, G.: Passivity approach to fuzzy control systems. Automatica 33, 339–344 (1998)

  26. Lou, X.Y., Cui, B.T.: Passivity analysis of integro-differential neural networks with time-varying delays. Neurocomputing 70, 1071–1078 (2007)

  27. Song, Q., Wang, Z.: New results on passivity analysis of uncertain neural networks with time-varying delays. Int. J. Comput. Math. 87, 668–678 (2010)

  28. Li, C.G., Liao, X.F.: Passivity analysis of neural networks with time delay. IEEE Trans. Circuits Syst. II 52, 471–475 (2005)

  29. Park, J.H.: Further results on passivity analysis of delayed cellular neural networks. Chaos Solitons Fractals 34, 1546–1551 (2007)

  30. Zhang, Z., Mou, S., Lam, J., Gao, H.: New passivity criteria for neural networks with time-varying delays. Neural Netw. 22, 864–868 (2009)

  31. Chen, Y., Li, W., Bi, W.: Improved results on passivity analysis of uncertain neural networks with time-varying discrete and distributed delays. Neural Process. Lett. 30, 155–169 (2009)

  32. Chen, B., Li, H., Lin, C., Zhou, Q.: Passivity analysis for uncertain neural networks with discrete and distributed time-varying delays. Phys. Lett. A 373, 1242–1248 (2009)

  33. Chen, Y., Wang, H., Xue, A., Lu, R.: Passivity analysis of stochastic time-delay neural networks. Nonlinear Dyn. 61, 71–82 (2010)

  34. Song, Q., Liang, J., Wang, J.: Passivity analysis of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 72, 1782–1788 (2009)

  35. Fu, J., Zhang, H., Ma, T., Zhang, Q.: On passivity analysis for stochastic neural networks with interval time-varying delay. Neurocomputing 71, 1039–1045 (2008)

  36. Balasubramaniam, P., Rakkiyappan, R.: Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays. Nonlinear Anal. Hybrid Syst. 3, 207–214 (2009)

  37. Zhu, S., Shen, Y., Chen, G.: Exponential passivity of neural networks with time-varying delay and uncertainty. Phys. Lett. A 375, 136–142 (2010)

  38. Chen, W., Wang, L.: Delay-dependent stability for neutral-type neural networks with time-varying delays and Markovian jumping parameters. Neurocomputing 120, 569–576 (2013)

  39. Senthilraj, S., Raja, R., Zhu, Q., et al.: Exponential passivity analysis of stochastic neural networks with leakage, distributed delays and Markovian jumping parameters. Neurocomputing 175, 401–410 (2016)

  40. Zhu, Q.X.: Razumikhin-type theorem for stochastic functional differential equations with Lévy noise and Markov switching. Int. J. Control 90(8), 1703–1712 (2017)

  41. Zhu, Q.X., Zhang, Q.Y.: pth moment exponential stabilization of hybrid stochastic differential equations by feedback controls based on discrete-time state observations with a time delay. IET Control Theory Appl. 11(12), 1992–2003 (2017)

  42. Wang, B., Zhu, Q.X.: Stability analysis of Markov switched stochastic differential equations with both stable and unstable subsystems. Syst. Control Lett. 105, 55–61 (2017)

  43. Song, R.L., Zhu, Q.X.: Stability of linear stochastic delay differential equations with infinite Markovian switchings. Int. J. Robust Nonlinear Control 28(3), 825–837 (2018)

  44. Zhu, Q.X., Rakkiyappan, R., Chandrasekar, A.: Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control. Neurocomputing 136, 136–151 (2014)

  45. Zhu, Q.X.: pth moment exponential stability of impulsive stochastic functional differential equations with Markovian switching. J. Franklin Inst. 351(7), 3965–3986 (2014)

  46. Rakkiyappan, R., Latha, V.P., Zhu, Q.X., Yao, Z.S.: Exponential synchronization of Markovian jumping chaotic neural networks with sampled-data and saturating actuators. Nonlinear Anal. Hybrid Syst. 24, 28–44 (2017)

  47. Zhu, Q.X., Li, X.D.: Exponential and almost sure exponential stability of stochastic fuzzy delayed Cohen–Grossberg neural networks. Fuzzy Sets Syst. 203, 74–94 (2012)

  48. Zhang, Y., He, Y., Wu, M., Zhang, J.: Stabilization for Markovian jump systems with partial information on transition probability based on free-connection weighting matrices. Automatica 47, 79–84 (2011)

  49. Xiong, L.L., Tian, J., Liu, X.: Stability analysis for neutral Markovian jump systems with partially unknown transition probabilities. J. Franklin Inst. 349, 2193–2214 (2012)

  50. Mawhin, J.: Periodic solutions of nonlinear functional differential equations. J. Differ. Equ. 10, 240–261 (1971)

  51. Mao, X.: Stability of stochastic differential equations with Markovian switching. Stoch. Process. Appl. 79, 45–67 (1999)

  52. Gu, K., Kharitonov, V.L., Chen, J.: Stability of Time-Delay Systems. Birkhäuser, Basel (2003)

  53. Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)

Acknowledgements

The authors are grateful to the reviewers for their useful comments.

Funding

The research of TW and LX was supported by the National Natural Science Foundation of China under Grants 11461082, 11601474, 61573096, 61472093, and 61463050, the Key Laboratory of Numerical Simulation of Sichuan Province under Grant 2017KF002, and the Natural Science Foundation of Yunnan Province under Grant 2015FB113. TW was also supported by the Yunnan Provincial Department of Education Science Research Fund Project under Grant 2018Y105. JC was supported by the Jiangsu Provincial Key Laboratory of Networked Collective Intelligence under Grant BM2017002.

Author information

Contributions

All the authors, TW, LX, JC, and BA, contributed equally to every part of this work, and all of them read and approved the final version of the manuscript.

Corresponding author

Correspondence to Lianglin Xiong.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Wu, T., Cao, J., Xiong, L. et al. Exponential passivity conditions on neutral stochastic neural networks with leakage delay and partially unknown transition probabilities in Markovian jump. Adv Differ Equ 2018, 317 (2018). https://doi.org/10.1186/s13662-018-1732-6
