

Delay-dependent passivity analysis of nondeterministic genetic regulatory networks with leakage and distributed delays against impulsive perturbations

Abstract

This work is concerned with the passivity problem for stochastic genetic regulatory networks (GRNs) subject to mixed time delays, where the mixed time delays consist of leakage, discrete, and distributed delays. The main aim of this paper is to construct passivity-based criteria under impulsive perturbations such that the proposed GRNs are stochastically stable. Based on the Lyapunov functional method and Jensen’s integral inequality, we obtain a new set of delay-dependent passivity-based sufficient conditions in the form of LMIs, which can be checked with existing numerical software. Finally, we provide numerical simulations to show the efficiency of the proposed method.

1 Introduction

Regulatory networks have become an inevitable and thriving field of research in the biomedical and biological sciences [1]. Living organisms are regulated either directly or indirectly by genes and proteins, which makes the genes interact with one another [2]. These interactions make up dynamical GRNs, which control the complex dynamics of cellular functions [3]. In recent years molecular-based GRNs have received much research attention since they are a powerful tool for studying gene regulation processes in living organisms by modeling genetic networks as dynamical systems [4]. Applications of GRNs strongly depend on the dynamic behavior of the equilibrium point. If the equilibrium point of such a network system is globally asymptotically stable, then the attraction domain of the equilibrium point is the whole space, and convergence is a real-time phenomenon [5]. Thus it is necessary to study the stability of GRNs in both theoretical and practical situations [6].

Genetic network models are basically classified into two types, the Boolean model and the differential equation model [7]. In the Boolean model the activity of each gene is considered to be either ON or OFF, and the state of a single gene is described by a Boolean function of the states of other genes [8]. In the second type the concentrations of gene products, namely mRNAs and proteins, are described by differential equations. Besides, experimental results show that time delays are unavoidable owing to the diffusion, translation, transcription, and translocation processes of genes, and they may affect the whole dynamics of biological models. Hence it is essential and important to take time delays into account while designing biochemical systems. When the time delays are discrete, it becomes easier to express the movement of a macromolecule, whereas distributed delays provide a more appropriate modeling framework for GRNs [9]. However, only a few works on GRNs with distributed time delays have appeared; see, for example, [9, 10].

It is important to note that, when constructing GRNs, biological or genetic networks are always affected by stochastic noise in real-world gene regulation processes [11]. Moreover, in real-world phenomena, stochastic perturbations arising from internal and external noises may affect the dynamics of biological or genetic networks [12]. Furthermore, external noises mostly originate from random variations of one or more control inputs and are usually modeled by a Brownian motion in the controller design, whereas internal noises arise within the GRNs themselves [13]. Therefore it is necessary to consider stochastic noises in the proposed GRN model. Recently, Zhang et al. [14] studied mean-square asymptotic stability for stochastic GRNs subject to mixed time-varying delays. Based on Lyapunov stability theory and new integral techniques, a new set of sufficient conditions was derived in [15] for the stability of GRNs with stochastic perturbations.

On the other hand, the problem of passivity analysis has been studied by many researchers for many types of complex dynamic systems, including neural networks, networked models, and other biological models. The passivity property means that the increase of the energy stored inside the system is no more than the energy supplied to the system, which guarantees the stability of the concerned system [16]. Passivity theory, which originated from electric circuit analysis, has been widely applied to many areas of dynamical systems such as complex systems, chaos control, signal processing, fuzzy control, and neural networks [17]. For example, the problem of passivity control of recurrent neural networks subject to time delays and impulsive perturbations is reported in [18]. Very recently, Yang et al. [19] investigated passivity conditions for neural networks with distributed and discrete delays. In [17], based on the Lyapunov stability method and integral techniques, a passivity condition in terms of LMIs was obtained to ensure that Markovian jump GRNs are passive. However, there are no results on passivity criteria for stochastic GRNs with discrete and distributed delays.

Besides, the study of leakage time delays in dynamical systems is one of the highlighted research topics and has been pursued widely and successfully in recent works. Leakage time delays have a great effect on the dynamics of such networks, and hence it is essential to study network models with leakage delays; see, for example, [4]. Also, it is well known that impulse effects arise in network systems [20]. Moreover, impulsive phenomena can easily be found in biological control systems such as biological neural networks and optimal control models, where many sudden changes occur instantly owing to impulsive effects [21]. In this connection, it is necessary to take impulsive control into account in the passivity analysis of such systems to reflect more realistic dynamics. Recently, the problem of dissipative control of fuzzy neural networks subject to impulsive uncertainties and Markovian switching was studied in [22]. The authors in [23] investigated the exponential stability of complex-valued neural networks with impulsive perturbations and time delays under fuzzy rules. To the best of the authors’ knowledge, there is no literature on the passivity of stochastic GRNs with leakage delays and impulsive perturbations, which motivates the current study. The main contributions of this paper are as follows:

  1. (i)

    The proposed stochastic GRNs form a more comprehensive model than other existing GRNs [9, 13]. In particular, from a practical point of view, we take into account many uncertain factors such as impulsive effects, leakage delay, passivity performance, and distributed delay.

  2. (ii)

    This is the first attempt to address the problem of stochastic GRNs with mixed time delays via a novel passivity control.

  3. (iii)

    Based on an appropriate Lyapunov–Krasovskii functional (LKF), Jensen’s integral inequality, and the stochastic derivative formula, we obtain a new set of passivity criteria in the form of LMIs ensuring that the stochastic GRNs are stochastically stable and satisfy the passivity performance index.

  4. (iv)

    Several numerical simulations are provided to establish the feasibility of the proposed method.

Notations. \(\mathcal{R}^{n}\) means the n-dimensional real space; \(\mathcal{R}^{m \times n}\) is the set of all \(m\times n\) matrices; \(\mathcal{A} > 0\) represents a positive definite symmetric matrix; \(\mathcal{A} \geq 0\) denotes a positive semidefinite symmetric matrix; \(X^{T}\) or \(\mathcal{A}^{T}\) is the transpose of a vector X or a matrix \(\mathcal{A}\).

2 Problem formulation

We consider the following genetic network with n mRNAs and n proteins:

$$\begin{aligned} &\dot{n}_{i}(\varsigma ) = -a_{i}n_{i}(\varsigma -\nu _{1}) + b_{i}\bigl(r_{1}\bigl(\varsigma -\mu (\varsigma )\bigr), r_{2}\bigl(\varsigma -\mu (\varsigma )\bigr), \ldots, r_{n}\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr), \\ &\dot{r}_{i}(\varsigma ) = -c_{i}r_{i}(\varsigma -\nu _{2}) + d_{i}n_{i}\bigl(\varsigma -\eta (\varsigma )\bigr), \quad i=1,2,\ldots,n, \end{aligned}$$
(1)

where \(\nu _{1}\) and \(\nu _{2}\) denote the leakage delays of model (1), \(\eta (\varsigma )\) and \(\mu (\varsigma )\) are discrete time-varying delays, and \(n_{i}(\varsigma )\) and \(r_{i}(\varsigma )\) are the concentrations of mRNA and protein of the ith node. In model (1) there is only one output but there are several inputs for a single node or gene; \(a_{i}\) and \(c_{i}\) denote the degradation or dilution rates of mRNA and protein, \(d_{i}\) represents the translation rate, and the regulatory function \(b_{i}\) is of the form \(b_{i}(r_{1}(\varsigma ), r_{2}(\varsigma ),\ldots, r_{n}(\varsigma )) = \sum_{j=1}^{n} b_{ij}(r_{j}(\varsigma ))\). The monotonic function \(b_{ij}(r_{j}(\varsigma ))\) is of the Hill form. If the transcription factor j is an activator of gene i, then \(b_{ij}(r_{j}(\varsigma )) = \alpha _{ij} \frac{(r_{j}(\varsigma )/\beta _{j})^{H_{j}}}{1+(r_{j}(\varsigma )/\beta _{j})^{H_{j}}}\), and if the transcription factor j is a repressor of gene i, then \(b_{ij}(r_{j}(\varsigma )) = \alpha _{ij} \frac{1}{1+(r_{j}(\varsigma )/\beta _{j})^{H_{j}}} \). Here \(H_{j}\) denotes the Hill coefficient, \(\beta _{j}\) is a positive constant, and \(\alpha _{ij}\) is a bounded real constant describing the transcriptional rate of transcription factor j to gene i. Hence Eq. (1) can be transformed into the form

$$\begin{aligned} \textstyle\begin{cases} \dot{n}_{i}(\varsigma ) = -a_{i}n_{i}(\varsigma -\nu _{1}) + \sum_{j=1}^{n} b_{ij}g_{j}(r_{j}(\varsigma -\mu (\varsigma ))) + u_{i}, \\ \dot{r}_{i}(\varsigma ) = -c_{i}r_{i}(\varsigma -\nu _{2}) + d_{i}n_{i}( \varsigma -\eta (\varsigma )),\quad i=1,2,\ldots,n. \end{cases}\displaystyle \end{aligned}$$
(2)

In Eq. (2), \(g_{j}(x)=(x/\beta _{j})^{H_{j}}/(1+(x/\beta _{j})^{H_{j}})\), and \(u_{i}=\sum_{j\in V_{i1}}\alpha _{ij}\) is the basal rate. Based on the above analysis, Eq. (2) can be written in the form

$$\begin{aligned} \textstyle\begin{cases} \dot{n}(\varsigma ) = -\mathcal{A}n(\varsigma -\nu _{1}) + \mathcal{B}g(r(\varsigma -\mu (\varsigma ))) + u, \\ \dot{r}(\varsigma ) = -\mathcal{C}r(\varsigma -\nu _{2}) + \mathcal{D}n(\varsigma -\eta (\varsigma )), \end{cases}\displaystyle \end{aligned}$$
(3)

where \(n(\varsigma )=[n_{1}(\varsigma ), n_{2}(\varsigma ), \ldots, n_{n}(\varsigma )]^{T}\), \(r(\varsigma )=[r_{1}(\varsigma ), r_{2}(\varsigma ), \ldots, r_{n}(\varsigma )]^{T}\), \(n(\varsigma -\eta (\varsigma ))=[n_{1}(\varsigma -\eta (\varsigma )), n_{2}(\varsigma -\eta (\varsigma )), \ldots, n_{n}(\varsigma -\eta (\varsigma ))]^{T}\), \(g(r(\varsigma -\mu (\varsigma )))=[g_{1}(r_{1}(\varsigma -\mu (\varsigma ))), g_{2}(r_{2}(\varsigma -\mu (\varsigma ))), \ldots, g_{n}(r_{n}(\varsigma -\mu (\varsigma )))]^{T}\), \(\mathcal{A}=\operatorname{diag} (a_{1}, a_{2}, \ldots, a_{n})\), \(\mathcal{B}=(b_{ij})_{n \times n}\), \(\mathcal{C}=\operatorname{diag} (c_{1}, c_{2}, \ldots, c_{n})\), \(\mathcal{D}= \operatorname{diag} (d_{1}, d_{2}, \ldots, d_{n})\), and \(u=[u_{1}, u_{2}, \ldots, u_{n}]^{T}\). Here we shift the intended stable point \((n^{*},r^{*})\) of model (3) to the origin by letting \(x(\varsigma )=n(\varsigma )-n^{*}\), \(y(\varsigma )=r(\varsigma )-r^{*}\). Therefore we have

$$\begin{aligned} \textstyle\begin{cases} \dot{x}(\varsigma ) = -\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))), \\ \dot{y}(\varsigma ) = -\mathcal{C}y(\varsigma -\nu _{2}) + \mathcal{D}x(\varsigma -\eta (\varsigma )), \end{cases}\displaystyle \end{aligned}$$
(4)

where \(x(\varsigma )=[x_{1}(\varsigma ), x_{2}(\varsigma ), \ldots, x_{n}( \varsigma )]^{T}\), \(y(\varsigma )=[y_{1}(\varsigma ), y_{2}(\varsigma ), \ldots, y_{n}( \varsigma )]^{T}\), \(f(y(\varsigma ))=[f_{1}(y_{1}(\varsigma )), f_{2}(y_{2}(\varsigma )), \ldots, f_{n}(y_{n}(\varsigma ))]^{T}\) with \(f(y(\varsigma ))=g(y(\varsigma )+r^{*})-g(r^{*})\), where \(g_{i}\) is an increasing function with monotonic saturation such that for all \(x, y \in R\),

$$\begin{aligned} 0 \leq \frac{g_{i}(x)-g_{i}(y)}{x-y} \leq k_{i}. \end{aligned}$$

It is well known that f obeys the sector condition

$$\begin{aligned} 0 \leq \frac{f_{i}(x)}{x} \leq k_{i}. \end{aligned}$$
(5)
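For instance, for the Hill-type nonlinearity \(f(s)=s^{2}/(s^{2}+1)\) used in the numerical examples of Section 5 (this check is added here for illustration), the slope bound follows from

$$\begin{aligned} f'(s) = \frac{2s}{(1+s^{2})^{2}}, \qquad \max_{s \geq 0} f'(s) = f' \biggl(\frac{1}{\sqrt{3}} \biggr) = \frac{3\sqrt{3}}{8} \approx 0.6495, \end{aligned}$$

so both the slope condition above and the sector condition (5) hold with \(k_{i}=0.65\), consistent with the value \(K=0.65I\) used in the numerical examples.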

In this work, we incorporate stochastic noise and distributed delays into the standard genetic network. The resulting system is given by

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) + \mathcal{E} \int _{ \varsigma -l(\varsigma )}^{\varsigma }f(y(s))\,ds + u_{1}(\varsigma ) ]\,d\varsigma \\ \phantom{\mathrm{d}x(\varsigma ) =}{} + \delta (x(\varsigma ), x(\varsigma -\eta ( \varsigma )), y(\varsigma ), y(\varsigma -\mu (\varsigma ))) \,d\omega ( \varsigma ), \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma -\nu _{2}) + \mathcal{D}x(\varsigma -\eta (\varsigma )) + \mathcal{F} \int _{ \varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds + u_{2}(\varsigma ) ]\,d \varsigma, \end{cases}\displaystyle \end{aligned}$$
(6)

where \(\eta (\varsigma )\), \(\mu (\varsigma )\), \(h(\varsigma )\), and \(l(\varsigma )\) are time-varying delays satisfying

$$\begin{aligned} &0\leq \eta _{1}\leq \eta (\varsigma )\leq \eta _{2}, \qquad \dot{\eta }( \varsigma )\leq \eta _{d}, \qquad 0\leq \mu _{1}\leq \mu (\varsigma ) \leq \mu _{2}, \qquad \dot{\mu }( \varsigma )\leq \mu _{d}, \\ &0\leq h(\varsigma )\leq h, \qquad \dot{h}(\varsigma )\leq h_{d}, \qquad 0\leq l(\varsigma )\leq l, \qquad \dot{l}(\varsigma )\leq l_{d}, \end{aligned}$$
(7)

where \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, h, l, \eta _{d}, \mu _{d}, h_{d}, l_{d}\) are constant scalars. We define an m-dimensional Brownian motion \(\omega (\varsigma )=[\omega _{1}(\varsigma ), \omega _{2}(\varsigma ), \ldots, \omega _{m}(\varsigma )]^{T}\in R^{m}\) on a probability space, and δ satisfies the following linear growth condition for any matrices \(G_{1}, G_{2}, G_{3}\), and \(G_{4}\) of appropriate dimensions:

$$\begin{aligned} &\operatorname{trace}\bigl[\delta ^{T}\bigl(\varsigma,x(\varsigma ), x \bigl(\varsigma -\eta ( \varsigma )\bigr), y(\varsigma ), y\bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr)\delta \bigl( \varsigma,x(\varsigma ), \\ &\qquad x\bigl(\varsigma -\eta (\varsigma )\bigr), y( \varsigma ), y\bigl( \varsigma -\mu (\varsigma )\bigr)\bigr)\bigr] \\ &\quad \leq x^{T}(\varsigma )G_{1}x(\varsigma ) + x^{T}\bigl(\varsigma -\eta ( \varsigma )\bigr)G_{2} x\bigl( \varsigma -\eta (\varsigma )\bigr) + y^{T}(\varsigma )G_{3}y( \varsigma ) \\ &\qquad{}+ y^{T}\bigl(\varsigma -\mu (\varsigma )\bigr)G_{4} y\bigl(\varsigma - \mu (\varsigma )\bigr). \end{aligned}$$
(8)
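As a simple illustration (our own example, not necessarily the noise used in the numerical section), take a scalar Brownian motion (\(m=1\)) and the linear intensity \(\delta = \Sigma _{1}x(\varsigma ) + \Sigma _{2}x(\varsigma -\eta (\varsigma )) + \Sigma _{3}y(\varsigma ) + \Sigma _{4}y(\varsigma -\mu (\varsigma ))\) with constant matrices \(\Sigma _{1},\ldots,\Sigma _{4}\). Then

$$\begin{aligned} \operatorname{trace}\bigl[\delta ^{T}\delta \bigr] \leq{}& 4x^{T}(\varsigma )\Sigma _{1}^{T}\Sigma _{1}x(\varsigma ) + 4x^{T}\bigl(\varsigma -\eta (\varsigma )\bigr)\Sigma _{2}^{T}\Sigma _{2}x\bigl(\varsigma -\eta (\varsigma )\bigr) \\ &{}+ 4y^{T}(\varsigma )\Sigma _{3}^{T}\Sigma _{3}y(\varsigma ) + 4y^{T}\bigl(\varsigma -\mu (\varsigma )\bigr)\Sigma _{4}^{T}\Sigma _{4}y\bigl(\varsigma -\mu (\varsigma )\bigr), \end{aligned}$$

so (8) holds with \(G_{i}=4\Sigma _{i}^{T}\Sigma _{i}\), \(i=1,\ldots,4\).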

By extending the model with impulsive control we obtain

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) + \mathcal{E} \int _{ \varsigma -l(\varsigma )}^{\varsigma }f(y(s))\,ds + u_{1}(\varsigma ) ]\,d\varsigma \\ \phantom{\mathrm{d}x(\varsigma ) = }{} + \delta (x(\varsigma ), x(\varsigma -\eta ( \varsigma )), y(\varsigma ), y(\varsigma -\mu (\varsigma ))) \,d\omega ( \varsigma ), \quad \varsigma \neq \varsigma _{k}, \\ \Delta x(\varsigma _{k}) = N_{k} x(\varsigma _{k}), \varsigma = \varsigma _{k},\quad k=1,2,\ldots, \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma -\nu _{2}) + \mathcal{D}x(\varsigma -\eta (\varsigma )) + \mathcal{F} \int _{ \varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds + u_{2}(\varsigma ) ]\,d \varsigma, \quad \varsigma \neq \varsigma _{k}, \\ \Delta y(\varsigma _{k}) = G_{k} y(\varsigma _{k}), \varsigma = \varsigma _{k}, \quad k=1,2,\ldots, \end{cases}\displaystyle \end{aligned}$$
(9)

and

$$\begin{aligned} z_{x}(\varsigma ) = H_{1} x(\varsigma ), \qquad z_{y}(\varsigma ) = H_{2} y(\varsigma ). \end{aligned}$$

To prove the required results, we need the following definition.

Definition 1

The genetic network (9) is said to be stochastically passive if there exists \(\gamma > 0\) such that

$$\begin{aligned} 2 \mathbb{E} \biggl\{ \int _{0}^{\varsigma _{f}} \bigl[u_{1}^{T}(s)z_{x}(s) + u_{2}^{T}(s)z_{y}(s)\bigr]\,ds \biggr\} \geq - \gamma \mathbb{E} \biggl\{ \int _{0}^{\varsigma _{f}} \bigl[u_{1}^{T}(s)u_{1}(s) + u_{2}^{T}(s)u_{2}(s)\bigr]\,ds \biggr\} \end{aligned}$$
(10)

for all \(\varsigma _{f}\geq 0\).

3 Main results

Here we discuss the passivity condition for a class of stochastic GRNs (9) subject to leakage and distributed delays via impulsive control.

By a model transformation the network system (6) can be rewritten as follows:

$$\begin{aligned} &\frac{d}{d\varsigma } \biggl[ x(\varsigma ) - \mathcal{A} \int _{ \varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr] \\ &\quad= \biggl[- \mathcal{A}x(\varsigma ) + \mathcal{B}f\bigl(y\bigl(\varsigma -\mu (\varsigma ) \bigr)\bigr) + \mathcal{E} \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds + u_{1}( \varsigma ) \biggr], \\ &\frac{d}{d\varsigma } \biggl[ y(\varsigma ) - \mathcal{C} \int _{ \varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr] \\ &\quad= \biggl[- \mathcal{C}y(\varsigma ) + \mathcal{D}x\bigl(\varsigma -\eta (\varsigma )\bigr) + \mathcal{F} \int _{\varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds + u_{2}( \varsigma ) \biggr]. \end{aligned}$$
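This rewriting uses only the Leibniz rule applied to the leakage terms of the drift: since

$$\begin{aligned} \frac{d}{d\varsigma } \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds = x(\varsigma ) - x(\varsigma -\nu _{1}), \end{aligned}$$

we have \(\frac{d}{d\varsigma } [ x(\varsigma ) - \mathcal{A} \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds ] = \dot{x}(\varsigma ) - \mathcal{A}x(\varsigma ) + \mathcal{A}x(\varsigma -\nu _{1})\), so the delayed term \(-\mathcal{A}x(\varsigma -\nu _{1})\) in (6) is traded for \(-\mathcal{A}x(\varsigma )\) at the cost of the integral state \(\int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds\); the second equation is obtained in the same way with \(\mathcal{C}\) and \(\nu _{2}\).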

Now we derive delay-dependent sufficient LMI conditions for the passivity of system (9).

Theorem 3.1

For given scalars \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, \nu _{1}, \nu _{2}, h, l, \eta _{d}, \mu _{d}, h_{d}\), and \(l_{d}\), the genetic network (9) is stochastically passive if there exist scalars \(\lambda _{i}>0 \) \((i=1,2,3)\), positive definite matrices \(P_{i}, Q_{i} \ (i=1,2,\ldots,4), R_{i} \ (i=1,2,\ldots,8), Z_{i}\ (i=1, \ldots,4)\), positive diagonal matrices \(T_{1}, T_{2}\), and matrices \(M_{i}, N_{i}, S_{i}, U_{i} \) \((i=1,2)\) of appropriate dimensions such that the following conditions are satisfied:

$$\begin{aligned} &P_{1} \leq \lambda _{1} I, \end{aligned}$$
(11)
$$\begin{aligned} &Z_{3} \leq \lambda _{2} I, \end{aligned}$$
(12)
$$\begin{aligned} &Z_{4} \leq \lambda _{3} I, \end{aligned}$$
(13)
$$\begin{aligned} & \begin{bmatrix} P_{1} & (I-N_{k})^{T}P_{1} \\ * & P_{1} \end{bmatrix} \geq 0, \quad k \in \mathcal{Z}_{+}, \end{aligned}$$
(14)
$$\begin{aligned} & \begin{bmatrix} P_{2} & (I-G_{k})^{T}P_{2} \\ * & P_{2} \end{bmatrix} \geq 0, \quad k \in \mathcal{Z}_{+}, \end{aligned}$$
(15)

and

$$\begin{aligned} \Phi = \begin{bmatrix} \Phi _{1} & \sqrt{\eta _{2}}N & \sqrt{\eta _{12}}M & \sqrt{\eta _{12}}S & N & M & S \\ * & -Z_{1} & 0 & 0 & 0 & 0 & 0 \\ * & * & -Z_{2} & 0 & 0 & 0 & 0 \\ * & * & * & -(Z_{1}+Z_{2}) & 0 & 0 & 0 \\ * & * & * & * & -Z_{3} & 0 & 0 \\ * & * & * & * & * & -Z_{4} & 0 \\ * & * & * & * & * & * & -(Z_{3}+Z_{4}) \end{bmatrix} < 0, \end{aligned}$$
(16)

where

$$\begin{aligned} &\Phi _{1} = (\Phi _{ij})_{21 \times 21}, \\ &\Phi _{11} = -P_{1}\mathcal{A} - \mathcal{A}^{T}P_{1} + \bigl[ \lambda _{1} + \eta _{2} \lambda _{2} + (\eta _{2} - \eta _{1}) \lambda _{3} \bigr] G_{1} + P_{3} + Q_{1} + Q_{2} + R_{1} + R_{3} + \nu _{1}^{2}R_{5} \\ & \phantom{\Phi _{11} = }{} + h^{2}R_{8} + N_{1} + N_{1}^{T}, \qquad \Phi _{12} = - N_{1}+N_{2}^{T} + S_{1} - M_{1}, \\ & \Phi _{13} = \Phi _{14} = 0, \qquad \Phi _{15} = M_{1}, \qquad\Phi _{16} = -S_{1},\qquad \Phi _{17}=\Phi _{18}=\Phi _{19}=0,\\ & \Phi _{1_{10}}= P_{1}\mathcal{B} + U_{1}\mathcal{B},\qquad \Phi _{1_{11}} = -U_{1}, \qquad \Phi _{1_{12}} = U_{1}-H_{1}^{T}+P_{1}, \\ &\Phi _{1_{13}} = 0, \qquad\Phi _{1_{14}}=-U_{1} \mathcal{A}, \qquad \Phi _{1_{15}} = 0, \qquad \Phi _{1_{16}} = \mathcal{A}^{T}P_{1}\mathcal{A},\\ & \Phi _{1_{17}}= \Phi _{1_{18}}= \Phi _{1_{19}}=0, \qquad\Phi _{1_{20}} = P_{1}\mathcal{E}+U_{1}\mathcal{E}, \qquad \Phi _{1_{21}}=0, \\ & \Phi _{22}=\bigl[\lambda _{1} + \eta _{2} \lambda _{2} + (\eta _{2} - \eta _{1})\lambda _{3} \bigr] G_{2}-(1-\eta _{d})R_{1}-N_{2}-N_{2}^{T}+S_{2} +S_{2}^{T}-M_{2}-M_{2}^{T}, \\ & \Phi _{23}=D^{T}P_{2},\qquad \Phi _{24}=0, \qquad \Phi _{25}=M_{2}, \qquad \Phi _{26}=-S_{2}, \\ &\Phi _{27} = \Phi _{28}=\Phi _{29}=\Phi _{2_{10}}=\Phi _{2_{11}}= \Phi _{2_{12}}=\Phi _{2_{13}}= \Phi _{2_{14}}=\Phi _{2_{15}}=\Phi _{2_{16}}=0,\\ & \Phi _{2_{17}}=-\mathcal{D}^{T}P_{2}\mathcal{C}, \qquad \Phi _{2_{18}} = \Phi _{2_{19}}=\Phi _{2_{20}}=\Phi _{2_{21}}=0,\\ & \Phi _{33}=-P_{2}\mathcal{C}- \mathcal{C}^{T}P_{2}+\bigl[\lambda _{1} + \eta _{2} \lambda _{2} + (\eta _{2} - \eta _{1})\lambda _{3}\bigr] G_{3}+P_{4}+Q_{3} +Q_{4}+R_{2}+\nu _{2}^{2}R_{6},\\ & \Phi _{34}=\Phi _{35}= \Phi _{36}=\Phi _{37}=\Phi _{38}=0,\qquad \Phi _{39}=KT_{1}, \qquad \Phi _{3_{10}} = \Phi _{3_{11}}=\Phi _{3_{12}}=0,\\ & \Phi _{3_{13}}=-H_{2}^{T}+P_{2}, \qquad \Phi _{3_{14}}=\Phi _{3_{15}}= \Phi _{3_{16}}=0, \qquad \Phi _{3_{17}}=\mathcal{C}^{T}P_{2}\mathcal{C}, \\ &\Phi _{3_{18}} = \Phi _{3_{19}} =\Phi _{3_{20}}=0,\qquad \Phi _{3_{21}}=P_{2}\mathcal{F}, \\ & \Phi _{44}= \bigl[\lambda _{1} + \eta _{2} \lambda _{2} + ( \eta _{2} - \eta _{1})\lambda _{3} \bigr] G_{4}-(1- \mu _{d})R_{2}, \\ &\Phi _{45} = \Phi _{46}=\Phi _{47}=\Phi _{48}=\Phi _{49}=0,\qquad \Phi _{4_{10}} =KT_{2},\\ & \Phi _{4_{11}} =\Phi _{4_{12}} = \Phi _{4_{13}} =\Phi _{4_{14}} =\Phi _{4_{15}} = \Phi _{4_{16}} =\Phi _{4_{17}} =\Phi _{4_{18}} =\Phi _{4_{19}} =\Phi _{4_{20}} =\Phi _{4_{21}}=0, \\ & \Phi _{55}=-Q_{1},\\ & \Phi _{56}=\Phi _{57}=\Phi _{58}=\Phi _{59} = \Phi _{5_{10}}=\Phi _{5_{11}}=\Phi _{5_{12}}=\Phi _{5_{13}}= \Phi _{5_{14}}=\Phi _{5_{15}}=\Phi _{5_{16}}=\Phi _{5_{17}}\\ &\phantom{\Phi _{56}}=\Phi _{5_{18}}= \Phi _{5_{19}}= \Phi _{5_{20}}=\Phi _{5_{21}}=0, \\ &\Phi _{66} = -Q_{2}, \Phi _{67}=\Phi _{68}=\Phi _{69}= \Phi _{6_{10}}=\Phi _{6_{11}}=\Phi _{6_{12}}=\Phi _{6_{13}}=\Phi _{6_{14}}= \Phi _{6_{15}}=\Phi _{6_{16}}=\Phi _{6_{17}}\\ & \phantom{\Phi _{66} }= \Phi _{6_{18}} = \Phi _{6_{19}}=\Phi _{6_{20}}=\Phi _{6_{21}}=0,\qquad \Phi _{77}=-Q_{3}, \\ & \Phi _{78}=\Phi _{79}=\Phi _{7_{10}}=\Phi _{7_{11}}= \Phi _{7_{12}}=\Phi _{7_{13}}=\Phi _{7_{14}}=\Phi _{7_{15}} \\ &\phantom{\Phi _{66} } = \Phi _{7_{16}}=\Phi _{7_{17}}=\Phi _{7_{18}}=\Phi _{7_{19}}= \Phi _{7_{20}}=\Phi _{7_{21}}=0, \qquad\Phi _{88}=-Q_{4}, \\ & \Phi _{89}= \Phi _{8_{10}}=\Phi _{8_{11}}=\Phi _{8_{12}}=\Phi _{8_{13}} \\ & \phantom{\Phi _{66} } = \Phi _{8_{14}}= \Phi _{8_{15}}=\Phi _{8_{16}}=\Phi _{8_{17}}= \Phi _{8_{18}}=\Phi _{8_{19}}=\Phi 
_{8_{20}}=\Phi _{8_{21}}=0,\\ & \Phi _{99}=R_{4}+l^{2}R_{7}-2T_{1}, \\ &\Phi _{9_{10}} = \Phi _{9_{11}}=\Phi _{9_{12}}=\Phi _{9_{13}}= \Phi _{9_{14}}=\Phi _{9_{15}}=\Phi _{9_{16}}=\Phi _{9_{17}}=\Phi _{9_{18}}\\ &\phantom{\Phi _{9_{10}}}= \Phi _{9_{19}} =\Phi _{9_{20}}=\Phi _{9_{21}}=0, \\ &\Phi _{10_{10}} = -2T_{2}, \qquad \Phi _{10_{11}}= \mathcal{B}^{T}U_{2}^{T}, \qquad \Phi _{10_{12}}=\Phi _{10_{13}}=\Phi _{10_{14}}= \Phi _{10_{15}}=0,\\ & \Phi _{10_{16}}=-\mathcal{B}^{T}P_{1} \mathcal{A}, \\ &\Phi _{10_{17}} = \Phi _{10_{18}}=\Phi _{10_{19}}=\Phi _{10_{20}}= \Phi _{10_{21}}=0, \qquad \Phi _{11_{11}}=\eta _{2}Z_{1}+(\eta _{2}- \eta _{1})Z_{2}-U_{2}-U_{2}^{T}, \\ &\Phi _{11_{12}} = U_{2}, \qquad \Phi _{11_{13}}=0, \Phi _{11_{14}}=-U_{2}\mathcal{A}, \\ & \Phi _{11_{15}}=\Phi _{11_{16}}= \Phi _{11_{17}}=\Phi _{11_{18}}=\Phi _{11_{19}}=0, \qquad \Phi _{11_{20}}=U_{2} \mathcal{E}, \\ &\Phi _{11_{21}} = 0, \qquad \Phi _{12_{12}}=-\gamma I,\qquad \Phi _{12_{13}}=\Phi _{12_{14}}=\Phi _{12_{15}}=0, \qquad \Phi _{12_{16}}=-P_{1} \mathcal{A}, \\ & \Phi _{12_{17}}=\Phi _{12_{18}}=\Phi _{12_{19}}= \Phi _{12_{20}} = \Phi _{12_{21}}=0, \qquad \Phi _{13_{13}}=-\gamma I,\\ & \Phi _{13_{14}}=\Phi _{13_{15}}=\Phi _{13_{16}}=0, \qquad \Phi _{13_{17}}=-P_{2} \mathcal{C}, \\ &\Phi _{13_{18}} = \Phi _{13_{19}}=\Phi _{13_{20}}=\Phi _{13_{21}}=0,\qquad \Phi _{14_{14}}=-P_{3}, \\ & \Phi _{14_{15}}=\Phi _{14_{16}}= \Phi _{14_{17}}=\Phi _{14_{18}}=\Phi _{14_{19}} = \Phi _{14_{20}}=\Phi _{14_{21}}=0, \qquad \Phi _{15_{15}}=-P_{4},\\ & \Phi _{15_{16}}=\Phi _{15_{17}}=\Phi _{15_{18}}=\Phi _{15_{19}}= \Phi _{15_{20}}=\Phi _{15_{21}}=0, \\ &\Phi _{16_{16}} = -R_{5}, \qquad \Phi _{16_{17}}=\Phi _{16_{18}}= \Phi _{16_{19}}=0, \qquad \Phi _{16_{20}}=- \mathcal{A}^{T}P_{1} \mathcal{E}, \qquad \Phi _{16_{21}}=0, \\ & \Phi _{17_{17}}=-R_{6}, \qquad \Phi _{17_{18}} = \Phi _{17_{19}}=\Phi _{17_{20}}=0,\qquad \Phi _{17_{21}}=-\mathcal{C}^{T}P_{2}\mathcal{F}, \\ & \Phi _{18_{18}}=-(1-h_{d})R_{3},\qquad \Phi _{18_{19}}=\Phi _{18_{20}}=\Phi _{18_{21}}=0,\qquad \Phi _{19_{19}} = -(1-l_{d})R_{4}, \\ & \Phi _{19_{20}} = \Phi _{19_{21}}=0, \qquad \Phi _{20_{20}}= -R_{7}, \qquad\Phi _{20_{21}}=0, \Phi _{21_{21}}=-R_{8}, \\ &M^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} M_{1}^{T} & M_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &N^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} N_{1}^{T} & N_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &S^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} S_{1}^{T} & S_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &U^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad 
}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} U_{1}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & U_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr]. \end{aligned}$$

Proof

See the Appendix. □
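Conditions (11)–(16) are linear matrix inequalities in the decision variables, so their feasibility can be checked with standard semidefinite-programming software. The following Python sketch (our illustration with cvxpy and a small illustrative impulsive gain, not the authors’ MATLAB implementation) shows the pattern for conditions (11) and (14); the full block matrix \(\Phi \) in (16) would be assembled in the same way with cp.bmat:

import numpy as np
import cvxpy as cp

n = 2
I = np.eye(n)
Nk = 0.4 * I                                  # illustrative impulsive gain, as in Example 1

# decision variables entering conditions (11) and (14)
P1 = cp.Variable((n, n), symmetric=True)
lam1 = cp.Variable(nonneg=True)

eps = 1e-6
constraints = [
    P1 >> eps * I,                            # P1 > 0
    P1 << lam1 * I,                           # condition (11): P1 <= lambda_1 I
    cp.bmat([[P1, (I - Nk).T @ P1],           # condition (14); the block matrix
             [P1 @ (I - Nk), P1]]) >> 0,      # is symmetric since P1 = P1^T
]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)                  # 'optimal' means the tested LMIs are feasible
if prob.status == cp.OPTIMAL:
    print(P1.value)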

Remark 1

To diminish the conservativeness, while computing the derivative of \(\mathcal{L}V_{4}(x(\varsigma ), y(\varsigma ))\), the integral term \(-\int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s)\,ds \) is split into the two parts \(-\int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s)\,ds \) and \(-\int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{1}^{T}(s)Z_{2}g_{1}(s)\,ds\) by using the information \(\eta _{1} \leq \eta (\varsigma ) \leq \eta _{2}\), which gives less conservative results.

Remark 2

In the existing works, passivity criteria for genetic networks are rarely considered, and only a few investigators have studied the passivity problem of stochastic genetic networks; we found only one closely related paper [17]. Moreover, leakage delay is considered in the genetic network model (9). The novelty of this paper lies in the attention paid to the impulsive effects and leakage delays appearing in stochastic genetic networks and the corresponding passivity analysis on the given time interval, so the main result of this paper is more general. In [17] the authors studied the passivity of stochastic genetic networks with Markovian jump parameters. In comparison with [17], in this paper we consider leakage delays, distributed delays, and impulsive control, and the discrete delays are assumed to belong to an interval. Therefore our results extend and improve those in [17]. To the best of our knowledge, the passivity of genetic networks subject to leakage and distributed delays under impulsive effects has never been studied in the literature, which highlights the novelty of our approach.

Remark 3

Consider the network system (9) without impulsive control and stochastic effects:

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) + \mathcal{E} \int _{ \varsigma -l(\varsigma )}^{\varsigma }f(y(s))\,ds + u_{1}(\varsigma ) ]\,d\varsigma, \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma -\nu _{2}) + \mathcal{D}x(\varsigma -\eta (\varsigma )) + \mathcal{F} \int _{ \varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds + u_{2}(\varsigma ) ]\,d \varsigma, \end{cases}\displaystyle \end{aligned}$$
(17)

and

$$\begin{aligned} z_{x}(\varsigma ) = H_{1} x(\varsigma ), \qquad z_{y}(\varsigma ) = H_{2} y(\varsigma ). \end{aligned}$$

Using the result of Theorem 3.1, we get the following corollary.

Corollary 3.2

Under Definition 1, for given constants \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, \nu _{1}, \nu _{2}, h, l, \eta _{d}, \mu _{d}, h_{d}\), and \(l_{d}\), the genetic network (17) is passive if there are positive definite matrices \(P_{i}, Q_{i} \ (i=1,2,\ldots,4), R_{i} \ (i=1,2,\ldots,8)\), \(Z_{1}, Z_{2}\), positive diagonal matrices \(T_{1}, T_{2}\), and matrices \(M_{i}, N_{i}, S_{i}, U_{i} \ (i=1,2)\) of appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Phi = \begin{bmatrix} \Phi _{1} & \sqrt{\eta _{2}}N & \sqrt{\eta _{12}}M & \sqrt{\eta _{12}}S \\ * & -Z_{1} & 0 & 0 \\ * & * & -Z_{2} & 0 \\ * & * & * & -(Z_{1}+Z_{2}) \end{bmatrix} < 0, \end{aligned}$$
(18)

where

$$\begin{aligned} &\Phi _{1} = (\Phi _{ij})_{21 \times 21}, \\ &\Phi _{11} = -P_{1}\mathcal{A} - \mathcal{A}^{T}P_{1} + P_{3} + Q_{1} + Q_{2} + R_{1} + R_{3} + \nu _{1}^{2}R_{5} + h^{2}R_{8} + N_{1} + N_{1}^{T}, \\ &\Phi _{12} = - N_{1}+N_{2}^{T} + S_{1} - M_{1},\qquad \Phi _{13} = \Phi _{14} = 0, \qquad \Phi _{15} = M_{1}, \\ &\Phi _{16} = -S_{1}, \qquad \Phi _{17}=\Phi _{18}=\Phi _{19}=0,\qquad \Phi _{1_{10}}= P_{1}\mathcal{B} + U_{1}\mathcal{B},\qquad \Phi _{1_{11}} = -U_{1}, \\ & \Phi _{1_{12}} = U_{1}-H_{1}^{T}+P_{1}, \qquad \Phi _{1_{13}} = 0, \qquad \Phi _{1_{14}}=-U_{1} \mathcal{A}, \qquad \Phi _{1_{15}} = 0,\\ & \Phi _{1_{16}} = \mathcal{A}^{T}P_{1}\mathcal{A}, \qquad \Phi _{1_{17}}= \Phi _{1_{18}}= \Phi _{1_{19}}=0, \\ &\Phi _{1_{20}} = P_{1}\mathcal{E}+U_{1}\mathcal{E}, \qquad \Phi _{1_{21}}=0, \\ & \Phi _{22}= -(1-\eta _{d})R_{1}-N_{2}-N_{2}^{T}+S_{2} +S_{2}^{T}-M_{2}-M_{2}^{T}, \qquad\Phi _{23} = \mathcal{D}^{T}P_{2}, \\ & \Phi _{24}=0, \Phi _{25}=M_{2}, \qquad \Phi _{26}=-S_{2}, \\ &\Phi _{27} = \Phi _{28}=\Phi _{29}=\Phi _{2_{10}}=\Phi _{2_{11}}= \Phi _{2_{12}}=\Phi _{2_{13}}= \Phi _{2_{14}}=\Phi _{2_{15}}=\Phi _{2_{16}}=0,\\ & \Phi _{2_{17}}=-\mathcal{D}^{T}P_{2}\mathcal{C}, \qquad \Phi _{2_{18}} = \Phi _{2_{19}}=\Phi _{2_{20}}=\Phi _{2_{21}}=0,\\ & \Phi _{33}=-P_{2}\mathcal{C}- \mathcal{C}^{T}P_{2} +P_{4}+Q_{3}+Q_{4}+R_{2}+ \nu _{2}^{2}R_{6}, \\ &\Phi _{34} = \Phi _{35}=\Phi _{36}=\Phi _{37}=\Phi _{38}=0,\qquad \Phi _{39}=KT_{1}, \qquad \Phi _{3_{10}}=\Phi _{3_{11}}=\Phi _{3_{12}}=0, \\ &\Phi _{3_{13}} = -H_{2}^{T}+P_{2}, \qquad \Phi _{3_{14}}= \Phi _{3_{15}}=\Phi _{3_{16}}=0, \qquad \Phi _{3_{17}}=\mathcal{C}^{T}P_{2} \mathcal{C}, \\ &\Phi _{3_{18}} = \Phi _{3_{19}} =\Phi _{3_{20}}=0,\qquad \Phi _{3_{21}}=P_{2}\mathcal{F}, \qquad \Phi _{44}=-(1-\mu _{d})R_{2}, \\ &\Phi _{45} = \Phi _{46}=\Phi _{47}=\Phi _{48}=\Phi _{49}=0,\qquad \Phi _{4_{10}} =KT_{2}, \\ & \Phi _{4_{11}} =\Phi _{4_{12}} = \Phi _{4_{13}} =\Phi _{4_{14}} =\Phi _{4_{15}} = \Phi _{4_{16}} =\Phi _{4_{17}} =\Phi _{4_{18}} =\Phi _{4_{19}} =\Phi _{4_{20}} =\Phi _{4_{21}}=0, \\ & \Phi _{55}=-Q_{1},\\ & \Phi _{56}=\Phi _{57}=\Phi _{58}=\Phi _{59} = \Phi _{5_{10}}=\Phi _{5_{11}}=\Phi _{5_{12}}=\Phi _{5_{13}}= \Phi _{5_{14}}=\Phi _{5_{15}}=\Phi _{5_{16}}=\Phi _{5_{17}}\\ &\phantom{\Phi _{56}}=\Phi _{5_{18}}= \Phi _{5_{19}}= \Phi _{5_{20}}=\Phi _{5_{21}}=0, \\ &\Phi _{66} = -Q_{2}, \Phi _{67}=\Phi _{68}=\Phi _{69}= \Phi _{6_{10}}=\Phi _{6_{11}}=\Phi _{6_{12}}=\Phi _{6_{13}}=\Phi _{6_{14}}= \Phi _{6_{15}}=\Phi _{6_{16}}\\ & \phantom{\Phi _{66}}=\Phi _{6_{17}}= \Phi _{6_{18}} = \Phi _{6_{19}}=\Phi _{6_{20}}=\Phi _{6_{21}}=0,\qquad \Phi _{77}=-Q_{3}, \\ & \Phi _{78}=\Phi _{79}=\Phi _{7_{10}}=\Phi _{7_{11}}= \Phi _{7_{12}}=\Phi _{7_{13}}=\Phi _{7_{14}}=\Phi _{7_{15}} \\ &\phantom{\Phi _{66}} = \Phi _{7_{16}}=\Phi _{7_{17}}=\Phi _{7_{18}}=\Phi _{7_{19}}= \Phi _{7_{20}}=\Phi _{7_{21}}=0, \qquad \Phi _{88}=-Q_{4}, \\ & \Phi _{89}= \Phi _{8_{10}}=\Phi _{8_{11}}=\Phi _{8_{12}}=\Phi _{8_{13}} = \Phi _{8_{14}} =\Phi _{8_{15}}= \Phi _{8_{16}}=\Phi _{8_{17}}= \Phi _{8_{18}}=\Phi _{8_{19}}\\ &\phantom{\Phi _{89}}=\Phi _{8_{20}}=\Phi _{8_{21}}=0,\qquad \Phi _{99}=R_{4}+l^{2}R_{7}-2T_{1}, \\ &\Phi _{9_{10}} = \Phi _{9_{11}}=\Phi _{9_{12}}=\Phi _{9_{13}}= \Phi _{9_{14}}=\Phi _{9_{15}}=\Phi _{9_{16}}=\Phi _{9_{17}}=\Phi _{9_{18}}\\ &\phantom{\Phi _{9_{10}}}= \Phi _{9_{19}} =\Phi _{9_{20}}=\Phi _{9_{21}}=0, \\ &\Phi _{10_{10}} = -2T_{2}, \qquad \Phi _{10_{11}}= \mathcal{B}^{T}U_{2}^{T}, \qquad \Phi _{10_{12}}=\Phi _{10_{13}}=\Phi 
_{10_{14}}= \Phi _{10_{15}}=0, \\ &\Phi _{10_{16}}=-\mathcal{B}^{T}P_{1} \mathcal{A},\qquad \Phi _{10_{17}} = \Phi _{10_{18}}=\Phi _{10_{19}}=\Phi _{10_{20}}= \Phi _{10_{21}}=0, \\ & \Phi _{11_{11}}=\eta _{2}Z_{1}+(\eta _{2}- \eta _{1})Z_{2}-U_{2}-U_{2}^{T}, \\ &\Phi _{11_{12}} = U_{2},\qquad \Phi _{11_{13}}=0,\qquad \Phi _{11_{14}}=-U_{2}\mathcal{A}, \\ & \Phi _{11_{15}}= \Phi _{11_{16}}= \Phi _{11_{17}}=\Phi _{11_{18}}=\Phi _{11_{19}}=0, \qquad \Phi _{11_{20}}=U_{2} \mathcal{E},\qquad \Phi _{11_{21}} = 0, \\ & \Phi _{12_{12}}=-\gamma I,\qquad \Phi _{12_{13}}=\Phi _{12_{14}}=\Phi _{12_{15}}=0, \qquad \Phi _{12_{16}}=-P_{1} \mathcal{A}, \\ & \Phi _{12_{17}}=\Phi _{12_{18}}=\Phi _{12_{19}}= \Phi _{12_{20}} = \Phi _{12_{21}}=0,\qquad \Phi _{13_{13}}=-\gamma I,\\ & \Phi _{13_{14}}=\Phi _{13_{15}}=\Phi _{13_{16}}=0, \qquad \Phi _{13_{17}}=-P_{2} \mathcal{C}, \\ &\Phi _{13_{18}} = \Phi _{13_{19}}=\Phi _{13_{20}}=\Phi _{13_{21}}=0,\qquad \Phi _{14_{14}}=-P_{3}, \\ & \Phi _{14_{15}}=\Phi _{14_{16}}= \Phi _{14_{17}}=\Phi _{14_{18}}=\Phi _{14_{19}} = \Phi _{14_{20}}=\Phi _{14_{21}}=0, \qquad \Phi _{15_{15}}=-P_{4},\\ & \Phi _{15_{16}}=\Phi _{15_{17}}=\Phi _{15_{18}}=\Phi _{15_{19}}= \Phi _{15_{20}}=\Phi _{15_{21}}=0, \\ &\Phi _{16_{16}} = -R_{5}, \Phi _{16_{17}}=\Phi _{16_{18}}= \Phi _{16_{19}}=0, \qquad \Phi _{16_{20}}=- \mathcal{A}^{T}P_{1} \mathcal{E}, \qquad \Phi _{16_{21}}=0, \\ & \Phi _{17_{17}}=-R_{6}, \qquad \Phi _{17_{18}} = \Phi _{17_{19}}=\Phi _{17_{20}}=0,\qquad \Phi _{17_{21}}=-\mathcal{C}^{T}P_{2}\mathcal{F}, \\ & \Phi _{18_{18}}=-(1-h_{d})R_{3},\qquad \Phi _{18_{19}}=\Phi _{18_{20}}=\Phi _{18_{21}}=0, \qquad \Phi _{19_{19}} = -(1-l_{d})R_{4}, \\ & \Phi _{19_{20}}= \Phi _{19_{21}}=0, \qquad \Phi _{20_{20}} = -R_{7}, \qquad \Phi _{20_{21}}=0,\qquad \Phi _{21_{21}}=-R_{8}, \\ &M^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} M_{1}^{T} & M_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &N^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} N_{1}^{T} & N_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &S^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} S_{1}^{T} & S_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &U^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} U_{1}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & U_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr]. \end{aligned}$$

Proof

The proof follows directly from Theorem 3.1. □

Remark 4

Consider the genetic network (17) without distributed delays (i.e., with \(\mathcal{E}=\mathcal{F}=0\)), which reduces to

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) + u_{1}(\varsigma ) ]\,d\varsigma, \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma -\nu _{2}) + \mathcal{D}x(\varsigma -\eta (\varsigma )) + u_{2}(\varsigma ) ]\,d \varsigma, \end{cases}\displaystyle \end{aligned}$$
(19)

and

$$\begin{aligned} z_{x}(\varsigma ) = H_{1} x(\varsigma ), z_{y}( \varsigma ) = H_{2} y(\varsigma ). \end{aligned}$$

Corollary 3.3 gives the passivity criteria for system (19).

Corollary 3.3

Under Definition 1, for given constants \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, \nu _{1}, \nu _{2}, \eta _{d}\), and \(\mu _{d}\), the genetic network (19) is passive if there exist positive definite matrices \(P_{i}, Q_{i} \ (i=1,2,\ldots,4), R_{1}, R_{2}, R_{5}, R_{6}, Z_{1}, Z_{2}\), positive diagonal matrices \(T_{1}, T_{2}\), and matrices \(M_{i}, N_{i}, S_{i}, U_{i} \ (i=1,2)\) of appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Phi = \begin{bmatrix} \Phi _{1} & \sqrt{\eta _{2}}N & \sqrt{\eta _{12}}M & \sqrt{\eta _{12}}S \\ * & -Z_{1} & 0 & 0 \\ * & * & -Z_{2} & 0 \\ * & * & * & -(Z_{1}+Z_{2}) \end{bmatrix} < 0, \end{aligned}$$
(20)

where

$$\begin{aligned} &\Phi _{1} = (\Phi _{ij})_{17 \times 17}, \\ &\Phi _{11} = -P_{1}\mathcal{A} - \mathcal{A}^{T}P_{1} + P_{3} + Q_{1} + Q_{2} + R_{1} + \nu _{1}^{2}R_{5} + N_{1} + N_{1}^{T},\\ & \Phi _{12} = - N_{1}+N_{2}^{T} + S_{1} - M_{1}, \qquad \Phi _{13} = \Phi _{14} = 0,\qquad \Phi _{15} = M_{1},\qquad \Phi _{16} = -S_{1}, \\ & \Phi _{17}=\Phi _{18}=\Phi _{19}=0,\qquad \Phi _{1_{10}}= P_{1}\mathcal{B} + U_{1}\mathcal{B}, \\ &\Phi _{1_{11}} = -U_{1}, \qquad \Phi _{1_{12}} = U_{1}-H_{1}^{T}+P_{1},\qquad \Phi _{1_{13}} = 0, \qquad \Phi _{1_{14}}=-U_{1}\mathcal{A}, \\ & \Phi _{1_{15}} = 0, \qquad \Phi _{1_{16}} = \mathcal{A}^{T}P_{1} \mathcal{A},\qquad \Phi _{1_{17}} = 0, \\ & \Phi _{22} = -(1-\eta _{d})R_{1}-N_{2}-N_{2}^{T}+S_{2} +S_{2}^{T}-M_{2}-M_{2}^{T}, \qquad \Phi _{23} = \mathcal{D}^{T}P_{2}, \Phi _{24}=0, \\ &\Phi _{25} = M_{2},\qquad \Phi _{26}=-S_{2},\\ & \Phi _{27} =\Phi _{28}=\Phi _{29}=\Phi _{2_{10}}=\Phi _{2_{11}}=\Phi _{2_{12}}= \Phi _{2_{13}}= \Phi _{2_{14}}=\Phi _{2_{15}}=\Phi _{2_{16}}=0, \\ &\Phi _{2_{17}} = -\mathcal{D}^{T}P_{2}\mathcal{C}, \qquad \Phi _{33}=-P_{2}\mathcal{C}-\mathcal{C}^{T}P_{2} +P_{4}+Q_{3}+Q_{4}+R_{2}+ \nu _{2}^{2}R_{6}, \\ &\Phi _{34} = \Phi _{35}=\Phi _{36}=\Phi _{37}=\Phi _{38}=0, \qquad\Phi _{39}=KT_{1}, \qquad \Phi _{3_{10}}=\Phi _{3_{11}}=\Phi _{3_{12}}=0, \\ &\Phi _{3_{13}} = -H_{2}^{T}+P_{2}, \qquad \Phi _{3_{14}}= \Phi _{3_{15}}=\Phi _{3_{16}}=0, \qquad \Phi _{3_{17}}=\mathcal{C}^{T}P_{2} \mathcal{C}, \\ & \Phi _{44}=-(1-\mu _{d})R_{2}, \qquad \Phi _{45} = \Phi _{46}=\Phi _{47}=\Phi _{48}=\Phi _{49}=0,\qquad \Phi _{4_{10}} =KT_{2}, \\ & \Phi _{4_{11}} =\Phi _{4_{12}} = \Phi _{4_{13}} =\Phi _{4_{14}} =\Phi _{4_{15}} = \Phi _{4_{16}} =\Phi _{4_{17}}=0, \qquad \Phi _{55}=-Q_{1},\\ & \Phi _{56}=\Phi _{57}=\Phi _{58}=\Phi _{59} = \Phi _{5_{10}}=\Phi _{5_{11}}=\Phi _{5_{12}}=\Phi _{5_{13}}= \Phi _{5_{14}}=\Phi _{5_{15}}=\Phi _{5_{16}}=\Phi _{5_{17}}=0,\\ & \Phi _{66} = -Q_{2},\\ &\Phi _{67} = \Phi _{68}=\Phi _{69}=\Phi _{6_{10}}=\Phi _{6_{11}}= \Phi _{6_{12}}=\Phi _{6_{13}}=\Phi _{6_{14}}=\Phi _{6_{15}}=\Phi _{6_{16}}= \Phi _{6_{17}}=0, \\ & \Phi _{77}=-Q_{3},\\ & \Phi _{78} = \Phi _{79}=\Phi _{7_{10}}=\Phi _{7_{11}}=\Phi _{7_{12}}= \Phi _{7_{13}}=\Phi _{7_{14}}=\Phi _{7_{15}} = \Phi _{7_{16}}=\Phi _{7_{17}}=0, \\ &\Phi _{88} = -Q_{4}, \qquad \Phi _{89}=\Phi _{8_{10}}=\Phi _{8_{11}}= \Phi _{8_{12}}=\Phi _{8_{13}}=\Phi _{8_{14}}=\Phi _{8_{15}}=\Phi _{8_{16}}= \Phi _{8_{17}}=0, \\ &\Phi _{99} = -2T_{1}, \qquad \Phi _{9_{10}}=\Phi _{9_{11}}= \Phi _{9_{12}}=\Phi _{9_{13}}=\Phi _{9_{14}}=\Phi _{9_{15}}=\Phi _{9_{16}}= \Phi _{9_{17}}=0, \\ &\Phi _{10_{10}} = -2T_{2}, \qquad \Phi _{10_{11}}= \mathcal{B}^{T}U_{2}^{T}, \qquad \Phi _{10_{12}}=\Phi _{10_{13}}=\Phi _{10_{14}}= \Phi _{10_{15}}=0, \\ & \Phi _{10_{16}}=-\mathcal{B}^{T}P_{1} \mathcal{A}, \qquad\Phi _{10_{17}} = 0, \qquad \Phi _{11_{11}}=\eta _{2}Z_{1}+( \eta _{2}-\eta _{1})Z_{2}-U_{2}-U_{2}^{T}, \\ & \Phi _{11_{12}}= U_{2}, \Phi _{11_{13}}=0, \qquad\Phi _{11_{14}} = -U_{2}\mathcal{A}, \qquad \Phi _{11_{15}}= \Phi _{11_{16}}=\Phi _{11_{17}}=0, \\ & \Phi _{12_{12}}=-\gamma I,\qquad \Phi _{12_{13}}=\Phi _{12_{14}}= \Phi _{12_{15}}=0,\qquad \Phi _{12_{16}} = -P_{1}\mathcal{A}, \\ & \Phi _{12_{17}}=0, \Phi _{13_{13}}=-\gamma I, \qquad \Phi _{13_{14}}=\Phi _{13_{15}}= \Phi _{13_{16}}=0, \qquad \Phi _{13_{17}}=-P_{2}\mathcal{C}, \\ &\Phi _{14_{14}} = -P_{3}, \qquad \Phi _{14_{15}}=\Phi _{14_{16}}= \Phi _{14_{17}}=0, \qquad \Phi _{15_{15}}=-P_{4}, \qquad \Phi _{15_{16}}= \Phi _{15_{17}}=0, \\ &\Phi 
_{16_{16}} = -R_{5}, \Phi _{16_{17}}=0, \Phi _{17_{17}}=-R_{6}, \\ &M^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} M_{1}^{T} & M_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &N^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} N_{1}^{T} & N_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &S^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} S_{1}^{T} & S_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &U^{T} =\bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} U_{1}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & U_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr]. \end{aligned}$$

Proof

Setting \(R_{3}=R_{4}=R_{7}=R_{8}=0\) in Corollary 3.2 and deleting \(x(\varsigma -h(\varsigma ))\), \(f(y(\varsigma -l(\varsigma )))\), \(\int _{\varsigma -l(\varsigma )}^{\varsigma }f(y(s))\,ds \), and \(\int _{\varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds \) from \(\zeta (\varsigma )\), we obtain LMI (20); the details are omitted. □

Remark 5

To the best of our knowledge, even systems (17) and (19) have not been investigated in the previous literature, since they include leakage delays. Therefore the results in Corollaries 3.2 and 3.3 are new.

Remark 6

Consider the following genetic network without leakage delays:

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma ) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) + u_{1}(\varsigma ) ]\,d\varsigma, \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma ) + \mathcal{D}x(\varsigma -\eta (\varsigma )) + u_{2}(\varsigma ) ]\,d \varsigma, \end{cases}\displaystyle \end{aligned}$$
(21)

and

$$\begin{aligned} z_{x}(\varsigma ) = H_{1} x(\varsigma ), z_{y}( \varsigma ) = H_{2} y(\varsigma ). \end{aligned}$$

Corollary 3.4

Under Definition 1, for given constants \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, \eta _{d}\), and \(\mu _{d}\), the genetic network (21) is passive if there are positive definite matrices \(P_{1}, P_{2}, Q_{i},\ (i=1,2,\ldots,4), R_{1}, R_{2}, Z_{1}, Z_{2}\), positive diagonal matrices \(T_{1}, T_{2}\), and matrices \(M_{i}, N_{i}, S_{i}, U_{i} \ (i=1,2)\) of appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Phi = \begin{bmatrix} \Phi _{1} & \sqrt{\eta _{2}}N & \sqrt{\eta _{12}}M & \sqrt{\eta _{12}}S \\ * & -Z_{1} & 0 & 0 \\ * & * & -Z_{2} & 0 \\ * & * & * & -(Z_{1}+Z_{2}) \end{bmatrix} < 0, \end{aligned}$$
(22)

where

$$\begin{aligned} &\Phi _{1} = (\Phi _{ij})_{13 \times 13}, \\ &\Phi _{11} = -P_{1}\mathcal{A} - \mathcal{A}^{T}P_{1} + Q_{1} + Q_{2} + R_{1} + N_{1} + N_{1}^{T}-U_{1}\mathcal{A}-\mathcal{A}^{T}U_{1}^{T}, \\ & \Phi _{12} = - N_{1}+N_{2}^{T} + S_{1} - M_{1},\qquad \Phi _{13} = \Phi _{14} = 0,\qquad \Phi _{15} = M_{1},\qquad \Phi _{16} = -S_{1}, \\ & \Phi _{17}=\Phi _{18}=\Phi _{19}=0,\qquad \Phi _{1_{10}}= P_{1}\mathcal{B} + U_{1}\mathcal{B}, \\ &\Phi _{1_{11}} = -U_{1}-\mathcal{A}^{T}U_{2}^{T}, \qquad \Phi _{1_{12}} = U_{1}-H_{1}^{T}+P_{1}, \qquad \Phi _{1_{13}} = 0, \\ &\Phi _{22} = -(1-\eta _{d})R_{1}-N_{2}-N_{2}^{T}+S_{2} +S_{2}^{T}-M_{2}-M_{2}^{T}, \qquad \Phi _{23} = \mathcal{D}^{T}P_{2}, \qquad \Phi _{24}=0, \\ &\Phi _{25} = M_{2}, \qquad \Phi _{26}=-S_{2}, \qquad \Phi _{27} =\Phi _{28}=\Phi _{29}=\Phi _{2_{10}}=\Phi _{2_{11}}=\Phi _{2_{12}}= \Phi _{2_{13}}=0, \\ &\Phi _{33} = -P_{2}\mathcal{C}-\mathcal{C}^{T}P_{2} +Q_{3}+Q_{4}+R_{2},\qquad \Phi _{34}= \Phi _{35}=\Phi _{36}=\Phi _{37}=\Phi _{38}=0, \\ &\Phi _{39} = KT_{1}, \qquad \Phi _{3_{10}}=\Phi _{3_{11}}= \Phi _{3_{12}}=0, \qquad \Phi _{3_{13}}=-H_{2}^{T}+P_{2}, \\ & \Phi _{44}=-(1- \mu _{d})R_{2}, \qquad \Phi _{45} = \Phi _{46}=\Phi _{47}=\Phi _{48}=\Phi _{49}=0,\qquad \Phi _{4_{10}} =KT_{2}, \\ & \Phi _{4_{11}} =\Phi _{4_{12}} = \Phi _{4_{13}}=0, \qquad \Phi _{55}=-Q_{1}, \\ &\Phi _{56} = \Phi _{57}=\Phi _{58}=\Phi _{59} = \Phi _{5_{10}}= \Phi _{5_{11}}=\Phi _{5_{12}}=\Phi _{5_{13}}=0, \qquad \Phi _{66} = -Q_{2}, \\ &\Phi _{67} = \Phi _{68}=\Phi _{69}=\Phi _{6_{10}}=\Phi _{6_{11}}= \Phi _{6_{12}}=\Phi _{6_{13}}=0, \qquad \Phi _{77}=-Q_{3}, \\ &\Phi _{78} = \Phi _{79}=\Phi _{7_{10}}=\Phi _{7_{11}}=\Phi _{7_{12}}= \Phi _{7_{13}}=0, \qquad \Phi _{88} = -Q_{4},\\ & \Phi _{89}=\Phi _{8_{10}}= \Phi _{8_{11}}=\Phi _{8_{12}}=\Phi _{8_{13}}=0, \\ &\Phi _{99} = -2T_{1}, \qquad \Phi _{9_{10}}=\Phi _{9_{11}}= \Phi _{9_{12}}=\Phi _{9_{13}}=0, \\ & \Phi _{10_{10}}=-2T_{2},\qquad \Phi _{10_{11}}= \mathcal{B}^{T}U_{2}^{T},\qquad \Phi _{10_{12}} = \Phi _{10_{13}}=0, \\ & \Phi _{11_{11}}= \eta _{2}Z_{1}+(\eta _{2}-\eta _{1})Z_{2}-U_{2}-U_{2}^{T}, \qquad \Phi _{11_{12}}= U_{2}, \qquad \Phi _{11_{13}}=0, \\ &\Phi _{12_{12}} = -\gamma I, \qquad \Phi _{12_{13}}=0,\qquad \Phi _{13_{13}} = -\gamma I, \\ &M^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} M_{1}^{T} & M_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &N^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} N_{1}^{T} & N_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &S^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} S_{1}^{T} & S_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &U^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{}} U_{1}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & U_{2}^{T} & 0 & 0 \end{array}\displaystyle \bigr]. \end{aligned}$$

Proof

Choose the L–K functional candidate as follows:

$$\begin{aligned} &V_{1}\bigl(x(\varsigma ),y(\varsigma )\bigr) = x^{T}( \varsigma ) P_{1} x(\varsigma ) + y^{T}(\varsigma ) P_{2} y(\varsigma ), \\ &V_{2}\bigl(x(\varsigma ),y(\varsigma )\bigr) = \int _{\varsigma - \eta _{1}}^{\varsigma }x^{T}(s)Q_{1}x(s) \,ds + \int _{\varsigma -\eta _{2}}^{\varsigma }x^{T}(s)Q_{2}x(s) \,ds \\ &\phantom{V_{2}(x(\varsigma ),y(\varsigma )) =}{} + \int _{\varsigma -\mu _{1}}^{\varsigma }y^{T}(s)Q_{3}y(s) \,ds + \int _{\varsigma -\mu _{2}}^{\varsigma }y^{T}(s)Q_{4}y(s) \,ds, \\ &V_{3}\bigl(x(\varsigma ),y(\varsigma )\bigr) = \int _{\varsigma - \eta (\varsigma )}^{\varsigma }x^{T}(s)R_{1}x(s) \,ds + \int _{\varsigma - \mu (\varsigma )}^{\varsigma }y^{T}(s)R_{2}y(s) \,ds, \\ &V_{4}\bigl(x(\varsigma ),y(\varsigma )\bigr) = \int _{-\eta _{2}}^{0} \int _{\varsigma +\theta }^{\varsigma }g_{1}^{T}(s) Z_{1} g_{1}(s)\,ds\,d \theta + \int _{-\eta _{2}}^{-\eta _{1}} \int _{\varsigma +\theta }^{\varsigma }g_{1}^{T}(s) Z_{2} g_{1}(s)\,ds\,d\theta. \end{aligned}$$
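To illustrate how these functionals generate the entries of (22) (this intermediate step is spelled out here for clarity), along the trajectories of (21) the first functional gives

$$\begin{aligned} \dot{V}_{1}\bigl(x(\varsigma ),y(\varsigma )\bigr) ={}& 2x^{T}(\varsigma )P_{1} \bigl[-\mathcal{A}x(\varsigma ) + \mathcal{B}f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) + u_{1}(\varsigma ) \bigr] \\ &{}+ 2y^{T}(\varsigma )P_{2} \bigl[-\mathcal{C}y(\varsigma ) + \mathcal{D}x\bigl(\varsigma -\eta (\varsigma )\bigr) + u_{2}(\varsigma ) \bigr], \end{aligned}$$

which is the source of the blocks \(-P_{1}\mathcal{A}-\mathcal{A}^{T}P_{1}\) in \(\Phi _{11}\), \(P_{1}\mathcal{B}\) in \(\Phi _{1_{10}}\), and \(\mathcal{D}^{T}P_{2}\) in \(\Phi _{23}\); the remaining functionals are handled exactly as in Theorem 3.1.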

Similarly to Theorem 3.1, taking

$$\begin{aligned} \zeta ^{T}(\varsigma ) = {} & \bigl[ x^{T}(\varsigma )\quad x^{T}\bigl( \varsigma -\eta (\varsigma )\bigr)\quad y^{T}( \varsigma ) \quad y^{T}\bigl( \varsigma -\mu (\varsigma )\bigr) \quad x^{T}(\varsigma -\eta _{1}) \quad x^{T}( \varsigma -\eta _{2}) \\ & y^{T}(\varsigma -\mu _{1})\quad y^{T}(\varsigma -\mu _{2})\quad f^{T}\bigl(y(\varsigma )\bigr)\quad f^{T}\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr)\quad g_{1}^{T}(\varsigma ) \\ & u_{1}^{T}( \varsigma )\quad u_{2}^{T}( \varsigma ) \bigr], \end{aligned}$$

we can obtain (22). □

4 Stability criteria of GRN

In this section, we reduce the LMI condition of Corollary 3.4 to a stability condition and compare it with the results reported in [10].

Remark 7

Consider the following genetic network system:

$$\begin{aligned} \textstyle\begin{cases} \mathrm{d}x(\varsigma ) = [-\mathcal{A}x(\varsigma ) + \mathcal{B}f(y(\varsigma -\mu (\varsigma ))) ]\,d\varsigma, \\ \mathrm{d}y(\varsigma ) = [ -\mathcal{C}y(\varsigma ) + \mathcal{D}x(\varsigma -\eta (\varsigma )) ]\,d\varsigma. \end{cases}\displaystyle \end{aligned}$$
(23)

Theorem 4.1

For given constants \(\eta _{1}, \eta _{2}, \mu _{1}, \mu _{2}, \eta _{d}\), and \(\mu _{d}\), the genetic network (23) is stable if there exist positive definite matrices \(P_{1}, P_{2}, Q_{i} \ (i=1,2,\ldots,4), R_{1}, R_{2}, Z_{1}, Z_{2}\), positive diagonal matrices \(T_{1}, T_{2}\), and matrices \(M_{i}, N_{i}, S_{i}, U_{i} \ (i=1,2)\) of appropriate dimensions such that the following LMI holds:

$$\begin{aligned} \Phi = \begin{bmatrix} \Phi _{1} & \sqrt{\eta _{2}}N & \sqrt{\eta _{12}}M & \sqrt{\eta _{12}}S \\ * & -Z_{1} & 0 & 0 \\ * & * & -Z_{2} & 0 \\ * & * & * & -(Z_{1}+Z_{2}) \end{bmatrix} < 0, \end{aligned}$$
(24)

where

$$\begin{aligned} &\Phi _{1} = (\Phi _{ij})_{11 \times 11}, \\ &\Phi _{11} = -P_{1}\mathcal{A} - \mathcal{A}^{T}P_{1} + Q_{1} + Q_{2} + R_{1} + N_{1} + N_{1}^{T}-U_{1}\mathcal{A}-\mathcal{A}^{T}U_{1}^{T}, \\ & \Phi _{12} = - N_{1}+N_{2}^{T} + S_{1} - M_{1}, \qquad \Phi _{13} = \Phi _{14} = 0, \qquad \Phi _{15} = M_{1},\qquad \Phi _{16} = -S_{1}, \\ &\Phi _{17}=\Phi _{18}=\Phi _{19}=0,\qquad \Phi _{1_{10}}= P_{1}B + U_{1}B, \qquad \Phi _{1_{11}} = -U_{1}-\mathcal{A}^{T}U_{2}^{T}, \\ & \Phi _{22} = -(1-\eta _{d})R_{1}-N_{2}-N_{2}^{T}+S_{2} +S_{2}^{T}-M_{2}-M_{2}^{T}, \qquad \Phi _{23} = \mathcal{D}^{T}P_{2}, \\ &\Phi _{24} = 0, \qquad \Phi _{25}=M_{2},\qquad \Phi _{26}=-S_{2},\qquad \Phi _{27} =\Phi _{28}=\Phi _{29}=\Phi _{2_{10}}=\Phi _{2_{11}}=0, \\ &\Phi _{33} = -P_{2}\mathcal{C}-\mathcal{C}^{T}P_{2} +Q_{3}+Q_{4}+R_{2},\qquad \Phi _{34}= \Phi _{35}=\Phi _{36}=\Phi _{37}=\Phi _{38}=0, \\ &\Phi _{39} = KT_{1}, \qquad \Phi _{3_{10}}=\Phi _{3_{11}}=0,\qquad \Phi _{44}=-(1-\mu _{d})R_{2}, \\ &\Phi _{45} = \Phi _{46}=\Phi _{47}=\Phi _{48}=\Phi _{49}=0,\qquad \Phi _{4_{10}} =KT_{2}, \qquad \Phi _{4_{11}} =0, \Phi _{55}=-Q_{1}, \\ &\Phi _{56} = \Phi _{57}=\Phi _{58}=\Phi _{59} = \Phi _{5_{10}}= \Phi _{5_{11}}=0, \qquad\Phi _{66} = -Q_{2}, \\ &\Phi _{67} = \Phi _{68}=\Phi _{69}=\Phi _{6_{10}}=\Phi _{6_{11}}=0, \qquad\Phi _{77}=-Q_{3}, \\ &\Phi _{78} = \Phi _{79}=\Phi _{7_{10}}=\Phi _{7_{11}}=0, \qquad\Phi _{88} = -Q_{4}, \qquad \Phi _{89}=\Phi _{8_{10}}=\Phi _{8_{11}}=0, \\ &\Phi _{99} = -2T_{1}, \qquad \Phi _{9_{10}}=\Phi _{9_{11}}=0,\qquad \Phi _{10_{10}}=-2T_{2}, \qquad \Phi _{10_{11}}=\mathcal{B}^{T}U_{2}^{T}, \\ &\Phi _{11_{11}} = \eta _{2}Z_{1}+(\eta _{2}-\eta _{1})Z_{2}-U_{2}-U_{2}^{T}, \\ &M^{T} = 2[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{}} M_{1}^{T} & M_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle ], \\ &N^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{}} N_{1}^{T} & N_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &S^{T} =\bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{}} S_{1}^{T} & S_{2}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle \bigr], \\ &U^{T} = \bigl[\textstyle\begin{array}{@{}c@{\quad }c@{\quad}c@{\quad }c@{\quad } c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad } c@{}} U_{1}^{T} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & U_{2}^{T} \end{array}\displaystyle \bigr]. \end{aligned}$$

Proof

Setting \(u_{1}(\varsigma )=u_{2}(\varsigma )=0\) in Corollary 3.4, we can obtain (24). □

5 Numerical examples

Example 1

We consider the GRN (9) subject to leakage delays and impulsive perturbations and take the following parameters:

$$\begin{aligned} &\mathcal{A} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \mathcal{B} = \begin{bmatrix} 0.4 & -0.4 \\ 0 & 0.4 \end{bmatrix}, \qquad \mathcal{C} = \mathcal{D} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix},\qquad \mathcal{E} = \begin{bmatrix} 1.2 & 0 \\ 0 & 1.2 \end{bmatrix}, \\ &\mathcal{F} = \begin{bmatrix} 0.6 & 0 \\ 0 & 0.6 \end{bmatrix}, \qquad H_{1} = \begin{bmatrix} 2 & 1 \\ 0 & -1 \end{bmatrix},\qquad H_{2} = \begin{bmatrix} 2.16 & 1.1 \\ 0 & -1.08 \end{bmatrix},\\ & K = \begin{bmatrix} 0.65 & 0 \\ 0 & 0.65 \end{bmatrix},\qquad N_{k} = G_{k} = \begin{bmatrix} 0.4 & 0 \\ 0 & 0.4 \end{bmatrix}, \qquad G_{1}=G_{2}=G_{3}=G_{4}=0.2I. \end{aligned}$$

The activation function considered in this example is \(f(s)=s^{2}/(s^{2}+1)\). For \(\eta _{1}=0.4, \eta _{2}=1.2, \mu _{1}=\mu _{2}=0.25, \nu _{1}=\nu _{2}=0.15, h=0.2, l=0.15\), and \(\eta _{d}=\mu _{d}=h_{d}=l_{d}=0.1\), solving the LMIs of Theorem 3.1 with MATLAB, we obtain a feasible solution; some of the feasible matrices are:

$$\begin{aligned} &P_{1} = \begin{bmatrix} 129.2220 & 25.7043 \\ 25.7043 & 134.4782 \end{bmatrix}, \qquad P_{2} = \begin{bmatrix} 67.7742 & 0.3776 \\ 0.3776 & 65.7241 \end{bmatrix}, \\ & P_{3} = 10^{3} \times \begin{bmatrix} 9.4552 & 0.0081 \\ 0.0081 & 9.4605 \end{bmatrix}, \qquad P_{4} = \begin{bmatrix} 1.5824 & 0.1443 \\ 0.1443 & 0.9982 \end{bmatrix}, \\ & Q_{1} = 10^{3} \times \begin{bmatrix} 9.4389 & 0.0021 \\ 0.0021 & 7.3254 \end{bmatrix}, \qquad Q_{2} = 10^{3} \times \begin{bmatrix} 8.25478 & 0.0103 \\ 0.0103 & 5.3587 \end{bmatrix}, \\ & R_{1} = 10^{3} \times \begin{bmatrix} 9.9357 & 1.2544 \\ 1.2544 & 9.9357 \end{bmatrix}, \qquad R_{2} = \begin{bmatrix} 25.4005 & -0.0903 \\ -0.0903 & 24.9030 \end{bmatrix}, \\ & R_{3} = \begin{bmatrix} 4.0538 & 0.1067 \\ 0.1067 & 2.4146 \end{bmatrix}. \end{aligned}$$

For the initial conditions \(x(s)=[\cos (s); \sin (s)]\) and \(y(s)=[\sin (s); \cos (s)]\), we present the simulation results for system (9) in Figs. 1 and 2, which show that the constructed system is passive in the sense of Definition 1.

Figure 1. State responses of the mRNA concentration with impulsive control

Figure 2. State responses of the protein concentration with impulsive control
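For readers who wish to reproduce trajectories qualitatively similar to those in Figs. 1 and 2, the following Python sketch integrates system (9) with the data of Example 1 by an Euler–Maruyama scheme. It is only an illustration under stated assumptions: the time-varying delays are frozen at their upper bounds, and the inputs \(u_{1}, u_{2}\), the impulse instants, and the noise intensity δ are our own choices (the paper does not list the ones used for its figures), with δ chosen so that (8) holds with \(G_{i}=0.2I\) as in Example 1.

import numpy as np

# Euler-Maruyama sketch for system (9) with the data of Example 1.
# Assumptions (ours): delays frozen at their bounds, u1 = u2 = 0,
# impulses every 1 time unit, scalar-noise intensity delta as below.
A = np.eye(2); B = np.array([[0.4, -0.4], [0.0, 0.4]])
C = 2.0 * np.eye(2); D = 2.0 * np.eye(2)
E = 1.2 * np.eye(2); F = 0.6 * np.eye(2)
Nk = 0.4 * np.eye(2); Gk = 0.4 * np.eye(2)
nu1 = nu2 = 0.15; eta, mu, h, l = 1.2, 0.25, 0.2, 0.15

f = lambda s: s**2 / (s**2 + 1.0)           # activation function of Example 1

dt, T = 0.005, 20.0
steps = int(round(T / dt))
lag = lambda tau: int(round(tau / dt))      # delay length in steps
hist = lag(max(nu1, nu2, eta, mu, h, l))
imp_period = lag(1.0)                       # impulse instants (assumed)

rng = np.random.default_rng(1)
x = np.zeros((steps + hist + 1, 2)); y = np.zeros_like(x)
ts = (np.arange(steps + hist + 1) - hist) * dt
x[:hist + 1] = np.column_stack([np.cos(ts[:hist + 1]), np.sin(ts[:hist + 1])])
y[:hist + 1] = np.column_stack([np.sin(ts[:hist + 1]), np.cos(ts[:hist + 1])])

for k in range(hist, steps + hist):
    int_f_y = f(y[k - lag(l):k]).sum(axis=0) * dt     # distributed-delay terms,
    int_x = x[k - lag(h):k].sum(axis=0) * dt          # left Riemann sums
    drift_x = -A @ x[k - lag(nu1)] + B @ f(y[k - lag(mu)]) + E @ int_f_y
    drift_y = -C @ y[k - lag(nu2)] + D @ x[k - lag(eta)] + F @ int_x
    # delta = sqrt(0.05)(x + x_eta + y + y_mu), so trace(delta^T delta)
    # <= 0.2(|x|^2 + |x_eta|^2 + |y|^2 + |y_mu|^2), i.e. (8) with G_i = 0.2 I
    delta = np.sqrt(0.05) * (x[k] + x[k - lag(eta)] + y[k] + y[k - lag(mu)])
    dw = np.sqrt(dt) * rng.standard_normal()
    x[k + 1] = x[k] + drift_x * dt + delta * dw
    y[k + 1] = y[k] + drift_y * dt
    if (k + 1 - hist) % imp_period == 0:              # impulsive jumps in (9)
        x[k + 1] = (np.eye(2) + Nk) @ x[k + 1]
        y[k + 1] = (np.eye(2) + Gk) @ y[k + 1]

# x[hist:], y[hist:] now hold the mRNA and protein trajectories on [0, T].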

Example 2

Consider genetic network (17) with the following parameters:

$$\begin{aligned} &\mathcal{A} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \qquad \mathcal{B} = \begin{bmatrix} 1 & -2 \\ 0.8 & 0 \end{bmatrix}, \qquad \mathcal{C} = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \qquad \mathcal{D} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \\ & \mathcal{E} = \begin{bmatrix} 1 & -2 \\ 0.8 & 0 \end{bmatrix}, \qquad \mathcal{F} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad H_{1} = \begin{bmatrix} 1.5 & 0.8 \\ 0 & -0.5 \end{bmatrix},\qquad H_{2} = \begin{bmatrix} 2 & 1 \\ 0 & -1 \end{bmatrix}. \end{aligned}$$

Assume that \(\eta _{1}=0.35, \eta _{2}=1.75, \mu _{1}=\mu _{2}=0.2, \nu _{1}=\nu _{2}=0.1, h=0.3, l=0.25\), and \(f(s)=s^{2}/(s^{2}+1)\). Solving LMI (18) of Corollary 3.2, we find that the GRN (17) is passive in the sense of Definition 1 and list some of the feasible matrices:

$$\begin{aligned} &P_{1} = \begin{bmatrix} 6.1804 & -1.4806 \\ -1.4806 & 14.1601 \end{bmatrix}, \qquad P_{2} = \begin{bmatrix} 42.8973 & 0.0415 \\ 0.0415 & 42.3577 \end{bmatrix},\\ & P_{3} = \begin{bmatrix} 116.8016 & -5.8024 \\ -5.8024 & 148.1912 \end{bmatrix},\qquad P_{4} = \begin{bmatrix} 7.7099 & 0.0497 \\ 0.0497 & 7.4925 \end{bmatrix},\\ & Q_{1} = \begin{bmatrix} 109.8649 & 0.1124 \\ 0.1124 & 109.8649 \end{bmatrix}, \qquad Q_{2} = \begin{bmatrix} 101.3564 & 0.0054 \\ 0.0054 & 101.3564 \end{bmatrix}. \end{aligned}$$

Example 3

Consider genetic network (19) with the following parameter matrices:

$$\begin{aligned} &\mathcal{A} = \begin{bmatrix} 2.7 & 0 & 0 \\ 0 & 2.3 & 0 \\ 0 & 0 & 2.2 \end{bmatrix}, \qquad \mathcal{B} = \begin{bmatrix} 0 & 0 & -1.5 \\ -3.5 & 0 & 0 \\ 0 & -2.4 & 0 \end{bmatrix}, \qquad \mathcal{C} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \\ &\mathcal{D} = \begin{bmatrix} 0.5 & 0 & 0 \\ 0 & 0.5 & 0 \\ 0 & 0 & 0.5 \end{bmatrix}, \qquad H_{1} = \begin{bmatrix} 0.2 & 0.1 & 0 \\ 0 & -0.1 & 0 \\ 0 & 0 & 0.2 \end{bmatrix}, \qquad H_{2} = \begin{bmatrix} 0.1 & 0 & 0 \\ 0 & 0.2 & -0.1 \\ 0.1 & 0 & 0.1 \end{bmatrix}. \end{aligned}$$

Choose the activation function as \(f(x)=x^{2}/(x^{2}+1)\), which means that \(K=0.65I\). When \(\eta _{1}=\mu _{1}=1\) and \(\nu _{1}=\nu _{2}=0.1\), the maximum allowable upper bounds (MAUBs) of \(\eta _{2}=\mu _{2}\) for different values of \(\eta _{d}=\mu _{d}\) calculated by Corollary 3.3 are listed in Table 1.
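
The value \(K=0.65I\) can be read off from the maximal slope of this Hill-type function: since \(f'(x)=2x/(1+x^{2})^{2}\) attains its maximum on \([0,\infty )\) at \(x=1/\sqrt{3}\), we have

$$\begin{aligned} \max_{x\geq 0} f'(x) = f' \biggl(\frac{1}{\sqrt{3}} \biggr) = \frac{3\sqrt{3}}{8} \approx 0.6495 \leq 0.65, \end{aligned}$$

so each \(k_{i}\) can be taken as 0.65.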

Table 1 MAUBs of \(\eta _{2}=\mu _{2}\) for different \(\eta _{d}=\mu _{d}\)
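
The MAUB values reported in Table 1 (and in Tables 2–4 below) can be obtained by repeatedly testing LMI feasibility while increasing the delay bound, for example by bisection. A minimal sketch is given below; lmi_is_feasible is a hypothetical placeholder for the feasibility test of Corollary 3.3 (it could be implemented with the CVXPY pattern of Example 1), and the search procedure itself is our illustration rather than part of the original computation.

# Bisection sketch (illustrative only) for a maximum allowable upper bound
# (MAUB) of the delay eta_2. `lmi_is_feasible` is a hypothetical placeholder
# that should return True when the LMI of Corollary 3.3 is feasible for the
# given eta_2 (with eta_1, eta_d, and so on held fixed).
def find_maub(lmi_is_feasible, lo=0.0, hi=10.0, tol=1e-3):
    """Return the largest delay bound in [lo, hi] for which the test passes."""
    if not lmi_is_feasible(lo):
        raise ValueError("LMI infeasible even at the lower bound")
    while lmi_is_feasible(hi):              # enlarge the bracket if necessary
        lo, hi = hi, 2.0 * hi
        if hi > 1e6:
            return float("inf")
    while hi - lo > tol:                    # standard bisection
        mid = 0.5 * (lo + hi)
        if lmi_is_feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Dummy usage (a real test would call an SDP solver inside the lambda):
print(find_maub(lambda eta2: eta2 <= 3.1416))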

Example 4

Consider GRN (21) with the following parameters:

$$\begin{aligned} &\mathcal{A} = \mathcal{C} = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}, \qquad \mathcal{B} = 0.5 \times \begin{bmatrix} 0 & 0 & -2.5 \\ -2.5 & 0 & 0 \\ 0 & -2.5 & 0 \end{bmatrix}, \\ &\mathcal{D} = \begin{bmatrix} 1.2 & 0 & 0 \\ 0 & 1.2 & 0 \\ 0 & 0 & 1.2 \end{bmatrix}, \qquad H_{1} = \begin{bmatrix} 0.2 & 0.1 & 0 \\ 0 & -0.1 & 0 \\ 0 & 0 & 0.2 \end{bmatrix}, \qquad H_{2} = \begin{bmatrix} 0.1 & 0 & 0 \\ 0 & 0.2 & -0.1 \\ 0.1 & 0 & 0.1 \end{bmatrix}. \end{aligned}$$

The activation functions are chosen the same as in Example 3.

When \(\eta _{1}=\mu _{1}=0.5\), the MAUBs of \(\eta _{2}=\mu _{2}\) for various values of \(\eta _{d}=\mu _{d}\) obtained from Corollary 3.4 are listed in Table 2. We can see from Table 2 that the condition given in Corollary 3.4 still guarantees the passivity performance of GRN (21).

Table 2 The MAUBs of \(\eta _{2}=\mu _{2}\) for different \(\eta _{d}=\mu _{d}\)

Example 5

Consider GRN (23) with the following parameters:

$$\begin{aligned} &\mathcal{A} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}, \qquad \mathcal{B} = \begin{bmatrix} 0 & 0 & -2.5 \\ -2.5 & 0 & 0 \\ 0 & -2.5 & 0 \end{bmatrix}, \\ & \mathcal{C} = \begin{bmatrix} 2.5 & 0 & 0 \\ 0 & 2.5 & 0 \\ 0 & 0 & 2.5 \end{bmatrix}, \qquad \mathcal{D} = \begin{bmatrix} 0.8 & 0 & 0 \\ 0 & 0.8 & 0 \\ 0 & 0 & 0.8 \end{bmatrix}. \end{aligned}$$

Using these parameters, the MAUBs of \(\eta _{2}\) for different values of \(\eta _{1}\) when \(\mu _{1}=0.125\) and \(\mu _{2}=0.25\), compared with the results in [10], are listed in Table 3. It is easy to see that our method indeed gives improved results over the existing ones. This point can also be illustrated by another kind of comparison, namely the MAUBs of \(\mu _{2}\) for different values of \(\mu _{1}\) when \(\eta _{1}=0.25, \eta _{2}=0.5\); some computational results are given in Table 4. From Tables 3 and 4 we can clearly see that Theorem 4.1 provides much larger admissible upper bounds than the stability criterion in [10].

Table 3 MAUBs of \(\eta _{2}\) for various values of \(\eta _{1}\) (\(\mu _{1}=0.125, \mu _{2}=0.25\))
Table 4 MAUBs of \(\mu _{2}\) for different values of \(\mu _{1}\) (\(\eta _{1}=0.25, \eta _{2}=0.5\))

6 Conclusion

In this work, we have discussed the stochastic passivity of GRNs with leakage, discrete, and distributed delays under impulsive perturbations. In particular, we utilize stochastic variables satisfying the Bernoulli distribution to model the random uncertainties. Based on the LMI technique [25, 26] and Jensen’s integral inequality, we propose a novel delay-dependent passivity condition, which depends not only on the discrete delay but also on the distributed delay. Numerical simulations are given to demonstrate the effectiveness of the obtained results. The disturbance rejection problem for fractional-order T-S fuzzy neural networks with probabilistic faults and probabilistic cyber-attacks remains untreated and will be the topic of our future work [27, 28].

Availability of data and materials

Data sharing not applicable to this paper as no data sets were generated or analyzed during the current study.

References

  1. Iswarya, M., Raja, R., Cao, J., Niezabitowski, M., Alzabut, J., Maharajan, C.: New results on exponential input-to-state stability analysis of memristor based complex-valued inertial neural networks with proportional and distributed delays. Math. Comput. Simul. (2021). https://doi.org/10.1016/j.matcom.2021.01.020

  2. Iswarya, M., Raja, R., Rajchakit, G., Alzabut, J., Lim, C.P.: A perspective on graph theory based stability analysis of impulsive stochastic recurrent neural networks with time-varying delays. Adv. Differ. Equ. 2019, 1 (2019)

  3. Rajchakit, G., Pratap, A., Raja, R., Cao, J., Alzabut, J., Huang, C.: Hybrid control scheme for projective lag synchronization of Riemann–Liouville sense fractional order memristive BAM neural networks with mixed delays. Mathematics 7, 1–23 (2019)

  4. Stamov, G., Stamova, I., Alzabut, J.: Global exponential stability for a class of impulsive BAM neural networks with distributed delays. Appl. Math. Inf. Sci. 7, 1539–1546 (2013)

  5. Stamov, G.T., Alzabut, J.: Almost periodic solutions of impulsive integro-differential neural networks. Math. Model. Anal. 15, 505–516 (2010)

  6. Lakshmanan, S., Rihan, F.A., Rakkiyappan, R., Park, J.H.: Stability analysis of the differential genetic regulatory networks model with time-varying delays and Markovian jumping parameters. Nonlinear Anal. Hybrid Syst. 14, 1–15 (2014)

  7. Zhang, X., Li, R., Han, C., Yao, R.: Robust stability analysis of uncertain genetic regulatory networks with mixed time delays. Int. J. Mach. Learn. Cybern. 7, 1005–1022 (2016)

  8. Ando, S., Sakamoto, E., Iba, H.: Evolutionary modeling and inference of gene network. Inf. Sci. 145, 237–259 (2002)

  9. Meng, Q., Jiang, H.: Robust stochastic stability analysis of Markovian switching genetic regulatory networks with discrete and distributed delays. Neurocomputing 74, 362–368 (2010)

  10. Wang, W., Zhong, S.: Stochastic stability analysis of uncertain genetic regulatory networks with mixed time-varying delays. Neurocomputing 82, 143–156 (2012)

  11. Asaduzzaman, M., Zulfikar Ali, M.: Existence of solution to fractional order impulsive partial hyperbolic differential equations with infinite delay. Adv. Theory Nonlinear Anal. Appl. 4, 77–91 (2020)

  12. Marasi, H.R., Afshari, H., Daneshbastam, M., Zhai, C.B.: Fixed points of mixed monotone operators for existence and uniqueness of nonlinear fractional differential equations. J. Contemp. Math. Anal. 52, 8–13 (2017)

  13. Chen, B.S., Wang, Y.C.: On the attenuation and amplification of molecular noise in genetic regulatory networks. BMC Bioinform. 7, 52 (2006)

  14. Zhang, X., Wang, Y., Zhang, X.: Improved stochastic integral inequalities to stability analysis of stochastic genetic regulatory networks with mixed time-varying delays. IET Control Theory Appl. 14, 2439–2448 (2020)

  15. Pandiselvi, S., Raja, R., Cao, J., Rajchakit, G.: Stabilization of switched stochastic genetic regulatory networks with leakage and impulsive effects. Neural Process. Lett. 49, 593–610 (2019)

  16. Poongodi, T., Saravanakumar, T., Zhu, Q.: Extended dissipative control for Markovian jump time-delayed systems with bounded disturbances. Math. Probl. Eng. 2020, 1–15 (2020)

  17. Zhang, D., Yu, L.: Passivity analysis for stochastic Markovian switching genetic regulatory networks with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 16, 2985–2992 (2011)

  18. Wang, Y., Cao, Y., Guo, Z., Wen, S.: Passivity and passification of memristive recurrent neural networks with multi-proportional delays and impulse. Appl. Math. Comput. 369, 124838 (2020)

  19. Yang, B., Wang, J., Hao, M., Zeng, H.: Further results on passivity analysis for uncertain neural networks with discrete and distributed delays. Inf. Sci. 430, 77–86 (2018)

  20. Yang, C.B., Huang, T.Z.: Improved stability criteria for a class of neural networks with variable delays and impulsive perturbations. Appl. Math. Comput. 243, 923–935 (2014)

  21. Lakshmikantham, V., Bainov, D., Simeonov, P.: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989)

  22. Nirmala, V.J., Saravanakumar, T., Zhu, Q.: Dissipative criteria for Takagi–Sugeno fuzzy Markovian jumping neural networks with impulsive perturbations using delay partitioning approach. Adv. Differ. Equ. 2019, 140 (2019)

  23. Jian, J., Wan, P.: Global exponential convergence of fuzzy complex-valued neural networks with time-varying delays and impulsive effects. Fuzzy Sets Syst. 338, 23–39 (2018)

  24. Ren, F., Cao, J.: Asymptotic and robust stability of genetic regulatory networks with time-varying delays. Neurocomputing 71, 834–842 (2008)

  25. Baleanu, D., Jajarmi, A., Mohammadi, H., Rezapour, S.: A new study on the mathematical modelling of human liver with Caputo–Fabrizio fractional derivative. Chaos Solitons Fractals 134, 109705 (2020)

  26. Tuan, N.H., Mohammadi, H., Rezapour, S.: A mathematical model for COVID-19 transmission by using the Caputo fractional derivative. Chaos Solitons Fractals 140, 110107 (2020)

  27. Mohammadi, H., Kumar, S., Rezapour, S., Etemad, S.: A theoretical study of the Caputo–Fabrizio fractional modeling for hearing loss due to Mumps virus with optimal control. Chaos Solitons Fractals 144, 110668 (2021)

  28. Rezapour, S., Mohammadi, H., Jajarmi, A.: A new mathematical model for Zika virus transmission. Adv. Differ. Equ. 589, 1–15 (2020)

Acknowledgements

J. Alzabut would like to thank Prince Sultan University and OSTİM Technical University for their support.

Funding

Not applicable.

Author information

Contributions

The authors declare that the study was realized in collaboration with equal responsibility. All authors read and approved the final manuscript.

Corresponding author

Correspondence to J. Alzabut.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Appendix: Proof of Theorem 3.1

Proof

Let

$$\begin{aligned} &g_{1}(\varsigma ) = -\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B}f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) + \mathcal{E} \int _{ \varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds + u_{1}(\varsigma ), \\ &g_{2}(\varsigma ) = \delta \bigl(x(\varsigma ), x\bigl(\varsigma - \eta (\varsigma )\bigr), y(\varsigma ), y\bigl(\varsigma -\mu (\varsigma )\bigr) \bigr). \end{aligned}$$

To prove the passivity criteria for the genetic network (9), we construct the following Lyapunov functional:

$$\begin{aligned} V\bigl(x(\varsigma ),y(\varsigma )\bigr) = \sum_{i=1}^{4} V_{i}\bigl(x(\varsigma ),y( \varsigma )\bigr), \end{aligned}$$
(25)

where

$$\begin{aligned} {V}_{1}\bigl(x(\varsigma ),y(\varsigma )\bigr) = {} & \biggl[ x( \varsigma ) - \mathcal{A} \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr]^{T} P_{1} \biggl[ x(\varsigma ) - \mathcal{A} \int _{ \varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr] \\ &{} + \biggl[ y(\varsigma ) - \mathcal{C} \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr]^{T} P_{2} \biggl[ y(\varsigma ) - \mathcal{C} \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr] \\ &{} + \int _{\varsigma -\nu _{1}}^{\varsigma }x^{T}(s)P_{3}x(s) \,ds + \int _{\varsigma -\nu _{2}}^{\varsigma }y^{T}(s)P_{4}y(s) \,ds, \\ V_{2}\bigl(x(\varsigma ),y(\varsigma )\bigr) = {} & \int _{\varsigma - \eta _{1}}^{\varsigma }x^{T}(s)Q_{1}x(s) \,ds + \int _{\varsigma -\eta _{2}}^{\varsigma }x^{T}(s)Q_{2}x(s) \,ds \\ &{} + \int _{\varsigma -\mu _{1}}^{\varsigma }y^{T}(s)Q_{3}y(s) \,ds + \int _{\varsigma -\mu _{2}}^{\varsigma }y^{T}(s)Q_{4}y(s) \,ds, \\ V_{3}\bigl(x(\varsigma ),y(\varsigma )\bigr) = {} & \int _{\varsigma - \eta (\varsigma )}^{\varsigma }x^{T}(s)R_{1}x(s) \,ds + \int _{\varsigma - \mu (\varsigma )}^{\varsigma }y^{T}(s)R_{2}y(s) \,ds \\ &{} + \int _{\varsigma -h(\varsigma )}^{\varsigma }x^{T}(s)R_{3}x(s) \,ds + \int _{\varsigma -l(\varsigma )}^{\varsigma }f^{T}\bigl(y(s) \bigr)R_{4}f\bigl(y(s)\bigr)\,ds \\ &{} + \nu _{1} \int _{-\nu _{1}}^{0} \int _{\varsigma +\xi }^{\varsigma }x^{T}(s) R_{5} x(s) \,ds \,d\xi + \nu _{2} \int _{-\nu _{2}}^{0} \int _{\varsigma +\xi }^{\varsigma }y^{T}(s) R_{6} y(s) \,ds \,d\xi \\ &{} + l \int _{-l}^{0} \int _{\varsigma +\theta }^{\varsigma }f^{T}\bigl(y(s) \bigr)R_{7}f\bigl(y(s)\bigr)\,ds\,d \theta + h \int _{-h}^{0} \int _{\varsigma +\theta }^{\varsigma }x^{T}(s) R_{8} x(s)\,ds\,d\theta, \\ V_{4}\bigl(x(\varsigma ),y(\varsigma )\bigr) = {}& \int _{-\eta _{2}}^{0} \int _{\varsigma +\theta }^{\varsigma }g_{1}^{T}(s) Z_{1} g_{1}(s)\,ds\,d \theta + \int _{-\eta _{2}}^{-\eta _{1}} \int _{\varsigma +\theta }^{\varsigma }g_{1}^{T}(s) Z_{2} g_{1}(s)\,ds\,d\theta \\ &{} + \int _{-\eta _{2}}^{0} \int _{\varsigma +\theta }^{\varsigma }\operatorname{tr} \bigl[g_{2}^{T}(s) Z_{3} g_{2}(s)\bigr] \,ds\,d\theta + \int _{-\eta _{2}}^{-\eta _{1}} \int _{\varsigma +\theta }^{\varsigma }\operatorname{tr}\bigl[g_{2}^{T}(s) Z_{4} g_{2}(s)\bigr]\,ds\,d \theta. \end{aligned}$$

Applying the stochastic differentiation rule (Itô’s formula) along the trajectories of system (9), we have

$$\begin{aligned} &\mathcal{L}V_{1}\bigl(x(\varsigma ),y(\varsigma )\bigr) \\ &\quad= 2 \biggl[ x( \varsigma ) - \mathcal{A} \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr]^{T} P_{1} \frac{d}{d\varsigma } \biggl[ x(\varsigma ) - \mathcal{A} \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr] \\ &\qquad{} + 2 \biggl[ y(\varsigma ) - \mathcal{C} \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr]^{T} P_{2} \frac{d}{d\varsigma } \biggl[ y( \varsigma ) - \mathcal{C} \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr] + \operatorname{tr} \bigl[g_{2}^{T}(\varsigma )P_{1}g_{2}( \varsigma )\bigr] \\ &\qquad{} + x^{T}(\varsigma )P_{3}x(\varsigma ) - x^{T}(\varsigma -\nu _{1})P_{3}x( \varsigma -\nu _{1}) + y^{T}(\varsigma )P_{4}y(\varsigma ) - y^{T}( \varsigma -\nu _{2})P_{4}y(\varsigma -\nu _{2}) \\ &\quad\leq 2 \biggl[ x(\varsigma ) - \mathcal{A} \int _{ \varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr]^{T} P_{1} \\ &\qquad{}\times \biggl[- \mathcal{A}x(\varsigma ) + \mathcal{B}f\bigl(y\bigl( \varsigma -\mu (\varsigma )\bigr)\bigr) + \mathcal{E} \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds + u_{1}( \varsigma ) \biggr] \\ &\qquad{} + 2 \biggl[ y(\varsigma ) - \mathcal{C} \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr]^{T} \\ &\qquad{}\times P_{2} \biggl[-\mathcal{C}y(\varsigma ) + \mathcal{D}x\bigl(\varsigma - \eta (\varsigma )\bigr) + \mathcal{F} \int _{ \varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds + u_{2}(\varsigma ) \biggr] \\ &\qquad{} + \lambda _{1} \bigl[x^{T}(\varsigma )G_{1}x(\varsigma ) + x^{T}\bigl( \varsigma -\eta (\varsigma )\bigr)G_{2} x\bigl(\varsigma -\eta (\varsigma )\bigr) + y^{T}( \varsigma )G_{3}y(\varsigma ) \\ &\qquad{} + y^{T}\bigl(\varsigma -\mu (\varsigma )\bigr)G_{4} y \bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr] + x^{T}(\varsigma )P_{3}x(\varsigma ) \\ &\qquad{}- x^{T}( \varsigma -\nu _{1})P_{3}x(\varsigma -\nu _{1}) + y^{T}(\varsigma )P_{4}y( \varsigma ) \\ &\qquad{} - y^{T}(\varsigma -\nu _{2})P_{4}y(\varsigma -\nu _{2}), \end{aligned}$$
(26)
$$\begin{aligned} &\mathcal{L}V_{2}\bigl(x(\varsigma ),y(\varsigma )\bigr) \\ &\quad = x^{T}( \varsigma )[Q_{1}+Q_{2}]x(\varsigma )-x^{T}(\varsigma -\eta _{1})Q_{1}x( \varsigma - \eta _{1})-x^{T}(\varsigma -\eta _{2})Q_{2}x( \varsigma - \eta _{2}) \\ &\qquad{}+ y^{T}(\varsigma )[Q_{3}+Q_{4}]y( \varsigma )-y^{T}(\varsigma - \mu _{1})Q_{3}y( \varsigma -\mu _{1})-y^{T}(\varsigma -\mu _{2})Q_{4}y( \varsigma -\mu _{2}), \end{aligned}$$
(27)
$$\begin{aligned} &\mathcal{L}V_{3}\bigl(x(\varsigma ),y(\varsigma )\bigr) \\ &\quad = x^{T}( \varsigma )\bigl[R_{1}+R_{3}+\nu _{1}^{2}R_{5}+h^{2}R_{8} \bigr]x(\varsigma ) + y^{T}( \varsigma )\bigl[R_{2}+\nu _{2}^{2}R_{6}\bigr]y(\varsigma ) \\ &\qquad{}-\bigl(1-\dot{ \eta }( \varsigma )\bigr)x^{T}\bigl(\varsigma -\eta (\varsigma )\bigr) \\ &\qquad{} \times R_{1}x\bigl(\varsigma -\eta (\varsigma )\bigr) -\bigl(1- \dot{\mu }( \varsigma )\bigr)y^{T}\bigl(\varsigma -\mu (\varsigma ) \bigr)R_{2}y\bigl(\varsigma -\mu ( \varsigma )\bigr) \\ &\qquad{}-\bigl(1-\dot{h}( \varsigma )\bigr)x^{T}\bigl(\varsigma -h(\varsigma )\bigr) \\ &\qquad{} \times R_{3}x\bigl(\varsigma -h(\varsigma )\bigr) + f^{T}\bigl(y(\varsigma )\bigr)\bigl[R_{4}+l^{2}R_{7} \bigr]f\bigl(y( \varsigma )\bigr) \\ &\qquad{}-\bigl(1-\dot{l}(\varsigma )\bigr)f^{T} \bigl(y\bigl(\varsigma -l(\varsigma )\bigr)\bigr)R_{4}f\bigl(y\bigl( \varsigma -l(\varsigma )\bigr)\bigr) \\ &\qquad{} -\nu _{1} \int _{\varsigma -\nu _{1}}^{\varsigma }x^{T}(s)R_{5}x(s) \,ds -\nu _{2} \int _{\varsigma -\nu _{2}}^{\varsigma }y^{T}(s)R_{6}y(s) \,ds \\ &\qquad{} - l \int _{\varsigma -l}^{\varsigma }f^{T}\bigl(y(s) \bigr)R_{7}f\bigl(y(s)\bigr)\,ds - h \int _{\varsigma -h}^{\varsigma }x^{T}(s)R_{8}x(s) \,ds \\ &\quad\leq x^{T}(\varsigma )\bigl[R_{1}+R_{3}+\nu _{1}^{2}R_{5}+h^{2}R_{8} \bigr]x( \varsigma ) + y^{T}(\varsigma )\bigl[R_{2}+\nu _{2}^{2}R_{6}\bigr] \\ &\qquad{}\times y(\varsigma )-(1- \eta _{d})x^{T}\bigl(\varsigma -\eta (\varsigma )\bigr) \\ &\qquad{} \times R_{1}x\bigl(\varsigma -\eta (\varsigma )\bigr) -(1-\mu _{d})y^{T}\bigl( \varsigma -\mu (\varsigma ) \bigr)R_{2}y\bigl(\varsigma -\mu (\varsigma )\bigr) \\ &\qquad{}-(1-h_{d})x^{T} \bigl( \varsigma -h(\varsigma )\bigr)R_{3}x\bigl(\varsigma -h(\varsigma )\bigr) \\ &\qquad{} + f^{T}\bigl(y(\varsigma )\bigr)\bigl[R_{4}+l^{2}R_{7} \bigr]f\bigl(y(\varsigma )\bigr)-(1-l_{d})f^{T}\bigl(y\bigl( \varsigma -l(\varsigma )\bigr)\bigr)R_{4}f\bigl(y\bigl(\varsigma -l( \varsigma )\bigr)\bigr) \\ &\qquad{} -\nu _{1} \int _{\varsigma -\nu _{1}}^{\varsigma }x^{T}(s)R_{5}x(s) \,ds -\nu _{2} \int _{\varsigma -\nu _{2}}^{\varsigma }y^{T}(s)R_{6}y(s) \,ds \\ &\qquad{} - l \int _{\varsigma -l}^{\varsigma }f^{T}\bigl(y(s) \bigr)R_{7}f\bigl(y(s)\bigr)\,ds - h \int _{\varsigma -h}^{\varsigma }x^{T}(s)R_{8}x(s) \,ds, \end{aligned}$$
(28)
$$\begin{aligned} &\mathcal{L}V_{4}\bigl(x(\varsigma ),y(\varsigma )\bigr) \\ &\quad= g_{1}^{T}( \varsigma )\bigl[\eta _{2}Z_{1}+( \eta _{2}-\eta _{1})Z_{2}\bigr]g_{1}( \varsigma )- \int _{\varsigma -\eta _{2}}^{\varsigma }g_{1}^{T}(s)Z_{1}g_{1}(s) \,ds - \int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s) \,ds \\ &\qquad{} + \eta _{2} \operatorname{tr}\bigl[g_{2}^{T}(\varsigma )Z_{3}g_{2}( \varsigma )\bigr]- \int _{ \varsigma -\eta _{2}}^{\varsigma }\operatorname{tr} \bigl[g_{2}^{T}(s)Z_{3}g_{2}(s)\bigr] \,ds \\ &\qquad{} + (\eta _{2}-\eta _{1}) \operatorname{tr} \bigl[g_{2}^{T}(\varsigma )Z_{4}g_{2}( \varsigma )\bigr]- \int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} \operatorname{tr} \bigl[g_{2}^{T}(s)Z_{4}g_{2}(s)\bigr] \,ds \\ &\quad\leq g_{1}^{T}(\varsigma )\bigl[\eta _{2}Z_{1}+(\eta _{2}- \eta _{1})Z_{2} \bigr]g_{1}(\varsigma )- \int _{\varsigma -\eta _{2}}^{\varsigma }g_{1}^{T}(s)Z_{1}g_{1}(s) \,ds - \int _{\varsigma -\eta _{2}}^{ \varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s) \,ds \\ &\qquad{} + \eta _{2} \lambda _{2} \bigl[x^{T}( \varsigma )G_{1}x(\varsigma ) + x^{T}\bigl(\varsigma -\eta ( \varsigma )\bigr)G_{2} x\bigl(\varsigma -\eta ( \varsigma )\bigr) + y^{T}(\varsigma )G_{3}y(\varsigma ) \\ &\qquad{} + y^{T}\bigl( \varsigma - \mu (\varsigma )\bigr)G_{4} y\bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr] \\ &\qquad{} + (\eta _{2}-\eta _{1}) \lambda _{3} \bigl[x^{T}(\varsigma )G_{1}x( \varsigma ) + x^{T} \bigl(\varsigma -\eta (\varsigma )\bigr)G_{2} x\bigl(\varsigma - \eta ( \varsigma )\bigr) + y^{T}(\varsigma )G_{3}y(\varsigma ) \\ &\qquad{} + y^{T}\bigl(\varsigma -\mu (\varsigma )\bigr)G_{4} y \bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr] - \int _{\varsigma -\eta _{2}}^{\varsigma } \operatorname{tr} \bigl[g_{2}^{T}(s)Z_{3}g_{2}(s)\bigr] \,ds \\ &\qquad{}- \int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} \operatorname{tr}\bigl[ g_{2}^{T}(s)Z_{4} g_{2}(s)\bigr]\,ds. \end{aligned}$$
(29)
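
The first bracket in (26) follows from the definition of \(g_{1}(\varsigma )\): for the drift part of the leakage-adjusted state,

$$\begin{aligned} \frac{d}{d\varsigma } \biggl[ x(\varsigma ) - \mathcal{A} \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr] = g_{1}(\varsigma ) - \mathcal{A}\bigl[x(\varsigma )-x(\varsigma -\nu _{1})\bigr] = -\mathcal{A}x(\varsigma ) + \mathcal{B}f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) + \mathcal{E} \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds + u_{1}(\varsigma ), \end{aligned}$$

so the delayed leakage term \(-\mathcal{A}x(\varsigma -\nu _{1})\) cancels, while the diffusion \(g_{2}(\varsigma )\) gives rise to the trace term; the second bracket in (26) is obtained analogously for \(y(\varsigma )\).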

By Jensen’s integral inequality [16] we get

$$\begin{aligned} &{-}\nu _{1} \int _{\varsigma -\nu _{1}}^{\varsigma }x^{T}(s)R_{5}x(s) \,ds \leq - \biggl( \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr)^{T} R_{5} \biggl( \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr), \end{aligned}$$
(30)
$$\begin{aligned} &{-}\nu _{2} \int _{\varsigma -\nu _{2}}^{\varsigma }y^{T}(s)R_{6}y(s) \,ds \leq - \biggl( \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr)^{T} R_{6} \biggl( \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr), \end{aligned}$$
(31)
$$\begin{aligned} &{-} l \int _{\varsigma -l}^{\varsigma }f^{T}\bigl(y(s) \bigr)R_{7}f\bigl(y(s)\bigr)\,ds \leq - \biggl( \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds \biggr)^{T} R_{7} \biggl( \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds \biggr), \end{aligned}$$
(32)
$$\begin{aligned} &{-} h \int _{\varsigma -h}^{\varsigma }x^{T}(s)R_{8}x(s) \,ds \leq - \biggl( \int _{\varsigma -h(\varsigma )}^{\varsigma }x(s)\,ds \biggr)^{T} R_{8} \biggl( \int _{\varsigma -h( \varsigma )}^{\varsigma }x(s)\,ds \biggr). \end{aligned}$$
(33)
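
Inequalities (30)–(33) are all instances of the single-integral Jensen inequality: for any matrix \(R>0\), any scalar \(\tau >0\), and any integrable vector function \(v\),

$$\begin{aligned} -\tau \int _{\varsigma -\tau }^{\varsigma }v^{T}(s)Rv(s)\,ds \leq - \biggl( \int _{\varsigma -\tau }^{\varsigma }v(s)\,ds \biggr)^{T} R \biggl( \int _{\varsigma -\tau }^{\varsigma }v(s)\,ds \biggr), \end{aligned}$$

combined in (32) and (33) with \(l(\varsigma )\leq l\) and \(h(\varsigma )\leq h\).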

By the Leibniz–Newton formula, for any matrices \(M, N, S, U\) of appropriate dimensions, we have the following equalities:

$$\begin{aligned} &0 = 2 \zeta ^{T}(\varsigma ) N \biggl[ x(\varsigma ) - x\bigl( \varsigma -\eta (\varsigma )\bigr) - \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{1}(s)\,ds - \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr], \end{aligned}$$
(34)
$$\begin{aligned} &0 = 2 \zeta ^{T}(\varsigma ) S \biggl[ x\bigl(\varsigma -\eta ( \varsigma )\bigr) - x(\varsigma -\eta _{2}) - \int _{\varsigma -\eta _{2}}^{ \varsigma -\eta (\varsigma )} g_{1}(s)\,ds - \int _{\varsigma -\eta _{2}}^{ \varsigma -\eta (\varsigma )} g_{2}(s) \,d\omega (s) \biggr], \end{aligned}$$
(35)
$$\begin{aligned} &0 = 2 \zeta ^{T}(\varsigma ) M \biggl[ x(\varsigma -\eta _{1}) - x\bigl(\varsigma -\eta (\varsigma )\bigr) - \int _{\varsigma -\eta ( \varsigma )}^{\varsigma -\eta _{1}} g_{1}(s)\,ds - \int _{\varsigma - \eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d\omega (s) \biggr], \end{aligned}$$
(36)
$$\begin{aligned} &0 = 2 \zeta ^{T}(\varsigma ) U \biggl[ -\mathcal{A}x(\varsigma -\nu _{1}) + \mathcal{B} f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) \\ &\phantom{0 =}{}+ \mathcal{E} \int _{\varsigma -l( \varsigma )}^{\varsigma }f\bigl(y(s)\bigr) \,ds + u_{1}(\varsigma ) - g_{1}( \varsigma )\biggr], \end{aligned}$$
(37)

where

$$\begin{aligned} \zeta ^{T}(\varsigma ) = {}& \biggl[ x^{T}(\varsigma )\quad x^{T}\bigl( \varsigma -\eta (\varsigma )\bigr) \quad y^{T}( \varsigma ) \quad y^{T}\bigl( \varsigma -\mu (\varsigma )\bigr) \quad x^{T}(\varsigma -\eta _{1}) \quad x^{T}( \varsigma -\eta _{2}) \\ & y^{T}(\varsigma -\mu _{1}) \quad y^{T}( \varsigma -\mu _{2}) \quad f^{T}\bigl(y(\varsigma )\bigr) \quad f^{T}\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) \quad g_{1}^{T}(\varsigma ) \quad u_{1}^{T}( \varsigma ) \\ & u_{2}^{T}(\varsigma ) \quad x^{T}(\varsigma - \nu _{1}) \quad y^{T}( \varsigma -\nu _{2}) \quad \biggl( \int _{\varsigma -\nu _{1}}^{\varsigma }x(s)\,ds \biggr)^{T} \quad \biggl( \int _{\varsigma -\nu _{2}}^{\varsigma }y(s)\,ds \biggr)^{T} \\ & x^{T}\bigl(\varsigma -h(\varsigma )\bigr) \quad f^{T} \bigl(y\bigl(\varsigma -l( \varsigma )\bigr)\bigr) \quad \biggl( \int _{\varsigma -l(\varsigma )}^{\varsigma }f\bigl(y(s)\bigr)\,ds \biggr)^{T} \quad \biggl( \int _{\varsigma -h( \varsigma )}^{\varsigma }x(s)\,ds \biggr)^{T} \biggr]. \end{aligned}$$

In addition, from Eq. (5) we have

$$\begin{aligned} &f_{i}\bigl(y_{i}(\varsigma )\bigr)\bigl[f_{i} \bigl(y_{i}(\varsigma )\bigr)-k_{i}y_{i}( \varsigma )\bigr]\leq 0, \quad i=1,2,\ldots,n, \\ &f_{i}\bigl(y_{i}\bigl(\varsigma -\mu (\varsigma )\bigr) \bigr)\bigl[f_{i}\bigl(y_{i}\bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr)-k_{i}y_{i}\bigl(\varsigma -\mu (\varsigma ) \bigr)\bigr]\leq 0, \quad i=1,2, \ldots,n. \end{aligned}$$

Thus, for any \(T_{j}=\operatorname{diag} \{\varsigma _{1j}, \varsigma _{2j},\ldots, \varsigma _{nj}\}\geq 0, j=1,2\), it follows that

$$\begin{aligned} 0 \leq {} &{ -}2\sum_{i=1}^{n} \varsigma _{i1}f_{i}\bigl(y_{i}( \varsigma )\bigr) \bigl[f_{i}\bigl(y_{i}(\varsigma )\bigr)-k_{i}y_{i}( \varsigma )\bigr] \\ &{}-2\sum_{i=1}^{n} \varsigma _{i2}f_{i}\bigl(y_{i}\bigl(\varsigma -\mu ( \varsigma )\bigr)\bigr)\bigl[f_{i}\bigl(y_{i}\bigl( \varsigma -\mu (\varsigma )\bigr)\bigr)-k_{i}y_{i}\bigl( \varsigma -\mu (\varsigma )\bigr)\bigr] \\ = {} & {-}2f^{T}\bigl(y(\varsigma )\bigr)T_{1} f\bigl(y( \varsigma )\bigr) + 2y^{T}( \varsigma )KT_{1}f\bigl(y( \varsigma )\bigr) \\ &{}- 2f^{T}\bigl(y\bigl(\varsigma -\mu ( \varsigma ) \bigr)\bigr)T_{2}f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr)\bigr) + 2y^{T}\bigl( \varsigma -\mu (\varsigma )\bigr) \\ &{} \times KT_{2}f\bigl(y\bigl(\varsigma -\mu (\varsigma )\bigr) \bigr). \end{aligned}$$
(38)

On the other hand, the following inequalities hold for any matrices \(Z_{i}> 0\), \(i=1,2,3,4\):

$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )N \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{1}(s)\,ds \leq \eta _{2}\zeta ^{T}(\varsigma ) NZ_{1}^{-1}N^{T} \zeta (\varsigma ) + \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{1}^{T}(s)Z_{1} g_{1}(s)\,ds, \end{aligned}$$
(39)
$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )M \int _{\varsigma -\eta (\varsigma )}^{ \varsigma -\eta _{1}} g_{1}(s)\,ds \leq \eta _{12}\zeta ^{T}( \varsigma ) MZ_{2}^{-1}M^{T} \zeta (\varsigma ) + \int _{\varsigma - \eta (\varsigma )}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2} g_{1}(s)\,ds, \end{aligned}$$
(40)
$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )S \int _{\varsigma -\eta _{2}}^{\varsigma - \eta (\varsigma )} g_{1}(s)\,ds \\ &\quad \leq \eta _{12}\zeta ^{T}( \varsigma ) S(Z_{1}+Z_{2})^{-1}S^{T} \zeta (\varsigma ) + \int _{ \varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{1}^{T}(s) (Z_{1}+Z_{2}) g_{1}(s)\,ds, \end{aligned}$$
(41)
$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )N \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s)\,d\omega (s) \\ &\quad \leq \zeta ^{T}(\varsigma ) NZ_{3}^{-1}N^{T} \zeta (\varsigma ) + \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{3} \biggl( \int _{\varsigma -\eta ( \varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr), \end{aligned}$$
(42)
$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )M \int _{\varsigma -\eta (\varsigma )}^{ \varsigma -\eta _{1}} g_{2}(s)\,d\omega (s) \\ &\quad\leq \zeta ^{T}( \varsigma ) MZ_{4}^{-1}M^{T} \zeta (\varsigma ) + \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma - \eta _{1}} g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{4} \biggl( \int _{ \varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d \omega (s) \biggr), \end{aligned}$$
(43)
$$\begin{aligned} &{-}2\zeta ^{T}(\varsigma )S \int _{\varsigma -\eta _{2}}^{\varsigma - \eta (\varsigma )} g_{2}(s)\,d\omega (s) \\ &\quad{}\leq \zeta ^{T}( \varsigma ) S(Z_{3}+Z_{4})^{-1}S^{T} \zeta (\varsigma ) \\ &\qquad{}+ \biggl( \int _{\varsigma -\eta _{2}}^{\varsigma -\eta ( \varsigma )} g_{2}(s) \,d\omega (s) \biggr)^{T} (Z_{3}+Z_{4}) \biggl( \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{2}(s) \,d \omega (s) \biggr). \end{aligned}$$
(44)
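
Inequalities (39)–(44) rest on the elementary bound that, for any vectors \(a, b\) of the same dimension and any matrix \(X>0\),

$$\begin{aligned} -2a^{T}b \leq a^{T}X^{-1}a + b^{T}Xb, \end{aligned}$$

which follows from \((X^{-1/2}a+X^{1/2}b)^{T}(X^{-1/2}a+X^{1/2}b)\geq 0\). It is applied with \(a=N^{T}\zeta (\varsigma )\), \(M^{T}\zeta (\varsigma )\), or \(S^{T}\zeta (\varsigma )\), either directly with \(b\) equal to the corresponding stochastic integral (as in (42)–(44)) or pointwise under the integral sign, using the bounds \(\eta (\varsigma )\leq \eta _{2}\) and \(\eta _{12}=\eta _{2}-\eta _{1}\) on the lengths of the integration intervals (as in (39)–(41)).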

Combining (26)–(44), we get

$$\begin{aligned} &\mathcal{L}V\bigl(x(\varsigma ),y(\varsigma )\bigr)-2\bigl[u_{1}^{T}( \varsigma )z_{x}( \varsigma )+u_{2}^{T}(\varsigma )z_{y}(\varsigma )\bigr]-\gamma \bigl[u_{1}^{T}( \varsigma )u_{1}(\varsigma )+u_{2}^{T}(\varsigma )u_{2}(\varsigma )\bigr] \\ &\quad\leq \zeta ^{T}(\varsigma ) \bigl[ \Phi _{1} + \eta _{2} NZ_{1}^{-1}N^{T} + \eta _{12} MZ_{2}^{-1}M^{T} \\ &\qquad{}+ \eta _{12} S(Z_{1}+Z_{2})^{-1}S^{T} + NZ_{3}^{-1}N^{T} + MZ_{4}^{-1}M^{T} \\ &\qquad{}+ S(Z_{3}+Z_{4})^{-1}S^{T} \bigr]\zeta (\varsigma ) - \int _{ \varsigma -\eta _{2}}^{\varsigma }g_{1}^{T}(s)Z_{1}g_{1}(s) \,ds \\ &\qquad{}- \int _{ \varsigma -\eta _{2}}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s) \,ds - \int _{\varsigma -\eta _{2}}^{\varsigma } \operatorname{tr} \bigl[g_{2}^{T}(s)Z_{3}g_{2}(s)\bigr] \,ds \\ &\qquad{}- \int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} \operatorname{tr}\bigl[ g_{2}^{T}(s)Z_{4} g_{2}(s)\bigr]\,ds + \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{1}^{T}(s)Z_{1} g_{1}(s)\,ds + \int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2} g_{1}(s)\,ds \\ &\qquad{}+ \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{1}^{T}(s) (Z_{1}+Z_{2}) g_{1}(s)\,ds + \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{3} \biggl( \int _{\varsigma -\eta ( \varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr) \\ &\qquad{}+ \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{4} \biggl( \int _{\varsigma - \eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d\omega (s) \biggr) \\ &\qquad{}+ \biggl( \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{2}(s) \,d\omega (s) \biggr)^{T} (Z_{3}+Z_{4}) \biggl( \int _{ \varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{2}(s) \,d \omega (s) \biggr) \\ &\quad\leq \zeta ^{T}(\varsigma ) \Phi \zeta (\varsigma ) - \int _{ \varsigma -\eta _{2}}^{\varsigma }g_{1}^{T}(s)Z_{1}g_{1}(s) \,ds - \int _{ \varsigma -\eta _{2}}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2}g_{1}(s) \,ds \\ &\qquad{}- \int _{\varsigma -\eta _{2}}^{\varsigma } \operatorname{tr} \bigl[g_{2}^{T}(s)Z_{3}g_{2}(s)\bigr] \,ds \\ &\qquad{}- \int _{\varsigma -\eta _{2}}^{\varsigma -\eta _{1}} \operatorname{tr}\bigl[ g_{2}^{T}(s)Z_{4} g_{2}(s)\bigr]\,ds + \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{1}^{T}(s)Z_{1} g_{1}(s)\,ds + \int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{1}^{T}(s)Z_{2} g_{1}(s)\,ds \\ &\qquad{}+ \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{1}^{T}(s) (Z_{1}+Z_{2}) g_{1}(s)\,ds + \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{3} \biggl( \int _{\varsigma -\eta ( \varsigma )}^{\varsigma }g_{2}(s) \,d\omega (s) \biggr) \\ &\qquad{}+ \biggl( \int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d\omega (s) \biggr)^{T} Z_{4} \biggl( \int _{\varsigma -\eta ( \varsigma )}^{\varsigma -\eta _{1}} g_{2}(s) \,d\omega (s) \biggr) \\ &\qquad{}+ \biggl( \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{2}(s) \,d\omega (s) \biggr)^{T} (Z_{3}+Z_{4}) \biggl( \int _{\varsigma -\eta _{2}}^{ \varsigma -\eta (\varsigma )} g_{2}(s) \,d\omega (s) \biggr), \end{aligned}$$

where

$$\begin{aligned} \Phi = {}& \Phi _{1} + \eta _{2} NZ_{1}^{-1}N^{T} + \eta _{12} MZ_{2}^{-1}M^{T} + \eta _{12} S(Z_{1}+Z_{2})^{-1}S^{T} + NZ_{3}^{-1}N^{T} + MZ_{4}^{-1}M^{T} \\ &{} + S(Z_{3}+Z_{4})^{-1}S^{T}. \end{aligned}$$

Note that

$$\begin{aligned} &\mathbb{E} \biggl\{ \biggl[ \int _{\varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s)\,d\omega (s) \biggr]^{T} Z_{3} \biggl[ \int _{ \varsigma -\eta (\varsigma )}^{\varsigma }g_{2}(s)\,d\omega (s) \biggr] \biggr\} = \mathbb{E} \biggl\{ \int _{\varsigma -\eta ( \varsigma )}^{\varsigma }\operatorname{tr}\bigl[g_{2}^{T}(s) Z_{3} g_{2}(s)\bigr] \,ds \biggr\} , \\ &\mathbb{E} \biggl\{ \biggl[ \int _{\varsigma -\eta (\varsigma )}^{ \varsigma -\eta _{1}} g_{2}(s)\,d\omega (s) \biggr]^{T} Z_{4} \biggl[ \int _{\varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} g_{2}(s)\,d \omega (s) \biggr] \biggr\} = \mathbb{E} \biggl\{ \int _{ \varsigma -\eta (\varsigma )}^{\varsigma -\eta _{1}} \operatorname{tr} \bigl[g_{2}^{T}(s) Z_{4} g_{2}(s)\bigr] \,ds \biggr\} , \\ &\mathbb{E} \biggl\{ \biggl[ \int _{\varsigma -\eta _{2}}^{\varsigma - \eta (\varsigma )} g_{2}(s)\,d\omega (s) \biggr]^{T} (Z_{3}+Z_{4}) \biggl[ \int _{\varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} g_{2}(s)\,d \omega (s) \biggr] \biggr\} \\ &\quad= \mathbb{E} \biggl\{ \int _{ \varsigma -\eta _{2}}^{\varsigma -\eta (\varsigma )} \operatorname{tr} \bigl[g_{2}^{T}(s) (Z_{3}+Z_{4}) g_{2}(s)\bigr] \,ds \biggr\} . \end{aligned}$$

So we get

$$\begin{aligned} &\mathbb{E} \bigl\{ \mathcal{L}V\bigl(x(\varsigma ),y(\varsigma )\bigr)-2 \bigl[u_{1}^{T}( \varsigma )z_{x}(\varsigma )+u_{2}^{T}(\varsigma )z_{y}(\varsigma )\bigr]- \gamma \bigl[u_{1}^{T}(\varsigma )u_{1}(\varsigma )+u_{2}^{T}(\varsigma )u_{2}( \varsigma )\bigr] \bigr\} \\ &\quad \leq \mathbb{E} \bigl\{ \zeta ^{T}(\varsigma ) \Phi \zeta ( \varsigma ) \bigr\} . \end{aligned}$$
(45)

Now we examine the change of \(V(x_{\varsigma })\) at the impulse times \(\varsigma =\varsigma _{k}, k\in \mathcal{Z_{+}}\). By (14) we have

$$\begin{aligned} x(\varsigma _{k}) - \mathcal{A} \int _{\varsigma _{k} -\nu _{1}}^{ \varsigma _{k}} x(s) \,ds = {} & x\bigl(\varsigma _{k}^{-}\bigr) - N_{k} \biggl[x\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{A} \int _{\varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr] - \mathcal{A} \int _{ \varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \\ = {} & (I - N_{k}) \biggl[ x\bigl(\varsigma _{k}^{-} \bigr) - \mathcal{A} \int _{\varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr]. \end{aligned}$$
(46)

Moreover, from (14) we get

$$\begin{aligned} & \begin{bmatrix} P_{1} & (I-N_{k})^{T} P_{1} \\ * & P_{1} \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad \begin{bmatrix} I & -(I-N_{k})^{T} \\ 0 & I \end{bmatrix} \begin{bmatrix} P_{1} & (I-N_{k})^{T} P_{1} \\ * & P_{1} \end{bmatrix} \begin{bmatrix} I & 0 \\ -(I - N_{k}) & I \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad \begin{bmatrix} P_{1} -(I - N_{k})^{T} P_{1} (I - N_{k}) & 0 \\ * & P_{1} \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad P_{1} - (I - N_{k})^{T} P_{1} (I - N_{k}) \geq 0. \end{aligned}$$
(47)

Combining (46) and (47), we have

$$\begin{aligned} V_{1}\bigl(x(\varsigma _{k})\bigr) = {} & \biggl[x( \varsigma _{k}) - \mathcal{A} \int _{\varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr]^{T} P_{1} \biggl[x(\varsigma _{k}) - \mathcal{A} \int _{ \varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr] \\ = {} & \biggl[x\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{A} \int _{ \varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr]^{T} (I - N_{k})^{T} P_{1} (I - N_{k}) \biggl[x \bigl(\varsigma _{k}^{-}\bigr) - \mathcal{A} \int _{ \varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr] \\ \leq {} & \biggl[x\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{A} \int _{ \varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr]^{T} P_{1} \biggl[x\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{A} \int _{\varsigma _{k} - \nu _{1}}^{\varsigma _{k}} x(s) \,ds \biggr], \\ V_{1}\bigl(x(\varsigma _{k})\bigr) \leq {} & V_{1}\bigl(x\bigl(\varsigma _{k}^{-}\bigr) \bigr). \end{aligned}$$

Similarly, we can examine the change of \(V(y_{\varsigma })\) at the impulse times \(\varsigma =\varsigma _{k}, k\in \mathcal{Z_{+}}\). By (15) we have

$$\begin{aligned} y(\varsigma _{k}) - \mathcal{C} \int _{\varsigma _{k} -\nu _{2}}^{ \varsigma _{k}} y(s) \,ds & = y\bigl(\varsigma _{k}^{-}\bigr) - G_{k} \biggl[y\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{C} \int _{\varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr] - \mathcal{C} \int _{\varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \\ & = (I - G_{k}) \biggl[ y\bigl(\varsigma _{k}^{-} \bigr) - \mathcal{C} \int _{\varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr]. \end{aligned}$$
(48)

It follows from (15) that

$$\begin{aligned} & \begin{bmatrix} P_{2} & (I-G_{k})^{T} P_{2} \\ * & P_{2} \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad \begin{bmatrix} I & -(I-G_{k})^{T} \\ 0 & I \end{bmatrix} \begin{bmatrix} P_{2} & (I-G_{k})^{T} P_{2} \\ * & P_{2} \end{bmatrix} \begin{bmatrix} I & 0 \\ -(I - G_{k}) & I \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad \begin{bmatrix} P_{2} -(I - G_{k})^{T} P_{2} (I - G_{k}) & 0 \\ * & P_{2} \end{bmatrix} \geq 0 \\ &\quad\Leftrightarrow\quad P_{2} - (I - G_{k})^{T} P_{2} (I - G_{k}) \geq 0. \end{aligned}$$
(49)

Combining (48) and (49), we have

$$\begin{aligned} V_{1}\bigl(y(\varsigma _{k})\bigr) = {}& \biggl[y( \varsigma _{k}) - \mathcal{C} \int _{\varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr]^{T} P_{2} \biggl[y(\varsigma _{k}) - \mathcal{C} \int _{ \varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr] \\ = {} & \biggl[y\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{C} \int _{ \varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr]^{T} (I - G_{k})^{T} P_{2} (I - G_{k}) \biggl[y \bigl(\varsigma _{k}^{-}\bigr) - \mathcal{C} \int _{ \varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr] \\ \leq {} & \biggl[y\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{C} \int _{ \varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr]^{T} P_{2} \biggl[y\bigl(\varsigma _{k}^{-}\bigr) - \mathcal{C} \int _{\varsigma _{k} - \nu _{2}}^{\varsigma _{k}} y(s) \,ds \biggr], \\ V_{1}\bigl(y(\varsigma _{k})\bigr) \leq {} & V_{1}\bigl(y\bigl(\varsigma _{k}^{-}\bigr) \bigr). \end{aligned}$$

Also, it is obvious that \(V_{2}(\varsigma _{k})\leq V_{2}(\varsigma _{k}^{-})\), \(V_{3}(\varsigma _{k}) \leq V_{3}(\varsigma _{k}^{-})\), \(V_{4}(\varsigma _{k})\leq V_{4}(\varsigma _{k}^{-})\). Therefore

$$\begin{aligned} V\bigl(x(\varsigma _{k}), y(\varsigma _{k})\bigr) \leq V \bigl(x\bigl( \varsigma _{k}^{-}\bigr), y\bigl(\varsigma _{k}^{-}\bigr)\bigr), \quad k \in \mathcal{Z_{+}}. \end{aligned}$$
(50)

Using (45), we can write

$$\begin{aligned} &\mathbb{E} \bigl\{ \mathcal{L}V\bigl(x(\varsigma ),y(\varsigma )\bigr)-2 \bigl[u_{1}^{T}( \varsigma )z_{x}(\varsigma )+u_{2}^{T}(\varsigma )z_{y}(\varsigma )\bigr]- \gamma \bigl[u_{1}^{T}(\varsigma )u_{1}(\varsigma )+u_{2}^{T}(\varsigma )u_{2}( \varsigma )\bigr] \bigr\} \\ &\quad\leq \mathbb{E} \bigl\{ \zeta ^{T}(\varsigma ) \Phi \zeta ( \varsigma ) \bigr\} . \end{aligned}$$
(51)
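
Before proceeding, we recall the form of the Schur complement lemma used in the next step: for a symmetric matrix \(X\), a matrix \(Y\), and \(Z>0\),

$$\begin{aligned} X + YZ^{-1}Y^{T} < 0 \quad\Longleftrightarrow\quad \begin{bmatrix} X & Y \\ * & -Z \end{bmatrix} < 0, \end{aligned}$$

applied repeatedly (after absorbing the scalar factors \(\eta _{2}\) and \(\eta _{12}\) into the off-diagonal blocks) to the terms \(\eta _{2}NZ_{1}^{-1}N^{T}\), \(\eta _{12}MZ_{2}^{-1}M^{T}\), and so on, appearing in \(\Phi \).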

By applying the Schur complement lemma, we see that \(\Phi <0\) is equivalent to LMI (16), and hence we get

$$\begin{aligned} &\mathbb{E} \bigl\{ \mathcal{L}V\bigl(x(\varsigma ),y(\varsigma )\bigr)-2 \bigl[u_{1}^{T}( \varsigma )z_{x}(\varsigma )+u_{2}^{T}(\varsigma )z_{y}(\varsigma )\bigr]- \gamma \bigl[u_{1}^{T}(\varsigma )u_{1}(\varsigma )+u_{2}^{T}(\varsigma )u_{2}( \varsigma )\bigr] \bigr\} \\ &\quad < 0. \end{aligned}$$
(52)

By integrating (52) from 0 to \(\varsigma _{f}\), we have

$$\begin{aligned} &2\mathbb{E} \biggl\{ \int _{0}^{\varsigma _{f}}\bigl[u_{1}^{T}(s)z_{x}(s) + u_{2}^{T}(s)z_{y}(s)\bigr]\,ds \biggr\} \\ &\quad \geq V\bigl(x(\varsigma _{f})\bigr)-V\bigl(x(0)\bigr)- \gamma \mathbb{E} \biggl\{ \int _{0}^{\varsigma _{f}} \bigl[u_{1}^{T}(s)u_{1}(s) + u_{2}^{T}(s)u_{2}(s)\bigr]\,ds \biggr\} \\ &\quad \geq -\gamma \mathbb{E} \biggl\{ \int _{0}^{\varsigma _{f}} \bigl[u_{1}^{T}(s)u_{1}(s) + u_{2}^{T}(s)u_{2}(s)\bigr]\,ds \biggr\} . \end{aligned}$$

Therefore the stochastic genetic regulatory network (9) is passive under Definition 1, which completes the proof. □

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Senthilraj, S., Saravanakumar, T., Raja, R. et al. Delay-dependent passivity analysis of nondeterministic genetic regulatory networks with leakage and distributed delays against impulsive perturbations. Adv Differ Equ 2021, 353 (2021). https://doi.org/10.1186/s13662-021-03504-8
