
Theory and Modern Applications

A new inequality of \(\mathcal{L}\)-operator and its application to stochastic non-autonomous impulsive neural networks with delays

Abstract

In this paper, based on the properties of the \(\mathcal{L}\)-operator and the \(\mathcal{M}\)-matrix, we develop a new \(\mathcal{L}\)-operator inequality that is effective for non-autonomous stochastic systems. From this new inequality, we derive sufficient conditions ensuring the global exponential stability of stochastic non-autonomous impulsive cellular neural networks with delays. Our conclusions generalize several previously published results. An example is provided to illustrate the superiority of the proposed results.

1 Introduction

Recently, the dynamical behaviors of cellular neural networks have attracted considerable attention because of their extensive applications in signal processing, pattern recognition, optimization problems, and many other fields. The stability of such networks is one of the crucial properties in these applications. Delay and impulsive effects exist widely in many practical models, such as population models and neural networks, and there are many works on the dynamic behaviors of various kinds of neural networks with delays or impulses [1–9]. Furthermore, real systems are often affected by external disturbances with great uncertainty, which may be treated as random. For real nervous systems and implementations of artificial networks, Haykin [10] has pointed out that synaptic transmission is a noisy process caused by random fluctuations in the release of neurotransmitters and other probabilistic factors. Hence, noise must be taken into account in model construction, and stability analysis of stochastic systems has become a focus of research in the literature. Stochastic perturbation is a main factor that affects the stability of systems, including neural networks, in performing computation. In addition, the results in [11] suggested that certain stochastic inputs can stabilize or destabilize a neural network. This implies that the stability analysis of stochastic neural networks is of primary significance for their design and applications. Therefore, some results on the stability of neural networks with stochastic perturbations have been reported [11–21]. Moreover, when we consider the long-time dynamic behavior of a system, the parameters of the system frequently vary with time due to environmental disturbances. In this case, a non-autonomous neural network model is the best choice for accurately depicting the evolutionary processes of networks. Therefore, it is of great significance to study the dynamic behaviors of non-autonomous neural networks [22–38].

By using a Lyapunov function, the authors of [34, 35] investigated the stability of non-autonomous systems without impulses and obtained determinant conditions for asymptotic or exponential stability of the corresponding systems. However, these conditions must hold for all \(t\geq0\). Besides, many authors have used linear matrix inequalities (LMIs) and the Lyapunov-Krasovskii functional to study the dynamic behaviors of various kinds of neural networks and obtained many interesting new results [6, 16, 21, 25]. However, results given in LMI form commonly depend on the delays. In particular, for systems with time-varying delays [6, 16], one must impose constraint conditions such as the differentiability of the delay functions. The authors of [25] considered the stability and existence of periodic solutions of non-autonomous bidirectional associative memory neural networks with delay and obtained some new results given as a function matrix inequality, which is not easy to compute with the Matlab LMI Control Toolbox. Thus, the LMI technique is ineffective for dealing with non-autonomous systems. In addition to the methods mentioned above, differential inequalities are also very useful tools for studying the dynamic behaviors of differential dynamical systems [24, 30, 32, 37–44], but many of the inequalities obtained before cannot be used to investigate non-autonomous systems. The authors of [32] considered the periodic attractor and dissipativity of non-autonomous cellular neural networks with delays, and the authors of [30] investigated the invariant and attracting sets of neural networks with reaction-diffusion terms. However, the results in [30, 32] require the time-varying coefficients to have a common factor; in practical applications, this condition is very strict and not easy to meet.
The authors of [38] developed an inequality to investigate the stability of non-autonomous cellular neural networks with impulses and time-varying delays, but that inequality cannot handle stochastic non-autonomous neural networks. The authors of [37] investigated the exponential p-stability of stochastic Takagi-Sugeno non-autonomous neural networks with impulses and time-varying delays, but the conditions imposed on the diffusion coefficient matrix are very strict. As far as we know, there are no results on the stability of non-autonomous stochastic neural networks with time delays and impulses except for [37].

Motivated by the above analysis, in this paper we apply the properties of the \(\mathcal{L}\)-operator and the \(\mathcal{M}\)-matrix to develop a new \(\mathcal{L}\)-operator inequality that is effective for stochastic non-autonomous systems. Based on this new inequality, we study stochastic non-autonomous impulsive cellular neural networks with time-varying delays and obtain sufficient conditions for the pth moment exponential stability of the corresponding systems. Our main results do not require the coefficients of the system to share a common factor, relax the conditions imposed on the diffusion coefficient matrix, and generalize some earlier results. An example is provided to demonstrate the effectiveness of the proposed results.

2 Preliminaries

Let \({R^{m \times n}}\) be the set of \(m\times n\) real matrices, and let E denote the \(n \times n\) identity matrix. \(R^{n}\) denotes the space of n-dimensional real column vectors, \(|\cdot|\) denotes the Euclidean norm, \(\mathcal {N}\stackrel{\Delta}{=}\{1,2,\ldots,n\}\), \(\mathbb{N}\stackrel{\Delta }{=}\{1,2,\ldots\}\), and \(R_{+} \stackrel{\Delta}{=} [ {0, + \infty} )\). For \(M, N \in R^{m\times n}\) or \(M, N \in R^{n}\), the notation \(M\geq N\) (\(M>N\)) indicates that each pair of corresponding elements of M and N satisfies the inequality ‘≥ (>)’. In particular, \(M\in R^{m\times n}\) is called a non-negative matrix if \(M\geq0\), and \(x\in R^{n}\) is called a positive vector if \(x> 0\). Let \(\rho(M)\) denote the spectral radius of a square matrix M.

\(L^{1}(R_{+},R_{+})\) denotes the family of all continuous functions \(h:R_{+}\rightarrow R_{+}\) satisfying \(\int_{0}^{+\infty}h(t)\,dt<\infty\). \(C[ X,Y]\) denotes the space of continuous mappings from X to Y. In particular, let \(\mathcal{C}\stackrel{\Delta}{=} C[[-\tau, 0], R^{n}]\) denote the family of all bounded continuous \(R^{n}\)-valued functions φ defined on \([-\tau, 0]\). The norm on \(\mathcal{C}\) is defined by \(\|\varphi\| = \sup_{-\tau\leq\theta\leq0}|\varphi(\theta)|\).

\(PC[J,R^{n}]= \{\phi: J\rightarrow R^{n} | \phi(v)\mbox{ is continuous for all but at most countably many points }v \in J,\mbox{ and at these points }\phi(v^{+})\mbox{ and }\phi(v^{-})\mbox{ exist with }\phi(v) = \phi(v^{+})\}\), where \(\phi(v^{-})\) and \(\phi(v^{+})\) denote the left-hand and right-hand limits of \(\phi(v)\) at time v, respectively, and \(J\subset R\) is an interval. In particular, let \(\mathcal{PC}\stackrel{\Delta}{=} PC [[-\tau, 0] , R^{n}]\).

For any \(x\in R^{n}\), \(\phi\in\mathcal{C}\) or \(\phi\in\mathcal{PC}\), \(p>0\), we define

$$\begin{aligned}& [x]^{+p}= \bigl(|x_{1}|^{p},\ldots,|x_{n}|^{p} \bigr)^{T},\qquad \bigl[\phi(t) \bigr]_{\tau}= \bigl( \bigl[ \phi_{1}(t) \bigr]_{\tau },\ldots, \bigl[\phi_{n}(t) \bigr]_{\tau} \bigr)^{T}, \\& \bigl[\phi_{i}(t) \bigr]_{\tau}=\sup_{-\tau\leq s\leq0} \bigl|\phi_{i}(t+s) \bigr|,\quad i\in\mathcal{N}, \end{aligned}$$

and \(D^{+}\phi(t)\) denotes the upper-right-hand derivative of \(\phi(t)\) at time t.

\((\Omega, \mathscr{F}, \{\mathscr{F}_{t}\}_{t\geq0}, P)\) denotes a complete probability space with a filtration \(\{\mathscr{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (i.e., it is right continuous and \(\mathscr{F}_{0}\) contains all P-null sets). Let \(C_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\) (\(C_{\mathscr{F}_{t}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\)) denote the family of all bounded \(\mathscr{F}_{0}\) (\(\mathscr{F}_{t}\))-measurable, \(C[[t_{0}-\tau,t_{0}],R^{n}]\)-valued random variables ϕ, and let \(PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\) (\(PC_{\mathscr{F}_{t}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\)) denote the family of all bounded \(\mathscr{F}_{0}\) (\(\mathscr{F}_{t}\))-measurable, \(PC[[t_{0}-\tau,t_{0}],R^{n}]\)-valued random variables ϕ, satisfying \(\|\phi\|_{Lp}^{p}=\sup_{t_{0}-\tau\leq\theta\leq t_{0}}\mathbb{E}|\phi(\theta)|^{p}<\infty\) for \(p>0\), where \(\mathbb{E}\) denotes the expectation of a stochastic process.

We study the following stochastic non-autonomous impulsive cellular neural networks with delays:

$$ \left \{ \textstyle\begin{array}{@{}l} dx_{i}(t) = [-a_{i}(t)x_{i}(t)+ \sum_{j=1}^{n}b_{ij}(t)f_{j}(x_{j}(t))+\sum_{j=1}^{n}c_{ij}(t)g_{j}(x_{j}(t-\tau _{ij}(t)))+I_{i}(t) ]\,dt \\ \hphantom{dx_{i}(t) =}{}+\sum_{j=1}^{n}\sigma_{ij}(t,x(t),x(t-\tau(t)))\,d\omega_{j}(t), \quad t\geq t_{0}, t\neq t_{k}, \\ x_{i}(t_{k})=I_{ik}(x(t_{k}^{-})),\quad t=t_{k}, \\ x_{i}(s)=\phi_{i}(s),\quad t_{0}-\tau\leq s\leq t_{0}, \end{array}\displaystyle \right . $$
(2.1)

where \(i\in\mathcal{N}\), and n is the number of units in a neural network; \(x_{i}(t)\) is the state variable of the ith unit at time t; \(f_{j}(\cdot)\) and \(g_{j}(\cdot)\) are the activation functions of the jth unit at time t and \(t-\tau_{ij}(t)\), respectively; \(\tau_{ij}(t)\) is the time-varying delay satisfying \(0\leq\tau_{ij}(t)\leq\tau\) and \(\tau>0\) at time t; \(a_{i}(t)>0\) is the rate with which the ith unit will reset its potential to the resting state in isolation when disconnected from the network and external inputs; \(b_{ij}(t)\), \(c_{ij}(t)\) denote the strengths of the jth neuron on the ith unit at time t and \(t-\tau_{ij}(t)\), respectively; \(I_{i}(t)\) denotes the bias of the ith unit at time t; \(\sigma(t,x(t),x(t-\tau(t)))=(\sigma_{ij}(t,x(t),x(t-\tau(t))))_{n\times n}\) is the diffusion coefficient matrix, and \(\omega(t)=(\omega_{1}(t),\ldots,\omega_{n}(t))^{T}\) is an n-dimensional Brownian motion defined on a complete probability space \((\Omega, \mathscr{F}, \{\mathscr{F}_{t}\}_{t\geq0}, P)\); \(\phi(s)=(\phi_{1}(s),\phi_{2}(s),\ldots,\phi_{n}(s))^{T}\in PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\) is the initial function vector. The impulsive function \(I_{k}=(I_{1k},\ldots,I_{nk})^{T}\in C[R^{n},R^{n}]\), and the fixed impulsive moments \(t_{k} \) (\(k\in \mathbb{N}\)) satisfy \(t_{0}< t_{1}< t_{2}<\cdots\) and \(\lim_{k\rightarrow\infty}t_{k}=\infty\).

Throughout this paper, we assume that \(f_{j}(\cdot)\), \(g_{j}(\cdot)\), and \(\sigma_{ij}(t,\cdot,\cdot)\) satisfy the linear growth condition and are Lipschitz continuous. Therefore, system (2.1) has a unique global solution, denoted by \(x(t)=(x_{1}(t),\ldots,x_{n}(t))^{T}\), on \(t\geq t_{0}\), and \(\mathbb{E}(\sup_{t_{0}\leq s\leq t}|x(s)|^{r})<\infty\) for all \(t\geq t_{0}\) and \(r>0\).
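As a concrete, entirely illustrative sketch of system (2.1), the following pure-Python Euler-Maruyama scheme integrates a hypothetical two-neuron instance with a constant delay, multiplicative diagonal noise, and contractive impulses. Every parameter value, the zero bias \(I_{i}(t)\equiv0\), the diffusion \(\sigma_{ii}=\mathrm{noise}\cdot x_{i}\), and the impulse map \(I_{k}(x)=0.5x\) are our own assumptions, not data from the paper.

```python
import math
import random
from collections import deque

random.seed(0)

# Hypothetical two-neuron instance of system (2.1); all parameters are
# illustrative choices, not values from the paper.
n, dt, T, tau = 2, 0.001, 5.0, 0.1
steps, dsteps = int(T / dt), int(tau / dt)
a = lambda t: (2.0 + 0.5 * math.sin(t), 2.5 + 0.5 * math.cos(t))  # a_i(t) > 0
b = [[0.1, -0.2], [0.05, 0.1]]                                    # b_ij, constant here
c = [[0.1, 0.05], [-0.1, 0.1]]                                    # c_ij, constant here
noise = 0.05                                                      # sigma_ii = noise * x_i
impulses = {int(1.0 / dt), int(2.0 / dt)}                         # impulse moments t_1, t_2
f = g = math.tanh                                                 # Lipschitz activations

hist = deque([[0.5, -0.5]] * (dsteps + 1), maxlen=dsteps + 1)     # delay buffer
for k in range(steps):
    t = k * dt
    x, xd = hist[-1], hist[0]                                     # x(t) and x(t - tau)
    new = []
    for i in range(n):
        drift = (-a(t)[i] * x[i]
                 + sum(b[i][j] * f(x[j]) for j in range(n))
                 + sum(c[i][j] * g(xd[j]) for j in range(n)))
        dW = random.gauss(0.0, math.sqrt(dt))                     # Brownian increment
        new.append(x[i] + drift * dt + noise * x[i] * dW)
    if k in impulses:
        new = [0.5 * v for v in new]                              # apply I_k(x) = 0.5 x
    hist.append(new)

print([round(v, 6) for v in hist[-1]])
```

Under these contractive choices the trajectory decays toward zero, which is consistent with the kind of exponential stability studied below.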

Definition 2.1

System (2.1) is called globally and exponentially p-stable, if there exist constants \(M\geq1\) and \(\lambda>0\) such that for any two solutions \(x(t,\phi)\) and \(x(t,\psi)\) with \(\phi, \psi\in PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\), respectively, one has

$$\mathbb{E}\bigl|x(t,\phi)-x(t,\psi)\bigr|^{p}\leq M\|\phi-\psi\| _{L_{p}}^{p}e^{-\lambda(t-t_{0})},\quad t\geq t_{0}. $$

Furthermore, if \(x^{*}\) is an equilibrium point of system (2.1), then we call the equilibrium point \(x^{*}\) exponentially p-stable.

Definition 2.2

([45])

Let the matrix \(D=(d_{ij})_{n\times n}\) satisfy \(d_{ij}\leq0\) for \(i\neq j\). Then the statement ‘D is a nonsingular \(\mathcal{M}\)-matrix’ is equivalent to each of the following conditions.

  1. (i)

    \(D=B-M\) with \(\rho(B^{-1}M)<1\), where \(M\geq 0\) and \(B=\operatorname{diag}\{b_{1},\ldots,b_{n}\}\) with \(b_{i}>0\).

  2. (ii)

    All the leading principal minors of D are positive.

  3. (iii)

    The diagonal elements of D are all positive and there exists a positive vector d such that \(Dd>0\) or \(D^{T}d>0\).

For an \(\mathcal{M}\)-matrix D, from (iii) of Definition 2.2, we know that \(\Omega_{\mathcal{M}}(D)\stackrel{\Delta}{=} \{z\in R^{n}|Dz>0, z>0\}\neq\emptyset\) and that \(k_{1}z_{1}+k_{2}z_{2}\in\Omega_{\mathcal{M}}(D)\) for any vectors \(z_{1}, z_{2}\in\Omega_{\mathcal{M}}(D)\) and scalars \(k_{1}, k_{2}>0\).
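The equivalent conditions in Definition 2.2 can be checked numerically for a concrete matrix. The sketch below, using a made-up \(3\times3\) matrix D, verifies (ii) via the leading principal minors, (iii) with the positive vector \(d=(1,1,1)^{T}\), and (i) by estimating \(\rho(B^{-1}M)\) with power iteration; the matrix and all numbers are our own illustrative choices.

```python
# Numerical check of the equivalent M-matrix conditions in Definition 2.2
# for a small illustrative matrix (d_ij <= 0 off the diagonal).
D = [[4.0, -1.0, -1.0],
     [-2.0, 5.0, -1.0],
     [-1.0, -1.0, 3.0]]

# (ii) leading principal minors of the 3x3 case, expanded directly
m1 = D[0][0]
m2 = D[0][0] * D[1][1] - D[0][1] * D[1][0]
m3 = (D[0][0] * (D[1][1] * D[2][2] - D[1][2] * D[2][1])
      - D[0][1] * (D[1][0] * D[2][2] - D[1][2] * D[2][0])
      + D[0][2] * (D[1][0] * D[2][1] - D[1][1] * D[2][0]))
minors_positive = m1 > 0 and m2 > 0 and m3 > 0

# (iii) a positive vector d with D d > 0; d = (1,1,1)^T works here because
# this D is strictly row diagonally dominant
d = [1.0, 1.0, 1.0]
Dd = [sum(D[i][j] * d[j] for j in range(3)) for i in range(3)]
vector_condition = all(v > 0 for v in Dd)

# (i) D = B - M with B = diag(D) and M >= 0; estimate rho(B^{-1}M) by power
# iteration (valid for the Perron root of a non-negative matrix)
BinvM = [[(-D[i][j] / D[i][i] if i != j else 0.0) for j in range(3)]
         for i in range(3)]
v = [1.0, 1.0, 1.0]
rho = 0.0
for _ in range(200):
    w = [sum(BinvM[i][j] * v[j] for j in range(3)) for i in range(3)]
    rho = max(w)
    v = [x / rho for x in w]

print(minors_positive, vector_condition, rho < 1.0)
```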

For \(A\in R^{n\times n}\) with \(|A|\neq0\), we denote \(\Omega_{\rho}(A)\stackrel{\Delta}{=} \{z\in R^{n}|Az=\rho(A)z\}\), where the spectral radius \(\rho(A)\) is an eigenvalue of A. Then \(\Omega_{\rho}(A)\) includes all positive eigenvectors of A, provided that A has at least one positive eigenvector (see Ref. [46]).

Lemma 2.1

([47])

For \(\alpha_{i}>0\) and \(\sum_{i=1}^{n}\alpha_{i}=1\), \(x_{i}\geq0\), we have

$$\prod_{i=1}^{n}x_{i}^{\alpha_{i}} \leq \sum_{i=1}^{n}\alpha_{i}x_{i}, $$

where equality holds if and only if \(x_{i}=x_{j}\) for all \(i, j\in\mathcal{N}\).

Lemma 2.2

([12])

For \(a_{i}\geq0\), \(x_{i}\geq0\), \(i\in \mathcal{N}\), and any integer \(p>0\), we have

$$\Biggl(\sum_{i=1}^{n}a_{i}x_{i} \Biggr)^{p}\leq \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{p-1}\sum_{i=1}^{n}a_{i}x_{i}^{p}. $$

Lemma 2.3

([12])

For an integer \(p\geq2\), there exists a constant \(e_{p}(n)>0\) such that

$$ e_{p}(n) \Biggl(\sum_{i=1}^{n}|x_{i}|^{2} \Biggr)^{\frac{p}{2}}\leq \sum_{i=1}^{n}|x_{i}|^{p}, \quad \forall x=(x_{1},\ldots,x_{n})^{T}\in R^{n}. $$
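Lemmas 2.1-2.3 can be spot-checked numerically. In the sketch below, the constant \(e_{p}(n)=n^{1-p/2}\) is our own admissible choice, obtained from the power-mean inequality; it is not a value stated in [12]. All data are randomly generated.

```python
import math
import random

random.seed(2)

# Spot-checks of Lemmas 2.1-2.3 on random data (n = 5, p = 4).
n, p = 5, 4
x = [random.uniform(0.1, 3.0) for _ in range(n)]
a = [random.uniform(0.0, 2.0) for _ in range(n)]
w = [random.random() + 0.1 for _ in range(n)]
alpha = [v / sum(w) for v in w]                     # alpha_i > 0, sum = 1

# Lemma 2.1: weighted geometric mean <= weighted arithmetic mean
lhs1 = math.prod(xi ** ai for ai, xi in zip(alpha, x))
rhs1 = sum(ai * xi for ai, xi in zip(alpha, x))

# Lemma 2.2: (sum a_i x_i)^p <= (sum a_i)^{p-1} * sum a_i x_i^p
lhs2 = sum(ai * xi for ai, xi in zip(a, x)) ** p
rhs2 = sum(a) ** (p - 1) * sum(ai * xi ** p for ai, xi in zip(a, x))

# Lemma 2.3 with the (assumed) choice e_p(n) = n^{1 - p/2}:
# e_p(n) * (sum x_i^2)^{p/2} <= sum |x_i|^p
e_p = n ** (1 - p / 2)
lhs3 = e_p * sum(xi ** 2 for xi in x) ** (p / 2)
rhs3 = sum(abs(xi) ** p for xi in x)

print(lhs1 <= rhs1, lhs2 <= rhs2, lhs3 <= rhs3)
```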

3 A new \(\mathcal{L}\)-operator inequality

Let \(C^{2,1}[R^{n}\times R_{+};R_{+}]\) denote the family of non-negative functions \(V(x,t)\) on \(R^{n}\times R_{+}\) that are once continuously differentiable in t and twice continuously differentiable in x. Associated with system (2.1), for each \(V(x,t)\in C^{2,1}[R^{n}\times R_{+};R_{+}]\), we define an operator \(\mathcal{L}V\) from \(R^{n}\times R^{n}\times R_{+}\) to R by

$$\begin{aligned}& \begin{aligned}[b] \mathcal{L}V(x,t)={}&V_{t}(x,t)+V_{x}(x,t) \bigl[-A(t)x(t)+B(t)f\bigl(x(t)\bigr)+C(t)g(y)+I(t) \bigr] \\ &{}+\frac{1}{2}\operatorname{trace} \bigl[\sigma^{T}(t,x,y)V_{xx} \sigma(t,x,y) \bigr], \end{aligned} \\& y=x\bigl(t-\tau(t)\bigr),\qquad V_{t}(x,t)=\frac{\partial V(x,t)}{\partial t},\\& V_{x}(x,t)= \biggl(\frac{\partial V(x,t)}{\partial x_{1}},\ldots,\frac {\partial V(x,t)}{\partial x_{n}} \biggr),\qquad V_{xx}= \biggl(\frac{\partial V^{2}(x,t)}{\partial x_{i}\partial x_{j}} \biggr)_{n\times n}. \end{aligned}$$
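To make the operator concrete, the following sketch evaluates \(\mathcal{L}V\) for the particular quadratic choice \(V(x,t)=x^{T}x\), for which \(V_{t}=0\), \(V_{x}=2x^{T}\), and \(V_{xx}=2E\). All model functions passed in (A, B, C, f, g, I, sigma) are caller-supplied placeholders, not data from the paper.

```python
# LV for the quadratic choice V(x, t) = x^T x:
#   LV = 2 x^T [ -A(t)x + B(t)f(x) + C(t)g(y) + I(t) ] + trace(sigma^T sigma)
def LV_quadratic(x, y, t, A, B, C, f, g, I, sigma):
    n = len(x)
    drift = [-A(t)[i] * x[i]
             + sum(B(t)[i][j] * f(x[j]) for j in range(n))
             + sum(C(t)[i][j] * g(y[j]) for j in range(n))
             + I(t)[i]
             for i in range(n)]
    first_order = 2.0 * sum(x[i] * drift[i] for i in range(n))   # V_x * drift
    s = sigma(t, x, y)
    # (1/2) trace(sigma^T (2E) sigma) = sum_{i,j} sigma_ij^2
    second_order = sum(s[i][j] ** 2 for i in range(n) for j in range(n))
    return first_order + second_order

# demo: pure linear decay (B = C = sigma = 0, I = 0, a_i = 1) from x = (1, 2)
zeros2 = lambda t: [[0.0, 0.0], [0.0, 0.0]]
demo = LV_quadratic([1.0, 2.0], [0.0, 0.0], 0.0,
                    lambda t: [1.0, 1.0], zeros2, zeros2,
                    abs, abs, lambda t: [0.0, 0.0],
                    lambda t, x, y: [[0.0, 0.0], [0.0, 0.0]])
print(demo)  # -> -10.0, since 2 * (1*(-1) + 2*(-2)) = -10
```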

Differential inequalities are a main tool for investigating differential equations. Therefore, by using the properties of the \(\mathcal{L}\)-operator and the \(\mathcal{M}\)-matrix, we introduce a new \(\mathcal{L}\)-operator inequality that is effective for stochastic non-autonomous systems.

Theorem 3.1

Let \(\sigma< b\leq+\infty\), \(P=(p_{ij})_{n\times n}\), \(p_{ij}\geq0\) for \(i\neq j\), \(Q=(q_{ij})_{n\times n}\geq 0\), \(\alpha(t)=(\alpha_{ij}(t))_{n\times n}\geq 0\), \(\beta(t)=(\beta_{ij}(t))_{n\times n}\geq 0\), \(r(t)=(r_{1}(t),\ldots,r_{n}(t))^{T}\geq0\) and for any \(t\geq\sigma\), there exists a constant \(\delta>0\) such that \(e^{\delta(t-\sigma)}r_{i}(t)\), \(\alpha_{ij}(t)\), \(\beta_{ij}(t)\), \(i,j\in\mathcal{N}\) are integrable functions on \([\sigma,t]\). The functions \(V_{i}(x)\in C^{2}[R^{n},R_{+}]\) satisfy

$$\begin{aligned} &\mathcal{L}V_{i}(x)\leq\sum _{j=1}^{n} \bigl[\bigl(p_{ij}+\alpha _{ij}(t)\bigr)V_{j}\bigl(x(t)\bigr)+\bigl(q_{ij}+ \beta_{ij}(t)\bigr)V_{j}\bigl(x\bigl(t-\tau(t)\bigr)\bigr) \bigr]+r_{i}(t), \\ &\quad t\in[\sigma,b), \forall i\in\mathcal{N}. \end{aligned}$$
(3.1)

Suppose that \(\Pi=-(P+Q)\) is a nonsingular \(\mathcal{M}\)-matrix. Then we obtain

$$ \mathbb{E}V_{i}\bigl(x(t)\bigr)\leq z_{i}e^{-\lambda(t-\sigma)}e^{\int_{\sigma }^{t}\theta(s)\,ds},\quad t\in[\sigma,b), \forall i \in\mathcal{N}, $$
(3.2)

provided that \(\mathbb{E}V_{i}(x(t))<\infty\) for all \(t\in[\sigma,b)\) and

$$ \mathbb{E}V_{i}\bigl(x(t)\bigr)\leq z_{i}e^{-\lambda(t-\sigma)},\quad t\in[\sigma-\tau ,\sigma], \forall i\in \mathcal{N}, $$
(3.3)

where \(\lambda\in(0,\delta]\) and \(z\in\Omega_{\mathcal{M}}(\Pi)\) with \(z_{i}\geq1\), \(\forall i\in\mathcal{N}\), satisfy

$$ \bigl(\lambda E+P+Qe^{\lambda\tau}\bigr)z< 0, $$
(3.4)

and \(\theta(s)=\max_{1\leq i\leq n}\{\sum_{j=1}^{n}(\alpha_{ij}(s)+\beta _{ij}(s)e^{\lambda\tau})\frac{z_{j}}{z_{i}}+e^{\lambda(s-\sigma)}r_{i}(s)\}\).

Proof

By using the Itô formula, for the solution process \(x(t)\) of (2.1) and \(V_{i}(x)\in C^{2}[R^{n},R_{+}]\), we can obtain

$$ \begin{aligned}[b] V_{i}\bigl(x(t)\bigr)={}&V_{i}\bigl(x(\sigma) \bigr)+ \int_{\sigma}^{t}\mathcal{L}V_{i}\bigl(x(s) \bigr)\,ds\\ &{}+ \int _{\sigma}^{t}\frac{\partial V_{i}(x(s))}{\partial x}\sigma\bigl(s,x(s),x \bigl(s-\tau(s)\bigr)\bigr)\,d\omega(s), \quad t\geq\sigma, \forall i\in \mathcal{N}. \end{aligned} $$
(3.5)

Taking expectations on both sides (the Itô integral has zero expectation), we get

$$ \mathbb{E}V_{i}\bigl(x(t)\bigr)=\mathbb{E}V_{i} \bigl(x(\sigma)\bigr)+ \int_{\sigma}^{t}\mathbb {E}\mathcal{L}V_{i} \bigl(x(s)\bigr)\,ds,\quad t\geq\sigma, \forall i\in \mathcal{N}. $$
(3.6)

For sufficiently small \(\bigtriangleup t>0\), we have

$$ \mathbb{E}V_{i}\bigl(x(t+\bigtriangleup t)\bigr)= \mathbb{E}V_{i}\bigl(x(\sigma)\bigr)+ \int_{\sigma}^{t+\bigtriangleup t}\mathbb{E}\mathcal{L}V_{i} \bigl(x(s)\bigr)\,ds,\quad t\geq\sigma, \forall i\in \mathcal{N}. $$
(3.7)

Therefore, from (3.1), (3.6), and (3.7), we have

$$\begin{aligned} &\mathbb{E}V_{i}\bigl(x(t+\bigtriangleup t)\bigr)- \mathbb{E}V_{i}\bigl(x(t)\bigr) \\ &\quad=\int_{t}^{t+\bigtriangleup t}\mathbb{E}\mathcal{L}V_{i} \bigl(x(s)\bigr)\,ds \\ &\quad\leq \int_{t}^{t+\bigtriangleup t} \Biggl[\sum _{j=1}^{n}\bigl(p_{ij}+ \alpha_{ij}(s)\bigr)\mathbb{E}V_{j}\bigl(x(s)\bigr) +\sum_{j=1}^{n}\bigl(q_{ij}+ \beta_{ij}(s)\bigr)\mathbb{E}V_{j}\bigl(x\bigl(s-\tau (s) \bigr)\bigr)+r_{i}(s) \Biggr]\,ds \\ &\quad\leq \int_{t}^{t+\bigtriangleup t} \Biggl[\sum _{j=1}^{n}\bigl(p_{ij}+ \alpha_{ij}(s)\bigr)\mathbb{E}V_{j}\bigl(x(s)\bigr) \\ &\qquad{} +\sum_{j=1}^{n}\bigl(q_{ij}+ \beta_{ij}(s)\bigr)\bigl[\mathbb{E}V_{j}\bigl(x(s)\bigr) \bigr]_{\tau }+r_{i}(s) \Biggr]\,ds,\quad t\geq\sigma, \forall i\in\mathcal{N}. \end{aligned}$$
(3.8)

Because \(\mathbb{E}V_{i}(x(t))<\infty\) for all \(t\in[\sigma,b)\), it follows from (3.6) that \(\mathbb{E}V_{i}(x(t))\) is continuous. Thus, from (3.8), we obtain

$$\begin{aligned} &D^{+}\mathbb{E}V_{i}\bigl(x(t)\bigr)\leq \sum _{j=1}^{n} \bigl[\bigl(p_{ij}+ \alpha_{ij}(t)\bigr)\mathbb {E}V_{j}\bigl(x(t)\bigr)+ \bigl(q_{ij}+\beta_{ij}(t)\bigr)\bigl[\mathbb{E}V_{j} \bigl(x(t)\bigr)\bigr]_{\tau} \bigr]+r_{i}(t), \\ &\quad t\in[ \sigma,b), \forall i\in\mathcal{N}. \end{aligned}$$
(3.9)

Let \(v_{i}(t)=\mathbb{E}V_{i}(x(t))\). To prove Theorem 3.1, it suffices to show that

$$ v_{i}(t)\leq z_{i}e^{-\lambda(t-\sigma)}e^{\int_{\sigma}^{t}\theta(s)\,ds}, \quad t\in [\sigma,b), \forall i\in\mathcal{N,} $$
(3.10)

provided that

$$ v_{i}(t)\leq z_{i}e^{-\lambda(t-\sigma)},\quad t \in[\sigma-\tau,\sigma], \forall i\in\mathcal{N,} $$
(3.11)

holds.

Because Π is an \(\mathcal{M}\)-matrix, there exists a vector \(z\in \Omega_{\mathcal{M}}(\Pi)\) with \(z_{i}\geq1\), \(i\in\mathcal{N}\), and \(\Pi z>0\), that is, \((P+Q)z<0\). By continuity, there exists a constant \(\lambda\in(0,\delta]\) satisfying (3.4).

To prove (3.10), we first show that, for any given \(\epsilon>0\),

$$ v_{i}(t)< (1+\epsilon)z_{i}e^{-\lambda(t-\sigma)}e^{\int_{\sigma}^{t}\theta (s)\,ds} \stackrel{\Delta}{=} \xi_{i}(t),\quad t\in[\sigma,b), \forall i\in \mathcal{N}. $$
(3.12)

If (3.12) were not true, then, since \(v_{i}(t)\) is continuous on \([\sigma,b)\) and (3.11) holds, there would exist a constant \(t^{*}\in(\sigma,b)\) and an index \(m\in\mathcal{N}\) such that

$$\begin{aligned}& v_{m}\bigl(t^{*}\bigr)=\xi_{m}\bigl(t^{*}\bigr),\qquad D^{+}v_{m}\bigl(t^{*}\bigr)\geq\xi'_{m}\bigl(t^{*} \bigr), \end{aligned}$$
(3.13)
$$\begin{aligned}& v_{i}(t)\leq\xi_{i}(t),\quad t\in\bigl[\sigma-\tau,t^{*} \bigr], \forall i\in\mathcal{N}. \end{aligned}$$
(3.14)

By using (3.9), (3.11)-(3.14), \(z_{m}\geq1\), \(p_{ij}\geq 0 \) (\(i\neq j\)), and \(Q\geq0\), we can get

$$\begin{aligned} D^{+}v_{m}\bigl(t^{*}\bigr) \leq& \sum _{j=1}^{n} \bigl[\bigl(p_{mj}+\alpha _{mj}\bigl(t^{*}\bigr)\bigr)v_{j}\bigl(t^{*}\bigr)+ \bigl(q_{mj}+\beta_{mj}\bigl(t^{*}\bigr)\bigr) \bigl[v_{j}\bigl(t^{*}\bigr)\bigr]_{\tau} \bigr]+r_{m} \bigl(t^{*}\bigr) \\ \leq& \sum_{j=1}^{n} \bigl[ \bigl(p_{mj}+\alpha_{mj}\bigl(t^{*}\bigr)\bigr) (1+\epsilon )z_{j}e^{-\lambda(t^{*}-\sigma)}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds} \\ &{} +\bigl(q_{mj}+\beta_{mj}\bigl(t^{*}\bigr) \bigr)e^{\lambda\tau}(1+\epsilon )z_{j}e^{-\lambda(t^{*}-\sigma)}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds} \bigr]+r_{m}\bigl(t^{*}\bigr) \\ \leq&\sum_{j=1}^{n}\bigl(p_{mj}+q_{mj}e^{\lambda\tau} \bigr) (1+\epsilon )z_{j}e^{-\lambda(t^{*}-\sigma)}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds} \\ &{}+\sum_{j=1}^{n}\bigl( \alpha_{mj}\bigl(t^{*}\bigr)+\beta_{mj}\bigl(t^{*} \bigr)e^{\lambda\tau}\bigr)\frac {z_{j}}{z_{m}}(1+\epsilon)z_{m}e^{-\lambda(t^{*}-\sigma)}e^{\int_{\sigma }^{t^{*}}\theta(s)\,ds} \\ &{}+e^{\lambda(t^{*}-\sigma)}r_{m}\bigl(t^{*}\bigr) (1+\epsilon)z_{m}e^{-\lambda(t^{*}-\sigma )}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds}+r_{m} \bigl(t^{*}\bigr) \bigl[1-(1+\epsilon )z_{m}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds} \bigr] \\ < &-\lambda(1+\epsilon)z_{m}e^{-\lambda(t^{*}-\sigma)}e^{\int_{\sigma }^{t^{*}}\theta(s)\,ds}+\theta \bigl(t^{*}\bigr) (1+\epsilon)z_{m}e^{-\lambda(t^{*}-\sigma )}e^{\int_{\sigma}^{t^{*}}\theta(s)\,ds} \\ =&\xi'_{m}\bigl(t^{*}\bigr), \end{aligned}$$
(3.15)

which contradicts the second inequality in (3.13). Thus, (3.12) holds. Letting \(\epsilon\rightarrow0^{+}\) in (3.12), we obtain (3.10). □

Remark 3.1

If \(\alpha(t)\equiv0\) and \(\beta(t)\equiv 0\) in (3.1), we can get Theorem 1 in [44].
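In practice, a decay rate λ satisfying condition (3.4) can be located numerically. Since \(z>0\) and \(Q\geq0\), the function \(\phi(\lambda)=\max_{i}[(\lambda E+P+Qe^{\lambda\tau})z]_{i}\) is continuous and strictly increasing, and \(\phi(0)<0\) exactly when \((P+Q)z<0\), so bisection recovers the supremum of admissible rates. The matrices P and Q, the vector z, and τ below are made-up illustrative data, not from the paper.

```python
import math

P = [[-3.0, 0.5], [0.4, -2.5]]     # p_ij >= 0 for i != j
Q = [[0.3, 0.2], [0.1, 0.4]]       # Q >= 0
z = [1.0, 1.0]
tau = 0.5

def phi(lam):
    # max_i of the i-th component of (lam*E + P + Q*e^{lam*tau}) z
    return max(lam * z[i]
               + sum((P[i][j] + Q[i][j] * math.exp(lam * tau)) * z[j]
                     for j in range(2))
               for i in range(2))

assert phi(0.0) < 0.0              # holds because (P + Q) z < 0 for this data
lo, hi = 0.0, 10.0
for _ in range(60):                # bisection on the sign change of phi
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 0.0 else (lo, mid)
lam = lo                           # every lambda in (0, lam] satisfies (3.4)
print(round(lam, 4))
```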

4 Application to neural networks

For system (2.1), some assumptions are given in the following:

(A1) For \(i,j\in \mathcal{N}\), \(a_{i}(t)>0\), \(b_{ij}(t)\), \(c_{ij}(t)\) and \(I_{i}(t)\) are bounded continuous functions defined on \(R_{+}\).

(A2) There are positive constants \(l_{j}\) and \(k_{j}\), \(j\in\mathcal{N}\) such that

$$\bigl|f_{j}(r)-f_{j}(s)\bigr|\leq l_{j}|r-s|,\qquad \bigl|g_{j}(r)-g_{j}(s)\bigr|\leq k_{j}|r-s|, \quad\forall r,s\in R. $$

(A3) There exist non-negative bounded functions \(m_{ij}(t)\), \(n_{ij}(t)\), and \(h_{i}(t)\), \(i,j\in\mathcal{N}\) such that

$$\begin{aligned} &\bigl[\sigma_{ij}(t,v,u)-\sigma_{ij}(t,\bar{v},\bar{u}) \bigr]^{2}\leq m_{ij}(t) (v_{j}- \bar{v}_{j})^{2}+n_{ij}(t) (u_{j}- \bar{u}_{j})^{2}+h_{i}(t), \\ &\quad\forall u,\bar{u},v, \bar{v}\in R^{n}, t\geq t_{0}. \end{aligned}$$

(A4) There exist non-negative integrable functions \(\hat{\alpha}_{ij}(t)\), \(\hat{\beta}_{ij}(t)\) on \([t_{0},t]\) such that

$$\begin{aligned}& P(t)\leq\widehat{P}+\hat{\alpha}(t),\qquad Q(t)\leq \widehat{Q}+\hat{\beta}(t) \quad \mbox{and} \\& \widehat{\Pi}=-(\widehat {P}+\widehat{Q}) \mbox{ is a nonsingular } \mathcal{M}\mbox{-matrix}, \end{aligned}$$

where \(P(t)=(p_{ij}(t))_{n\times n}\), \(p_{ii}(t)=-pa_{i}(t)+(p-1)\sum_{j=1}^{n} (|b_{ij}(t)|l_{j}+|c_{ij}(t)|k_{j})+\frac{1}{2}(p-1)(p-2) \sum_{j=1}^{n}(m_{ij}(t)+n_{ij}(t))+|b_{ii}(t)|l_{i} +(p-1)m_{ii}(t)+\frac{1}{2}(p-1)(p-2)\), \(p_{ij}(t)=|b_{ij}(t)|l_{j}+(p-1)m_{ij}(t)\), \(i\neq j\), \(Q(t)=(q_{ij}(t))_{n\times n}\), \(q_{ij}(t)=|c_{ij}(t)|k_{j}+(p-1)n_{ij}(t)\), \(\hat{\alpha}(t)=(\hat{\alpha}_{ij}(t))_{n\times n}\), \(\hat{\beta}(t)=(\hat{\beta}_{ij}(t))_{n\times n}\), \(\widehat{P}=(\hat{p}_{ij})_{n\times n}\), \(\hat{p}_{ij}\geq 0\), \(i\neq j\), \(\widehat{Q}=(\hat{q}_{ij})_{n\times n}\geq 0\), \(i,j\in\mathcal{N}\), \(p\geq2\). Let \(\hat{r}_{i}(t)=(p-1)(nh_{i}(t))^{\frac{p}{2}}\), \(i\in\mathcal{N}\), and there exists a constant \(\hat{\delta}>0\) such that \(e^{\hat{\delta}(t-t_{0})}\hat{r}_{i}(t)\) is an integrable function on \([t_{0},t]\).

(A5) For any \(u,v\in R^{n}\), there exist matrices \(R_{k}=(r_{ij}^{(k)})_{n\times n}\geq0\) such that

$$\bigl[I_{k}(u)-I_{k}(v)\bigr]^{+}\leq R_{k}[u-v]^{+},\quad k\in\mathbb{N}. $$

Let \(\hat{R}_{k}= (\hat{r}_{ij}^{(k)} )_{n\times n}\), where \(\hat{r}_{ij}^{(k)}\geq r_{ij}^{(k)} (\sum_{l=1}^{n}r_{il}^{(k)} )^{p-1}\).

(A6) The set \(\Omega=\bigcap_{k=1}^{\infty}(\Omega_{\rho}(\hat{R}_{k}))\cap \Omega_{\mathcal{M}}(\widehat{\Pi})\) is nonempty. For a given \(z\in\Omega\), the scalar \(\lambda\in(0,\hat{\delta}]\) satisfies

$$ \bigl(\lambda E+\widehat{P}+\widehat{Q}e^{\lambda\tau}\bigr)z< 0. $$
(4.1)

(A7) There are constants \(0\leq\mu<\lambda\) and \(b\geq0\) which satisfy

$$ \int_{t_{0}}^{t}\hat{\theta}(s)\,ds\leq \mu(t-t_{0})+b, $$
(4.2)

where \(\hat{\theta}(s)=\max_{1\leq i\leq n} \{\sum_{j=1}^{n}(\hat {\alpha}_{ij}(s)+\hat{\beta}_{ij}(s)e^{\lambda\tau})\frac {z_{j}}{z_{i}}+e^{\lambda(s-t_{0})}\hat{r}_{i}(s) \}\).

(A8) There exists a constant γ such that

$$ \frac{\ln\gamma_{k}}{t_{k}-t_{k-1}}\leq\gamma< \lambda-\mu,\quad k\in\mathbb{N}, $$
(4.3)

where \(\gamma_{k}\geq\max\{1, \rho(\hat{R}_{k})\}\).
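The impulse-rate condition (A8) is straightforward to verify numerically once the \(\hat{R}_{k}\) are known. The sketch below uses a hypothetical sequence of impulse moments and a fixed \(2\times2\) matrix \(\hat{R}_{k}\); the values of λ, μ, the moments \(t_{k}\), and all matrix entries are invented for illustration.

```python
import math

# Checking (A8) for a hypothetical impulse sequence.
lambda_, mu = 1.2, 0.3
t = [0.0, 1.0, 2.2, 3.5, 5.0]                 # t_0 < t_1 < t_2 < ...
R_hat = [[[1.1, 0.2], [0.1, 1.0]]] * 4        # the same 2x2 R_hat_k at each t_k

def rho_2x2(M):
    # spectral radius of a non-negative 2x2 matrix via the trace/determinant
    # formula; the discriminant (tr^2 - 4 det) = (a - d)^2 + 4bc is
    # non-negative here because b, c >= 0
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

gammas = [max(1.0, rho_2x2(R)) for R in R_hat]             # gamma_k >= max{1, rho(R_hat_k)}
rates = [math.log(g) / (t[k + 1] - t[k]) for k, g in enumerate(gammas)]
gamma = max(rates)                                         # smallest admissible gamma in (4.3)
print(round(gamma, 4), gamma < lambda_ - mu)
```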

Theorem 4.1

Assume that (A1)-(A8) hold. Then system (2.1) is exponentially p-stable, and the exponential convergence rate is no less than \(\lambda-\mu-\gamma\).

Proof

Let \(x(t)\) and \(y(t)\) be any two solutions of system (2.1) corresponding to the initial values \(\phi(s), \varphi(s)\in PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\), respectively, and let \(z_{i}(t)=x_{i}(t)-y_{i}(t)\), \(i\in\mathcal{N}\). Then from (2.1) we get

$$ \left \{ \textstyle\begin{array}{@{}l} dz_{i}(t)= [-a_{i}(t)z_{i}(t)+ \sum_{j=1}^{n}b_{ij}(t)(f_{j}(x_{j}(t))-f_{j}(y_{j}(t))) \\ \hphantom{dz_{i}(t)=}{}+\sum_{j=1}^{n}c_{ij}(t)(g_{j}(x_{j}(t-\tau_{ij}(t)))-g_{j}(y_{j}(t-\tau _{ij}(t)))) ]\,dt \\ \hphantom{dz_{i}(t)=}{}+\sum_{j=1}^{n}(\sigma_{ij}(t,x(t),x(t-\tau(t)))-\sigma _{ij}(t,y(t),y(t-\tau(t))))\,d\omega_{j}(t), \\ \hphantom{dz_{i}(t)=}\quad t\geq t_{0}, t\neq t_{k}, \\ z_{i}(t_{k})= x_{i}(t_{k})- y_{i}(t_{k})=I_{ik}(x(t_{k}^{-}))-I_{ik}(y(t_{k}^{-})),\quad t=t_{k}, \\ z_{i}(s)=\phi_{i}(s)-\varphi_{i}(s),\quad t_{0}-\tau\leq s\leq t_{0}. \end{array}\displaystyle \right . $$
(4.4)

Let \(V_{i}(z(t))=|z_{i}(t)|^{p}\), \(p\geq2\), \(i\in\mathcal{N}\). Then we get

$$\frac{\partial V_{i}(z)}{\partial z_{i}}=p|z_{i}|^{p-1}\operatorname {sgn}(z_{i})=p|z_{i}|^{p-2}z_{i},\qquad \frac{\partial^{2}V_{i}(z)}{\partial z_{i}^{2}}=p(p-1)|z_{i}|^{p-2}, $$

where \(\operatorname{sgn}(\cdot)\) denotes the sign function. Therefore, from (A1)-(A4), Lemma 2.1, and (4.4), we get

$$\begin{aligned} \mathcal{L}V_{i}(z) =& p\bigl|z_{i}(t)\bigr|^{p-2}z_{i}(t) \Biggl\{ -a_{i}(t)z_{i}(t)+\sum _{j=1}^{n}b_{ij}(t)\bigl[f_{j} \bigl(x_{j}(t)\bigr)-f_{j}\bigl(y_{j}(t)\bigr) \bigr] \\ &{} +\sum_{j=1}^{n}c_{ij}(t) \bigl[g_{j}\bigl(x_{j}\bigl(t-\tau_{ij}(t)\bigr) \bigr)-g_{j}\bigl(y_{j}\bigl(t-\tau _{ij}(t) \bigr)\bigr)\bigr] \Biggr\} \\ &{}+\frac{1}{2}p(p-1)|z_{i}|^{p-2}\sum _{j=1}^{n}\bigl[\sigma _{ij}\bigl(t,x(t),x \bigl(t-\tau(t)\bigr)\bigr)-\sigma_{ij}\bigl(t,y(t),y\bigl(t-\tau(t) \bigr)\bigr)\bigr]^{2} \\ \leq&-pa_{i}(t)\bigl|z_{i}(t)\bigr|^{p}+p\bigl|z_{i}(t)\bigr|^{p-1} \sum_{j=1}^{n}\bigl|b_{ij}(t)\bigr|l_{j}\bigl|z_{j}(t)\bigr| \\ &{}+p\bigl|z_{i}(t)\bigr|^{p-1} \sum_{j=1}^{n}\bigl|c_{ij}(t)\bigr|k_{j}\bigl|z_{j} \bigl(t-\tau_{ij}(t)\bigr)\bigr| \\ &{}+\frac{1}{2}p(p-1)\bigl|z_{i}(t)\bigr|^{p-2}\sum _{j=1}^{n}\bigl[m_{ij}(t)\bigl|z_{j}(t)\bigr|^{2}+n_{ij}(t)\bigl|z_{j} \bigl(t-\tau _{ij}(t)\bigr)\bigr|^{2}+h_{i}(t)\bigr] \\ \leq&-pa_{i}(t)\bigl|z_{i}(t)\bigr|^{p}+\sum _{j=1}^{n}|b_{ij}|l_{j} \bigl[(p-1)\bigl|z_{i}(t)\bigr|^{p}+\bigl|z_{j}(t)\bigr|^{p} \bigr] \\ &{}+\sum_{j=1}^{n}|c_{ij}|k_{j} \bigl[(p-1)\bigl|z_{i}(t)\bigr|^{p}+\bigl|z_{j}\bigl(t-\tau _{ij}(t)\bigr)\bigr|^{p}\bigr] \\ &{}+\frac{1}{2}(p-1) (p-2)\sum _{j=1}^{n}m_{ij}(t)\bigl|z_{i}(t)\bigr|^{p} \\ &{}+(p-1)\sum_{j=1}^{n}m_{ij}(t)\bigl|z_{j}(t)\bigr|^{p}+ \frac{1}{2}(p-1) (p-2)\sum_{j=1}^{n}n_{ij}(t)\bigl|z_{i}(t)\bigr|^{p} \\ &{}+(p-1)\sum_{j=1}^{n}n_{ij}(t)\bigl|z_{j} \bigl(t-\tau_{ij}(t)\bigr)\bigr|^{p}+\frac {1}{2}(p-1) (p-2)\bigl|z_{i}(t)\bigr|^{p}+(p-1) \bigl(nh_{i}(t) \bigr)^{\frac{p}{2}} \\ \leq&\sum_{j=1}^{n} \bigl[p_{ij}(t)V_{j} \bigl(z(t)\bigr)+q_{ij}(t)V_{j}\bigl(z\bigl(t-\tau (t)\bigr) \bigr) \bigr]+\hat{r}_{i}(t) \\ \leq&\sum_{j=1}^{n} \bigl[\bigl( \hat{p}_{ij}+\hat{\alpha }_{ij}(t)\bigr)V_{j} \bigl(z(t)\bigr)+\bigl(\hat{q}_{ij}+\hat{\beta}_{ij}(t) \bigr)V_{j}\bigl(z\bigl(t-\tau (t)\bigr)\bigr) \bigr]+ \hat{r}_{i}(t). \end{aligned}$$
(4.5)

For the initial conditions \(\phi(s), \varphi(s)\in PC_{\mathscr {F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\), we know \(z(s)=\phi(s)-\varphi(s)\in PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\). Since, for any initial value in \(PC_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\), model (2.1) has a global solution satisfying \(\mathbb{E} (\sup_{t_{0}\leq s\leq t}|x(s)|^{r})<\infty\) for all \(t\geq t_{0}\) and \(r>0\), we know \(\mathbb{E}(\sup_{t_{0}\leq s\leq t}|z(s)|^{r})<\infty\) for all \(t\geq t_{0}\) and \(r>0\). Thus, \(\mathbb{E}V_{i}(z(t))<\infty\). Since \(\widehat{\Pi}=-(\widehat{P}+\widehat{Q})\) is an \(\mathcal{M}\)-matrix and Ω is nonempty, there must be a positive vector \(z\in\Omega\) and a constant \(\lambda\in(0,\hat{\delta}]\) such that (4.1) holds and

$$ \mathbb{E}V_{i}\bigl(z(t)\bigr)\leq\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}} \kappa\| \phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t-t_{0})},\quad t \in[t_{0}-\tau,t_{0}], $$
(4.6)

where \(\kappa>0\) is a constant such that \(\kappa\|\phi-\varphi\| _{L^{p}}^{p}\geq1\).

From (A4), (4.5), (4.6), and Theorem 3.1, we get

$$ \mathbb{E}V_{i}\bigl(z(t)\bigr)\leq\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}} \kappa\| \phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat{\theta }(s)\,ds}, \quad t\in[t_{0},t_{1}). $$
(4.7)

Assume that the inequalities

$$\begin{aligned} &\mathbb{E}V_{i}\bigl(z(t)\bigr)\leq \gamma_{0}\gamma_{1}\cdots\gamma_{m-1} \frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat{\theta}(s)\,ds}, \\ &\quad t_{m-1}\leq t< t_{m}, \end{aligned}$$
(4.8)

hold for all \(m=1,2,\ldots,k\), where \(\gamma_{0}=1\). Then, from (4.8), (A5), and Lemma 2.2, we obtain

$$\begin{aligned} \mathbb{E}V_{i}\bigl(z(t_{k})\bigr) =& \mathbb{E}\bigl|x_{i}(t_{k})-y_{i}(t_{k})\bigr|^{p} \\ =&\mathbb{E}\bigl|I_{ik}\bigl(x\bigl(t^{-}_{k}\bigr) \bigr)-I_{ik}\bigl(y\bigl(t_{k}^{-}\bigr)\bigr)\bigr|^{p} \\ \leq&\mathbb{E} \Biggl(\sum_{j=1}^{n}r_{ij}^{(k)}\bigl|z_{j} \bigl(t_{k}^{-}\bigr)\bigr| \Biggr)^{p} \\ \leq& \Biggl(\sum_{j=1}^{n}r_{ij}^{(k)} \Biggr)^{p-1}\sum_{j=1}^{n}r_{ij}^{(k)} \mathbb{E}\bigl|z_{j}\bigl(t_{k}^{-}\bigr)\bigr|^{p} \\ \leq&\sum_{j=1}^{n}\hat{r}_{ij}^{(k)} \mathbb{E}\bigl|z_{j}\bigl(t_{k}^{-}\bigr)\bigr|^{p}=\sum _{j=1}^{n}\hat{r}_{ij}^{(k)} \mathbb{E}V_{j}\bigl(z\bigl(t_{k}^{-}\bigr)\bigr) \\ \leq&\gamma_{0}\gamma_{1}\cdots\gamma_{k-1}\sum _{j=1}^{n}\hat {r}_{ij}^{(k)} \frac{z_{j}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi -\varphi\|_{L^{p}}^{p}e^{-\lambda( t_{k}-t_{0})}e^{\int_{t_{0}}^{t_{k}}\hat{\theta }(s)\,ds} \\ =&\gamma_{0}\gamma_{1}\cdots\gamma_{k-1}\rho( \hat{R}_{k})\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t_{k}-t_{0})}e^{\int_{t_{0}}^{t_{k}}\hat{\theta}(s)\,ds} \\ \leq&\gamma_{0}\gamma_{1}\cdots\gamma_{k-1} \gamma_{k}\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t_{k}-t_{0})}e^{\int_{t_{0}}^{t_{k}}\hat{\theta}(s)\,ds}. \end{aligned}$$
(4.9)
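The step from \((\sum_{j}r_{ij}^{(k)}|z_{j}|)^{p}\) to \((\sum_{j}r_{ij}^{(k)})^{p-1}\sum_{j}r_{ij}^{(k)}|z_{j}|^{p}\) in (4.9) is the power-mean (Hölder-type) inequality for nonnegative weights and \(p\geq1\). As an illustrative sanity check (not part of the proof), it can be verified numerically on random data:

```python
import random

def holder_step(r, a, p):
    """Return (lhs, rhs) of (sum r_j a_j)^p <= (sum r_j)^(p-1) sum r_j a_j^p."""
    lhs = sum(ri * ai for ri, ai in zip(r, a)) ** p
    rhs = sum(r) ** (p - 1) * sum(ri * ai ** p for ri, ai in zip(r, a))
    return lhs, rhs

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 5)
    r = [random.uniform(0.01, 2.0) for _ in range(n)]  # positive weights r_j
    a = [random.uniform(0.0, 3.0) for _ in range(n)]   # nonnegative a_j
    p = random.uniform(1.0, 4.0)                       # any p >= 1
    lhs, rhs = holder_step(r, a, p)
    assert lhs <= rhs * (1 + 1e-9) + 1e-12
```

Equality holds exactly when all \(a_{j}\) coincide, which is the Jensen-inequality equality case with weights \(r_{j}/\sum_{j}r_{j}\).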

Inequality (4.9), together with (4.8) and \(\gamma_{k}\geq1\), leads to

$$\begin{aligned} \mathbb{E}V_{i}\bigl(z(t)\bigr) \leq& \gamma_{0}\gamma_{1}\cdots\gamma_{k-1}\gamma _{k}\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\| _{L^{p}}^{p}e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat{\theta}(s)\,ds} \\ \leq&\gamma_{0}\gamma_{1}\cdots\gamma_{k}e^{\int_{0}^{t_{k}}\hat{\theta }(s)\,ds}e^{-\lambda(t_{k}-t_{0})} \frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\} }\kappa\|\phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t-t_{k})}, \\ &{} t\in[t_{k}-\tau,t_{k}]. \end{aligned}$$
(4.10)

Let

$$\begin{aligned}& \tilde{z}_{i}=\gamma_{0}\gamma_{1}\cdots \gamma_{k}e^{\int_{0}^{t_{k}}\hat{\theta }(s)\,ds}\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi \|_{L^{p}}^{p},\qquad \tilde{z}=(\tilde{z}_{1},\ldots ,\tilde{z}_{n})^{T},\\& U_{i}\bigl(z(t) \bigr)=e^{\lambda(t_{k}-t_{0})}V_{i}\bigl(z(t)\bigr), \end{aligned}$$

then the vector \(\tilde{z}\in \Omega_{\mathcal{M}}(\widehat{\Pi})\) with \(\tilde{z}_{i}\geq 1\) for each \(i\in\mathcal{N}\). From (4.10), we get

$$ \mathbb{E}U_{i}\bigl(z(t)\bigr)\leq \tilde{z}_{i}e^{-\lambda(t-t_{k})},\quad t\in[t_{k}- \tau,t_{k}]. $$
(4.11)

From (4.5), we obtain

$$ \mathcal{L}U_{i}\bigl(z(t)\bigr)\leq\sum _{j=1}^{n} \bigl[\bigl(\hat{p}_{ij}+\hat{ \alpha }_{ij}(t)\bigr)U_{j}\bigl(z(t)\bigr)+\bigl( \hat{q}_{ij}+\hat{\beta}_{ij}(t)\bigr)U_{j} \bigl(z\bigl(t-\tau (t)\bigr)\bigr) \bigr] +e^{\lambda(t_{k}-t_{0})}\hat{r}_{i}(t). $$
(4.12)

Furthermore, we can easily get

$$\begin{aligned} \breve{\theta}(s)&=\max_{1\leq i\leq n}\Biggl\{ \sum_{j=1}^{n}\bigl(\hat{\alpha}_{ij}(s)+\hat{\beta}_{ij}(s)e^{\lambda\tau }\bigr)\frac{\tilde{z}_{j}}{\tilde{z}_{i}}+e^{\lambda(s-t_{k})}e^{\lambda (t_{k}-t_{0})}\hat{r}_{i}(s)\Biggr\} \\ &=\max_{1\leq i\leq n}\Biggl\{ \sum_{j=1}^{n}\bigl(\hat {\alpha}_{ij}(s)+\hat{\beta}_{ij}(s)e^{\lambda\tau}\bigr)\frac {z_{j}}{z_{i}}+e^{\lambda(s-t_{0})}\hat{r}_{i}(s)\Biggr\} =\hat{\theta}(s). \end{aligned}$$

Therefore, from (A4), (4.11), (4.12), and Theorem 3.1, we get

$$ \mathbb{E}U_{i}\bigl(z(t)\bigr)\leq \tilde{z}_{i}e^{-\lambda(t-t_{k})}e^{\int _{t_{k}}^{t}\hat{\theta}(s)\,ds},\quad t \in[t_{k},t_{k+1}), $$
(4.13)

that is,

$$\begin{aligned} &\mathbb{E}V_{i}\bigl(z(t)\bigr)\leq \gamma_{0}\gamma_{1}\cdots\gamma_{k-1} \gamma_{k}\frac {z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\| _{L^{p}}^{p} e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat{\theta}(s)\,ds}, \\ &\quad t\in [t_{k},t_{k+1}). \end{aligned}$$
(4.14)

By mathematical induction, we conclude that

$$\begin{aligned} &\mathbb{E}V_{i}\bigl(z(t)\bigr)\leq \gamma_{0}\gamma_{1}\cdots\gamma_{k-1} \frac {z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\kappa\|\phi-\varphi\| _{L^{p}}^{p} e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat{\theta}(s)\,ds}, \\ &\quad t\in [t_{k-1},t_{k}), k\in\mathbb{N}. \end{aligned}$$
(4.15)

From (4.3), we know \(\gamma_{k}\leq e^{\gamma(t_{k}-t_{k-1})}\). Then we can use (4.2) and (4.15) to get

$$\begin{aligned} &\mathbb{E}\bigl|x_{i}(t)-y_{i}(t)\bigr|^{p} \\ &\quad= \mathbb{E}V_{i}\bigl(z(t)\bigr)\leq e^{\gamma(t_{1}-t_{0})}\cdots e^{\gamma(t_{k-1}-t_{k-2})}\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\} }\kappa\|\phi-\varphi\|_{L^{p}}^{p}e^{-\lambda( t-t_{0})}e^{\int_{t_{0}}^{t}\hat {\theta}(s)\,ds} \\ &\quad\leq\frac{z_{i}}{\min_{1\leq j\leq n}\{z_{j}\}}\hat{\kappa}\|\phi-\varphi \|_{L^{p}}^{p}e^{-(\lambda-\mu-\gamma)(t-t_{0})}, \quad \forall t\in[t_{0},t_{k}), k\in \mathbb{N}, \end{aligned}$$
(4.16)

where \(\hat{\kappa}\geq\kappa\) is a suitable constant.

From (4.16) and Lemma 2.3, we get

$$\begin{aligned} \mathbb{E}\bigl|x(t)-y(t)\bigr|^{p} \leq&\frac{1}{e_{p}(n)}\sum _{i=1}^{n}\mathbb {E}\bigl|x_{i}(t)-y_{i}(t)\bigr|^{p}= \frac{1}{e_{p}(n)}\sum_{i=1}^{n}\mathbb {E}V_{i}\bigl(z(t)\bigr) \\ \leq&\frac{1}{e_{p}(n)}\sum_{i=1}^{n} \frac{z_{i}}{\min_{1\leq j\leq n}\{ z_{j}\}}\hat{\kappa}\|\phi-\varphi\|_{L^{p}}^{p}e^{-(\lambda-\mu-\gamma )(t-t_{0})} \\ \triangleq& M\|\phi-\varphi\|_{L^{p}}^{p}e^{-(\lambda-\mu-\gamma)(t-t_{0})},\quad t \geq t_{0}. \end{aligned}$$
(4.17)

Therefore, the conclusion of Theorem 4.1 holds.

If \(I_{i}(t)\equiv0\), \(\sigma_{ij}(t,0,0)\equiv0\) for \(t\geq t_{0}\), \(I_{ik}(0)=0\), and \(f_{j}(0)=g_{j}(0)=0\), \(i,j\in\mathcal{N}\), \(k\in\mathbb{N}\), then the system (2.1) reduces to the following system:

$$ \left \{ \textstyle\begin{array}{@{}l} dx_{i}(t)= [-a_{i}(t)x_{i}(t)+ \sum_{j=1}^{n}b_{ij}(t)f_{j}(x_{j}(t))+\sum_{j=1}^{n}c_{ij}(t)g_{j}(x_{j}(t-\tau _{ij}(t))) ]\,dt \\ \hphantom{dx_{i}(t)=}{}+\sum_{j=1}^{n}\sigma_{ij}(t,x(t),x(t-\tau(t)))\,d\omega_{j}(t), \quad t\geq t_{0}, t\neq t_{k}, \\ x_{i}(t_{k})=I_{ik}(x(t_{k}^{-})),\quad t=t_{k}, \\ x_{i}(s)=\phi_{i}(s),\quad t_{0}-\tau\leq s\leq t_{0}, \end{array}\displaystyle \right . $$
(4.18)

with an equilibrium point \(x^{*}=0\). From Theorem 4.1, we obtain the following conclusion. □

Corollary 4.1

Assume that the conditions (A1)-(A8) are all true. Then the zero solution \(x^{*}=0\) of (4.18) is exponentially p-stable, and the exponential convergence rate is no less than \(\lambda-\mu-\gamma\).

Remark 4.1

The authors in [24] obtained some new results on the p-moment exponential stability of non-autonomous stochastic differential equations with delay. The model (4.18) without impulses is a special case of equation (2) in [24]. However, the results in [24] require the coefficients to have a common factor and require \(h_{i}(t)\equiv0\) (\(i\in \mathcal{N}\), \(t\geq t_{0}\)) to hold in assumption (A3).

If \(I_{ik}(x)=x_{i}\), \(i\in\mathcal{N}\), \(k\in\mathbb{N}\) and \(\phi(s)=(\phi _{1}(s),\ldots,\phi_{n}(s))^{T}\in C_{\mathscr{F}_{0}}^{b}[[t_{0}-\tau,t_{0}],R^{n}]\), from system (2.1), we can get the following model without impulses:

$$ \left \{ \textstyle\begin{array}{@{}l} dx_{i}(t) = [-a_{i}(t)x_{i}(t)+ \sum_{j=1}^{n}b_{ij}(t)f_{j}(x_{j}(t))+\sum_{j=1}^{n}c_{ij}(t)g_{j}(x_{j}(t-\tau _{ij}(t)))+I_{i}(t) ]\,dt \\ \hphantom{dx_{i}(t) =}{}+\sum_{j=1}^{n}\sigma_{ij}(t,x(t),x(t-\tau(t)))\,d\omega_{j}(t), \quad t\geq t_{0}, \\ x_{i}(s)=\phi_{i}(s),\quad t_{0}-\tau\leq s\leq t_{0}. \end{array}\displaystyle \right . $$
(4.19)

Then we can get the following conclusion.

Theorem 4.2

Assume that (A1)-(A4) hold and that (A7) holds for some \(\lambda\in(0,\hat{\delta}]\) which satisfies

$$ \bigl(\lambda E+\widehat{P}+\widehat{Q}e^{\lambda\tau}\bigr)z< 0,\quad z\in\Omega _{\mathcal{M}}(\widehat{\Pi}). $$
(4.20)

Then the system (4.19) is exponentially p-stable, and the exponential convergence rate is no less than \(\lambda-\mu\).

If \(I_{i}(t)\equiv0\), \(\sigma_{ij}(t,0,0)\equiv0\) for \(t\geq t_{0}\), \(f_{j}(0)=g_{j}(0)=0\), \(i,j\in\mathcal{N}\), then the system (4.19) becomes the following model:

$$ \left \{ \textstyle\begin{array}{@{}l} dx_{i}(t) = [-a_{i}(t)x_{i}(t)+ \sum_{j=1}^{n}b_{ij}(t)f_{j}(x_{j}(t))+\sum_{j=1}^{n}c_{ij}(t)g_{j}(x_{j}(t-\tau _{ij}(t))) ]\,dt \\ \hphantom{dx_{i}(t) =}{}+\sum_{j=1}^{n}\sigma_{ij}(t,x(t),x(t-\tau(t)))\,d\omega_{j}(t), \quad t\geq t_{0}, \\ x_{i}(s)=\phi_{i}(s), \quad t_{0}-\tau\leq s\leq t_{0}, \end{array}\displaystyle \right . $$
(4.21)

with an equilibrium point \(x^{*}=0\). From Theorem 4.2, we get the following corollary.

Corollary 4.2

Assume that (A1)-(A4) hold and that (A7) holds for some \(\lambda\in(0,\hat{\delta}]\) which satisfies the inequality (4.20). Then the zero solution \(x^{*}=0\) of (4.21) is exponentially p-stable, and the exponential convergence rate is no less than \(\lambda-\mu\).

Remark 4.2

The models investigated in [14, 21] can be regarded as special cases of model (4.21), but those works require the delay functions to be differentiable with \(\sup_{t\geq t_{0}}\dot{\tau}_{ij}(t)<1\). In addition, combining \(\sigma_{ij}(t,0,0)\equiv0\) for \(t\geq t_{0}\) with (A3), we can get

$$\begin{aligned} \operatorname{trace} \bigl[\sigma^{T}(t,v,u)\sigma(t,v,u) \bigr] =& \sum_{i=1}^{n}\sum _{j=1}^{n}\sigma_{ij}^{2}(t,v,u) \\ \leq&\sum_{j=1}^{n} \Biggl[\Biggl(\sum _{i=1}^{n}m_{ij}(t) \Biggr)v_{j}^{2}+\Biggl(\sum_{i=1}^{n}n_{ij}(t) \Biggr)u_{j}^{2} +\sum_{i=1}^{n}h_{i}(t) \Biggr]. \end{aligned}$$

However, the authors of [14, 21] require that \(h_{i}(t)\equiv0\) (\(i\in\mathcal{N}\), \(t\geq t_{0}\)) hold in assumption (A3).

Remark 4.3

The authors in [23] used the methods in [34, 35] to study the p-moment exponential stability of non-autonomous stochastic Cohen-Grossberg neural networks and obtained some new results. The model (4.21) is a special case of equation (3) in [23]; however, condition (12) in [23] is equivalent to requiring that \(-(P(t)+Q(t))\) be a nonsingular \(\mathcal{M}\)-matrix for all \(t\geq t_{0}\). In addition, the results in [23] require that \(h_{i}(t)\equiv0\) (\(i\in \mathcal{N}\), \(t\geq t_{0}\)) hold in assumption (A3).

5 Examples

Example 5.1

Consider the following system:

$$ \left \{ \textstyle\begin{array}{@{}l} d x_{1}(t) =[-(3-\frac{3}{8}|\cos t|)x_{1}(t)+(1+\sin te^{-0.2t})f_{1}(x_{1}(t))+(\frac{3}{5}+e^{-0.1t})f_{2}(x_{2}(t)) \\ \hphantom{d x_{1}(t) =}{}+(\frac{1}{2}+2e^{-0.1t})g_{1}(x_{1}(t-0.1|\sin20t|))\\ \hphantom{d x_{1}(t) =}{}+(\frac {1}{2}+e^{-0.1t})g_{2}(x_{2}(t-0.1|\sin30t|))]\,dt \\ \hphantom{d x_{1}(t) =}{}+(\frac{\sqrt{5}}{10}x_{1}(t)+e^{-t}\sin^{2}x_{1}(t))\,d\omega_{1}(t)+(\frac {\sqrt{5}}{10}x_{2}(t)+e^{-t}\sin^{2}x_{2}(t))\,d\omega_{2}(t), \\ d x_{2}(t) = [-(\frac{57}{20}-\frac{3}{8}|\cos t|)x_{2}(t)+(\frac {4}{15}+e^{-0.2t})f_{1}(x_{1}(t))+(1+\cos te^{-0.2t}))f_{2}(x_{2}(t)) \\ \hphantom{d x_{2}(t) =}{}+(\frac{7}{12}+e^{-0.2t})g_{1}(x_{1}(t-0.1|\sin30t|))\\ \hphantom{d x_{2}(t) =}{}+(\frac {1}{2}+e^{-0.2t})g_{2}(x_{2}(t-0.1|\sin40t|))]\,dt \\ \hphantom{d x_{2}(t) =}{}+(\frac{\sqrt{5}}{10}x_{1}(t)+e^{-t}\sin^{2}x_{1}(t))\,d\omega_{1}(t)+(\frac {\sqrt{5}}{10}x_{2}(t)+e^{-t}\sin^{2}x_{2}(t))\,d\omega_{2}(t), \\ x_{i}(s)=\phi_{i}(s),\quad -0.1\leq s\leq0, \end{array}\displaystyle \right . $$
(5.1)

where \(f_{1}(s)=f_{2}(s)=\frac{1}{2}(|s+1|-|s-1|)\), \(g_{1}(s)=g_{2}(s)=s\). It is easy to see that \(\tau=0.1\), \(l_{1}=l_{2}=k_{1}=k_{2}=1\), \(f_{1}(0)=f_{2}(0)=g_{1}(0)=g_{2}(0)=0\), and \(\sigma _{ij}(t,0,0)\equiv 0\), \(i,j=1,2\). Evidently, model (5.1) has the equilibrium point zero.

For \(\sigma_{ij}(t,x(t),x(t-\tau(t)))=\frac{\sqrt{5}}{10}x_{j}(t)+e^{-t}\sin ^{2}x_{j}(t)\), \(i,j=1,2\), we can derive \(|\sigma_{ij}(t,x(t),x(t-\tau(t)))|^{2}\leq \frac{1}{10}x_{j}^{2}(t)+2e^{-2t}\), that is, \(m_{ij}(t)\equiv \frac{1}{10}\), \(n_{ij}(t)\equiv 0\), \(h_{i}(t)=2e^{-2t}\), \(\hat{r}_{i}(t)=4e^{-2t}\), \(i,j=1,2\).
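This bound on the diffusion coefficients follows from the elementary inequality \((a+b)^{2}\leq2a^{2}+2b^{2}\) together with \(\sin^{4}x\leq1\). The following sketch (illustrative only, not part of the example) checks the bound numerically on random samples:

```python
import math
import random

def sigma(t, x):
    # sigma_ij from (5.1); it depends only on x_j(t)
    return math.sqrt(5) / 10 * x + math.exp(-t) * math.sin(x) ** 2

random.seed(1)
for _ in range(1000):
    t = random.uniform(0.0, 10.0)
    x = random.uniform(-5.0, 5.0)
    # (a+b)^2 <= 2a^2 + 2b^2 and sin^4 x <= 1 give the stated bound
    assert sigma(t, x) ** 2 <= 0.1 * x ** 2 + 2 * math.exp(-2 * t) + 1e-12
```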

Case 1. Let \(p=2\). By a simple computation, the parameters in (A4) are as follows:

$$\begin{aligned} &{{P(t)= \begin{pmatrix} -\frac{23}{10}+\frac{3}{4}|\cos t|+2|\sin t|e^{-0.2t}+4e^{-0.1t} & \frac{7}{10}+e^{-0.1t}\\ \frac{11}{30}+e^{-0.2t} &-\frac{45}{20}+\frac{3}{4}|\cos t|+2|\cos t|e^{-0.2t}+3e^{-0.2t} \end{pmatrix}}}, \\ &Q(t)= \begin{pmatrix} \frac{1}{2}+2e^{-0.1t} & \frac{1}{2}+e^{-0.1t}\\ \frac{7}{12}+e^{-0.2t} & \frac{1}{2}+2e^{-0.2t} \end{pmatrix}, \qquad \widehat{P}= \begin{pmatrix} -\frac{23}{10} & \frac{7}{10}\\ \frac{11}{30} &-\frac{45}{20} \end{pmatrix}, \\ &\widehat{Q}= \begin{pmatrix} \frac{1}{2} & \frac{1}{2}\\ \frac{7}{12} & \frac{1}{2} \end{pmatrix}, \qquad\widehat{\Pi}=-(\widehat{P}+ \widehat{Q})= \begin{pmatrix} 1.8 & -1.2\\ -0.95 &1.75 \end{pmatrix}, \\ &\begin{aligned}[b] \hat{\alpha}(t)={}&\bigl(\hat{\alpha}^{1}_{ij}(t) \bigr)_{2\times2}+\bigl(\hat{\alpha }^{2}_{ij}(t) \bigr)_{2\times2}= \begin{pmatrix} \frac{3}{4}|\cos t| & 0\\ 0 &\frac{3}{4}|\cos t| \end{pmatrix} \\ &{}+ \begin{pmatrix} 2|\sin t|e^{-0.2t}+4e^{-0.1t} & e^{-0.1t}\\ e^{-0.2t} &2|\cos t|e^{-0.2t}+3e^{-0.2t} \end{pmatrix}, \end{aligned} \\ &\hat{\beta}(t)=\bigl(\hat{\beta}_{ij}(t)\bigr)_{2\times2}= \begin{pmatrix} 2e^{-0.1t} & e^{-0.1t}\\ e^{-0.2t} & 2e^{-0.2t} \end{pmatrix}. \end{aligned}$$

It is easy to verify that Π̂ is a nonsingular \(\mathcal{M}\)-matrix, and we obtain

$$\Omega_{\mathcal{M}}(\widehat{\Pi})= \biggl\{ (z_{1},z_{2})^{T}>0\Big| \frac {19}{35}z_{1}< z_{2}< \frac{3}{2}z_{1} \biggr\} . $$

Clearly, \(z=(1,1)^{T}\in\Omega_{\mathcal{M}}(\widehat{\Pi})\), and \(\lambda=0.54\) satisfies

$$\bigl(\lambda E+\widehat{P}+\widehat{Q}e^{\lambda \tau}\bigr)z=(-0.0045, -0.1999)^{T}< (0,0)^{T}. $$
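These numerical claims can be checked directly. The sketch below (pure Python, no external libraries; offered only as a verification aid) confirms that Π̂ is a nonsingular \(\mathcal{M}\)-matrix via its sign pattern and leading principal minors, and evaluates \((\lambda E+\widehat{P}+\widehat{Q}e^{\lambda\tau})z\):

```python
import math

# Matrices from Case 1 (p = 2); entries copied from the text
P_hat = [[-23/10, 7/10], [11/30, -45/20]]
Q_hat = [[1/2, 1/2], [7/12, 1/2]]
Pi_hat = [[-(P_hat[i][j] + Q_hat[i][j]) for j in range(2)] for i in range(2)]

# Nonsingular M-matrix check: non-positive off-diagonal entries and
# positive leading principal minors
assert Pi_hat[0][1] <= 0 and Pi_hat[1][0] <= 0
assert Pi_hat[0][0] > 0
assert Pi_hat[0][0] * Pi_hat[1][1] - Pi_hat[0][1] * Pi_hat[1][0] > 0

# Evaluate (lambda*E + P_hat + Q_hat*e^{lambda*tau}) z for z = (1,1)^T
lam, tau = 0.54, 0.1
z = [1.0, 1.0]
w = [lam * z[i]
     + sum((P_hat[i][j] + Q_hat[i][j] * math.exp(lam * tau)) * z[j]
           for j in range(2))
     for i in range(2)]
print([round(v, 4) for v in w])  # approximately [-0.0045, -0.1999]
assert all(v < 0 for v in w)
```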

We compute

$$J(t)\stackrel{\Delta}{=} \int_{0}^{t}|\cos s|\,ds. $$

For any \(t\geq0\), there is an integer \(n\geq0\) such that \(n\pi-\frac{\pi}{2}\leq t< n\pi+\frac{\pi}{2}\). Writing \(t=n\pi-\frac{\pi}{2}+u\) with \(0\leq u<\pi\), we get

$$\begin{aligned} J(t) \stackrel{\Delta}{=}& \int_{0}^{t}|\cos s|\,ds \\ =& \int_{0}^{\frac{\pi}{2}}\cos s\,ds+\sum _{k=1}^{n-1}(-1)^{k} \int_{k\pi -\frac{\pi}{2}}^{k\pi+\frac{\pi}{2}}\cos s\,ds+(-1)^{n} \int_{n\pi-\frac{\pi }{2}}^{t}\cos s\,ds \\ =&1+\sum_{k=1}^{n-1}(-1)^{k}\biggl( \sin\biggl( k\pi+\frac{\pi}{2}\biggr)-\sin\biggl(k\pi-\frac {\pi}{2} \biggr)\biggr)+(-1)^{n}\biggl(\sin t-\sin\biggl(n\pi-\frac{\pi}{2}\biggr)\biggr) \\ =&1+\sum_{k=1}^{n-1}\bigl((-1)^{2k}-(-1)^{2k-1} \bigr)+(-1)^{n}\biggl(\sin\biggl(n\pi-\frac {\pi}{2}+u \biggr)-(-1)^{n-1}\biggr) \\ =&2n-\cos u=\frac{2}{\pi}\biggl(n\pi-\frac{\pi}{2}\biggr)+1-\cos u \\ \leq&\frac{2}{\pi} t+(1-\cos u),\quad 0\leq u< \pi. \end{aligned}$$
(5.2)
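Since \(1-\cos u<2\), the bound (5.2) implies \(J(t)\leq\frac{2}{\pi}t+2\) for all \(t\geq0\). A quick numerical check of this bound with a simple trapezoidal quadrature (an illustrative sketch, not part of the example):

```python
import math

def J(t, n=20000):
    # Trapezoidal approximation of the integral of |cos s| over [0, t]
    h = t / n
    s = 0.5 * (abs(math.cos(0.0)) + abs(math.cos(t)))
    s += sum(abs(math.cos(k * h)) for k in range(1, n))
    return h * s

# (5.2) gives J(t) <= (2/pi) t + (1 - cos u) < (2/pi) t + 2 for all t >= 0
for t in [0.5, 1.0, 3.0, 7.5, 20.0, 100.0]:
    assert J(t) <= 2 / math.pi * t + 2
```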

Since \(\hat{\theta}(s)=\frac{3}{4}|\cos s|+\max_{1\leq i\leq 2}\{\sum_{j=1}^{2}(\hat{\alpha}^{2}_{ij}(s)+\hat{\beta}_{ij}(s)e^{\lambda \tau})+e^{0.54s}\hat{r}_{i}(s)\}\stackrel{\Delta}{=}\frac{3}{4}|\cos s|+\hat{\theta}^{*}(s)\) and \(\hat{\alpha}^{2}_{ij}, \hat{\beta}_{ij}\in L^{1}[R_{+},R_{+}]\), \(e^{0.54s}\hat{r}_{i}(s)=4e^{-1.46s}\in L^{1}[R_{+},R_{+}]\), \(i,j=1,2\), we easily know \(\hat{\theta}^{*}(s)\in L^{1}[R_{+},R_{+}]\). Combined with (5.2), we obtain

$$\begin{aligned} e^{\int_{0}^{t}\hat{\theta}(s)\,ds} =&e^{\int_{0}^{t}\hat{\theta}^{*}(s)\,ds}e^{\frac {3}{4}\int_{0}^{t}|\cos s|\,ds} \\ \leq& e^{\int_{0}^{t}\hat{\theta}^{*}(s)\,ds}e^{\frac{3}{2\pi} t}e^{\frac {3}{4}(1-\cos u)} \\ \leq& M e^{\frac{3}{2\pi} t}, \end{aligned}$$
(5.3)

where \(M\geq1\) is a constant.

Thus, from Corollary 4.2, we know that the zero solution \(x^{*}=0\) of (5.1) is exponentially 2-stable (see Figure 1), and the exponential convergence rate is no less than \(0.54-\frac{3}{2\pi}\approx0.0625\).

Figure 1: The state trajectories of model (5.1) without impulses.

Remark 5.1

Clearly, \(-(P(t)+Q(t))\) is not a nonsingular \(\mathcal{M}\)-matrix for all \(t\geq 0\), and \(h_{i}(t)=2e^{-2t}\not\equiv0\) on \(t\geq0\), so the results in [23, 24] do not apply to (5.1). In addition, the delay functions \(\tau_{ij}(t)\) do not satisfy \(\sup_{t\geq 0}\dot{\tau}_{ij}(t)<1\); therefore, even when model (5.1) is autonomous, the results in [14, 21] do not apply to it.

Case 2. If

$$ \begin{aligned} &x_{1}(t_{k})=I_{1k} \bigl(x\bigl(t_{k}^{-}\bigr)\bigr)=0.2e^{0.02}x_{1} \bigl(t_{k}^{-}\bigr)+0.8e^{0.02}x_{2} \bigl(t_{k}^{-}\bigr), \\ &x_{2}(t_{k})=I_{2k}\bigl(x\bigl(t_{k}^{-} \bigr)\bigr)=0.6e^{0.02}x_{1}\bigl(t_{k}^{-} \bigr)+0.4e^{0.02}x_{2}\bigl(t_{k}^{-}\bigr),\quad t_{k}-t_{k-1}=1, k\in\mathbb{N}, \end{aligned} $$
(5.4)

then the parameters in (A5), (A6), and (A8) are as follows:

$$\begin{aligned} \hat{R}_{k}=e^{0.04} \begin{pmatrix} 0.2 & 0.8\\ 0.6 &0.4 \end{pmatrix}, \qquad \rho( \hat{R}_{k})=e^{0.04},\qquad \Omega_{\rho}( \hat{R}_{k})= \bigl\{ (z_{1},z_{2})>0|z_{1}=z_{2} \bigr\} . \end{aligned}$$

Therefore, \(\Omega=\bigcap_{k=1}^{\infty}(\Omega_{\rho}(\hat {R}_{k}))\cap \Omega_{\mathcal{M}}(\widehat{\Pi})= \{(z_{1},z_{2})>0|z_{1}=z_{2} \}\) is not empty. Taking \(z=(1,1)^{T}\in\Omega\) and \(\gamma_{k}=e^{0.04}\), we obtain, for \(k\in\mathbb{N}\),

$$\frac{\ln\gamma_{k}}{t_{k}-t_{k-1}}= \frac{\ln e^{0.04}}{1}=0.04=\gamma< \lambda-\mu=0.0625. $$
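The Case 2 quantities can also be verified numerically; the sketch below (illustrative only) computes \(\rho(\hat{R}_{k})\) from the characteristic polynomial of the \(2\times2\) matrix and checks the impulse-rate condition \(\gamma<\lambda-\mu\), with \(\mu=\frac{3}{2\pi}\) taken from the bound (5.3):

```python
import math

# Impulse matrix from (5.4): R_hat_k = e^{0.04} * B
B = [[0.2, 0.8], [0.6, 0.4]]
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4 * det)
rho_B = max(abs((tr + disc) / 2), abs((tr - disc) / 2))
assert abs(rho_B - 1.0) < 1e-9   # rho(B) = 1, hence rho(R_hat_k) = e^{0.04}

# Impulse-rate condition: ln(gamma_k)/(t_k - t_{k-1}) = 0.04 < lambda - mu
gamma = 0.04
lam = 0.54
mu = 3 / (2 * math.pi)           # from the bound (5.3)
assert gamma < lam - mu          # 0.04 < 0.0625 (approximately)
```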

From Corollary 4.1, we know that the zero solution of (5.1) with impulses (5.4) is exponentially 2-stable (see Figure 2).

Figure 2: The state trajectories of model (5.1) with impulses (5.4).

6 Conclusion

In this paper, we have analyzed stochastic non-autonomous impulsive cellular neural networks with delays. Based on the properties of the \(\mathcal{L}\)-operator and the \(\mathcal{M}\)-matrix, we have developed a new \(\mathcal{L}\)-operator inequality. We have applied this inequality to stochastic non-autonomous neural networks and derived sufficient conditions for the pth moment exponential stability of the considered system, both with and without impulses. Our results do not require differentiability of the delay functions and relax the conditions imposed on the diffusion coefficient matrix. The results obtained generalize some earlier works.

References

  1. Cao, J, Zhou, D: Stability analysis of delayed cellular neural networks. Neural Netw. 11, 1601-1605 (1998)

  2. Chua, LO, Yang, L: Cellular neural networks: theory. IEEE Trans. Circuits Syst. 35, 1257-1272 (1988)

  3. Arik, S: An analysis of global asymptotic stability of delayed cellular neural networks. IEEE Trans. Neural Netw. 13, 1239-1242 (2002)

  4. Liao, X, Wang, J: Algebraic criteria for global exponential stability of cellular neural networks with multiple time delays. IEEE Trans. Circuits Syst. 50, 268-274 (2003)

  5. Lakshmikantham, V, Bainov, D, Simeonov, P: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989)

  6. Long, S, Xu, D: Delay-dependent stability analysis for impulsive neural networks with time varying delays. Neurocomputing 71, 1705-1713 (2008)

  7. Gopalsamy, K: Stability of artificial neural networks with impulses. Appl. Math. Comput. 154, 783-813 (2004)

  8. Pan, J, Liu, X, Zhong, S: Stability criteria for impulsive reaction-diffusion Cohen-Grossberg neural networks with time-varying delays. Math. Comput. Model. 51, 1037-1050 (2010)

  9. Long, S, Song, Q, Wang, X, Li, D: Stability analysis of fuzzy cellular neural networks with time delay in the leakage term and impulsive perturbations. J. Franklin Inst. 349, 2461-2479 (2012)

  10. Haykin, S: Neural Networks. Prentice Hall, Englewood Cliffs (1994)

  11. Blythe, S, Mao, X, Liao, X: Stability of stochastic delay neural networks. J. Franklin Inst. 4, 338-481 (2001)

  12. Yang, Z, Xu, D, Xiang, L: Exponential p-stability of impulsive stochastic differential equations with delays. Phys. Lett. A 359, 129-137 (2006)

  13. Balasubramaniam, P, Syed Ali, M, Arik, S: Global asymptotic stability of stochastic fuzzy cellular neural networks with multiple time-varying delays. Expert Syst. Appl. 37, 7737-7744 (2010)

  14. Sun, Y, Cao, J: pth moment exponential stability of stochastic recurrent neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 8, 1171-1185 (2007)

  15. Cheng, Q, Cao, J: Global synchronization of complex networks with discrete time delays and stochastic disturbances. Neural Comput. Appl. 20, 1167-1179 (2011)

  16. Li, H, Chen, B, Lin, C, Zhou, Q: Mean square exponential stability of stochastic fuzzy Hopfield neural networks with discrete and distributed time-varying delays. Neurocomputing 72, 2017-2023 (2009)

  17. Luo, Q, Zhang, Y: Almost sure exponential stability of stochastic reaction diffusion systems. Nonlinear Anal. 71, 487-493 (2009)

  18. Huang, C, Cao, J: Almost sure exponential stability of stochastic cellular networks with unbounded distributed delays. Neurocomputing 72, 3352-3356 (2009)

  19. Ren, F, Cao, J: Anti-synchronization of stochastic perturbed delayed chaotic neural networks. Neural Comput. Appl. 5, 515-521 (2009)

  20. Mao, X: Stochastic Differential Equations and Applications. Ellis Horwood, New York (1997)

  21. Huang, C, He, Y, Huang, L, Zhu, W: pth moment stability analysis of stochastic recurrent neural networks with time-varying delays. Inf. Sci. 178, 2194-2203 (2008)

  22. Yuan, Z, Yuan, L, Huang, L, Hu, D: Boundedness and global convergence of non-autonomous neural networks with variable delays. Nonlinear Anal., Real World Appl. 10, 2195-2206 (2009)

  23. Huang, C, He, Y, Huang, L: Stability analysis of non-autonomous stochastic Cohen-Grossberg neural networks. Nonlinear Dyn. 57, 469-478 (2009)

  24. Li, B, Zhou, Y, Song, Q: P-Moment asymptotic behavior of non-autonomous stochastic differential equation with delay. In: Advances in Neural Networks - ISNN 2010. Lecture Notes in Computer Science, vol. 6063, pp. 561-568 (2010)

  25. Jiang, M, Shen, Y: Stability of non-autonomous bidirectional associative memory neural networks with delay. Neurocomputing 71, 863-874 (2008)

  26. Zhao, H, Mao, Z: Boundedness and stability of nonautonomous cellular neural networks with reaction-diffusion terms. Math. Comput. Simul. 79, 1603-1617 (2009)

  27. Lou, X, Cui, B: Boundedness and exponential stability for nonautonomous cellular neural networks with reaction-diffusion terms. Chaos Solitons Fractals 33, 653-662 (2007)

  28. Wu, A, Fu, C: Global exponential stability of non-autonomous FCNNs with Dirichlet boundary conditions and reaction-diffusion terms. Appl. Math. Model. 34, 3022-3029 (2010)

  29. Niu, S, Jiang, H, Teng, Z: Boundedness and exponential stability for nonautonomous FCNNs with distributed delays and reaction-diffusion terms. Neurocomputing 73, 2913-2919 (2010)

  30. Yang, Z, Xu, D: Global dynamics for non-autonomous reaction-diffusion neural networks with time-varying delays. Theor. Comput. Sci. 403, 3-10 (2008)

  31. Li, J, Zhang, F, Yan, J: Global exponential stability of non-autonomous neural networks with time-varying delays and reaction-diffusion terms. J. Comput. Appl. Math. 233, 241-247 (2009)

  32. Huang, Y, Xu, D, Yang, Z: Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays. Neurocomputing 70, 2953-2958 (2007)

  33. Xu, D, Long, S: Attracting and quasi-invariant sets of non-autonomous neural networks with delays. Neurocomputing 77, 222-228 (2012)

  34. Zhang, Q, Wei, X, Xu, J: Delay-dependent exponential stability criteria for non-autonomous cellular neural networks with time-varying delays. Chaos Solitons Fractals 36, 985-990 (2008)

  35. Zhang, Q, Wei, X, Xu, J: Global exponential stability for nonautonomous cellular neural networks with delays. Phys. Lett. A 351, 153-160 (2006)

  36. Long, S, Wang, X, Li, D: Attracting and invariant sets of non-autonomous reaction-diffusion neural networks with time-varying delays. Math. Comput. Simul. 82, 2199-2214 (2012)

  37. Long, S, Xu, D: Global exponential p-stability of stochastic nonautonomous Takagi-Sugeno fuzzy cellular neural networks with time-varying delays and impulses. Fuzzy Sets Syst. 253, 82-100 (2014)

  38. Long, S, Xu, D: Global exponential stability of nonautonomous cellular neural networks with impulses and time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 18, 1463-1472 (2013)

  39. Wang, X, Guo, Q, Xu, D: Exponential p-stability of impulsive stochastic Cohen-Grossberg neural networks with mixed delays. Math. Comput. Simul. 79, 1698-1710 (2009)

  40. Zhu, W: Global exponential stability of impulsive reaction-diffusion equation with variable delays. Appl. Math. Comput. 205, 362-369 (2008)

  41. Zhu, W, Xu, D, Yang, C: Exponential stability of singularly perturbed impulsive delay differential equations. J. Math. Anal. Appl. 328, 1161-1172 (2007)

  42. Xu, D, Yang, Z: Impulsive delay differential inequality and stability of neural networks. J. Math. Anal. Appl. 305, 107-120 (2005)

  43. Xu, L, Xu, D: P-Attracting and p-invariant sets for a class of impulsive stochastic functional differential equations. Comput. Math. Appl. 57, 54-61 (2009)

  44. Long, S, Xu, D: Stability analysis of stochastic fuzzy cellular neural networks with time-varying delays. Neurocomputing 74, 2385-2391 (2011)

  45. Berman, A, Plemmons, R: Nonnegative Matrices in Mathematical Sciences. Academic Press, New York (1979)

  46. Horn, R, Johnson, C: Matrix Analysis. Cambridge University Press, Cambridge (1985)

  47. Beckenbach, E, Bellman, R (eds.): Inequalities. Springer, New York (1961)

Acknowledgements

This work is supported by National Natural Science Foundation of China under Grant 11271270, Fundamental Research Fund of Sichuan Provincial Science and Technology Department under Grants 2013JYZ014, 2012JYZ019, Scientific Research Fund of Sichuan Provincial Education Department under Grant 16TD0029 and Project of Leshan Normal University under Grant Z1324.

Author information

Correspondence to Shujun Long.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors have made equal and significant contributions in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Luo, T., Long, S. A new inequality of \(\mathcal{L}\)-operator and its application to stochastic non-autonomous impulsive neural networks with delays. Adv Differ Equ 2016, 9 (2016). https://doi.org/10.1186/s13662-015-0697-y
