Mean-square exponential input-to-state stability of stochastic inertial neural networks

Abstract

By introducing parameters perturbed by white noise, we propose a class of stochastic inertial neural networks in random environments. By constructing two Lyapunov–Krasovskii functionals, we establish the mean-square exponential input-to-state stability of the addressed model, which generalizes and refines recent results. In addition, an example with a numerical simulation is carried out to support the theoretical findings.

1 Introduction

Babcock and Westervelt [1, 2] introduced the well-known inertial neural networks, which are described by the following second-order delay differential equations:

$$\begin{aligned} x_{i}''(t) =&-a_{i}x_{i}'(t)-b_{i}x_{i}(t)+ \sum_{j=1}^{n}c_{ij}f_{j} \bigl(x_{j}(t)\bigr) \\ &{}+ \sum_{j=1}^{n}d_{ij}g_{j} \bigl(x_{j}(t-\tau _{j})\bigr)+I_{i}(t), \quad i \in J=\{1,2,\ldots,n\}, \end{aligned}$$
(1.1)

to capture the complicated dynamic behavior of electronic neural networks. The initial conditions are given by

$$\begin{aligned}& \begin{aligned}&x_{i}(s)=\psi _{i}(s), \\ & x_{i}'(s)=\psi '_{i}(s),\quad -\tau \leq s \leq 0, \psi _{i}\in C^{1}\bigl([-\tau , 0], \mathbb{R}\bigr), i\in J,\tau = \max_{j\in J}\{\tau _{j} \}, \end{aligned} \end{aligned}$$
(1.2)

where \(x(t)=(x_{1}(t), x_{2}(t),\ldots , x_{n}(t))\) is the state vector, \(x_{i}''(t)\) is called the ith inertial term, the positive parameters \(a_{i}\), \(b_{i}\), the nonnegative parameters \(\tau _{j}\), and the other parameters \(c_{ij}\), \(d_{ij}\) are all constant, \(I_{i}(t)\) is the external input of the ith neuron at time t, and \(I=(I_{1}(t), I_{2}(t),\ldots , I_{n}(t))\in \ell _{\infty }\), where \(\ell _{\infty }\) denotes the family of essentially bounded functions I from \([0,\infty )\) to \(\mathbb{R}^{n}\) with norm \(\|I\|_{\infty }=\operatorname{ess}\sup_{t\geq 0}\sqrt{\sum_{i=1}^{n}I_{i}^{2}(t)} \). The activation functions \(f_{j} \) and \(g_{j} \) satisfy \(f_{j}(0)=g_{j}(0)=0\) and Lipschitz conditions, i.e., there exist positive constants \(F_{j}\) and \(G_{j}\) such that

$$ \bigl\vert f_{j}(u )-f_{j}(v ) \bigr\vert \leq F_{j} \vert u -v \vert ,\qquad \bigl\vert g_{j}(u )-g_{j}(v ) \bigr\vert \leq G_{j} \vert u -v \vert \quad \text{for all } u , v \in \mathbb{ R}. $$
(1.3)
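For instance, the saturating activation used later in Example 3.1, \(f(u)=0.25( \vert u+1 \vert - \vert u-1 \vert )\), satisfies (1.3) with \(F_{j}=G_{j}=0.5\). The following quick numerical spot-check is our own illustration, not part of the original text:

```python
import numpy as np

# Saturating activation f(u) = 0.25(|u + 1| - |u - 1|); piecewise linear
# with slope 0.5 on (-1, 1) and slope 0 outside, hence Lipschitz with F = 0.5.
f = lambda u: 0.25 * (np.abs(u + 1.0) - np.abs(u - 1.0))
u = np.linspace(-3.0, 3.0, 601)
du = u[:, None] - u[None, :]          # all pairwise differences u - v
df = f(u)[:, None] - f(u)[None, :]    # corresponding f(u) - f(v)
mask = du != 0.0
print(np.abs(df[mask] / du[mask]).max())  # ~0.5 (up to rounding), the Lipschitz constant
```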

There are two main methods for studying the inertial neural network (1.1). One is the so-called reduced order method, which has been adopted to study Hopf bifurcation [3–8], stability of equilibrium points [9–13], periodicity [14–16], synchronization [17–21], and dissipativity [22, 23]. The other is the non-reduced order method, which avoids the great increase in system dimension; many researchers have used this approach to study the dynamic behaviors of (1.1) and its generalizations [22–36].

However, both the reduced order and the non-reduced order methods have so far been applied only to deterministic inertial neural networks; they do not cover stochastic inertial neural networks subject to environmental fluctuations. Remarkably, Haykin [37] has pointed out that, in real nervous systems and in implementations of artificial neural networks, synaptic transmission is a noisy process caused by random fluctuations in neurotransmitter release and other probabilistic factors; hence noise is unavoidable and should be taken into account in modeling.

Assume that the parameter \(b_{i}\) (\(i\in J\)) is affected by environmental noise, with \(b_{i}\rightarrow b_{i}-\sigma _{i}\,dB_{i}(t)\), where the \(B_{i}(t)\) are independent standard Brownian motions (white noise) with \(B_{i}(0) = 0\) defined on a complete probability space \((\Omega ,\{\mathcal{F}_{t}\}_{t\geq 0},\mathcal{P})\), and \(\sigma _{i}^{2}\) denotes the noise intensity. Then, corresponding to inertial neural network (1.1), we obtain the following stochastic system:

$$\begin{aligned} dx_{i}'(t) =&\Biggl[-a_{i}x_{i}'(t)-b_{i}x_{i}(t)+ \sum_{j=1}^{n}c_{ij}f_{j} \bigl(x_{j}(t)\bigr)+ \sum_{j=1}^{n}d_{ij}g_{j} \bigl(x_{j}(t-\tau _{j})\bigr)+I_{i}(t)\Biggr] \,dt \\ &{}+ \sigma _{i}x_{i}(t)\,dB_{i}(t),\quad i\in J. \end{aligned}$$
(1.4)
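To build intuition for (1.4), its sample paths can be generated by an Euler–Maruyama discretization of the equivalent first-order system in \((x_{i}, x_{i}')\). The following is a minimal illustrative sketch; the function name `simulate_sinn`, the history-buffer treatment of the delays, and the default step size are our own choices, not part of the model.

```python
import numpy as np

def simulate_sinn(a, b, c, d, f, g, tau, I, sigma, psi, dpsi,
                  T=20.0, dt=1e-3, seed=None):
    """Euler-Maruyama sketch for (1.4), written for y_i = x_i'.

    a, b, sigma, tau: length-n arrays; c, d: (n, n) arrays;
    f, g: vectorized activations; I: t -> length-n external input;
    psi, dpsi: initial history x(s), x'(s) on [-max(tau), 0].
    """
    rng = np.random.default_rng(seed)
    n = len(a)
    lag = np.maximum(1, np.round(np.asarray(tau) / dt).astype(int))
    hist, steps = int(lag.max()), int(round(T / dt))
    x = np.empty((hist + steps + 1, n))   # states on the time grid
    y = np.empty_like(x)                  # first-order derivatives
    for k in range(hist + 1):             # fill the history segment [-tau, 0]
        s = (k - hist) * dt
        x[k], y[k] = psi(s), dpsi(s)
    idx = np.arange(n)
    for k in range(hist, hist + steps):
        t = (k - hist) * dt
        x_del = x[k - lag, idx]           # x_j(t - tau_j), component-wise
        drift = -a * y[k] - b * x[k] + c @ f(x[k]) + d @ g(x_del) + I(t)
        dB = rng.normal(0.0, np.sqrt(dt), size=n)
        x[k + 1] = x[k] + y[k] * dt       # dx = x' dt
        y[k + 1] = y[k] + drift * dt + sigma * x[k] * dB
    return np.arange(-hist, steps + 1) * dt, x, y
```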

Obviously, the white noise disturbance term \(\sigma _{i}x_{i}(t)\,dB_{i}(t)\) induces randomness, turning the traditional deterministic inertial neural network (1.1) into the stochastic system (1.4). One difficulty of this paper is to handle the white noise disturbances; another is to introduce a suitable concept of stability that precisely describes the dynamics of (1.4). The main aim of this paper is to investigate the mean-square exponential input-to-state stability of the stochastic inertial neural network (1.4) with initial conditions (1.2). Input-to-state stability differs from traditional notions of stability, such as asymptotic stability, almost sure stability, and exponential stability, which require the system states to converge to an equilibrium point as time tends to infinity; instead, it describes how the system states vary within a certain region under external inputs. For more details about input-to-state stability, one can refer to [38–42]. However, as far as we know, almost no one has studied the mean-square exponential input-to-state stability of stochastic inertial neural networks.

The remainder of this paper is organized as follows. In Sect. 2, we give the main result: several sufficient conditions ensuring that the stochastic inertial neural network (1.4) is mean-square exponentially input-to-state stable. In Sect. 3, we provide a numerical example to check the effectiveness of the developed result. Finally, we summarize and evaluate our work in Sect. 4.

2 Mean-square exponential input-to-state stability

Although Wang and Chen [43] have studied the mean-square exponential stability of the stochastic inertial neural network (1.4), which compares two solutions with different initial conditions (1.2), that notion is not appropriate for mean-square exponential input-to-state stability. Fortunately, motivated by Zhu and Cao [38], who introduced the definition of mean-square exponential input-to-state stability for stochastic delayed neural networks, and by the mean-square exponential stability of Wang and Chen [43], we present the following definition.

Definition 2.1

Let \(x (t,\psi )=(x_{1}(t), x_{2}(t),\ldots , x_{n}(t))\) be a solution of (1.4) with initial conditions (1.2), where \(\psi (s)=(\psi _{1}(s), \psi _{2}(s),\ldots , \psi _{n}(s))\). The stochastic inertial neural network (1.4) is said to be mean-square exponentially input-to-state stable if there exist positive constants λ, η, and K such that

$$ E\bigl( \bigl\Vert x(t,\psi ) \bigr\Vert ^{2}+ \bigl\Vert x'(t,\psi ) \bigr\Vert ^{2}\bigr)\leq Ke^{-\lambda t}+ \eta \Vert I \Vert ^{2}_{\infty } \quad \text{for all } t\geq 0, $$

where \(\Vert \cdot \Vert \) denotes the Euclidean norm.

Theorem 2.1

Under assumption (1.3), the stochastic inertial neural network (1.4) is mean-square exponentially input-to-state stable if there exist positive constants \(\beta _{i}\), \(\bar{\beta }_{i}\) and nonzero constants \(\alpha _{i}\), \(\gamma _{i}\), \(\bar{\alpha }_{i}\), \(\bar{\gamma }_{i}\), \(i\in J\), such that

$$ A_{i}< 0,\qquad B_{i}< 0,\qquad 4A_{i}B_{i}>C_{i}^{2} $$
(2.1)

and

$$ \bar{A}_{i}< 0,\qquad \bar{B}_{i}< 0,\qquad 4 \bar{A}_{i}\bar{B}_{i}>\bar{C}_{i}^{2}, $$
(2.2)

where

$$ \textstyle\begin{cases} A_{i}=-\alpha _{i}^{2}a_{i}+\alpha _{i}\gamma _{i}+\frac{1}{2}\alpha _{i}^{2}(\sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ B_{i}=-\alpha _{i}\gamma _{i}b_{i}+\frac{1}{2}\alpha _{i}^{2}\sigma _{i}^{2}+ \frac{1}{2}\sum_{j=1}^{n}(\alpha _{j}^{2}+ \vert \alpha _{j} \gamma _{j} \vert )( \vert d_{ji} \vert G_{i}+ \vert c_{ji} \vert F_{i}) \\ \hphantom{B_{i}=}{} +\frac{1}{2} \vert \alpha _{i}\gamma _{i} \vert (\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ C_{i} =\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}-\alpha _{i} \gamma _{i}a_{i}, \\ \bar{A}_{i}=-(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2})a_{i}+\bar{\alpha }_{i} \bar{\gamma }_{i}+\frac{1}{2}(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2})( \sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ \bar{B}_{i}=\frac{1}{2}(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2})\sigma _{i}^{2}- \bar{\alpha }_{i}\bar{\gamma }_{i}b_{i}+\frac{1}{2}\sum_{j=1}^{n}( \bar{\beta }_{j}+\bar{\alpha }_{j}^{2}+ \vert \bar{\alpha }_{j}\bar{\gamma }_{j} \vert )( \vert d_{ji} \vert G_{i}+ \vert c_{ji} \vert F_{i}) \\ \hphantom{\bar{B}_{i}=}{} +\frac{1}{2} \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert (\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ \bar{C}_{i}=-\bar{\beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}-\bar{ \alpha }_{i}\bar{\gamma }_{i}a_{i}+\bar{\gamma }^{2}_{i}. \end{cases} $$
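Conditions (2.1) and (2.2) are straightforward to test numerically for given network data. The sketch below is a literal NumPy transcription of the quantities above; the function name `check_theorem_21` and the array-based calling convention are our own assumptions, not the authors' code.

```python
import numpy as np

def check_theorem_21(a, b, c, d, F, G, sigma,
                     alpha, gamma, beta, alpha_b, gamma_b, beta_b):
    """Sketch: evaluate A_i, B_i, C_i and their barred analogues and
    test (2.1)-(2.2). Inputs are length-n vectors and (n, n) matrices."""
    S = (np.abs(c) * F + np.abs(d) * G).sum(axis=1)  # sum_j |c_ij|F_j + |d_ij|G_j
    Dsum = (np.abs(d) * G).sum(axis=1)               # sum_j |d_ij|G_j
    w = alpha**2 + np.abs(alpha * gamma)             # weights alpha_j^2 + |alpha_j gamma_j|
    A = -alpha**2 * a + alpha * gamma + 0.5 * alpha**2 * (S + 1)
    B = (-alpha * gamma * b + 0.5 * alpha**2 * sigma**2
         + 0.5 * (G * (w @ np.abs(d)) + F * (w @ np.abs(c)))  # sum_j w_j(|d_ji|G_i + |c_ji|F_i)
         + 0.5 * np.abs(alpha * gamma) * (Dsum + 1))
    C = beta + gamma**2 - alpha**2 * b - alpha * gamma * a
    p = beta_b + alpha_b**2                          # beta-bar_i + alpha-bar_i^2
    wb = p + np.abs(alpha_b * gamma_b)
    Ab = -p * a + alpha_b * gamma_b + 0.5 * p * (S + 1)
    Bb = (0.5 * p * sigma**2 - alpha_b * gamma_b * b
          + 0.5 * (G * (wb @ np.abs(d)) + F * (wb @ np.abs(c)))
          + 0.5 * np.abs(alpha_b * gamma_b) * (Dsum + 1))
    Cb = -p * b - alpha_b * gamma_b * a + gamma_b**2
    ok1 = (A < 0).all() and (B < 0).all() and (4 * A * B > C**2).all()
    ok2 = (Ab < 0).all() and (Bb < 0).all() and (4 * Ab * Bb > Cb**2).all()
    return bool(ok1 and ok2)
```

These are the quantities of Theorem 2.1; the λ-perturbed versions used in the proof below differ only by the \(e^{\lambda \tau _{i}}\) and \(\frac{\lambda }{2}\) terms.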

Proof

Let \(x (t)=(x_{1}(t), x_{2}(t),\ldots , x_{n}(t))\) be a solution of the stochastic system (1.4) with initial values (1.2) such that \(x_{i}(s)=\psi _{i}(s)\), \(x_{i}'(s) = \psi _{i}'(s)\), \(s\in [-\tau , 0]\), \(i\in J\). In view of (2.1) and (2.2) and by continuity, for \(i\in J\) we can find a sufficiently small positive number λ such that

$$ A_{i}^{\lambda }< 0,\qquad B_{i}^{\lambda }< 0,\qquad 4A_{i}^{\lambda }B_{i}^{\lambda }> \bigl(C_{i}^{\lambda }\bigr)^{2} $$
(2.3)

and

$$ \bar{A}_{i}^{\lambda }< 0,\qquad \bar{B}_{i}^{\lambda }< 0, \qquad 4\bar{A}_{i}^{\lambda }\bar{B}_{i}^{\lambda }> \bigl(\bar{C}_{i}^{\lambda }\bigr)^{2}, $$
(2.4)

where

$$ \textstyle\begin{cases} A_{i}^{\lambda }=-\alpha _{i}^{2}(a_{i}-\frac{\lambda }{2})+\alpha _{i} \gamma _{i}+\frac{1}{2}\alpha _{i}^{2}(\sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+ \sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ B_{i}^{\lambda }=-\alpha _{i}\gamma _{i}b_{i}+\frac{1}{2}\alpha _{i}^{2} \sigma _{i}^{2}+\frac{\lambda }{2}(\beta _{i}+\gamma _{i}^{2}) \\ \hphantom{B_{i}^{\lambda }=}{} +\frac{1}{2}\sum_{j=1}^{n}(\alpha _{j}^{2}+ \vert \alpha _{j} \gamma _{j} \vert )( \vert d_{ji} \vert G_{i}e^{\lambda \tau _{i}}+ \vert c_{ji} \vert F_{i})+ \frac{1}{2} \vert \alpha _{i}\gamma _{i} \vert (\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ C_{i}^{\lambda }=\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}- \alpha _{i}\gamma _{i}(a_{i}-\lambda ), \\ \bar{A}_{i}^{\lambda }=-(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2})(a_{i}- \frac{\lambda }{2})+\bar{\alpha }_{i}\bar{\gamma }_{i}+\frac{1}{2}(\bar{ \beta }_{i}+\bar{\alpha }_{i}^{2})(\sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+ \sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ \bar{B}_{i}^{\lambda }=\frac{1}{2}\bar{\gamma }_{i}^{2}\lambda + \frac{1}{2}(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2})\sigma _{i}^{2}-\bar{ \alpha }_{i}\bar{\gamma }_{i}b_{i} \\ \hphantom{\bar{B}_{i}^{\lambda }=}{} +\frac{1}{2}\sum_{j=1}^{n}(\bar{\beta }_{j}+\bar{ \alpha }_{j}^{2}+ \vert \bar{\alpha }_{j}\bar{\gamma }_{j} \vert )( \vert d_{ji} \vert G_{i}e^{ \lambda \tau _{i}}+ \vert c_{ji} \vert F_{i})+\frac{1}{2} \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert (\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1), \\ \bar{C}_{i}^{\lambda }=-\bar{\beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}- \bar{\alpha }_{i}\bar{\gamma }_{i}(a_{i}-\lambda )+\bar{\gamma }^{2}_{i}. \end{cases} $$

Then we construct the following two Lyapunov–Krasovskii functionals:

$$\begin{aligned} U(t) =& \sum_{i=1}^{n}\beta _{i}x_{i}^{2}(t)e^{\lambda t}+ \sum _{i=1}^{n}\bigl(\alpha _{i} x _{i}'(t)+\gamma _{i}x_{i}(t) \bigr)^{2}e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{\lambda \tau _{j}} \int _{t- \tau _{j}}^{t}x_{j}^{2}(s)e^{\lambda s} \,ds \end{aligned}$$

and

$$\begin{aligned} V(t) =& \sum_{i=1}^{n}\bar{\beta }_{i}\bigl(x'_{i}(t)\bigr)^{2}e^{ \lambda t} +\sum_{i=1}^{n}\bigl(\bar{\alpha }_{i} x _{i}'(t)+\bar{\gamma }_{i}x_{i}(t)\bigr)^{2}e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+\bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{ \lambda \tau _{j}} \int _{t-\tau _{j}}^{t}x_{j}^{2}(s)e^{\lambda s} \,ds. \end{aligned}$$

Using Itô’s formula, we obtain the following stochastic differentials:

$$ dU(t)=\mathcal{L}U(t)\,dt+\sum_{i=1}^{n}2 \bigl(\alpha _{i}^{2} \sigma _{i}x_{i}(t)x'_{i}(t)+ \alpha _{i}\gamma _{i}\sigma _{i}x_{i}^{2}(t) \bigr)e^{ \lambda t}\,dB_{i}(t) $$
(2.5)

and

$$ dV(t)=\mathcal{L}V(t)\,dt+\sum_{i=1}^{n}2 \bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2} \bigr)\sigma _{i}x_{i}(t)x'_{i}(t)+ \bar{\alpha }_{i} \bar{\gamma }_{i}\sigma _{i}x_{i}^{2}(t)\bigr)e^{\lambda t} \,dB_{i}(t), $$
(2.6)

where \(\mathcal{L}\) is the weak infinitesimal operator such that

$$\begin{aligned} \mathcal{L}U(t) =&2 \sum_{i=1}^{n}\biggl[ \bigl(\beta _{i}+\gamma _{i}^{2}- \alpha _{i}^{2}b_{i}-\alpha _{i}\gamma _{i}(a_{i}-\lambda )\bigr)x_{i}(t)x'_{i}(t) \\ &{}+ \biggl( \alpha _{i}\gamma _{i}-\alpha _{i}^{2} \biggl(a_{i}-\frac{\lambda }{2}\biggr)\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\biggl(\frac{1}{2}\alpha _{i}^{2}\sigma _{i}^{2}+\frac{\lambda }{2}\bigl( \beta _{i}+ \gamma _{i}^{2}\bigr)-\alpha _{i}\gamma _{i}b_{i}\biggr)x^{2}_{i}(t) \biggr]e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{\lambda \tau _{j}}x_{j}^{2}(t)e^{ \lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t-\tau _{j})e^{ \lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}x'_{i}(t)+ \alpha _{i}\gamma _{i}x_{i}(t) \bigr)e^{\lambda t}c_{ij}f_{j}\bigl(x_{j}(t) \bigr) \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}x'_{i}(t)+ \alpha _{i}\gamma _{i}x_{i}(t) \bigr)e^{\lambda t}d_{ij}g_{j}\bigl(x_{j}(t- \tau _{j})\bigr) \\ &{}+2\sum_{i=1}^{n}\bigl(\alpha _{i}^{2}x'_{i}(t)+\alpha _{i} \gamma _{i}x_{i}(t)\bigr)e^{\lambda t}I_{i}(t) \\ \leq &2\sum_{i=1}^{n}\biggl[\bigl(\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}- \alpha _{i}\gamma _{i}(a_{i}-\lambda ) \bigr)x_{i}(t)x'_{i}(t) \\ &{}+\biggl(\alpha _{i} \gamma _{i}-\alpha _{i}^{2} \biggl(a_{i}-\frac{\lambda }{2}\biggr)\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\biggl(\frac{1}{2}\alpha _{i}^{2}\sigma _{i}^{2}+\frac{\lambda }{2}\bigl( \beta _{i}+ \gamma _{i}^{2}\bigr)-\alpha _{i}\gamma _{i}b_{i}\biggr)x^{2}_{i}(t) \biggr]e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{\lambda \tau _{j}}x_{j}^{2}(t)e^{ \lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t-\tau _{j})e^{ \lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i}\gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \vert c_{ij} \vert \bigl\vert f_{j}\bigl(x_{j}(t)\bigr)-f_{j}(0) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i}\gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \vert d_{ij} \vert \bigl\vert g_{j}\bigl(x_{j}(t- \tau _{j}) \bigr)-g_{j}(0) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i} \gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \bigl\vert I_{i}(t) \bigr\vert \\ \leq &2\sum_{i=1}^{n}\biggl[\bigl(\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}- \alpha _{i}\gamma _{i}(a_{i}-\lambda ) \bigr)x_{i}(t)x'_{i}(t) \\ &{}+\biggl(\alpha _{i} \gamma _{i}-\alpha _{i}^{2} \biggl(a_{i}-\frac{\lambda }{2}\biggr)\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\biggl(\frac{1}{2}\alpha _{i}^{2}\sigma _{i}^{2}+\frac{\lambda }{2}\bigl( \beta _{i}+ \gamma _{i}^{2}\bigr)-\alpha _{i}\gamma _{i}b_{i}\biggr)x^{2}_{i}(t) \biggr]e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{\lambda \tau _{j}}x_{j}^{2}(t)e^{ \lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t-\tau _{j})e^{ \lambda 
t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i}\gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \vert c_{ij} \vert F_{j} \bigl\vert x_{j}(t) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i}\gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \vert d_{ij} \vert G_{j} \bigl\vert x_{j}(t- \tau _{j}) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\bigl(\alpha _{i}^{2} \bigl\vert x'_{i}(t) \bigr\vert + \vert \alpha _{i} \gamma _{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \bigl\vert I_{i}(t) \bigr\vert \\ \leq &2\sum_{i=1}^{n}\biggl[\bigl(\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}- \alpha _{i}\gamma _{i}(a_{i}-\lambda ) \bigr)x_{i}(t)x'_{i}(t) \\ &{}+\biggl(\alpha _{i} \gamma _{i}-\alpha _{i}^{2} \biggl(a_{i}-\frac{\lambda }{2}\biggr)\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\biggl(\frac{1}{2}\alpha _{i}^{2}\sigma _{i}^{2}+\frac{\lambda }{2}\bigl( \beta _{i}+ \gamma _{i}^{2}\bigr)-\alpha _{i}\gamma _{i}b_{i}\biggr)x^{2}_{i}(t) \biggr]e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{\lambda \tau _{j}}x_{j}^{2}(t)e^{ \lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t-\tau _{j})e^{ \lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl[\alpha _{i}^{2} \bigl(\bigl(x'_{i}(t)\bigr)^{2}+x_{j}^{2}(t) \bigr)+ \vert \alpha _{i}\gamma _{i} \vert \bigl(x_{i}^{2}(t)+x_{j}^{2}(t)\bigr) \bigr]e^{\lambda t} \vert c_{ij} \vert F_{j} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl[\alpha _{i}^{2} \bigl(\bigl(x'_{i}(t)\bigr)^{2}+x_{j}^{2}(t- \tau _{j})\bigr)+ \vert \alpha _{i}\gamma _{i} \vert \bigl(x_{i}^{2}(t)+x_{j}^{2}(t- \tau _{j})\bigr)\bigr]e^{ \lambda t} \vert d_{ij} \vert G_{j} \\ &{}+\sum_{i=1}^{n}\bigl[\alpha _{i}^{2}\bigl(\bigl(x'_{i}(t) \bigr)^{2}+I_{i}^{2}(t)\bigr)+ \vert \alpha _{i}\gamma _{i} \vert \bigl(x_{i}^{2}(t)+I_{i}^{2}(t) \bigr)\bigr]e^{\lambda t} \\ =&\sum_{i=1}^{n}\Biggl[2\bigl(\beta _{i}+\gamma _{i}^{2}-\alpha _{i}^{2}b_{i}- \alpha _{i}\gamma _{i}(a_{i}- \lambda )\bigr)x_{i}(t)x'_{i}(t) \\ &{}+\Biggl(-\alpha _{i}^{2}(2a_{i}-\lambda )+2 \alpha _{i}\gamma _{i}+\alpha _{i}^{2} \Biggl( \sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1\Biggr)\Biggr) \bigl(x'_{i}(t)\bigr)^{2} \\ &{}+\Biggl(-2\alpha _{i}\gamma _{i}b_{i}+ \alpha _{i}^{2}\sigma _{i}^{2}+ \lambda \bigl(\beta _{i}+\gamma _{i}^{2}\bigr) \\ &{}+\sum _{j=1}^{n}\bigl(\alpha _{j}^{2}+ \vert \alpha _{j}\gamma _{j} \vert \bigr) \bigl( \vert d_{ji} \vert G_{i}e^{\lambda \tau _{i}}+ \vert c_{ji} \vert F_{i}\bigr) \\ &{}+ \vert \alpha _{i}\gamma _{i} \vert \Biggl(\sum _{j=1}^{n} \vert d_{ij} \vert G_{j}+1\Biggr)\Biggr)x^{2}_{i}(t) \Biggr]e^{ \lambda t}+\sum_{i=1}^{n}\bigl( \alpha _{i}^{2}+ \vert \alpha _{i} \gamma _{i} \vert \bigr)I_{i}^{2}(t)e^{\lambda t} \\ =&\sum_{i=1}^{n}\Biggl[2A_{i}^{\lambda } \biggl(x_{i}'(t)+ \frac{C_{i}^{\lambda }}{2A_{i}^{\lambda }}x_{i}(t) \biggr)^{2}+2\biggl(B_{i}^{\lambda }- \frac{(C_{i}^{\lambda })^{2}}{4A_{i}^{\lambda }} \biggr)x^{2}_{i}(t) \\ &{}+\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr)I_{i}^{2}(t) \Biggr]e^{ \lambda t} \\ \leq &\sum_{i=1}^{n}\biggl[2A_{i}^{\lambda } \biggl(x_{i}'(t)+ \frac{C_{i}^{\lambda }}{2A_{i}^{\lambda }}x_{i}(t) \biggr)^{2}+2\biggl(B_{i}^{\lambda }-
\frac{(C_{i}^{\lambda })^{2}}{4A_{i}^{\lambda }} \biggr)x^{2}_{i}(t)\biggr]e^{\lambda t} \\ &{}+e^{\lambda t}\max_{i\in J}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i} \gamma _{i} \vert \bigr) \Vert I \Vert _{\infty }^{2} \end{aligned}$$
(2.7)

and

$$\begin{aligned} \mathcal{L}V(t) =& \sum_{i=1}^{n}\biggl[2 \bigl(-\bar{\beta }_{i}b_{i}- \bar{\alpha }_{i}^{2}b_{i}-\bar{\alpha }_{i}\bar{ \gamma }_{i}(a_{i}- \lambda )+\bar{\gamma }^{2}_{i}\bigr)x_{i}(t)x'_{i}(t) \\ &{}+2\biggl(-\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2} \bigr) \biggl(a_{i}-\frac{\lambda }{2}\biggr)+ \bar{\alpha }_{i}\bar{\gamma }_{i}\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\bigl(\bar{\gamma }_{i}^{2} \lambda + \bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \sigma _{i}^{2}-2\bar{ \alpha }_{i}\bar{\gamma }_{i}b_{i}\bigr)x^{2}_{i}(t) \biggr]e^{\lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{ \lambda \tau _{j}}x_{j}^{2}(t)e^{\lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t- \tau _{j})e^{\lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr)x'_{i}(t)+ \bar{\alpha }_{i}\bar{\gamma }_{i}x_{i}(t) \bigr)c_{ij}f_{j}\bigl(x_{j}(t) \bigr)e^{ \lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr)x'_{i}(t)+ \bar{\alpha }_{i}\bar{\gamma }_{i}x_{i}(t) \bigr)d_{ij}g_{j}\bigl(x_{j}(t- \tau _{j})\bigr)e^{\lambda t} \\ &{}+2\sum_{i=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr)x'_{i}(t)+ \bar{\alpha }_{i}\bar{\gamma }_{i}x_{i}(t) \bigr)I_{i}(t)e^{\lambda t} \\ \leq & \sum_{i=1}^{n}\biggl[2\bigl(-\bar{ \beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}- \bar{\alpha }_{i}\bar{\gamma }_{i}(a_{i}-\lambda )+\bar{\gamma }^{2}_{i}\bigr)x_{i}(t)x'_{i}(t) \\ &{}+2\biggl(-\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2} \bigr) \biggl(a_{i}-\frac{\lambda }{2}\biggr)+ \bar{\alpha }_{i}\bar{\gamma }_{i}\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\bigl(\bar{\gamma }_{i}^{2} \lambda + \bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \sigma _{i}^{2}-2\bar{ \alpha }_{i}\bar{\gamma }_{i}b_{i}\bigr)x^{2}_{i}(t) \biggr]e^{\lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{ \lambda \tau _{j}}x_{j}^{2}(t)e^{\lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t- \tau _{j})e^{\lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{ \lambda t} \vert c_{ij} \vert \bigl\vert f_{j} \bigl(x_{j}(t)\bigr)-f_{j}(0) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{ \lambda t} \vert d_{ij} \vert \bigl\vert g_{j} \bigl(x_{j}(t-\tau _{j})\bigr)-g_{j}(0) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \bigl\vert I_{i}(t) \bigr\vert \\ \leq & \sum_{i=1}^{n}\biggl[2\bigl(-\bar{ \beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}- \bar{\alpha }_{i}\bar{\gamma }_{i}(a_{i}-\lambda )+\bar{\gamma 
}^{2}_{i}\bigr)x_{i}(t)x'_{i}(t) \\ &{}+2\biggl(-\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2} \bigr) \biggl(a_{i}-\frac{\lambda }{2}\biggr)+ \bar{\alpha }_{i}\bar{\gamma }_{i}\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\bigl(\bar{\gamma }_{i}^{2} \lambda + \bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \sigma _{i}^{2}-2\bar{ \alpha }_{i}\bar{\gamma }_{i}b_{i}\bigr)x^{2}_{i}(t) \biggr]e^{\lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{ \lambda \tau _{j}}x_{j}^{2}(t)e^{\lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t- \tau _{j})e^{\lambda t} \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{ \lambda t} \vert c_{ij} \vert F_{j} \bigl\vert x_{j}(t) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{ \lambda t} \vert d_{ij} \vert G_{j} \bigl\vert x_{j}(t- \tau _{j}) \bigr\vert \\ &{}+2\sum_{i=1}^{n}\bigl(\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \bigl\vert x'_{i}(t) \bigr\vert + \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl\vert x_{i}(t) \bigr\vert \bigr)e^{\lambda t} \bigl\vert I_{i}(t) \bigr\vert \\ \leq & \sum_{i=1}^{n}\biggl[2\bigl(-\bar{ \beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}- \bar{\alpha }_{i}\bar{\gamma }_{i}(a_{i}-\lambda )+\bar{\gamma }^{2}_{i}\bigr)x_{i}(t)x'_{i}(t) \\ &{}+2\biggl(-\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2} \bigr) \biggl(a_{i}-\frac{\lambda }{2}\biggr)+ \bar{\alpha }_{i}\bar{\gamma }_{i}\biggr) \bigl(x'_{i}(t) \bigr)^{2} \\ &{}+\bigl(\bar{\gamma }_{i}^{2} \lambda + \bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \sigma _{i}^{2}-2\bar{ \alpha }_{i}\bar{\gamma }_{i}b_{i}\bigr)x^{2}_{i}(t) \biggr]e^{\lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}e^{ \lambda \tau _{j}}x_{j}^{2}(t)e^{\lambda t} \\ &{}-\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl(\bar{\beta }_{i}+ \bar{ \alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{ \gamma }_{i} \vert \bigr) \vert d_{ij} \vert G_{j}x_{j}^{2}(t- \tau _{j})e^{\lambda t} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl[\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl(\bigl(x'_{i}(t) \bigr)^{2}+x_{j}^{2}(t)\bigr)+ \vert \bar{\alpha }_{i} \bar{\gamma }_{i} \vert \bigl(x_{i}^{2}(t)+x_{j}^{2}(t) \bigr)\bigr]e^{\lambda t} \vert c_{ij} \vert F_{j} \\ &{}+\sum_{i=1}^{n}\sum _{j=1}^{n}\bigl[\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}\bigr) \bigl(\bigl(x'_{i}(t) \bigr)^{2}+x_{j}^{2}(t-\tau _{j})\bigr) \\ &{}+ \vert \bar{ \alpha }_{i}\bar{\gamma }_{i} \vert \bigl(x_{i}^{2}(t)+x_{j}^{2}(t-\tau _{j})\bigr)\bigr]e^{ \lambda t} \vert d_{ij} \vert G_{j} \\ &{}+\sum_{i=1}^{n}\bigl[\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \bigl( \bigl(x'_{i}(t)\bigr)^{2}+I_{i}^{2}(t) \bigr)+ \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigl(x_{i}^{2}(t)+I_{i}^{2}(t)\bigr) \bigr]e^{ \lambda t} \\ =& \sum_{i=1}^{n}\Biggl[2\bigl(-\bar{\beta }_{i}b_{i}-\bar{\alpha }_{i}^{2}b_{i}- \bar{\alpha }_{i}\bar{\gamma 
}_{i}(a_{i}-\lambda )+\bar{\gamma }^{2}_{i}\bigr)x_{i}(t)x'_{i}(t) \\ &{}+\Biggl(-\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2} \bigr) (2a_{i}-\lambda )+2\bar{ \alpha }_{i}\bar{\gamma }_{i} \\ &{}+\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \Biggl( \sum_{j=1}^{n} \vert c_{ij} \vert F_{j}+\sum _{j=1}^{n} \vert d_{ij} \vert G_{j}+1\Biggr)\Biggr) \bigl(x_{i}'(t) \bigr)^{2} \\ &{}+\Biggl(\bar{\gamma }_{i}^{2}\lambda +\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}\bigr) \sigma _{i}^{2}-2\bar{\alpha }_{i}\bar{\gamma }_{i}b_{i} \\ &{}+\sum_{j=1}^{n} \bigl( \bar{\beta }_{j}+\bar{\alpha }_{j}^{2}+ \vert \bar{\alpha }_{j}\bar{\gamma }_{j} \vert \bigr) \bigl( \vert d_{ji} \vert G_{i}e^{ \lambda \tau _{i}}+ \vert c_{ji} \vert F_{i}\bigr) \\ &{}+ \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \Biggl(\sum_{j=1}^{n} \vert d_{ij} \vert G_{j}+1\Biggr)\Biggr)x^{2}_{i}(t) \Biggr]e^{ \lambda t}+\sum_{i=1}^{n}\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}+ \vert \bar{\alpha }_{i} \bar{\gamma }_{i} \vert \bigr)I^{2}_{i}(t)e^{\lambda t} \\ =& \sum_{i=1}^{n}\biggl[2 \bar{B}_{i}^{\lambda }\biggl(x_{i}(t)+ \frac{\bar{C}_{i}^{\lambda }}{2\bar{B}_{i}^{\lambda }}x'_{i}(t)\biggr) ^{2}+2\biggl( \bar{A}_{i}^{\lambda }-\frac{(\bar{C}_{i}^{\lambda })^{2}}{4\bar{B}_{i} ^{\lambda }}\biggr) \bigl(x'_{i}(t)\bigr)^{2} \\ &{}+\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigr)I^{2}_{i}(t)\biggr]e^{ \lambda t} \\ \leq & \sum_{i=1}^{n}\biggl[2 \bar{B}_{i}^{\lambda }\biggl(x_{i}(t)+ \frac{\bar{C}_{i}^{\lambda }}{2\bar{B}_{i}^{\lambda }}x'_{i}(t)\biggr)^{2}+2\biggl( \bar{A}_{i}^{\lambda }-\frac{(\bar{C}_{i}^{\lambda })^{2}}{4\bar{B}_{i}^{\lambda }}\biggr) \bigl(x'_{i}(t)\bigr)^{2}\biggr]e^{ \lambda t} \\ &{}+e^{\lambda t}\max_{i\in J}\bigl(\bar{\beta }_{i}+\bar{\alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigr) \Vert I \Vert _{\infty }^{2}. \end{aligned}$$
(2.8)

Integrating both sides of (2.5) and (2.6) and taking expectations, we obtain from (2.3), (2.4), (2.7), and (2.8) that

$$ EU(t)\leq U(0)+ \Vert I \Vert _{\infty }^{2}\max _{i\in J}\bigl(\alpha _{i}^{2}+ \vert \alpha _{i}\gamma _{i} \vert \bigr) \int _{0}^{t}e^{\lambda s}\,ds $$
(2.9)

and

$$ EV(t)\leq V(0)+ \Vert I \Vert _{\infty }^{2}\max _{i\in J}\bigl(\bar{\beta }_{i}+ \bar{\alpha }_{i}^{2}+ \vert \bar{\alpha }_{i}\bar{\gamma }_{i} \vert \bigr) \int _{0}^{t}e^{ \lambda s}\,ds. $$
(2.10)

Choosing \(\gamma =\max_{i\in J}\{\alpha _{i}^{2}+|\alpha _{i}\gamma _{i}|, \bar{\beta }_{i}+\bar{\alpha }_{i}^{2}+|\bar{\alpha }_{i}\bar{\gamma }_{i}| \} \) and \(\beta =\min_{i\in J}\{\beta _{i},\bar{\beta }_{i}\}\), we obtain from (2.9) and (2.10) that

$$ \beta e^{\lambda t} E\Biggl(\sum_{i=1}^{n}x^{2}_{i}(t) \Biggr)\leq EU(t) \leq U(0)+\frac{\gamma }{\lambda } \Vert I \Vert ^{2}_{\infty }\bigl(e^{\lambda t}-1\bigr) $$
(2.11)

and

$$ \beta e^{\lambda t} E\Biggl(\sum_{i=1}^{n} \bigl(x'_{i}(t)\bigr)^{2}\Biggr)\leq EV(t) \leq V(0)+\frac{\gamma }{\lambda } \Vert I \Vert ^{2}_{\infty } \bigl(e^{\lambda t}-1\bigr). $$
(2.12)

Combining (2.11) and (2.12), the following holds:

$$ E\bigl( \bigl\Vert x(t) \bigr\Vert ^{2}+ \bigl\Vert x'(t) \bigr\Vert ^{2}\bigr)\leq \frac{U(0)+V(0)}{\beta }e^{-\lambda t}+ \frac{2\gamma }{\beta \lambda } \Vert I \Vert ^{2}_{\infty }, $$

which, together with Definition 2.1, implies that the stochastic inertial neural network (1.4) is mean-square exponentially input-to-state stable. This completes the proof of Theorem 2.1. □
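In the notation of Definition 2.1, the final estimate corresponds to the explicit choices

$$ K=\frac{U(0)+V(0)}{\beta },\qquad \eta =\frac{2\gamma }{\beta \lambda }, $$

where λ is the sufficiently small constant fixed in (2.3) and (2.4).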

Remark 2.1

From Definition 2.1 it is obvious that if a stochastic inertial neural network is mean-square exponentially input-to-state stable, then the second moments of the states and their first-order derivatives remain bounded but need not converge to an equilibrium point. This reveals that the external inputs influence the dynamics of the stochastic inertial neural network: when they are bounded, the second moments of the states and their first-order derivatives remain bounded. In Theorem 2.1, we derive sufficient conditions ensuring the mean-square exponential input-to-state stability of the stochastic inertial neural network (1.4). To the best of our knowledge, this is the first time that the mean-square exponential input-to-state stability of stochastic inertial neural networks has been considered. References [1–18] and [20–36] are concerned with deterministic inertial neural networks, Prakash et al. [19] consider only the synchronization of Markovian jumping inertial neural networks, and the authors of [38–42] study only the input-to-state stability of non-inertial neural networks; hence those results cannot be applied to the mean-square exponential input-to-state stability of the stochastic inertial neural network (1.4).

3 An illustrative example

To verify the correctness and effectiveness of the theoretical results, we present an example with numerical simulations.

Example 3.1

$$ \textstyle\begin{cases} dx_{1}'(t)=[-3 x_{1}'(t) -8x_{1}(t)+ 1.2 f_{1}(x_{1}(t))+ 1.5 f_{2}(x_{2}(t)) \\ \hphantom{dx_{1}'(t)=}{}-0.8g_{1}(x_{1}(t-2))+1.9g_{2}(x_{2}(t-2))+6 \cos t]\,dt+x_{1}(t)\,dB_{1}(t) , \\ dx_{2}'(t)=[-4x_{2}'(t) -10x_{2}(t)- 0.9f_{1}(x_{1}(t))- 1.7f_{2}(x_{2}(t)) \\ \hphantom{dx_{2}'(t)=}{} -2.5g_{1}(x_{1}(t-2))+2.1g_{2}(x_{2}(t-2))+7 \sin t]\,dt+x_{2}(t)\,dB_{2}(t) , \end{cases} $$
(3.1)

where \(f_{i}(u) = g_{i}(u) = 0.25(|u + 1|-|u-1|)\), \(i=1,2 \).

Choosing \(\alpha _{1}=\alpha _{2}=\gamma _{1}=\gamma _{2}=1\), \(\beta _{1}=8\), \(\beta _{2}=9\), \(\bar{\alpha }_{1}=\frac{1}{10}\), \(\bar{\alpha }_{2}=\frac{1}{4}\), \(\bar{\gamma }_{1}=10\), \(\bar{\gamma }_{2}=4\), \(\bar{\beta }_{1}=1.1\), \(\bar{\beta }_{2}=1\), we obtain \(A_{1}=-0.65\), \(A_{2} =-1.2\), \(B_{1}=-2.95\), \(B_{2}=-3.6\), \(C_{1}=-2\), \(C_{2}=-4\), \(\bar{A}_{1}=-0.7715\), \(\bar{A}_{2} =-1.3375\), \(\bar{B}_{1}=-1.757\), \(\bar{B}_{2}=-4.2219\), \(\bar{C}_{1}=0.82\), \(\bar{C}_{2}=0.14\), so that (2.1) and (2.2) hold. Therefore, by Theorem 2.1, the stochastic inertial neural network (3.1) is mean-square exponentially input-to-state stable; Fig. 1 illustrates this fact.
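For readers who wish to reproduce trajectories like those in Fig. 1, a minimal Euler–Maruyama script for (3.1) is sketched below. The step size, horizon, and random seed are our own choices; averaging the printed quantity over many independent runs would give a Monte Carlo estimate of \(E(\Vert x(t)\Vert ^{2}+\Vert x'(t)\Vert ^{2})\), which Theorem 2.1 guarantees stays bounded.

```python
import numpy as np

# Parameters of Example 3.1.
a = np.array([3.0, 4.0]); b = np.array([8.0, 10.0])
c = np.array([[1.2, 1.5], [-0.9, -1.7]])
d = np.array([[-0.8, 1.9], [-2.5, 2.1]])
sigma = np.array([1.0, 1.0]); tau = 2.0

def act(u):  # f_i(u) = g_i(u) = 0.25(|u + 1| - |u - 1|)
    return 0.25 * (np.abs(u + 1.0) - np.abs(u - 1.0))

def inputs(t):  # external input (6 cos t, 7 sin t)
    return np.array([6.0 * np.cos(t), 7.0 * np.sin(t)])

dt, T = 1e-3, 20.0
lag, steps = int(round(tau / dt)), int(round(T / dt))
rng = np.random.default_rng(0)

x = np.empty((lag + steps + 1, 2)); y = np.empty_like(x)
x[: lag + 1] = np.array([1.0, -3.0])   # psi on [-2, 0]
y[: lag + 1] = 0.0                     # psi' on [-2, 0]
for k in range(lag, lag + steps):
    t = (k - lag) * dt
    drift = (-a * y[k] - b * x[k] + c @ act(x[k])
             + d @ act(x[k - lag]) + inputs(t))
    dB = rng.normal(0.0, np.sqrt(dt), size=2)
    x[k + 1] = x[k] + y[k] * dt
    y[k + 1] = y[k] + drift * dt + sigma * x[k] * dB

# One sample of ||x(T)||^2 + ||x'(T)||^2 at the final time.
print((x[-1] ** 2).sum() + (y[-1] ** 2).sum())
```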

Figure 1. The states and their first-order derivatives of (3.1) with initial values \((x_{1}(s),x_{2}(s),x'_{1}(s),x'_{2}(s))=(1,-3,0,0)\), \(s\in [-2,0]\)

4 Concluding remarks

In this paper, we have studied the mean-square exponential input-to-state stability of a class of stochastic inertial neural networks. By applying the non-reduced order method and Lyapunov–Krasovskii functionals, we have obtained several sufficient conditions guaranteeing the mean-square exponential input-to-state stability of the suggested stochastic system, a topic that has been considered by few authors. An example and its numerical simulation have been presented to illustrate the theoretical result.

Availability of data and materials

Data sharing is not applicable to this paper as no data sets were generated or analyzed during the current study.

References

  1. Babcock, K., Westervelt, R.: Stability and dynamics of simple electronic neural networks with added inertia. Physica D 23, 464–469 (1986)

  2. Babcock, K., Westervelt, R.: Dynamics of simple electronic neural networks. Physica D 28, 305–316 (1987)

  3. Ge, J., Xu, J.: Weak resonant double Hopf bifurcations in an inertial four neuron model with time delay. Int. J. Neural Syst. 22, 63–75 (2012)

  4. Li, C., Chen, G., Liao, L., Yu, J.: Hopf bifurcation and chaos in a single inertial neuron model with time delay. Eur. Phys. J. B 41, 337–343 (2004)

  5. Liu, Q., Liao, X., Liu, Y., Zhou, S., Guo, S.: Dynamics of an inertial two-neuron system with time delay. Nonlinear Dyn. 58, 573–609 (2009)

  6. Song, Z., Xu, J., Zhen, B.: Multi-type activity coexistence in an inertial two-neuron system with multiple delays. Int. J. Bifurc. Chaos 25, 1530040 (2015)

  7. Wheeler, D., Schieve, W.: Stability and chaos in an inertial two-neuron system. Physica D 105, 267–284 (1997)

  8. Zhao, H., Yu, X., Wang, L.: Bifurcation and control in an inertial two-neuron system with time delays. Int. J. Bifurc. Chaos 22, 1250036 (2012)

  9. Ke, Y., Miao, C.: Stability analysis of inertial Cohen–Grossberg-type neural networks with time delays. Neurocomputing 117, 196–205 (2013)

  10. Yu, S., Zhang, Z., Quan, Z.: New global exponential stability conditions for inertial Cohen–Grossberg neural networks with time delays. Neurocomputing 151, 1446–1454 (2015)

  11. Zhang, Z., Quan, Z.: Global exponential stability via inequality technique for inertial BAM neural networks with time delays. Neurocomputing 151, 1316–1326 (2015)

  12. Wang, J., Tian, L.: Global Lagrange stability for inertial neural networks with mixed time varying delays. Neurocomputing 235, 140–146 (2017)

  13. Kumar, R., Das, S.: Exponential stability of inertial BAM neural network with time-varying impulses and mixed time-varying delays via matrix measure approach. Commun. Nonlinear Sci. Numer. Simul. 81, 105016 (2020)

  14. Ke, Y., Miao, C.: Stability and existence of periodic solutions in inertial BAM neural networks with time delay. Neural Comput. Appl. 23, 1089–1099 (2013)

  15. Ke, Y., Miao, C.: Anti-periodic solutions of inertial neural networks with time delays. Neural Process. Lett. 45, 523–538 (2017)

  16. Xu, C., Zhang, Q.: Existence and global exponential stability of anti-periodic solutions for BAM neural networks with inertial term and delay. Neurocomputing 153, 108–116 (2015)

  17. Cao, J., Wan, Y.: Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw. 53, 165–172 (2014)

  18. Lakshmanan, S., Prakash, M., Lim, C., Rakkiyappan, R., Balasubramaniam, P., Nahavandi, S.: Synchronization of an inertial neural network with time varying delays and its application to secure communication. IEEE Trans. Neural Netw. Learn. Syst. 29(1), 195–207 (2018)

  19. Prakash, M., Balasubramaniam, P., Lakshmanan, S.: Synchronization of Markovian jumping inertial neural networks and its applications in image encryption. Neural Netw. 83, 86–93 (2016)

  20. Rakkiyappan, R., Kumari, E., Chandrasekar, A., Krishnasamy, R.: Synchronization and periodicity of coupled inertial memristive neural networks with supremums. Neurocomputing 214, 739–749 (2016)

  21. Rakkiyappan, R., Premalatha, S., Chandrasekar, A., Cao, J.: Stability and synchronization analysis of inertial memristive neural networks with time delays. Cogn. Neurodyn. 10, 437–451 (2016)

  22. Tu, Z., Cao, J., Alsaedi, A., Alsaadi, F.: Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw. 88, 125–133 (2017)

  23. Tu, Z., Cao, J., Hayat, T.: Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks. Neural Netw. 75, 47–55 (2016)

  24. Li, X., Li, X., Hu, C.: Some new results on stability and synchronization for delayed inertial neural networks based on non-reduced order method. Neural Netw. 96, 91–100 (2017)

  25. Huang, C., Liu, B.: New studies on dynamic analysis of inertial neural networks involving non-reduced order method. Neurocomputing 325, 283–287 (2019)

  26. Xu, Y.: Convergence on non-autonomous inertial neural networks with unbounded distributed delays. J. Exp. Theor. Artif. Intell. (2019). https://doi.org/10.1080/0952813X.2019.1652941

  27. Yao, L.: Global exponential stability on anti-periodic solutions in proportional delayed HIHNNs. J. Exp. Theor. Artif. Intell. (2020). https://doi.org/10.1080/0952813X.2020.1721571

  28. Huang, C.: Exponential stability of inertial neural networks involving proportional delays and non-reduced order method. J. Exp. Theor. Artif. Intell. (2019). https://doi.org/10.1080/0952813X.2019.1635654

  29. Huang, C., Yang, L., Liu, B.: New results on periodicity of non-autonomous inertial neural networks involving non-reduced order method. Neural Process. Lett. 50, 595–606 (2019)

  30. Huang, C., Liu, B., Qian, C., Cao, J.: Stability on positive pseudo almost periodic solutions of HPDCNNs incorporating D operator. Math. Comput. Simul. 190, 1150–1163 (2021)

  31. Zhang, X., Hu, H.: Convergence in a system of critical neutral functional differential equations. Appl. Math. Lett. 107, 106385 (2020)

  32. Huang, C., Huang, L., Wu, J.: Global population dynamics of a single species structured with distinctive time-varying maturation and self-limitation delays. Discrete Contin. Dyn. Syst., Ser. B (2021). https://doi.org/10.3934/dcdsb.2021138

  33. Tan, Y.: Dynamics analysis of Mackey–Glass model with two variable delays. Math. Biosci. Eng. 17(5), 4513–4526 (2020)

  34. Zhang, J., Huang, C.: Dynamics analysis on a class of delayed neural networks involving inertial terms. Adv. Differ. Equ. 2020, 120 (2020)

  35. Huang, C., Zhang, H.: Periodicity of non-autonomous inertial neural networks involving proportional delays and non-reduced order method. Int. J. Biomath. 12(2), 1950016 (2019)

  36. Huang, C., Yang, H., Cao, J.: Weighted pseudo almost periodicity of multi-proportional delayed shunting inhibitory cellular neural networks with D operator. Discrete Contin. Dyn. Syst., Ser. S 14(4), 1259–1272 (2021)

  37. Haykin, S.: Neural Networks. Prentice Hall, New York (1994)

  38. Zhu, Q., Cao, J.: Mean-square exponential input-to-state stability of stochastic delayed neural networks. Neurocomputing 131, 157–163 (2014)

  39. Wang, W., Gong, S., Chen, W.: New result on the mean-square exponential input-to-state stability of stochastic delayed recurrent neural networks. Syst. Sci. Control Eng. 6(1), 501–509 (2018)

  40. Shu, Y., Liu, X., Wang, F., Qiu, S.: Exponential input-to-state stability of stochastic neural networks with mixed delays. Int. J. Mach. Learn. Cybern. 9, 807–819 (2018)

  41. Zhou, L., Liu, X.: Mean-square exponential input-to-state stability of stochastic recurrent neural networks with multi-proportional delays. Neurocomputing 219, 396–403 (2017)

  42. Zhou, W., Teng, L., Xu, D.: Mean-square exponentially input-to-state stability of stochastic Cohen–Grossberg neural networks with time-varying delays. Neurocomputing 153, 54–61 (2015)

  43. Wang, W., Chen, W.: Mean-square exponential stability of stochastic inertial neural networks. Int. J. Control (2020). https://doi.org/10.1080/00207179.2020.1834145


Acknowledgements

Not applicable.

Funding

This work was supported by the Natural Science Foundation of Zhejiang Province, China (grant no. LY16A010018).

Author information

Contributions

The authors declare that the study was carried out in collaboration with equal responsibility. Both authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Wei Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Wang, W., Chen, W. Mean-square exponential input-to-state stability of stochastic inertial neural networks. Adv Differ Equ 2021, 430 (2021). https://doi.org/10.1186/s13662-021-03586-4
