- Research
- Open Access

# Almost sure exponential stabilization of neural networks by aperiodically intermittent control based on delay observations

*Advances in Difference Equations*
**volume 2019**, Article number: 353 (2019)

## Abstract

This paper is concerned with almost sure exponential stabilization of neural networks by intermittent control based on delay observations. By the stochastic comparison principle and Itô’s formula, a sufficient criterion is derived under which unstable neural networks can be stabilized by stochastic intermittent control based on delay observations. The range of the intermittent rate is given, and the upper bound of the time delay can be solved from a transcendental equation. Finally, two examples are provided to demonstrate the feasibility and validity of our proposed methods.

## Introduction

In the past decade, Hopfield neural networks have been thoroughly investigated. Nowadays they are widely applied in several areas, such as image processing, communication engineering, and optimization. These applications mainly depend on the asymptotic behavior of the neural networks [1,2,3,4,5], especially their stability, and therefore more and more authors study the stability of neural networks. The reader can refer to [6,7,8,9,10] and the references therein.

Actually, many practical problems involve unstable neural networks, which cannot be directly applied in engineering unless they are stabilized in advance. Hence, various control strategies have been proposed to stabilize unstable neural networks, including intermittent control [11,12,13,14,15], pinning control [16], impulsive control [17], finite-time control [18], and adaptive control [19]. Meanwhile, noise disturbance is ubiquitous in the real world. As a result, an increasing number of authors have revealed the positive impact of white noise on systems in recent decades [20,21,22,23,24]. For example, Mao et al. showed in [25] that noise can suppress explosive solutions of population systems. Also, some scholars have paid attention to the stochastic stabilization of neural networks. Shen and Wang utilized white noise to stabilize unstable networks in [26]. The reader can refer to [27,28,29] for more details.

In the past decades, more and more scholars have realized that there is a time delay *τ* between the observation time of the state and the arrival time of the feedback control. Thus, time-delay feedback control has attracted more and more researchers’ attention [30,31,32,33,34]. It was Guo and Mao who first integrated the delay feedback control strategy with the stochastic stabilization theory: they showed in [22] that a differential system can be stabilized by delay feedback control provided the time delay does not exceed a certain upper bound. Nevertheless, neural networks have distinct characteristics, and it is significant to investigate the stochastic delay stabilization of neural networks based on these characteristics. To the best of our knowledge, the stochastic delay stabilization of neural networks has scarcely been investigated yet. Accordingly, tackling this issue constitutes the first motivation of this paper.

Moreover, the intermittent control strategy has attracted some scholars [11,12,13, 35,36,37,38,39,40]. The networks are controlled by white noise during working time, and the white noise is removed from the networks during rest time. The corresponding controlled system can be regarded as a switching between a closed-loop subsystem during working time and an open-loop subsystem during rest time. Intermittent control has its advantages over classic continuous control: it cuts costs by reducing the excessive wear on the controller caused by long working times. Presently there are several results applying intermittent control to networks [11, 12, 37]. Then a question arises naturally: can we integrate the intermittent control strategy with the stochastic delay stabilization strategy? To date, few results are available on this topic, since the simultaneous presence of the delay feedback control and intermittent noise complicates the problem. As a result, the approaches proposed in the existing literature cannot be directly adopted. Thus, overcoming the difficulties stemming from delay feedback control and intermittent noise is the second motivation.

Summarizing the above statements, this paper focuses on stochastic intermittent stabilization based on delay feedback control for neural networks. Sufficient conditions for almost sure exponential stabilization are obtained provided that the time delay is bounded by \(\tau _{0}\) and the intermittent rate *ϕ* satisfies \(2\lambda _{+}(-D+| \bar{A}|K)<\sigma ^{2}(1-\phi )\) (*D*, *Ā*, *K*, and *σ* will be defined in Sect. 2; see [41] for more details). The main contribution of this paper lies in three aspects. (1) The stochastic delay stabilization of neural networks is investigated based on the characteristics of neural networks. (2) By the stochastic comparison principle and Itô’s formula, stochastic intermittent stabilization based on delay feedback control is obtained. (3) We succeed in overcoming the difficulties arising mainly from the simultaneous presence of intermittent noise and delay feedback control.

## Preliminary

Throughout this paper, unless otherwise specified, let \((\varOmega , \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0}, \mathcal{P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions. Let \(\tau >0\), and denote by \(C=C([-\tau ,0];R^{n})\) the family of continuous functions *ξ* from \([-\tau ,0]\) to \(R^{n}\) with the norm \(\|\xi \|= \sup_{-\tau \leq \theta \leq 0}|\xi (\theta )|<\infty \). Denote by \(L^{2}_{\mathcal{F}_{0}}([-{\tau },0]; R^{n})\) the family of all \(\mathcal{F}_{0}\)-measurable \(C([-{\tau },0]; R^{n})\)-valued random variables \(\zeta =\{\zeta (\theta ): -{\tau }\leq \theta \leq 0\}\) such that \(\sup_{-\tau \leq \theta \leq 0}E|\zeta (\theta )|^{2}<\infty \), where *E* stands for the mathematical expectation operator with respect to the given probability measure \(\mathcal{P}\). Let \(G=(g_{ij})_{n\times n}\). Denote \(\bar{G}=(\bar{g}_{ij})_{n\times n}\) with \(\bar{g}_{ii}=\max \{g_{ii},0\}\) and \(\bar{g}_{ij}=g_{ij}\) for \(i\neq j\), and \(|G|=(|g_{ij}|)_{n\times n}\).
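To make the matrix notation concrete, here is a small Python sketch (an illustration, not part of the paper) of \(\bar{G}\) and \(|G|\) as defined above, composed as \(|\bar{G}|\), the form that appears in the stability condition \(2\lambda _{+}(-D+|\bar{A}|K)<\sigma ^{2}(1-\phi )\):

```python
def g_bar(G):
    # G-bar: diagonal entries replaced by max(g_ii, 0); off-diagonal unchanged
    n = len(G)
    return [[max(G[i][i], 0.0) if i == j else G[i][j] for j in range(n)]
            for i in range(n)]

def g_abs(G):
    # entrywise absolute value |G| = (|g_ij|)
    return [[abs(v) for v in row] for row in G]

G = [[-1.0, 2.0], [-3.0, 4.0]]
print(g_bar(G))          # [[0.0, 2.0], [-3.0, 4.0]]
print(g_abs(g_bar(G)))   # |G-bar| = [[0.0, 2.0], [3.0, 4.0]]
```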

Consider the unstable neural networks as follows:

\(\mathrm{d}x(t)= \bigl[-Dx(t)+Af \bigl(x(t) \bigr) \bigr]\,\mathrm{d}t, \quad t\geq t_{0},\)  (1)

where \(D=\operatorname{diag}(d_{1},d_{2},\ldots ,d_{n})\), \(A=(a_{ij})_{n \times n}\), \(f(x)= (f_{1}(x_{1}),f_{2}(x_{2}),\ldots ,f_{n}(x_{n}) )^{T}\), \(f_{i}(\cdot ):R\rightarrow R\) is an activation function. \(x(t)\in R^{n}\), the variable \(x_{i}(t)\) represents the voltage on the input of the *i*th neuron. Consider the following neural networks by aperiodically intermittent control based on delay state observations:

\(\mathrm{d}x(t)= \bigl[-Dx(t)+Af \bigl(x(t) \bigr) \bigr]\,\mathrm{d}t+\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t), \quad t\geq t_{0},\)  (2)

where \(\varSigma =\operatorname{diag}(\sigma ,\ldots ,\sigma )\), \(B(t)\) is a scalar Brownian motion, \(h(t)=1\) for \(t\in [t_{k},s_{k})\) and \(h(t)=0\) for \(t\in [s_{k},t_{k+1})\), and \(\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) is an aperiodically intermittent controller. \(t_{k}-t_{k-1}>0\) is the *k*th time interval length. The feedback control is imposed on the networks in the time interval \([t_{k},s_{k})\), while the control is removed in the rest of the interval \([s_{k},t_{k+1})\). Set \(\phi _{k}=(t_{k+1}-t _{k})^{-1}(t_{k+1}-s_{k})\); then \(\phi =\limsup_{k\longrightarrow + \infty }\phi _{k}\) is the intermittent rate.
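To make the control scheme concrete, the intermittently controlled networks (2) can be simulated by an Euler–Maruyama scheme. The sketch below is an illustration only, not part of the paper: it assumes \(f=\tanh \) (the activation used later in Example 1), a scalar *σ* since \(\varSigma =\sigma I\), and placeholder parameter values and working intervals.

```python
import math
import random

def simulate(D, A, sigma, tau, work, dt=1e-3, T=10.0, x0=(0.5, 0.5), seed=1):
    """Euler-Maruyama sketch of the controlled networks (2):
       dx = [-D x + A tanh(x)] dt + sigma * h(t) * x(t - tau) dB(t),
    where h(t) = 1 on the working intervals [t_k, s_k) and 0 otherwise.
    `work` is a list of (t_k, s_k) pairs; all parameters are illustrative."""
    rng = random.Random(seed)
    n = len(x0)
    lag = max(1, round(tau / dt))                 # delay measured in steps
    hist = [list(x0) for _ in range(lag + 1)]     # buffer holding x(t-tau..t)
    x = list(x0)
    t = 0.0
    path = [(t, tuple(x))]
    while t < T:
        h = 1.0 if any(a <= t < b for a, b in work) else 0.0
        xd = hist[0]                              # delayed state x(t - tau)
        dB = rng.gauss(0.0, math.sqrt(dt))        # common scalar Brownian increment
        x = [x[i]
             + (-D[i] * x[i] + sum(A[i][j] * math.tanh(x[j]) for j in range(n))) * dt
             + sigma * h * xd[i] * dB
             for i in range(n)]
        hist = hist[1:] + [list(x)]
        t += dt
        path.append((t, tuple(x)))
    return path

# Illustrative run with hypothetical parameters (not the paper's):
path = simulate(D=[1.0, 1.0], A=[[2.0, -1.0], [1.0, 2.0]], sigma=2.0,
                tau=0.05, work=[(0.0, 5.0)], dt=0.01, T=1.0)
print(len(path), path[-1])
```

Switching `work` between sparse and dense interval families mimics varying the intermittent rate *ϕ* below.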

In order to analyze the asymptotic behavior of networks (2), we define the auxiliary networks without delay observations:

\(\mathrm{d}y(t)= \bigl[-Dy(t)+Af \bigl(y(t) \bigr) \bigr]\,\mathrm{d}t+\varSigma h(t)y(t)\,\mathrm{d}B(t), \quad t\geq t_{0}.\)  (3)

The key technique in this paper is the comparison principle. The *p*th moment difference between networks (2) and (3) is estimated. The whole framework rests on two basic assumptions.

### Assumption 1

For each \(i=1,2,\ldots ,n\), there exists \(\kappa _{i}>0\) such that

\(\bigl|f_{i}(x)-f_{i}(y)\bigr|\leq \kappa _{i}|x-y| \quad \text{for all } x,y\in R.\)

Denote \(K=\operatorname{diag}(\kappa _{1},\ldots ,\kappa _{n})\).
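For instance (an illustration, not part of the paper), the activation \(f_{i}=\tanh \) used in Example 1 satisfies Assumption 1 with \(\kappa _{i}=1\), so \(K=I\), since \(|\tanh '(x)|=\operatorname{sech}^{2}(x)\leq 1\). A quick numerical check:

```python
import math

# tanh is 1-Lipschitz: |tanh(x) - tanh(y)| <= |x - y| for all x, y,
# so Assumption 1 holds with kappa_i = 1 for f_i = tanh.
pairs = [(-2.0, 1.5), (0.0, 0.3), (5.0, -5.0), (0.1, 0.1001)]
for x, y in pairs:
    assert abs(math.tanh(x) - math.tanh(y)) <= abs(x - y)
print("tanh satisfies Assumption 1 with kappa = 1 on the sampled pairs")
```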

### Assumption 2

\(\lambda _{+}(-D+|\bar{A}|K)= \sup_{|x|=1, x\in R^{n}_{+}}x^{T}(-D+|\bar{A}|K)x>0\).
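Since \(\lambda _{+}\) is a supremum over the nonnegative part of the unit sphere rather than an ordinary eigenvalue, it can be convenient to estimate it numerically. The sketch below is an illustration only; the matrices are hypothetical, not taken from the paper. It uses crude Monte-Carlo sampling of nonnegative unit directions:

```python
import math
import random

def lambda_plus(M, samples=100_000, seed=0):
    # Monte-Carlo estimate of lambda_+(M) = sup_{|x|=1, x in R^n_+} x^T M x.
    # Normalized nonnegative Gaussian vectors cover the nonnegative part of
    # the unit sphere, so the running maximum approaches the supremum.
    rng = random.Random(seed)
    n = len(M)
    best = -math.inf
    for _ in range(samples):
        x = [abs(rng.gauss(0.0, 1.0)) for _ in range(n)]
        r = math.sqrt(sum(v * v for v in x))
        x = [v / r for v in x]
        q = sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        best = max(best, q)
    return best

# Hypothetical parameters: D = I and |A-bar|K = [[2, 1], [1, 2]],
# so M = -D + |A-bar|K = [[1, 1], [1, 1]].
M = [[1.0, 1.0], [1.0, 1.0]]
print(lambda_plus(M))  # close to 2, the sup attained at x = (1, 1)/sqrt(2)
```

Here the estimate is positive, so Assumption 2 would hold for these illustrative matrices; the stabilization condition then requires \(\sigma ^{2}(1-\phi )>2\lambda _{+}\).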

### Remark 1

If \(\lambda _{+}(-D+|\bar{A}|K)<0\), calculating the derivative of \(|x(t)|^{2}\) along networks (1) gives

\(\frac{\mathrm{d}|x(t)|^{2}}{\mathrm{d}t}=2x^{T}(t) \bigl[-Dx(t)+Af \bigl(x(t) \bigr) \bigr]\leq 2 \bigl([x]^{+} \bigr)^{T} \bigl(-D+|\bar{A}|K \bigr)[x]^{+}\leq 2\lambda _{+} \bigl(-D+|\bar{A}|K \bigr)\bigl|x(t)\bigr|^{2},\)

where \([x]^{+}=(|x_{1}|,|x_{2}|,\ldots ,|x_{n}|)^{T}\), and \(\bar{a} _{ii}=\max \{a_{ii},0\}\), \(\bar{a}_{ij}=|a_{ij}|\). This implies networks (1) are stable.

## Main results

The main results will be presented in this section.

### Theorem 3.1

*Let Assumption* 1 *and* \(2\lambda _{+}(-D+|\bar{A}|K)<\sigma ^{2}(1-\phi )\) *hold*. *System* (2) *can be stabilized by the stochastic intermittent control* \(\sigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) *provided* \(\tau <\tau _{0}\), *where* \(\tau _{0}\) *is the solution of* (23).

### Proof

We divide the proof into three steps for convenience. Step 1 shows the stability of system (3). Step 2 performs the moment estimation on \(E|x(t)-y(t)|^{p}\). Step 3 shows the stabilization by stochastic intermittent control with time delay.

*Step* 1. We will show that system (3) is *p*th moment exponentially stable if \(p\in (0,1-2\lambda _{+}(-D+|\bar{A}|K)\sigma ^{-2}(1-\phi )^{-1})\) is sufficiently small. Applying Itô’s formula to \(V=|y(t)|^{p}\) yields

By similar computations in Remark 1, we have

For convenience, denote \(\lambda _{1}=\lambda _{+}(-D+|\bar{A}|K)\), then (4) can be written as

It follows from the stochastic comparison principle that

In particular, taking \(t=s_{0}\),

It is obvious that \(h(t)=0\), \(s_{0}< t\leq t_{1}\), then we obtain

Applying Itô’s formula to \(V_{1}(t)=e^{p\sigma B(t)}\) yields

Since \(B(t)\sim N(0,t)\), the Gaussian moment generating function shows that \(EV_{1}=Ee^{p\sigma B(t)}=e^{\frac{1}{2}p^{2}\sigma ^{2}t}\). Taking expectation on both sides of (5) yields

In the same way, using (6) yields

Denote \(\alpha _{1}=\lambda _{1}p+\frac{1}{2}\sigma ^{2}p(p-1)\), \(\alpha _{2}=\lambda _{1}p\). For \(t_{i}\leq t< s_{i}\), we can readily verify that

It follows from the definition of *ϕ* that, for any \(\varepsilon >0\), there exists a positive integer \(N>0\) such that \(\phi _{k}<\phi + \varepsilon \) for any \(k>N\). Consequently, for \(t_{i}\leq t< s_{i}\), we have

where *C* is a constant. Similarly, for \(s_{i}\leq t< t_{i+1}\), we get

Combining (7) and (8), for every \(\varepsilon >0\),

Letting \(\varepsilon \rightarrow 0\) yields

Then we can claim that there exists a positive real number \(T_{1}>0\) such that, for any \(t-t_{0}>T_{1}\),

*Step* 2. The main aim now is to estimate the *p*th moment for solution process \(x(t)\) and the difference process \(x(t)-y(t)\) between networks (2) and (3). Applying Itô’s formula to \(|x(t)|^{2}\) yields

Taking expectations on both sides, we have

Taking supremum on \([t_{0}-\tau , t]\) gives

Note that the right-hand side of the above inequality is monotonically increasing for \(t\geq t_{0}\), so we obtain

The Gronwall inequality then gives

By the Burkholder–Davis–Gundy inequality and the Hölder inequality, we obtain

Using (11) yields

where \(\lambda _{2}=\sup_{|x|=1}x^{T}(D^{2}+K^{T}(x)A^{T}AK(x)-0.5DAK(x)-0.5K ^{T}(x)A^{T}D)x\), with \(K(x)=\operatorname{diag} \bigl(f(x_{1})/x_{1},f(x_{2})/x_{2},\ldots ,f(x_{n})/x_{n} \bigr)\). It follows from the Hölder inequality that

where \(F_{1}(\tau ,p)= [4\tau (\lambda ^{2}_{2}\tau \exp \{(2\lambda _{1}+\sigma ^{2})\tau \}+4\sigma ^{2}) ]^{\frac{p}{2}}\). Now we estimate the expectation \(E(\sup_{t_{0}\leq u\leq t}|x(u)|^{2})\). The elementary inequality gives

Taking supremum on \([t_{0}, t]\) and expectations on both sides yields

Together with (11), we have

The Hölder inequality then gives

Next we estimate the *p*th moment difference between networks (2) and (3). By Itô’s formula and the Hölder inequality, we get

Substituting (12) into (14) gives

It follows from the Gronwall inequality that

It follows from the Hölder inequality that

where

*Step* 3. Let \(x(t)=x(t,t_{0},\zeta )\), \(y(t_{0}+\tau +T)=y (t_{0}+\tau +T,t_{0}+\tau ,x(t_{0}+\tau ) )\) for simplicity. Taking \(T=\max \{T_{1}, \frac{2}{\gamma }\log (\frac{2^{2.5p}}{\epsilon })\}\) with \(\epsilon \in (0,1)\), assertion (10) gives that

The elementary inequality \((x+y)^{p}\leq 2^{p}(x^{p}+y^{p})\) for \(x,y\geq 0\) yields

It follows from (17) and (18) that

Using the elementary inequality and (19),

Note that, for given \(\epsilon \in (0,1)\), \(G(\tau ,\epsilon ,p,T)\) is a monotonically increasing function of *τ*, \(G(0,\epsilon ,p,T)=\epsilon <1\), and \(G(\tau ,\epsilon ,p,T)\rightarrow \infty \) as \(\tau \rightarrow \infty \). Now we claim that there exists a unique solution \(\tau _{0}\) to the following equation:

Since the left-hand side of (23), regarded as a function of the independent variable *τ*, is monotonically increasing and equals *ϵ* at \(\tau =0\), equation (23) has a unique solution \(\tau _{0}>0\). Fix \(\tau \in (0,\tau _{0})\), and let \(\zeta \in \mathcal{L}^{2}_{\mathcal{F} _{t_{0}}} (\varOmega ,C([-\tau ,0]; R^{n}) )\) be an arbitrary initial value. From \(\tau <\tau _{0}\) we can verify that

and therefore we can find a suitable constant \(c>0\) such that

We obtain from (21) that

Next we discuss the solution \(x(t)\) for \(t\geq t_{0}+2\tau +T\). There is a unique solution to networks (2) for \(t>t_{0}-\tau \), so we can regard \(x(t_{0}+2\tau +T)\) as the initial value of \(x(t)\) at \(t=t_{0}+2\tau +T\). By (24) we have

Analyzing (24) and the equation above, we get

After repeated iteration we have

Moreover, we can verify that the assertion also holds for \(n=0\). Using (13) and (25), we have

where \(\varXi =T+2\tau \) and \(F_{2}\) is defined by (13). Using Markov’s inequality and (26),

Using the Borel–Cantelli lemma, we can verify that, for almost every *ω*, there exists an integer \(N_{0}=N_{0}(\omega )\) such that

That is,

For almost every *ω*, we get

as desired. □

### Remark 2

Shen and Wang studied the stabilization of recurrent neural networks by continuous noise (see [26]). Compared to the existing results, we show that neural networks can be stabilized by intermittent noise with time delay.

Step 1 of Theorem 3.1 implies a sufficient condition on stabilization by stochastic intermittent control without delay observations as follows.

### Corollary 3.2

*Let Assumption* 1 *and* \(2\lambda _{+}(-D+|\bar{A}|K)<\sigma ^{2}(1-\phi )\) *hold*. *Networks* (2) *can be stabilized by the stochastic intermittent control* \(h(t)\varSigma x(t)\,\mathrm{d}B(t)\).

### Remark 3

Guo and Mao have discussed almost sure stabilization of delay differential systems by delay feedback control in [22]. Nevertheless, delay neural networks have distinct characteristics, which we make full use of to derive the stabilization condition. In this study the neural networks are stabilized by aperiodically intermittent noise based on delay observations. Compared with the results in [22], we further integrate the intermittent control strategy.

When \(\phi =0\), the white noise is continuous, and Theorem 3.1 implies a criterion on stabilization by delay feedback control.

### Corollary 3.3

*Let Assumption* 1 *and* \(2\lambda _{+}(-D+|\bar{A}|K)<\sigma ^{2}\) *hold*. *Networks* (2) *can be stabilized by the delay feedback control* \(\varSigma x(t-\tau )\,\mathrm{d}B(t)\) *provided* \(\tau <\tau _{0}\), *where* \(\tau _{0}\) *is the solution to* (23).

## Numerical example

A numerical example is presented in this section to verify the effectiveness of the theorem above.

### Example 1

Consider the following two-neuron neural networks:

where \(x(t)= (x_{1}(t),x_{2}(t) )^{T}\), \(f(x)=\tanh x\), and the other parameters in networks (27) are selected as follows:

Figure 1(a) shows that networks (27) are unstable. The controller \(\varSigma h(t)x(t-\tau )\,\mathrm{d}B(t)\) is designed. That is,

where \(\tau =0.015\), \(\varSigma =0.2I\).

We take the aperiodic controlled intervals \([0,0.1]\cup [2.0,2.1]\cup [4.0,4.1] \cup [6.0,6.1]\cup [8.0,8.1]\cup \cdots \), or \([0,0.33]\cup [1.00,1.34] \cup [2.00,2.33]\cup [3.00,3.33]\cup [4.00,4.33]\cup [5.00, 5.34] \cup [6.00,6.33]\cup [7.00,7.33]\cup [8.0,8.33]\cup [9.00,9.34]\cup \cdots \), or \([0,0.9]\cup [2.0,3.1]\cup [4.0,5.1]\cup [6.0,6.9]\cup [8.0,9.0]\cup \cdots \); the intermittent rates are 0.05, 0.33, and 0.5, respectively. We draw four figures with Matlab. The networks are stabilized by continuous white noise with delay observations when the time delay is 0.015 (Fig. 1b), 0.03 (Fig. 2a), or 0.06 (Fig. 2b), while they are not stabilized when the time delay is 0.12 (Fig. 3a). A larger time delay is preferable because the states are observed less frequently; however, the networks cannot be stabilized by continuous noise once the time delay is too large. Next we fix the time delay \(\tau =0.03\) and switch the control strategy from continuous noise to intermittent noise, varying the intermittent rate *ϕ* (\(\phi =0.05\) (Fig. 3b), 0.33 (Fig. 4a), 0.5 (Fig. 4b), respectively). The networks are stabilized when \(\phi =0.33\) or \(\phi =0.5\), while they are unstable when \(\phi =0.05\). That is, the networks cannot be stabilized if the intermittent rate is too small. The networks perform best when \(\phi =0.5\).
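As a sanity check (not from the paper), the quoted rates 0.05, 0.33, and 0.5 coincide with the fraction of \([0,10]\) covered by the corresponding controlled intervals:

```python
def control_fraction(work, horizon):
    # fraction of [0, horizon] covered by the working (controlled) intervals
    return sum(min(b, horizon) - a for a, b in work if a < horizon) / horizon

sets = {
    "0.05": [(0, 0.1), (2.0, 2.1), (4.0, 4.1), (6.0, 6.1), (8.0, 8.1)],
    "0.33": [(0, 0.33), (1.00, 1.34), (2.00, 2.33), (3.00, 3.33), (4.00, 4.33),
             (5.00, 5.34), (6.00, 6.33), (7.00, 7.33), (8.0, 8.33), (9.00, 9.34)],
    "0.5":  [(0, 0.9), (2.0, 3.1), (4.0, 5.1), (6.0, 6.9), (8.0, 9.0)],
}
for label, work in sets.items():
    print(label, round(control_fraction(work, 10.0), 2))
```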

The numerical example shows that the proposed methods are practical and efficient. We observe the states less frequently and cut the costs by reducing the controlled time compared with the algorithm proposed by Mao (see [24]). The neural networks are stabilized by aperiodic intermittent control with delay observations.

## Conclusions

In this study, we have investigated the exponential stabilization for neural networks by aperiodically intermittent control based on delay observations. First of all, by using the stochastic comparison principle, Itô’s formula, and the sequence analysis technique, we show that the unstable neural networks can be stabilized by aperiodically intermittent noise. Secondly, in terms of the characteristic of neural networks, we show that the networks are exponentially stabilized based on delay observations. Finally, a numerical example is provided to illustrate the superiority and effectiveness of the proposed approaches.

## References

1. Zeng, Z., Wang, J.: Associative memories based on continuous-time cellular neural networks designed using space-invariant cloning templates. Neural Netw. **22**(5–6), 651–657 (2009)
2. Cao, Y., Samidurai, R., Sriraman, R.: Robust passivity analysis for uncertain neural networks with leakage delay and additive time-varying delays by using general activation function. Math. Comput. Simul. **155**, 57–77 (2019)
3. Samidurai, R., Sriraman, R., Cao, J., Tu, Z.: Nonfragile stabilization for uncertain system with interval time-varying delays via a new double integral inequality. Math. Methods Appl. Sci. **41**, 6272–6287 (2018)
4. Cao, J., Rakkiyappan, R., Maheswari, K., Chandrasekar, A.: Exponential \(H_{\infty }\) filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci. China Technol. Sci. **59**, 387–402 (2016)
5. Guo, Z., Wang, J., Yan, Z.: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. **48**, 158–172 (2013)
6. Zhu, E., Yin, G.: Stability in distribution of stochastic delay recurrent neural networks with Markovian switching. Neural Comput. Appl. **27**, 2141–2151 (2016)
7. Zhang, H., Wang, Y.: Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. **19**, 366–370 (2008)
8. Lei, L., Cao, J., Qian, C.: *p*th moment exponential input-to-state stability of delayed recurrent neural networks with Markovian switching via vector Lyapunov function. IEEE Trans. Neural Netw. Learn. Syst. **99**, 1–12 (2017)
9. Mohamad, S., Gopalsamy, K.: Exponential stability of continuous-time and discrete-time cellular neural networks with delays. Appl. Math. Comput. **135**, 17–38 (2003)
10. Wu, A., Zeng, Z.: Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. **23**, 1919–1929 (2012)
11. Zhang, G., Shen, Y.: Exponential stabilization of memristor-based chaotic neural networks with time-varying delays via intermittent control. IEEE Trans. Neural Netw. Learn. Syst. **26**, 1431–1441 (2015)
12. Zhao, H., Cai, G.: Exponential Synchronization of Complex Delayed Dynamical Networks with Uncertain Parameters via Intermittent Control. Springer, Berlin (2015)
13. Yu, J., Hu, C., Jiang, H., Teng, Z.: Exponential synchronization of Cohen–Grossberg neural networks via periodically intermittent control. Neurocomputing **74**, 1776–1782 (2011)
14. Gao, J., Cao, J.: Aperiodically intermittent synchronization for switching complex networks dependent on topology structure. Adv. Differ. Equ. **2017**, 244 (2017)
15. Feng, Y., Yang, X., Song, Q., Cao, J.: Synchronization of memristive neural networks with mixed delays via quantized intermittent control. Appl. Math. Comput. **339**, 874–887 (2018)
16. Lu, J., Ho, D., Wang, Z.: Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers. IEEE Trans. Neural Netw. **20**, 1617–1629 (2009)
17. Lu, J., Wang, Z., Cao, J., Ho, D.: Pinning impulsive stabilization of nonlinear dynamical networks with time-varying delay. Int. J. Bifurc. Chaos **22**, 1250176 (2012)
18. Liu, X., Park, J., Jiang, N., Cao, J.: Nonsmooth finite-time stabilization of neural networks with discontinuous activations. Neural Netw. **52**, 25–32 (2014)
19. Wang, L., Shen, Y., Zhang, G.: Finite-time stabilization and adaptive control of memristor-based delayed neural networks. IEEE Trans. Neural Netw. **28**, 2649–2659 (2017)
20. Wu, F., Hu, S.: Suppression and stabilisation of noise. Int. J. Control **82**, 2150–2157 (2009)
21. Liu, L., Shen, Y.: Noise suppresses explosive solutions of differential systems whose coefficients obey the polynomial growth condition. Automatica **48**, 619–624 (2012)
22. Guo, Q., Mao, X., Yue, R.: Almost sure exponential stability of stochastic differential delay equations. SIAM J. Control Optim. **54**, 1219–1233 (2016)
23. Zhu, S., Shen, Y., Chen, G.: Noise suppress or express exponential growth for hybrid Hopfield neural networks. Phys. Lett. A **374**, 2035–2043 (2010)
24. Mao, X.: Almost sure exponential stabilization by discrete-time stochastic feedback control. IEEE Trans. Autom. Control **61**, 1619–1624 (2016)
25. Mao, X., Marion, G., Renshaw, E.: Environmental noise suppresses explosion in population dynamics. Stoch. Process. Appl. **97**, 95–110 (2002)
26. Shen, Y., Wang, J.: Noise-induced stabilization of the recurrent neural networks with mixed time-varying delays and Markovian-switching parameters. IEEE Trans. Neural Netw. **18**, 1457–1462 (2007)
27. Russo, G., Shorten, R.: On noise-induced synchronization and consensus. arXiv:1602.06467 (2016)
28. Ma, L., Wang, Z., Fan, Q., Liu, Y.: Consensus control of stochastic multi-agent systems: a survey. Sci. China Inf. Sci. **60**, 5–19 (2017)
29. Liao, X., Mao, X.: Exponential stability and instability of stochastic neural networks. Stoch. Anal. Appl. **14**, 165–185 (1996)
30. Sun, J.: Delay-dependent stability criteria for time-delay chaotic systems via time-delay feedback control. Chaos Solitons Fractals **21**, 143–150 (2004)
31. Zhu, Q., Zhang, Q.: *p*th moment exponential stabilization of hybrid stochastic differential equations by feedback controls based on discrete-time state observations with a time delay. IET Control Theory Appl. **11**, 1992–2003 (2017)
32. Chen, W., Xu, S., Zou, Y.: Stabilization of hybrid neutral stochastic differential delay equations by delay feedback control. Syst. Control Lett. **88**, 1–13 (2016)
33. Mao, X., Lam, J., Huang, L.: Stabilisation of hybrid stochastic differential equations by delay feedback control. Syst. Control Lett. **57**, 927–935 (2008)
34. Maharajan, C., Raja, R., Cao, J., Ravi, J., Rajchakit, G.: Global exponential stability of Markovian jumping stochastic impulsive uncertain BAM neural networks with leakage, mixed time delays, and *α*-inverse Hölder activation functions. Adv. Differ. Equ. **2018**, 113 (2018)
35. Wang, J., Xu, C., Chen, M.Z.Q., Feng, J., Chen, G.: Stochastic feedback coupling synchronization of networked harmonic oscillators. Automatica **87**, 404–411 (2018)
36. Gawthrop, P.: Intermittent control: a computational theory of human control. Biol. Cybern. **104**, 31–51 (2011)
37. Liu, X., Chen, T.: Synchronization of nonlinear coupled networks via aperiodically intermittent pinning control. IEEE Trans. Neural Netw. Learn. Syst. **26**, 113–126 (2015)
38. Liu, X., Chen, T.: Cluster synchronization in directed networks via intermittent pinning control. IEEE Trans. Neural Netw. **22**, 1009–1020 (2011)
39. Lu, J., Ho, D., Cao, J.: A unified synchronization criterion for impulsive dynamical networks. Automatica **46**, 1215–1221 (2010)
40. Cai, S., Hao, J., He, Q., Liu, Z.: Exponential synchronization of complex delayed dynamical networks via pinning periodically intermittent control. Phys. Lett. A **375**, 1965–1971 (2011)
41. Mao, X., Yuan, C.: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)

## Acknowledgements

The authors would like to express their sincere thanks to Professor Jifeng Chu for his great and consistent support, without which this work would not have been possible.

## Funding

This research work was supported by the Fundamental Research Funds for the Central Universities (Grant No. 2018B19914); the National Natural Science Foundation of China (Grant No. 61773152); the China Postdoctoral Science Foundation (Grant Nos. 2016M601698, 2017T100318); the Jiangsu Province Postdoctoral Science Foundation (Grant No. 1701078B); and the Qing Lan Project of Jiangsu Province, China.

## Author information

### Contributions

The authors have made the same contribution. All authors read and approved the final manuscript.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.

## Additional information

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

He, X., Liu, L. & Feng, L. Almost sure exponential stabilization of neural networks by aperiodically intermittent control based on delay observations.
*Adv Differ Equ* **2019**, 353 (2019). https://doi.org/10.1186/s13662-019-2260-8


### Keywords

- Exponential stabilization
- Intermittent control
- Itô’s formula
- Delay observations