
Fractional Halanay inequality of order between one and two and application to neural network systems


We extend the (integer-order) Halanay inequality with distributed delay to the fractional-order case of order between one and two. This case is more delicate than the case of order between zero and one because of several difficulties explained in this paper; these difficulties are encountered, in fact, in general differential equations of such order. We show that solutions decay to zero as a power function when the delay kernel satisfies a general (integral) condition, and we provide a large class of admissible kernels fulfilling this condition. The even more complicated nonlinear case is also addressed, and we obtain a local stability result of power type. Finally, we give an application to a problem arising in neural network theory and an explicit example.


The Halanay inequality is one of the most important inequalities used to prove the boundedness or stability of solutions of certain functional differential equations. It contains a dissipative term, which tends to stabilize the system exponentially, and a delayed term, which, on the contrary, usually has a destabilizing character. It is known that when the dissipation coefficient dominates the coefficient of the discrete delayed term, solutions decay exponentially. Namely, we have the following [11]:

Lemma 1

Assume that \(w(t)\) is a nonnegative solution of

$$ w^{\prime }(t)\leq -Aw(t)+B\sup_{t-\tau \leq s\leq t}w(s),\quad \tau >0, t\geq a. $$

If \(0< B< A\), then there exist \(M>0\) and \(\alpha >0\) such that

$$ w(t)\leq Me^{-\alpha (t-a)},\quad t\geq a. $$
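To see the lemma in action, the following minimal sketch (with illustrative constants \(A=2\), \(B=1\), \(\tau =1\), not taken from the paper) integrates the worst case \(w^{\prime }(t)=-Aw(t)+B\sup_{t-\tau \leq s\leq t}w(s)\) by forward Euler and exhibits the predicted exponential decay.

```python
import math

def simulate_halanay(A=2.0, B=1.0, tau=1.0, dt=0.01, T=20.0):
    """Forward-Euler integration of the worst case of the Halanay inequality,
    w'(t) = -A*w(t) + B * sup_{t-tau <= s <= t} w(s), with w = 1 on [-tau, 0]."""
    n_delay = int(round(tau / dt))
    history = [1.0] * (n_delay + 1)   # samples of w on the sliding window [t - tau, t]
    w = 1.0
    for _ in range(int(round(T / dt))):
        w = w + dt * (-A * w + B * max(history))
        history.append(w)
        history.pop(0)                # slide the delay window forward
    return w

# Since 0 < B < A, Lemma 1 predicts exponential decay, and w(20) is already tiny.
final = simulate_halanay()
```

With these constants the decay rate \(\alpha \) solves \(\alpha =A-Be^{\alpha \tau }\) (roughly \(0.44\)), so by \(t=20\) the solution has dropped by several orders of magnitude.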

This inequality has been used in many engineering applications and extended to the variable delay and distributed delay cases [3, 13, 28, 32, 33, 38]:

$$ w^{\prime }(t)\leq -A(t)w(t)+B(t) \int _{0}^{\infty }k(s)w(t-s) \,ds,\quad t \geq 0. $$

It has been proved that solutions decay exponentially for kernels satisfying

$$ \int _{0}^{\infty }e^{\beta s}k(s)\,ds< \infty $$

for some \(\beta >0\), provided that

$$ B(t) \int _{0}^{\infty }k(s) \,ds\leq A(t)-b,\quad b>0,\ t\geq 0. $$

Artificial neural networks (ANNs) are one of the many products of artificial intelligence. They have been applied successfully in many areas such as combinatorial optimization, cryptography, parallel computing, signal theory, image processing, biology, biomedicine, epidemiology, polymer composites, and geology [10, 12, 14,15,16,17, 20, 21, 27, 29, 36, 40]. In particular, in petroleum engineering, the characterization of a hydrocarbon reservoir depends on many static and dynamic parameters such as permeability, porosity, fluid saturation, and pressure in the reservoir. The lack of accuracy or the unavailability of certain parameters negatively affects the oil production performance. Unlike conventional methods, ANNs are able to connect input data to output without requiring prior understanding of the fluid flow or the medium. They are also robust enough to deal with noisy, distorted, fuzzy, and even incomplete data [1, 4, 19, 31].

For materials and processes that exhibit memory and hereditary effects, it has been shown that fractional derivatives describe the phenomena better [2, 5,6,7,8, 23].

Most of the existing results are concerned with the case of a fractional order between 0 and 1 and for the case of discrete delays only. Unfortunately, the arguments there do not work for the present case. For general fractional systems of order between zero and one, several stability results (including the Mittag–Leffler stability) have been obtained with explicit decay rates [7, 8, 13, 23,24,25,26, 35,36,37, 39, 43].

The stability for the linear system

$$ D^{\alpha }x(t)=Ax(t),\quad t>t_{0}, $$

with \(1<\alpha <2\), has been treated in [23, 42]. The stability in the cases of Riemann–Liouville and Caputo fractional derivatives has been established under the condition \(\vert \arg (\mathrm{spec}(A)) \vert > \alpha \pi /2\). In fact, the stability is of type \(t^{-\alpha -1}\) in the case of Riemann–Liouville fractional derivative and of type \(t^{-\alpha +1}\) in the case of Caputo fractional derivative.
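The spectral condition can be checked directly. The sketch below (the matrices are chosen for illustration only and are not from the cited works) tests \(\vert \arg (\mathrm{spec}(A)) \vert > \alpha \pi /2\) for a \(2\times 2\) matrix via its characteristic polynomial.

```python
import cmath
import math

def fractional_stable_2x2(A, alpha):
    """Check |arg(lambda)| > alpha*pi/2 for both eigenvalues of a 2x2 matrix A,
    computed from the characteristic polynomial lambda^2 - tr*lambda + det = 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    eigenvalues = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    return all(abs(cmath.phase(lam)) > alpha * math.pi / 2.0 for lam in eigenvalues)

# Spectrum {-1, -2}: |arg| = pi > 3*pi/4, so the condition holds for alpha = 3/2.
print(fractional_stable_2x2([[0.0, 1.0], [-2.0, -3.0]], 1.5))  # True
# Spectrum {i, -i}: |arg| = pi/2 < 3*pi/4, so the condition fails.
print(fractional_stable_2x2([[0.0, 1.0], [-1.0, 0.0]], 1.5))   # False
```

Note that, unlike the integer-order case, purely imaginary eigenvalues are not borderline here: they violate the condition as soon as \(\alpha >1\).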

For the equation

$$ D^{\alpha }x(t)=Ax(t)+B(t)x(t), \quad t>t_{0}, $$

the zero solution is proved to be stable [41] if, in addition,

$$ \int _{t_{0}}^{\infty } \bigl\Vert B(t) \bigr\Vert \,dt $$

is bounded, for both types of fractional derivatives. The stability is asymptotic if \(\Vert B(t) \Vert =O((t-t_{0})^{\gamma })\) with \(-1<\gamma <1-\alpha \) or if \(\Vert B(t) \Vert \) is bounded. The authors in [36] assume that \(\Vert B(t) \Vert \) is nondecreasing and \(\Vert B(t) \Vert =O((t-t_{0})^{\theta })\) with \(\theta <-\alpha \).

The perturbed equation

$$ D^{\alpha }x(t)=Ax(t)+f\bigl(x(t)\bigr),\quad t>t_{0}, $$

has been studied in [8, 24, 42], where asymptotic stability results are proved if

$$ \lim_{ \Vert x \Vert \rightarrow 0}\frac{ \Vert f(x(t)) \Vert }{ \Vert x(t) \Vert }=0,\quad t\geq t_{0}, $$

in addition to a condition on the spectrum of A.

We draw the reader's attention to the work [22], where the authors discussed a similar (control) problem and proved a “global” asymptotic stability result after noticing that the previous results were of “local” character because of condition (1).

The nonautonomous system

$$ D^{\alpha }x(t)=Ax(t)+f\bigl(t,x(t)\bigr),\quad t>t_{0}, $$

has been the subject of investigation in [18, 30, 42]. Asymptotic stability results have been established under the following conditions: \(f(t,x(t))\) is Lipschitz continuous, \(\Vert f(t,x(t)) \Vert \leq \gamma (t) \Vert x(t) \Vert \) with bounded \(\int _{t _{0}}^{\infty }\gamma (t)\,dt\), and

$$ \lim_{ \Vert x \Vert \rightarrow 0}\frac{ \Vert f(t,x(t)) \Vert }{ \Vert x(t) \Vert }=0,\quad t\geq t_{0}. $$

Because of space limitations and our exclusive focus on the case \(1<\alpha <2\), several references on the case \(0<\alpha <1\) are not reported here. We note that the arguments previously used for the case \(0<\alpha <1\) are not valid for \(1<\alpha <2\). In particular, the use of the “one-sided” chain rule formula for fractional derivatives leads to uncontrollable terms and appears useless. We opted for the variation-of-parameters formula, but even in this framework we faced considerable difficulties. The main difficulties were related to the sign of the involved Mittag–Leffler functions and to the uniform boundedness of a convolution term. The formulas and properties found in the literature were not sufficient to overcome these difficulties, so we were forced to prove a new integral inequality, which may be useful in other contexts as well.

Our objective here is two-fold: we extend the distributed Halanay inequality from the integer-order case to the fractional-order case (\(1< \alpha <2\)) and from the linear case to the nonlinear case. We impose a general condition on the kernels and provide a class of admissible kernels, as an example, showing that this condition can be met. The decay we find is of power type. Once established, our results will be applied to a fractional neural network system of Hopfield type. Namely, we consider (discrete and distributed delayed) systems of the form

$$ \textstyle\begin{cases} D_{C}^{\alpha }x_{i}(t) = -c_{i}x_{i}(t)+\sum_{j=1}^{n}a_{ij}f _{j}(x_{j}(t))+\sum_{j=1}^{n}b_{ij}g_{j}(x_{j}(t-\tau )) \\ \phantom{D_{C}^{\alpha }x_{i}(t) =}{}+\sum_{j=1}^{n}d_{ij}\int _{0}^{\infty }k_{j}(s) h_{j}(x_{j}(t-s)) \,ds+I_{i},\quad t>0, \\ x_{i}(t)=\chi _{i}(t), \quad t\leq 0, \end{cases} $$

for \(i=1,2,\dots,n\), \(1<\alpha <2\), where

n is the number of units in the network,
\(x_{i}\) is the state of the ith neuron at time t,
\(c_{i}>0\) are the passive decay rates,
\(a_{ij},b_{ij},d_{ij}\) are the connection weight matrices,
\(I_{i}\) are external constant inputs,
\(f_{j},g_{j},h_{j}\) are the signal transmission functions (activation functions),
\(k_{j}\) is the delay feedback (delay kernel function),
\(\tau >0\) is the transmission delay, and
\(\chi _{i}\) is the prehistory of the ith state.

Our argument is flexible and may be applied to more general systems than this one. The next section contains some preliminaries. In Sect. 3, we extend the Halanay inequality to the order \(1<\alpha <2\) and provide a large class of kernels for which our result applies. The nonlinear case is treated in Sect. 4. An application to a problem arising in neural network theory is given in Sect. 5 together with a numerical example.


In this section, we give the definitions of the fractional integral and fractional derivative (of Riemann–Liouville and Caputo types) and the Mittag–Leffler functions.

Definition 2

The Riemann–Liouville fractional integral of order \(\alpha >0\) is defined by

$$ I^{\alpha }f(t)=\frac{1}{\varGamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}f(s)\,ds,\quad \alpha >0, $$

for any measurable function f, provided that the right-hand side exists. Here \(\varGamma (\alpha )\) is the usual gamma function.

Definition 3

The fractional derivative of order \(\gamma >0\) in the sense of Caputo is defined by

$$ D_{C}^{\gamma }f(t)=\frac{1}{\varGamma (n-\gamma )} \int _{0}^{t}(t- \tau )^{n-\gamma -1}f^{{(n)}}( \tau )\,d\tau,\quad n=[\gamma ]+1, \gamma >0, $$

whereas the fractional derivative of order \(\gamma \) in the sense of Riemann–Liouville is defined by

$$ D^{\gamma }f(t)=\frac{1}{\varGamma (n-\gamma )} \biggl( \frac{d}{dt} \biggr) ^{n} \int _{0}^{t}(t-\tau )^{n-\gamma -1}f(\tau )\,d \tau, \quad n=[ \gamma ]+1, \gamma >0, $$

provided that the integrals exist.

The one-parameter and two-parameter Mittag–Leffler functions \(E_{ \alpha }(z)\) and \(E_{\alpha,\beta }(z)\) are defined by

$$ E_{\alpha }(z):=\sum_{n=0}^{\infty } \frac{z^{n}}{\varGamma (\alpha n+1)}, \quad\Re (\alpha )>0, $$

and

$$ E_{\alpha,\beta }(z):=\sum_{n=0}^{\infty } \frac{z^{n}}{\varGamma ( \alpha n+\beta )},\quad \Re (\alpha )>0, \Re (\beta )>0, $$

respectively.
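For numerical experiments, \(E_{\alpha,\beta }(z)\) can be approximated by truncating the defining series; the sketch below (an illustration, adequate only for moderate \(\vert z\vert \)) is checked against the classical special cases \(E_{1,1}(z)=e^{z}\) and \(E_{1,2}(z)=(e^{z}-1)/z\).

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=100):
    """Truncated series for E_{alpha,beta}(z).  Adequate for moderate |z|;
    n_terms must stay small enough that gamma(alpha*n + beta) does not overflow."""
    return sum(z ** n / math.gamma(alpha * n + beta) for n in range(n_terms))

# Classical special cases used as sanity checks:
e11 = mittag_leffler(1, 1, 1.0)   # E_{1,1}(z) = e^z
e12 = mittag_leffler(1, 2, 1.0)   # E_{1,2}(z) = (e^z - 1)/z
```

For large arguments the truncated series is useless and one would switch to the asymptotic expansions quoted later from [34]; the sketch is only meant for small-scale checks.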


Fractional distributed Halanay inequality

Here we extend the standard (integer-order) Halanay inequality to the fractional case \(1<\alpha <2\). We prove that the decay is of power type. Part of the difficulties encountered here is due to the fact that the properties of the Mittag–Leffler functions for \(1<\alpha <2\) are different from those for \(0<\alpha <1\), and therefore the methods used in the case \(0<\alpha <1\) are not applicable anymore.

Theorem 4

Let \(u(t)\) be a nonnegative solution of

$$ \textstyle\begin{cases} D_{C}^{\alpha }u(t)\leq -au(t)+\int _{0}^{t}k(t-s)u(s) \,ds,\quad 1< \alpha < 2,\ t>0, \\ u(0)=u_{0},\qquad u^{\prime }(0)=u_{1}, \end{cases} $$

where \(a>0\), and k is a nonnegative summable function satisfying

$$ t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s- \sigma )\sigma ^{1-\alpha }\,d \sigma \biggr) \,ds< 1,\quad t>0. $$

Then there exists a positive constant A such that

$$ u(t)\leq A/t^{\alpha -1},\quad t>0. $$


We compare solutions of (2) to those of

$$ \textstyle\begin{cases} D_{C}^{\alpha }w(t)=-aw(t)+\int _{0}^{t}k(t-s)w(s) \,ds,\quad 1< \alpha < 2,\ t>0, \\ w(0)=w_{0}=u_{0},\qquad w^{\prime }(0)=w_{1}=u_{1}. \end{cases} $$

Applying the Laplace transform to (4), we obtain the variation-of-parameters formula (see [42] and [43])

$$\begin{aligned} w(t)={}&E_{\alpha }\bigl(-at^{\alpha }\bigr)w_{0}+tE_{\alpha,2} \bigl(-at^{\alpha }\bigr)w _{1} \\ &{}+ \int _{0}^{t}(t-s)^{\alpha -1}E_{\alpha,\alpha } \bigl(-a(t-s)^{\alpha }\bigr) \biggl( \int _{0}^{s}k(s-\sigma )w(\sigma )\,d\sigma \biggr) \,ds,\quad t \geq 0. \end{aligned}$$

In view of the boundedness of \(E_{\alpha,\beta }(-at^{ \alpha })\), \(0<\alpha <2\), \(\beta >0\), \(a\geq 0\), \(t\geq 0\) ([34, Thms. 1.4 and 1.6, pp. 33, 34]), and the estimate

$$ \bigl\vert E_{\alpha,\beta }\bigl(-at^{\alpha }\bigr) \bigr\vert \leq M( \alpha,\beta )/\bigl(at^{\alpha }\bigr),\quad t>0, $$

for some \(M(\alpha,\beta )>0\), we may write

$$\begin{aligned} w(t)\leq{}& \bigl\vert E_{\alpha }\bigl(-at^{\alpha }\bigr) \bigr\vert w_{0}+M _{1}(\alpha,a)t^{1-\alpha } \vert w_{1} \vert \\ &{}+ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma )w(\sigma )\,d \sigma \biggr) \,ds,\quad t\geq 0, \end{aligned}$$


Multiplying by \(t^{\alpha -1}\), we get

$$\begin{aligned} &t^{\alpha -1}w(t) \\ &\quad \leq t^{\alpha -1} \bigl\vert E_{\alpha } \bigl(-at^{ \alpha }\bigr) \bigr\vert w_{0}+M_{1}( \alpha,a) \vert w_{1} \vert \\ &\qquad{}+t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s- \sigma )w(\sigma )\,d\sigma \biggr) \,ds,\quad t\geq 0, \end{aligned}$$

where \(M_{1}(\alpha,a)=M(\alpha,2)/a\) comes from (5). Inserting the factor \(\sigma ^{1-\alpha }\sigma ^{\alpha -1}=1\) inside the inner integral in (6) gives

$$\begin{aligned} &t^{\alpha -1}w(t) \\ &\quad \leq t^{\alpha -1} \bigl\vert E_{\alpha } \bigl(-at^{ \alpha }\bigr) \bigr\vert w_{0}+M_{1}( \alpha,a) \vert w_{1} \vert \\ &\qquad{}+t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s- \sigma )\sigma ^{1-\alpha } \sigma ^{\alpha -1}w(\sigma )\,d\sigma \biggr) \,ds, \quad t\geq 0 \end{aligned}$$

and taking the supremum, we find

$$\begin{aligned} &t^{\alpha -1}w(t) \\ &\quad\leq t^{\alpha -1} \bigl\vert E_{\alpha } \bigl(-at^{ \alpha }\bigr) \bigr\vert w_{0}+M_{1}( \alpha,a) \vert w_{1} \vert \\ &\qquad{}+t^{\alpha -1}\phi (t) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{ \alpha,\alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s- \sigma )\sigma ^{1-\alpha }\,d \sigma \biggr) \,ds,\quad t\geq 0, \end{aligned}$$

where \(\phi (t):=\sup_{0\leq \sigma \leq t}\sigma ^{\alpha -1}w(\sigma )\). The expression \(t^{\alpha -1} \vert E_{\alpha }(-at^{\alpha }) \vert \) is uniformly bounded by some \(C_{1}>0\): near zero because \(E_{\alpha }(-at^{\alpha })\) is itself bounded, and away from zero because \(\vert E_{\alpha }(-at^{\alpha }) \vert \) decays like \(t^{-\alpha }\) (see [34, 39]).

Assuming that

$$ t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s- \sigma )\sigma ^{1-\alpha }\,d \sigma \biggr) \,ds\leq B< 1, $$

it follows from (7) that

$$ t^{\alpha -1}w(t)\leq C_{1}w_{0}+M_{1}( \alpha,a) \vert w_{1} \vert +B \phi (t),\quad t>0. $$

Then, taking supremum in (8), we find

$$ (1-B)\phi (t)\leq C_{1}w_{0}+M_{1}(\alpha,a) \vert w_{1} \vert , \quad t>0, $$


and therefore

$$ w(t)\leq \frac{C_{1}w_{0}+M_{1}(\alpha,a) \vert w_{1} \vert }{(1-B)t ^{\alpha -1}},\quad t>0. $$

This completes the proof. □

Lemma 5

If \(\nu \in C\) satisfies \(\frac{\alpha \pi }{2}< \vert \arg (\nu ) \vert \leq \pi \), then there exists a constant \(A(\alpha,\nu )>0\) (independent of t) such that

$$ \int _{0}^{t} \bigl\vert (t-s)^{\alpha -1}E_{\alpha,\alpha } \bigl(\nu (t-s)^{ \alpha }\bigr) \bigr\vert \,ds< A(\alpha,\nu ),\quad \forall t>0. $$


This lemma is proved in [9] when \(0<\alpha <1\). The case \(1<\alpha <2\) may be proved similarly. □

A class of admissible kernels. Condition (3) may be simplified considerably to

$$ \int _{0}^{t}k(t-\sigma )\sigma ^{1-\alpha }\,d \sigma \leq Ct^{1-\alpha },\quad t\geq 0, $$

for some \(C>0\). To see this, we prove the following lemma.

Lemma 6

For \(1<\alpha <2\), we have

$$ t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert s^{1-\alpha } \,ds\leq D,\quad a>0, t\geq 0, $$

for some \(D>0\).



$$\begin{aligned} &t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert s^{1-\alpha } \,ds \\ &\quad =t^{\alpha -1} \int _{0}^{t}(t-s)^{1-\alpha }s^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-as^{\alpha }\bigr) \bigr\vert \,ds \\ &\quad = \int _{0}^{1}(1-\xi )^{1-\alpha }\xi ^{\alpha -1}t^{\alpha -1} \bigl\vert E _{\alpha,\alpha } \bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert t\,d\xi \\ &\quad =t^{\alpha } \int _{0}^{1}(1-\xi )^{1-\alpha }\xi ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi,\quad t>0, \end{aligned}$$

where \(\xi:=s/t\) and \(ds=t\,d\xi \). For \(0\leq \xi <1/2\), we have

$$\begin{aligned} &t^{\alpha } \int _{0}^{1/2}(1-\xi )^{1-\alpha }\xi ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi \\ &\quad \leq \max \bigl(1,2^{\alpha -1}\bigr)t^{\alpha } \int _{0}^{1/2}\xi ^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d \xi \\ &\quad \leq \max \bigl(1,2^{\alpha -1}\bigr)t \int _{0}^{1/2} ( t\xi ) ^{ \alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d \xi, \end{aligned}$$

and putting \(\sigma:=t\xi \) and \(d\sigma:=t\,d\xi \), we see that

$$\begin{aligned} &t^{\alpha } \int _{0}^{1/2}(1-\xi )^{1-\alpha }\xi ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi \\ &\quad\leq \max \bigl(1,2^{\alpha -1}\bigr)t^{\alpha } \int _{0}^{1/2}\xi ^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d \xi \\ &\quad \leq \max \bigl(1,2^{\alpha -1}\bigr) \int _{0}^{t/2}\sigma ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-a\sigma ^{\alpha }\bigr) \bigr\vert \,d\sigma. \end{aligned}$$

This last expression in (10) is bounded by Lemma 5.

For \(1/2\leq \xi <1\), it is clear that

$$\begin{aligned} &t^{\alpha } \int _{1/2}^{1}(1-\xi )^{1-\alpha }\xi ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi \\ &\quad \leq 2 \int _{1/2}^{1}(1-\xi )^{1-\alpha }t^{\alpha } \xi ^{\alpha } \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi, \end{aligned}$$

and, as the expression \(t^{\alpha }\xi ^{\alpha } \vert E_{\alpha ,\alpha }(-at^{\alpha }\xi ^{\alpha }) \vert \) is bounded by \(M(\alpha,\alpha )/a\) (see (5)), we find

$$\begin{aligned} &t^{\alpha } \int _{1/2}^{1}(1-\xi )^{1-\alpha }\xi ^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-at^{\alpha }\xi ^{\alpha }\bigr) \bigr\vert \,d\xi \\ &\quad \leq 2\frac{M(\alpha,\alpha )}{a} \int _{1/2}^{1}(1-\xi )^{1-\alpha }\,d \xi = \frac{2^{\alpha -1}M(\alpha,\alpha )}{ ( 2-\alpha ) a}. \end{aligned}$$

The lemma is proved. □

This lemma also gives us an idea about a class of kernels satisfying (9).

Example 7

Consider the class of nonnegative summable functions satisfying \(0\leq k(t)\leq C_{2}t^{\alpha -1} \vert E_{\alpha, \alpha }(-bt^{\alpha }) \vert \) with constants \(C_{2},b>0\). This class encompasses, of course, the well-known class of kernels \(k(t)=C_{2}t^{\alpha -1}e^{-bt}\) used frequently in applications. By selecting appropriate constants \(C_{2}\) and/or b, we can ensure that such kernels satisfy all the requirements of the theorem.
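As a numerical sanity check (not part of the proofs), the simplified condition (9) can be tested for the representative kernel \(k(t)=t^{\alpha -1}e^{-t}\) with \(\alpha =3/2\); the quadrature scheme and grid below are illustrative choices.

```python
import math

def weighted_kernel_integral(t, n=4000):
    """Approximate t^(alpha-1) * integral_0^t k(t-s) s^(1-alpha) ds for
    alpha = 3/2 and k(s) = s^(1/2) * exp(-s).  The substitution s = u^2 turns
    the integrand into the smooth function 2*(t-u^2)^(1/2)*exp(-(t-u^2)) on
    [0, sqrt(t)], which is then integrated by the midpoint rule."""
    alpha = 1.5
    r = math.sqrt(t)
    h = r / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += math.sqrt(t - u * u) * math.exp(-(t - u * u))
    return t ** (alpha - 1.0) * 2.0 * h * total

# Boundedness of these values over a range of t is condition (9) seen numerically;
# for large t they approach Gamma(alpha), consistent with a Watson-lemma argument.
values = [weighted_kernel_integral(t) for t in (0.5, 1.0, 2.0, 5.0, 10.0, 20.0)]
```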

Nonlinear case

Here we consider the nonlinear case. This case is not only important from the mathematical point of view, but it is also very useful in applications. In neural network theory, for instance, activation functions are usually assumed to be Lipschitz continuous, so that one can pass from the nonlinear case to the linear case and use the linear Halanay inequality. The present nonlinear Halanay inequality allows us to deal with the non-Lipschitz case. The price to pay is that we obtain only a local stability result.

The inequality of concern is

$$ \textstyle\begin{cases} D_{C}^{\alpha }u(t)\leq -au(t)+\int _{0}^{t}k(t-s)h(u(s)) \,ds,\quad 1< \alpha < 2, t>0, \\ u(0)=u_{0}, \qquad u^{\prime }(0)=u_{1}, \end{cases} $$

where h is a nonlinear function.

Theorem 8

Assume that \(u(t)\) is a solution of (11), \(h(u)\leq u\tilde{h}(u)\) for some continuous nonnegative nondecreasing function \(\tilde{h}(u)\), and \(k(t)\) is a nonnegative summable function such that


$$ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma )\sigma ^{1- \alpha }\,d \sigma \biggr) \,ds\leq B_{1},\quad t>0, $$

for some \(B_{1}>0\) and \(\varsigma >0\) such that \(B_{1}\tilde{h}( \varsigma )\leq 1/2\), and


$$ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(\sigma )\,d\sigma \biggr) \,ds\leq B_{2},\quad t>0, $$

for some \(B_{2}>0\) and \(\xi >0\) such that \(B_{2}\tilde{h}(\xi )\leq 1/2\). Then


$$ \bigl\vert u(t) \bigr\vert \leq C \bigl( \vert u_{0} \vert + \vert u_{1} \vert \bigr) t^{1-\alpha },\quad t\geq 0, $$

for some positive constant C and small initial data.


Let us compare solutions of (11) with those of

$$ \textstyle\begin{cases} D_{C}^{\alpha }w(t)=-aw(t)+\int _{0}^{t}k(t-s)h(w(s)) \,ds,\quad 1< \alpha < 2, t>0, \\ w(0)=w_{0}=u_{0},\qquad w^{\prime }(0)=w_{1}=u_{1}. \end{cases} $$

The corresponding variation-of-parameters formula is

$$\begin{aligned} w(t)={}&E_{\alpha }\bigl(-at^{\alpha }\bigr)w_{0}+tE_{\alpha,2} \bigl(-at^{\alpha }\bigr)w _{1} \\ &{}+ \int _{0}^{t}(t-s)^{\alpha -1}E_{\alpha,\alpha } \bigl(-a(t-s)^{\alpha }\bigr) \biggl( \int _{0}^{s}k(s-\sigma )h\bigl(w(\sigma )\bigr)\,d \sigma \biggr) \,ds,\quad t \geq 0. \end{aligned}$$

Therefore from (5) and the assumption on h we have

$$\begin{aligned} \bigl\vert w(t) \bigr\vert \leq{}& \bigl\vert E_{\alpha } \bigl(-at^{\alpha }\bigr) \bigr\vert \vert w_{0} \vert +M_{2}(\alpha,a)t^{1- \alpha } \vert w_{1} \vert \\ &{}+ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma ) \bigl\vert w( \sigma ) \bigr\vert \tilde{h}\bigl( \bigl\vert w(\sigma ) \bigr\vert \bigr)\,d \sigma \biggr) \,ds \end{aligned}$$


Multiplying by \(t^{\alpha -1}\), we obtain

$$\begin{aligned} t^{\alpha -1} \bigl\vert w(t) \bigr\vert \leq{}& t^{\alpha -1} \bigl\vert E_{\alpha }\bigl(-at^{\alpha }\bigr) \bigr\vert \vert w_{0} \vert +M_{2}( \alpha,a) \vert w _{1} \vert \\ &{}+t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \\ &{}\times \biggl( \int _{0}^{s}k(s- \sigma ) \bigl\vert w(\sigma ) \bigr\vert \tilde{h}\bigl( \bigl\vert w( \sigma ) \bigr\vert \bigr)\,d\sigma \biggr) \,ds \end{aligned}$$

for \(t\geq 0\). We multiply inside the inner integral in (15) by the expression \(\sigma ^{\alpha -1}\sigma ^{1-\alpha }\):

$$\begin{aligned} t^{\alpha -1} \bigl\vert w(t) \bigr\vert \leq {}&C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert \\ &{}+t^{\alpha -1} \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \\ &{}\times\biggl( \int _{0}^{s}k(s- \sigma )\sigma ^{\alpha -1} \bigl\vert w(\sigma ) \bigr\vert \sigma ^{1-\alpha }\tilde{h}\bigl( \bigl\vert w(\sigma ) \bigr\vert \bigr)\,d \sigma \biggr) \,ds. \end{aligned}$$


Hence

$$\begin{aligned} t^{\alpha -1} \bigl\vert w(t) \bigr\vert \leq{}& C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert \\ &{}+t^{\alpha -1}\phi (t) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{ \alpha,\alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \\ &{}\times \biggl( \int _{0}^{s}k(s-\sigma )\sigma ^{1-\alpha } \tilde{h}\bigl( \bigl\vert w(\sigma ) \bigr\vert \bigr)\,d\sigma \biggr) \,ds, \quad t\geq 0, \end{aligned}$$


where

$$ \phi (t)=\sup_{0\leq \sigma \leq t}\sigma ^{\alpha -1} \bigl\vert w( \sigma ) \bigr\vert ,\quad t\geq 0. $$

If the initial data satisfy

$$ C_{1}(\alpha,a) \vert w_{0} \vert +M_{2}( \alpha,a) \vert w _{1} \vert < \varsigma /4 $$

and \(\vert w(t) \vert \leq \varsigma \) for all \(0\leq t \leq \bar{t}\) (for some \(\bar{t}>0\)), then

$$\begin{aligned} &t^{\alpha -1} \bigl\vert w(t) \bigr\vert \\ &\quad \leq C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert \\ &\qquad{}+t^{\alpha -1}\phi (t)\tilde{h}(\varsigma ) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma )\sigma ^{1-\alpha }\,d \sigma \biggr) \,ds. \end{aligned}$$

Now if

$$\begin{aligned} &t^{\alpha -1}\tilde{h}(\varsigma ) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0} ^{s}k(s-\sigma )\sigma ^{1-\alpha }\,d \sigma \biggr) \,ds \\ &\quad\leq B_{1}\tilde{h}(\varsigma )\leq 1/2 \end{aligned}$$

for some \(B_{1}>0\), then from (17) we deduce that

$$ t^{\alpha -1} \bigl\vert w(t) \bigr\vert \leq C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert +\frac{ \phi (t)}{2},\quad 0\leq t\leq \bar{t}, $$

and taking the supremum in (19), we get

$$ \bigl\vert w(t) \bigr\vert \leq 2 \bigl( C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert \bigr) t^{1-\alpha },\quad 0\leq t\leq \bar{t}. $$

The difficulty here is to continue the process indefinitely so that the last estimate (20) remains valid for all t.

If \(\bar{t}\geq 1\), then

$$ \bigl\vert w(\bar{t}) \bigr\vert \leq 2 \bigl( C_{1}(\alpha,a) \vert w _{0} \vert +M_{2}(\alpha,a) \vert w_{1} \vert \bigr) < \varsigma /2, $$

and we can continue the process.

If \(\bar{t}<1\), then we go back to (14) and proceed as follows. We get

$$\begin{aligned} \bigl\vert w(t) \bigr\vert \leq{}& M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr) \\ &{}+ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma ) \bigl\vert w( \sigma ) \bigr\vert \tilde{h}\bigl( \bigl\vert w(\sigma ) \bigr\vert \bigr)\,d \sigma \biggr) \,ds. \end{aligned}$$


Hence

$$\begin{aligned} \bigl\vert w(t) \bigr\vert \leq {}&M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr) +\psi (t) \\ &{}\times \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma )\tilde{h}\bigl( \bigl\vert w( \sigma ) \bigr\vert \bigr)\,d\sigma \biggr) \,ds, \end{aligned}$$


where

$$ \psi (t)=\sup_{0\leq \sigma \leq t} \bigl\vert w(\sigma ) \bigr\vert . $$

If \(M_{3}(\alpha,a) ( \vert w_{0} \vert + \vert w _{1} \vert ) <\xi /4\) and \(\vert w(t) \vert \leq \xi \) for all \(0\leq t\leq \bar{t}\), then we get

$$\begin{aligned} \bigl\vert w(t) \bigr\vert \leq{}& M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr) \\ &{}+\tilde{h}(\xi )\psi (t) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E _{\alpha,\alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0} ^{s}k(s-\sigma )\,d\sigma \biggr) \,ds. \end{aligned}$$

Notice that the expression

$$ \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha,\alpha }\bigl(-a(t-s)^{ \alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(s-\sigma )\,d\sigma \biggr) \,ds $$

is uniformly bounded in view of Lemma 5 and the fact that k is summable.

Now, assuming that

$$ \tilde{h}(\xi ) \int _{0}^{t}(t-s)^{\alpha -1} \bigl\vert E_{\alpha, \alpha }\bigl(-a(t-s)^{\alpha }\bigr) \bigr\vert \biggl( \int _{0}^{s}k(\sigma )\,d\sigma \biggr) \,ds\leq B_{2}\tilde{h}(\xi )< 1/2 $$

for some \(B_{2}>0\), we find

$$ \bigl\vert w(t) \bigr\vert \leq M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr) + \frac{ \psi (t)}{2},\quad 0\leq t\leq \bar{t}. $$

Passing to the supremum, we obtain

$$ \bigl\vert w(t) \bigr\vert \leq 2M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr), \quad 0\leq t \leq \bar{t}, $$

and therefore

$$ \bigl\vert w(t) \bigr\vert \leq 2M_{3}(\alpha,a) \bigl( \vert w _{0} \vert + \vert w_{1} \vert \bigr) < \xi /2,\quad 0 \leq t\leq \bar{t}. $$

Relation (24) shows that the process can be continued. The proof is complete. □

Application to neural network theory

In this section, we present an application to neural network theory. For simplicity, we consider the problem

$$ \textstyle\begin{cases} D_{C}^{\alpha }x_{i}(t)=-c_{i}x_{i}(t)+\sum_{j=1}^{n}a_{ij}f_{j}(x _{j}(t))+\sum_{j=1}^{n}\int _{0}^{t}k_{ij}(s)g_{j}(x_{j}(t-s)) \,ds+I _{i}, \\ \quad t>0, i=1,\ldots,n, \\ x_{i}(0)=x_{i0},\qquad x_{i}^{\prime }(0)=x_{i1},\quad i=1,\ldots,n, \end{cases} $$

where \(0<\alpha <1\), \(c_{i}>0\), \(a_{ij}\in R \), \(I_{i}\), and \(x_{i0},x_{i1}\), \(i,j=1,\dots,n\), are given data. From our argument it will be clear that similar results hold for more general problems such as the case of additional discrete delays \(\sum_{j=1}^{n}b_{ij}f_{j}(x _{j}(t-\tau )) \) and also the case of different activation functions \(f_{j}\). Notice that we consider the finite distributed delay case.

We start with the following assumptions:

(A1) The functions \(f_{i}\) are Lipschitz continuous on R with Lipschitz constants \(L_{i}\), \(i=1,2,\dots,n\), that is,

$$ \bigl\vert f_{i}(x)-f_{i}(y) \bigr\vert \leq L_{i} \vert x-y \vert ,\quad \forall x,y\in R, i=1,2,\dots,n. $$

(A2) The functions \(g_{i}\) are Lipschitz continuous on R with Lipschitz constants \(G_{i}\), \(i=1,2,\dots,n\), that is,

$$ \bigl\vert g_{i}(x)-g_{i}(y) \bigr\vert \leq G_{i} \vert x-y \vert ,\quad \forall x,y\in R, i=1,2,\dots,n. $$

(A3) The delay kernel functions \(k_{ij}\) are nonnegative summable functions (\(\kappa _{ij}:= \int _{0}^{\infty }k_{ij}(s) \,ds< \infty \)) satisfying (3) or simply (9).

We denote

$$ u(t)=x(t)-x^{\ast }, $$

where \(x^{\ast }\) is an equilibrium for problem (13). Then the stability of \(x^{\ast }\) is shifted to the stability of the 0 state for the system

$$ \textstyle\begin{cases} D_{C}^{\alpha }u_{i}(t)=-c_{i}u_{i}(t)+\sum_{j=1}^{n}a_{ij}\bar{f} _{j}(u_{j}(t))+\sum_{j=1}^{n}\int _{0}^{t}k_{ij}(t-s) \bar{g}_{j}(u _{j}(s)) \,ds, \\ \quad t>0, i=1,2,\dots,n, \\ u_{i}(0)=\psi _{i}:=x_{i0}-x_{i}^{\ast },\qquad u_{i}^{\prime }(0)=\psi _{i} ^{\prime }:=x_{i1},\quad i=1,2,\dots,n, \end{cases} $$

where

$$ \bar{f}_{j}\bigl(u_{j}(t)\bigr)=f_{j} \bigl(u_{j}(t)+x_{j}^{\ast }\bigr)-f_{j} \bigl(x_{j}^{ \ast }\bigr),\quad j=1,2,\dots,n, t\geq 0, $$

and

$$ \bar{g}_{j}\bigl(u_{j}(t)\bigr)=g_{j} \bigl(u_{j}(t)+x_{j}^{\ast }\bigr)-g_{j} \bigl(x_{j}^{ \ast }\bigr), \quad j=1,2, \dots,n, t\geq 0, $$

so that, in view of assumptions (A1) and (A2), we obtain

$$ \textstyle\begin{cases} D_{C}^{\alpha }u_{i}(t)\leq -c_{i}u_{i}(t)+\sum_{j=1}^{n} \vert a_{ij} \vert L_{j} \vert u_{j}(t) \vert +\sum_{j=1}^{n}G_{j}\int _{0}^{t}k _{ij}(t-s) \vert u_{j}(s) \vert \,ds, \\ \quad t>0, i=1,2,\dots,n. \end{cases} $$

We can apply the first theorem to get a global power-type stability result.

For the nonlinear case, we assume:

(A4) The functions \(g_{i}\) are such that

$$ \bigl\vert g_{i}(x)-g_{i}(y) \bigr\vert \leq \vert x-y \vert \tilde{h}_{i} \bigl( \vert x-y \vert \bigr),\quad \forall x,y\in R, i=1,2,\dots,n $$

for some continuous nondecreasing functions \(\tilde{h}_{i}\). The second theorem may be applied to get a local stability result of power type.


Consider the example

$$ \textstyle\begin{cases} D_{C}^{\alpha }x_{1}(t)=-c_{1}x_{1}(t)+a_{11}f_{1}(x_{1}(t))+a_{12}f _{2}(x_{2}(t)) \\ \phantom{D_{C}^{\alpha }x_{1}(t)=}{}+\int _{0}^{t}k_{11}(s) f_{1}(x_{1}(t-s)) \,ds+\int _{0}^{t}k_{12}(s) f _{2}(x_{2}(t-s)) \,ds+I_{1}, \\ D_{C}^{\alpha }x_{2}(t)=-c_{2}x_{2}(t)+a_{21}f_{1}(x_{1}(t))+a_{22}f _{2}(x_{2}(t)) \\ \phantom{D_{C}^{\alpha }x_{2}(t)=}{}+\int _{0}^{t}k_{21}(s) f_{1}(x_{1}(t-s)) \,ds+\int _{0}^{t}k_{22}(s) f _{2}(x_{2}(t-s)) \,ds+I_{2}, \\ x_{i}(0)=x_{i0},\qquad x_{i}^{\prime }(0)=x_{i1},\quad i=1,2, \end{cases} $$

with \(\alpha =3/2\), \(f_{i}(x)=\tanh x\), \(i=1,2\), and \(k_{ij}(t)=K_{ij}t ^{\mu _{ij}-1}e^{-b_{ij}t}\), \(i,j=1,2\). The initial data may be arbitrary. The remaining coefficients and parameters are chosen so that the conditions of the first theorem (see also Example 7) are satisfied.

The equilibrium solution satisfies

$$ \textstyle\begin{cases} 0=-c_{1}x_{1}^{\ast }+ ( a_{11}+\int _{0}^{\infty }k_{11}(s) \,ds ) f_{1}(x_{1}^{\ast }) \\ \phantom{0=}{}+ ( a_{12}+\int _{0}^{\infty }k_{12}(s)\,ds ) f_{2}(x_{2}^{ \ast })+I_{1}, \\ 0=-c_{2}x_{2}^{\ast }+ ( a_{21}+\int _{0}^{\infty }k_{21}(s) \,ds ) f_{1}(x_{1}^{\ast }) \\ \phantom{0=}{}+ ( a_{22}+\int _{0}^{\infty }k_{22}(s)\,ds ) f_{2}(x_{2}^{ \ast })+I_{2}. \end{cases} $$

Since all the conditions of the first theorem are satisfied, we conclude power-type stability.
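The equilibrium system above can be solved numerically. The sketch below is not from the paper: writing \(a_{ij}^{\mathrm{eff}}=a_{ij}+\int _{0}^{\infty }k_{ij}(s)\,ds\), the equilibrium equations become a fixed point of a \(\tanh \) map, and simple iteration converges whenever \(\max_{i}\sum_{j} \vert a_{ij}^{\mathrm{eff}} \vert /c_{i}<1\) (since \(\tanh \) is 1-Lipschitz). All numerical values used here are illustrative, not taken from the paper.

```python
import math


def equilibrium(c, a_eff, I, iters=200):
    """Fixed-point iteration for the equilibrium system
        0 = -c_i x_i + sum_j a_eff[i][j] * tanh(x_j) + I_i,
    rewritten as x_i = (sum_j a_eff[i][j] * tanh(x_j) + I_i) / c_i.
    This is a contraction when max_i sum_j |a_eff[i][j]| / c_i < 1."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        x = [(sum(a_eff[i][j] * math.tanh(x[j]) for j in range(n)) + I[i]) / c[i]
             for i in range(n)]
    return x


# Illustrative data: contraction factor max_i sum_j |a_eff|/c_i = 0.8/2 = 0.4 < 1
x_star = equilibrium([2.0, 2.0], [[0.5, 0.3], [0.2, 0.4]], [1.0, -1.0])
```

The returned vector satisfies the equilibrium equations to machine precision; the power-type stability result then describes the rate at which solutions approach it.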


References

  1. Allain, O.F., Horne, R.N.: Use of artificial intelligence in well-test interpretation. J. Pet. Technol. 42(3), 342–349 (1990)

  2. Arena, P., Fortuna, L., Porto, D.: Chaotic behavior in noninteger-order cellular neural networks. Phys. Rev. E 61, 776–781 (2000)

  3. Baker, C.T.H., Tang, A.: Generalized Halanay inequalities for Volterra functional differential equations and discretized versions. Invited plenary talk. In: Volterra Centennial Meeting, UTA Arlington (1996)

  4. Bhatt, A., Helle, H.B.: Committee neural networks for porosity and permeability prediction from well logs. Geophys. Prospect. 50, 645–660 (2002)

  5. Boroomand, A., Menhaj, M.B.: Fractional-order Hopfield neural networks. In: Koppen, M., Kasabov, N., Coghill, G. (eds.) Advances in Neuro-Information Processing, pp. 883–890. Springer, Berlin (2008)

  6. Caponetto, R., Fortuna, L., Porto, D.: Bifurcation and chaos in noninteger order cellular neural networks. Int. J. Bifurc. Chaos 8, 1527–1539 (1998)

  7. Chen, L., Chai, Y., Wu, R.: Dynamic analysis of a class of fractional-order neural networks with delay. Neurocomputing 111, 190–194 (2013)

  8. Chen, L., Chai, Y., Wu, R., Yang, J.: Stability and stabilization of a class of nonlinear fractional-order systems with Caputo derivative. IEEE Trans. Circuits Syst. II, Express Briefs 59(9), 602–606 (2012)

  9. Cong, N.D., Doan, T.S., Siegmund, S., Tuan, H.T.: Linearized asymptotic stability for fractional differential equations. Electron. J. Qual. Theory Differ. Equ. 2016, 39 (2016)

  10. Gonzalez, A., Barrufet, M.A., Startzman, R.: Improved neural-network model predicts dewpoint pressure of retrograde gases. J. Pet. Sci. Eng. 37, 183–194 (2003)

  11. Halanay, A.: Differential Equations: Stability, Oscillations, Time Lags. Academic Press, New York (1966)

  12. Heider, D., Piovoso, M.J., Gillespie, J.W. Jr.: Application of a neural network to improve an automated thermoplastic tow-placement process. J. Process Control 12, 101–111 (2002)

  13. Hien, L.V., Phat, V.N., Trinh, H.: New generalized Halanay inequalities with applications to stability of nonlinear non-autonomous time-delay systems. Nonlinear Dyn. 82, 563–575 (2015)

  14. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982)

  15. Hopfield, J.J.: Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 81(8), 3088–3092 (1984)

  16. Hopfield, J.J., Tank, D.W.: Computing with neural circuits: a model. Science 233, 625–633 (1986)

  17. Huang, S.: Gene expression profiling, genetic networks and cellular states: an integrating concept for tumorigenesis and drug discovery. J. Mol. Med. 77, 469–480 (1999)

  18. Huang, S., Wang, B.: Stability and stabilization of a class of fractional-order nonlinear systems for \(1< a<2\). J. Comput. Nonlinear Dyn. 13, 1–8 (2018)

  19. Jian, H., Wenfen, H.: Novel approach to predict potentiality of enhanced oil recovery. In: Society of Petroleum Engineers Intelligent Energy Conference and Exhibition, Amsterdam, The Netherlands (2006)

  20. Kennedy, M.P., Chua, L.O.: Neural networks for non-linear programming. IEEE Trans. Circuits Syst. 35, 554–562 (1988)

  21. Lee, J.A., Almond, D.P., Harris, B.: The use of neural networks for the prediction of fatigue lives of composite materials. Composites, Part A, Appl. Sci. Manuf. 30(10), 1159–1169 (1999)

  22. Li, C., Wang, J., Lu, J., Ge, Y.: Observer-based stabilisation of a class of fractional order non-linear systems for \(1< a<2\) case. IET Control Theory Appl. 8(13), 1238–1246 (2014)

  23. Li, C.P., Zhang, F.R.: A survey on the stability of fractional differential equations. Eur. Phys. J. Spec. Top. 193, 27–47 (2011)

  24. Li, T., Wang, Y.: Stability of a class of fractional-order nonlinear systems. Discrete Dyn. Nat. Soc. 2014, Article ID 724270 (2014)

  25. Li, Y., Chen, Y.Q., Podlubny, I.: Mittag–Leffler stability of fractional order nonlinear dynamic systems. Automatica 45, 1965–1969 (2009)

  26. Li, Y., Chen, Y.Q., Podlubny, I.: Stability of fractional-order nonlinear dynamic systems: Lyapunov direct method and generalized Mittag–Leffler stability. Comput. Math. Appl. 59, 1810–1821 (2010)

  27. Lippmann, R.P., Shahian, D.M.: Coronary artery bypass risk prediction using neural networks. Ann. Thorac. Surg. 63, 1635–1643 (1997)

  28. Liu, B., Lu, W., Chen, T.: Generalized Halanay inequalities and their applications to neural networks with unbounded time-varying delays. IEEE Trans. Neural Netw. 22(9), 1508–1513 (2011)

  29. Lou, X., Ye, Q., Cui, B.: Exponential stability of genetic regulatory networks with random delays. Neurocomputing 73, 759–769 (2010)

  30. Lugo-Penaloza, A.F., Flores-Godoy, J.J., Fernandez-Anaya, G.: Preservation of stability and synchronization of a class of fractional-order systems. Math. Probl. Eng. 2012, Article ID 928930 (2012)

  31. Mohaghegh, S., Arefi, R., Ameri, S.: Petroleum reservoir characterization with the aid of artificial neural networks. J. Pet. Sci. Eng. 16, 263–274 (1996)

  32. Mohamad, S., Gopalsamy, K.: Continuous and discrete Halanay-type inequalities. Bull. Aust. Math. Soc. 61, 371–385 (2000)

  33. Niamsup, P.: Stability of time-varying switched systems with time-varying delay. Nonlinear Anal. Hybrid Syst. 3, 631–639 (2009)

  34. Podlubny, I.: Fractional Differential Equations, vol. 198. Academic Press, San Diego (1999)

  35. Qin, Z., Wu, R., Lu, Y.: Stability analysis of fractional-order systems with the Riemann–Liouville derivative. Syst. Sci. Control Eng. 2(1), 727–731 (2014)

  36. Ren, F., Cao, F., Cao, J.: Mittag–Leffler stability and generalized Mittag–Leffler stability of fractional-order gene regulatory networks. Neurocomputing 160, 185–190 (2015)

  37. Wang, H., Yu, Y., Wen, G.: Stability analysis of fractional-order Hopfield neural networks with time delays. Neural Netw. 55, 98–109 (2014)

  38. Wen, L., Yu, Y., Wang, W.: Generalized Halanay inequalities for dissipativity of Volterra functional differential equations. J. Math. Anal. Appl. 347, 169–178 (2008)

  39. Wen, X.J., Wu, Z.M., Lu, J.G.: Stability analysis of a class of nonlinear fractional-order systems. IEEE Trans. Circuits Syst. II 55(11), 1178–1182 (2008)

  40. Wu, H., He, L., Qin, L., Feng, T., Shi, R.: Solving interval quadratic program with box set constraints in engineering by a projection neural network. Inf. Technol. J. 9(8), 1615–1621 (2010)

  41. Zhang, F., Li, C.: Stability analysis of fractional differential systems with order lying in \((1, 2)\). Adv. Differ. Equ. 2011, Article ID 213485 (2011)

  42. Zhang, R., Tian, G., Yang, S., Cao, H.: Stability analysis of a class of fractional order nonlinear systems with order lying in \((0, 2)\). ISA Trans. 56, 102–110 (2015)

  43. Zhang, S., Yu, Y., Wang, H.: Mittag–Leffler stability of fractional-order Hopfield neural networks. Nonlinear Anal. Hybrid Syst. 16, 104–121 (2015)


Acknowledgements

The author is grateful for the financial support and the facilities provided by King Abdulaziz City for Science and Technology (KACST) and King Fahd University of Petroleum and Minerals.


Funding

The author is supported by King Abdulaziz City for Science and Technology (KACST) under the National Science, Technology and Innovation Plan (NSTIP), Project No. 15-OIL4884-0124.

Author information




The author read and approved the final manuscript.

Corresponding author

Correspondence to Nasser-Eddine Tatar.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Tatar, NE. Fractional Halanay inequality of order between one and two and application to neural network systems. Adv Differ Equ 2019, 273 (2019).



Keywords

  • Hopfield neural network
  • Power-type stability
  • Caputo fractional derivative
  • Fractional Halanay inequality