
Mean-square stability of Riemann–Liouville fractional Hopfield’s graded response neural networks with random impulses

Abstract

In this paper a model of Hopfield’s graded response neural network whose neurons are subject to impulsive state displacements at random times is investigated. The presence of random moments of impulses means that the solutions are stochastic processes. We use the Riemann–Liouville fractional derivative to model adequately the long-term memory and the nonlocality in the neural networks, and we set up both the initial conditions and the impulsive conditions at the random moments in an appropriate way. The application of the Riemann–Liouville fractional derivative leads to a new definition of the equilibrium point. We define mean-square Mittag-Leffler stability in time of the equilibrium point of the model, study this type of stability, and obtain some sufficient conditions for it. The general case with time-varying self-regulating parameters of all units and time-varying connection strengths between neurons is studied.

Introduction

The dynamic behavior of neural networks has been investigated in many papers in the literature. Some researchers have found that, in contrast to signal processing with integer-order models of neural networks, the application of fractional-order derivatives has significant advantages in modeling the problem. One of the most popular and influential types of neural networks is the classical first-order Hopfield neural network [12]. The long-term memory and the nonlocality of fractional derivatives allowed them to be incorporated into neural networks to describe better the behavior of neurons connected with memory and heredity. Many researchers have studied fractional neural networks and demonstrated many advantages over integer-order neural networks. For instance, Alofi et al. [6] studied the finite-time stability of Caputo fractional neural networks with distributed delay. Kaslik et al. [13] discussed the stability analysis of fractional-order neural networks of Hopfield type. It is worth mentioning that a novel conceptual framework of the fractional-order Hopfield neural network has been given in [18]. Several results concerning the implementation of the Caputo fractional derivative in the model have been obtained in the literature; we could mention, for example, the papers [13, 18, 23, 31] about Caputo fractional-order Hopfield type neural networks.

Note that the case of fractional neural networks with Riemann–Liouville (RL) fractional derivatives is not well studied, because of the singularity of this type of derivative at the initial time and the form of the required initial condition (see, for example, [10, 11, 15, 27, 29]). At the same time, there are some inaccuracies when the RL fractional derivative is applied. For example, in the papers [10, 15, 27] the RL fractional integral and the RL fractional derivative, respectively, are not well defined in the initial conditions associated with the RL fractional model with delay. In [30] the equilibrium point is defined by an equation which has only the zero solution (see Remarks 1 and 3).

Sometimes, neural networks are subject to perturbations acting over a negligibly small time interval. The models in this case are so-called impulsive models. In the case when the times of the impulsive perturbations are initially given deterministic points, impulsive models with ordinary derivatives for neural networks are studied in [9, 19, 22, 25, 26, 33]. Note that impulsive fractional neural networks are studied for the Caputo fractional derivative in several papers, such as [17, 20, 24, 32], and for the RL fractional derivative in [28, 29].

The presence of randomness in the neural networks could be incorporated in various ways. One of them is considering stochastic models (see, for example, the review paper [21] and the references cited therein). Another is considering impulsive perturbation in the neural networks occurring at random times (see, for example, [3, 4]).

The main goal of this paper is to define and study for the first time the RL fractional generalization of a Hopfield neural network with impulses occurring at random times. Initially, a detailed explanation of why the solutions are stochastic processes is provided. Then the equilibrium, deeply connected with the RL fractional derivative, is defined (it differs from the equilibrium in impulsive Caputo fractional models). Note that the presence of the RL fractional derivative and the type of the initial condition require a new type of stability which excludes an appropriate neighborhood of the initial time point (again differing from impulsive Caputo fractional models). This stability, called mean-square Mittag-Leffler stability in time, is defined and studied in the paper. Some examples are provided to illustrate the introduced equilibrium, the defined fractional Dini derivative of Lyapunov functions, as well as the practical application of the obtained sufficient conditions.

The main contributions in this paper could be summarized as follows:

  • the model of Hopfield’s neural networks with RL fractional derivative and impulses at random times is defined;

  • the model with all coefficients varying in time is investigated;

  • both the initial conditions and the impulsive conditions are set up in an appropriate way;

  • the equilibrium point, deeply connected with the properties of RL fractional derivative, is defined;

  • appropriate types of stability, called mean-square Mittag-Leffler stability in time and eventual mean-square Mittag-Leffler stability in time, are defined and studied.

Preliminary notes on fractional derivatives and equations

Let \(t_{0}\geq 0\) be a given number. In this paper we use the following definitions for fractional derivatives and integrals:

  • Riemann–Liouville (RL) fractional integral of order \(q\in (0,1)\) (see [7, 8, 16])

    $$ {}_{t_{0}}I^{q}_{t}m(t)=\frac{1}{\Gamma ( q)} \int _{t_{0}}^{t} \frac{m(s)}{( t-s) ^{1-q}}\,ds,\quad t>t_{0}, $$

    where Γ is the gamma function.

  • Riemann–Liouville (RL) fractional derivative of order \(q\in (0,1)\) (see [7, 8, 16])

    $$ {}_{t_{0}}^{RL}D^{q}_{t}m(t)= \frac{d}{dt} \bigl({}_{t_{0}}I^{1-q}_{t}m(t) \bigr)= \frac{1}{\Gamma ( 1-q )}\frac{d}{dt} \int _{t_{0}}^{t} ( t-s ) ^{-q}m(s)\,ds,\quad t>t_{0}. $$

    We call the point \(t_{0}\) a lower limit of the RL fractional derivative.

  • Grunwald–Letnikov fractional derivative given by (see [7, 8, 16])

    $$ {}_{t_{0}}^{GL}D^{q}m(t)=\lim_{h\to 0} \frac{1}{h^{q}}\sum_{r=0}^{[ \frac{t-t_{0}}{h}]} (-1)^{r}{}_{q}C_{r}m(t-rh), \quad t>t_{0}, $$

    and the Grunwald–Letnikov fractional Dini derivative given by

    $$ {}_{t_{0}}^{GL}D^{q}_{+}m(t)= \limsup_{h\to 0+}\frac{1}{h^{q}}\sum_{r=0}^{[ \frac{t-t_{0}}{h}]} (-1)^{r}{}_{q}C_{r}m(t-rh), \quad t>t_{0}, $$

    where \({}_{q}C_{r}=\frac{q(q-1)\cdots (q-r+1)}{r!}\) for an integer \(r\geq 0\) (with \({}_{q}C_{0}=1\)), and \([\frac{t-t_{0}}{h}]\) denotes the integer part of the fraction \(\frac{t-t_{0}}{h}\).
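For computations, the Grunwald–Letnikov sum above can be used directly as a discretization of the RL derivative (the two derivatives coincide for sufficiently well-behaved functions). A minimal Python sketch, with our own choice of step size and helper name, checked against the closed form \({}_{t_{0}}^{RL}D^{q}_{t}C=C/(\Gamma (1-q)(t-t_{0})^{q})\) for a constant function (cf. Remark 1):

```python
import math

def gl_derivative(m, q, t, t0=0.0, h=1e-4):
    """Grunwald-Letnikov approximation of the RL fractional derivative
    of order q of the function m at time t, with lower limit t0."""
    n = int((t - t0) / h)
    coef, total = 1.0, 0.0          # coef = (-1)^r qC_r, starting at r = 0
    for r in range(n + 1):
        total += coef * m(t - r * h)
        # recurrence: (-1)^{r+1} qC_{r+1} = (-1)^r qC_r * (r - q)/(r + 1)
        coef *= (r - q) / (r + 1)
    return total / h ** q

# constant function: compare with C / (Gamma(1-q) t^q), cf. Remark 1
q, C, t = 0.5, 2.0, 1.5
print(gl_derivative(lambda s: C, q, t))     # ~ 0.9213
print(C / (math.gamma(1 - q) * t ** q))     # ~ 0.9213
```

The first-order accuracy of the scheme makes the two values agree to a few decimal places for this step size.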

The definitions of the initial condition for fractional differential equations with RL-derivatives are based on the following result.

Lemma 2.1

(Lemma 3.2 [14])

Let \(q\in (0,1)\) and \(t\in J\).

  1. (a)

    If there exists a.e. a limit \(\lim_{t\to t_{0}+}[(t-t_{0})^{1-q}m(t)]=C\), then there also exists a limit

    $$ {}_{t_{0}}I^{1-q}_{t}m(t)\big| _{t=t_{0}}:= \lim _{t\to t_{0}+}{}_{t_{0}}I^{1-q}_{t}m(t) =C \Gamma (q). $$
  2. (b)

    If there exists a.e. a limit \({}_{t_{0}}I^{1-q}_{t}m(t)| _{t=t_{0}} =B\) and if there exists the limit \(\lim_{t\to t_{0}+}[(t-t_{0})^{1-q}m(t)] \), then

    $$ \lim_{t\to t_{0}+} \bigl[(t-t_{0})^{1-q}m(t) \bigr] = \frac{B}{\Gamma (q)}. $$

Remark 1

Note that if \(m(t)\equiv C=const\), then

$$ {}_{t_{0}}I^{1-q}_{t}C\big| _{t=t_{0}}=\lim _{t\to t_{0}+} \bigl[(t-t_{0})^{1-q}m(t) \bigr]=0 $$

and

$$ {}_{t_{0}}^{RL}D^{q}_{t}C=\frac{C}{\Gamma (1-q)(t-t_{0})^{q}}. $$

Let \(0\leq a< b\leq \infty \) and consider the scalar RL fractional differential equation

$$ {}_{a}^{RL}D^{q}_{t} x(t)=f \bigl(t,x(t) \bigr),\quad t\in (a,b],$$
(1)

where \(f: [a,b]\times \mathbb{R}\to \mathbb{R}\).

Remark 2

Note that according to [14] the initial conditions to (1) could be one of the following forms:

  • integral form (see (3.1.6) [14])

    $$ {}_{a}I^{1-q}_{t}x(t)\big| _{t=a}=B;$$
    (2)
  • weighted Cauchy type problem (see (3.1.7) [14])

    $$ \lim_{t\to a+} \bigl((t-a)^{1-q}x(t) \bigr)=C.$$
    (3)

Proposition 1

(Lemma 5.2 [8])

Let the function \(f : [a,b]\times \mathbb{R}\to \mathbb{R}\) be continuous, bounded, and Lipschitz with respect to the second variable.

Then the solution of the Cauchy type problem

$$ {}_{a}^{RL}D_{t}^{q}x(t)=f \bigl(t,x(t) \bigr),\quad \quad {}_{a}I^{1-q}_{t}x(t)\big| _{t=a}=B $$

satisfies the Volterra integral equation

$$ x(t)=\frac{B}{\Gamma (q)(t-a)^{1-q}}+\frac{1}{\Gamma (q)} \int _{a}^{t}(t-s)^{q-1}f \bigl(s,x(s) \bigr)\,ds, \quad t\in [a,b] $$

and vice versa.
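The Volterra form of Proposition 1 suggests a simple numerical scheme: discretize the integral with a product-rectangle rule. The sketch below is our own (scheme and names are not from the paper); it freezes f at the left node of each subinterval and handles the singular node \(t=a\) by reusing the first available value:

```python
import math

def solve_rl_ivp(f, q, B, a, b, n=200):
    """Product-rectangle scheme for the Volterra equation of Proposition 1,
    equivalent to  RL D^q x = f(t, x),  I^{1-q} x |_{t=a} = B.
    Returns nodes t_1..t_n and approximate values there (x behaves like
    (t-a)^{q-1} as t -> a+, so the node t = a itself is excluded)."""
    h = (b - a) / n
    t, x = [a + (k + 1) * h for k in range(n)], []
    for k in range(n):
        tk = t[k]
        acc = B / (math.gamma(q) * (tk - a) ** (1 - q))
        # first subinterval [a, t_1]: x blows up at a, freeze f at the node t_1
        w = ((tk - a) ** q - (tk - t[0]) ** q) / q      # exact kernel weight
        acc += w * f(t[0], x[0] if x else acc) / math.gamma(q)
        # remaining subintervals [t_j, t_{j+1}]: freeze f at the left node
        for j in range(1, k + 1):
            w = ((tk - t[j - 1]) ** q - (tk - t[j]) ** q) / q
            acc += w * f(t[j - 1], x[j - 1]) / math.gamma(q)
        x.append(acc)
    return t, x

# check: for f = 1 the exact solution is B t^{q-1}/Gamma(q) + t^q/Gamma(q+1)
t, xs = solve_rl_ivp(lambda s, v: 1.0, 0.5, 1.0, 0.0, 1.0)
print(xs[-1], 1 / math.gamma(0.5) + 1 / math.gamma(1.5))
```

For constant f the kernel weights telescope and the scheme reproduces the closed form essentially exactly; for general f it is only first-order accurate.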

Description of the RL fractional model with random impulses

We consider the model proposed by Hopfield [12] and known as Hopfield’s graded response neural network. We generalize it in two ways:

  • the rate of change of the state variables of neurons will be modeled by an RL fractional derivative;

  • the neurons are subject to certain impulsive state displacements at random moments.

RL fractional model with fixed points of impulses

Initially, we consider the case of fixed initially given points of impulses \(\{T_{k}\}_{k=1}^{\infty }\) with \(0< T_{k}< T_{k+1}\), \(k=1,2,\ldots\) , and \(\lim_{k\to \infty }T_{k}=\infty \), \(T_{0}=0\).

Consider the general model of RL fractional Hopfield’s graded response neural networks with impulses occurring at fixed initially given times (INN):

$$ \begin{aligned} &{}_{0}^{RL}D^{q}_{t}x_{i}(t)=-c_{i}(t)x_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)f_{j} \bigl(x_{j}(t) \bigr)+I_{i}(t) \\ &\quad \text{for } T_{k}< t< T_{k+1 }, k=0,1,\ldots, i=1,2, \ldots, n, \end{aligned} $$
(4)

where n represents the number of neurons in the network, \(x_{i}(t)\) is the pseudostate variable denoting the average membrane potential of the ith neuron at time t, \(x(t)=(x_{1}(t), x_{2}(t),\ldots, x_{n}(t))\in \mathbb{R}^{n}\), \(c_{i}(t)>0\), \(i=1,2,\ldots,n\), is the self-regulating parameter of the ith unit, \(a_{ij}(t)\), \(i,j=1,2,\ldots,n\), corresponds to the synaptic connection strength of the ith neuron to the jth neuron at time t, \(f_{j}(x_{j}(t))\) denotes the activation function of the jth neuron at time t and represents the response of the jth neuron to its membrane potential, \(f(x)=(f_{1}(x_{1}), f_{2}(x_{2}),\ldots, f_{n}( x_{n}))\) is the activation function, and \(I(t)=(I_{1}(t), I_{2}(t),\ldots, I_{n}(t))\) is the external bias vector.

The RL fractional derivative leads to a specific type of the initial condition of the model (see Remark 2 and Lemma 2.1):

$$ {}_{0}I^{1-q}_{t}x_{i}(t)\big| _{t=0}=x_{i}^{0}, \quad i=1,2,\ldots,n.$$
(5)

Let the average membrane potential of each neuron be subject to instantaneous perturbations at times \(T_{k}\), \(k=1,2,\ldots\) . Then model (4) will have impulses at times \(T_{k}\), \(k=1,2,\ldots\) . Let us recall the meaning of impulses in differential equations. At the impulse time the state variable has a jump, and then it continues to evolve according to the previously given dynamic equation; thus the value after the jump plays the role of an initial value. In the case of ordinary derivatives and Caputo fractional derivatives, because of the type of the initial condition, the impulsive condition can be given in the form \(x(T_{k}+0)=I_{k}(x(T_{k}-0))\), with \(x(T_{k}+0)\), \(x(T_{k}-0)\) equal to the values of the state variable after and before the jump, respectively, or in a similar form. In the case of the RL fractional derivative, some authors use this type of impulsive condition (see, for example, [28]), but because of the type of the initial condition we think that it is not appropriate. Following the interpretation of impulses, Remark 2, and Lemma 2.1, we define the impulsive conditions in the form:

$$ {}_{T_{k}}I^{1-q}_{t}x_{i}(t)\big| _{t=T_{k}}=\psi _{k,i} \bigl(x_{i}(T_{k}-0) \bigr), \quad \text{for } i=1,2,\ldots,n, k=1,2,\ldots,$$
(6)

where the functions \(\psi _{k,i}(u)\), \(k=1,2,\ldots\) , are the impulsive functions giving the impulsive perturbation of the ith neuron at time \(T_{k}\), \(k=1,2,\ldots\) .

In our study we apply some results for the initial value problem for the scalar linear RL fractional differential equations with fixed points of impulses

$$ \begin{aligned} &{}_{0}^{RL}D^{q}_{t}u=au(t) \quad \text{for } t\in (T_{k},T_{k+1}], k=0,1,2,\ldots, \\ &{}_{T_{k}}I^{1-q}_{t} u(t)\big| _{t=T_{k}}=0, \quad \text{for } k=1,2,\ldots, \\ &{}_{0}I^{1-q}_{t} u(t)\big| _{t=0}=u_{0}, \end{aligned} $$
(7)

where \(u_{0}\in \mathbb{R}\), \(a<0\).

Lemma 3.1

([5])

IVP (7) has an exact solution \(u\in PC_{1-q}([0,\infty ),\mathbb{R})\) such that

$$ \bigl\vert u (t) \bigr\vert \leq \textstyle\begin{cases} \frac{ \vert u_{0} \vert }{t^{1-q}\Gamma (q)},&t\in (0,T_{1}], \\ \frac{ \vert u_{0} \vert \pi \csc (\pi q)}{t^{1-q}\Gamma (q)\Gamma (1-q)},& t\in (T_{1},T_{2}], \\ \frac{ \vert u_{0} \vert \pi \csc (\pi q)}{t^{1-q}\Gamma (q)\Gamma (1-q)}(n-1) (1+ \frac{\pi \csc (\pi q )}{\Gamma (q)\Gamma (1-q)} ),& t\in (T_{n},T_{n+1}], n=2,3,\ldots \end{cases} $$

holds.

RL fractional model with random impulses

Note that the fractional differential equations with impulses occurring at random times are studied in [1, 3, 5]. Similar to these papers, we will define now the RL fractional model with impulses at random times.

Let the probability space \((\Omega , \mathcal{F}, P)\) be given. Let a sequence of independent exponentially distributed random variables \(\{ \tau _{k} \} _{k=1}^{\infty }\) with the same parameter \(\lambda >0 \) defined on the sample space Ω be given.

Define the sequence of random variables \(\{ \xi _{k} \} _{k=0}^{\infty }\) by

$$ \xi _{k}=\sum_{i=1}^{k} \tau _{i} ,\quad k= 1,2,\ldots, \xi _{0}\equiv0.$$
(8)

The random variable \(\tau _{k}\) measures the waiting time of the kth impulse after the \((k-1)\)st impulse occurs, and the random variable \(\xi _{k}\) denotes the time elapsed until k impulses have occurred.

Let the points \(t_{k} \) be arbitrary values of the corresponding random variables \(\tau _{k}\), \(k=1,2,\ldots\) . Define the increasing sequence of points \(T_{k}=\sum_{i=1}^{k}t_{i}\), \(k=1,2,3,\ldots \) , that are values of the random variables \(\xi _{k} \).
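With i.i.d. exponential waiting times, the impulse moments \(\xi _{k}\) are the jump times of a Poisson process with rate λ. A small simulation sketch (parameter values are illustrative only):

```python
import random

def impulse_times(lam, horizon, rng=None):
    """Sample the impulse moments xi_1 < xi_2 < ... <= horizon as partial
    sums of i.i.d. Exp(lam) waiting times tau_k, as in (8)."""
    rng = rng or random.Random(0)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # tau_k ~ Exp(lam), E[tau_k] = 1/lam
        if t > horizon:
            return times
        times.append(t)

xi = impulse_times(lam=2.0, horizon=10.0)
print(len(xi))   # on average lam * horizon = 20 impulses on [0, 10]
```

Each sample of the sequence \(\{T_{k}\}\) produced this way determines one sample path solution in the sense of Definition 1 below.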

Consider the initial value problem for INN with fixed points of impulses (4)–(6). The solution of the INN with fixed moments of impulses (4) depends not only on the initial value \(x_{0}=(x_{1}^{0},x_{2}^{0},\ldots,x_{n}^{0})\) but also on the moments of impulses \(T_{k}\), \(k=1,2,\ldots\) , i.e., the solution depends on the chosen arbitrary values \(t_{k}\) of the random variables \(\tau _{k}\), \(k=1,2,\ldots\) . We denote the solution of initial value problem (4) by \(x(t;x_{0},\{T_{k}\})\) with \(x=(x_{1},x_{2},\ldots,x_{n})\).

The set of all solutions \(x(t; x_{0},\{T_{k}\})\) of INN (4)–(6) for any values \(t_{k}\) of the random variables \(\tau _{k}\), \(k=1,2,\ldots\) , generates a stochastic process with state space \({\mathbb{R}}^{n}\). We denote it by \(x(t; x_{0},\{ \tau _{k}\}) \), and we say that it is a solution of the general model of Hopfield’s graded response neural networks with impulses occurring at random times (RINN), formally denoted by

$$ \begin{aligned} & {}_{0}^{RL}D^{q}_{t}x_{i}(t)=-c_{i}(t)x_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)f_{j} \bigl(x_{j}(t) \bigr)+I_{i}(t) \\ &\quad \text{for } \xi _{k}< t< \xi _{k+1 }, k=0,1,\ldots, i=1,2,\ldots, n \\ &{}_{\xi _{k}}I^{1-q}_{t}x_{i}(t)\big| _{t=\xi _{k}}=\psi _{k,i} \bigl( x_{i}( \xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots, \\ &{}_{0}I^{1-q}_{t}x_{i}(0)=x_{i}^{0} . \end{aligned} $$
(9)

Now, similar to the papers [1, 3, 5], we will define the solution of the RL fractional model with impulses at random times.

Definition 1

Suppose that \(t_{k}\) is a value of the random variable \(\tau _{k}\), \(k=1,2,3,\ldots\) , and \(T_{k}= \sum_{i=1}^{k}t_{i}\), \(k =1,2,\ldots\) . Then the solution \(x(t; x_{0},\{T_{k}\})\) of the IVP for INN (4)–(6) is called a sample path solution of the IVP for RINN (9) (here, \(T_{0}=0\)).

Definition 2

A stochastic process \(x(t; x_{0},\{ \tau _{k}\}) \) with an uncountable state space \(\mathbb{R}^{n} \) is said to be a solution of the IVP for RINN (9) if, for any values \(t_{k}\) of the random variable \(\tau _{k}\), \(k=1,2,3,\ldots\) , and \(T_{k}= \sum_{i=1}^{k}t_{i}\), \(k =1,2,\ldots\) , the corresponding function \(x(t; x_{0},\{T_{k}\})\) is a sample path solution of the IVP for INN (4)–(6).

According to Definition 2 any solution of IVP (7) is a sample path solution of the following scalar linear RL fractional differential equation with random moments of impulses:

$$ \begin{aligned} &{} _{0}^{RL}D^{q}_{t}u=a u\quad \text{for } t>0, \xi _{k}< t< \xi _{k+1}, \\ &{}_{\xi _{k}}I^{1-q}_{t} u(t)\big| _{t=\xi _{k}}= 0, \quad \text{for } k=1,2,\ldots, \\ & {}_{0}I^{1-q}_{t} u(t)\big| _{t=0}=u_{0}, \end{aligned} $$
(10)

where \(u_{0}\in {\mathbb{R}}\), \(a<0\).

Lemma 3.2

([5])

For any positive number \(\epsilon >0\), the solution \(u(t;u_{0},\{\tau _{k}\})\) of the scalar linear RL fractional differential equation with random moments of impulses(10) satisfies the inequality

$$ E \bigl( \bigl\vert u \bigl(t; u_{0},\{\tau _{k}\} \bigr) \bigr\vert \bigr)\leq \vert u_{0} \vert \frac{\lambda }{t^{1-q}\Gamma (q)} \frac{\pi \csc (\pi q)}{\Gamma (1-q)} \biggl(1+\frac{\pi \csc (\pi q) }{\Gamma (q)\Gamma (1-q)} \biggr) , \quad t \geq 0, $$

where \(E(\cdot )\) is the expected value of the stochastic process \(\vert u(t; u_{0},\{\tau _{k}\}) \vert \) and it depends on the time t.

Equilibrium of RL fractional model with random impulses

The main difficulty in defining an equilibrium point of an RL fractional model stems from the properties \({}_{a}^{RL}D^{q}_{t}C=\frac{C}{(t-a)^{q} \Gamma (1-q)}\) with a, \(C=const\) and \({}_{a}I^{1-q}_{t}C| _{t=a}= \frac{C(t-a)^{1 - q}}{\Gamma (2-q)}| _{t=a}=0\) (see Remark 1). These properties lead to a totally different definition of an equilibrium of the RL model (9), as follows.

Definition 3

A vector \(x^{*}\in \mathbb{R}^{n}\), \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots,x_{n}^{*})\), is an equilibrium point of RINN (9) iff the equalities

$$ 0=- \biggl(c_{i}(t)+\frac{1}{t^{q} \Gamma (1-q)} \biggr)x_{i}^{*}+\sum_{j=1}^{n} a_{ij}(t)f_{j} \bigl(x_{j}^{*} \bigr)+I_{i}(t) \quad \text{for } t\geq 0, i=1,2,\ldots,n,$$
(11)

and

$$ 0=\psi _{k,i} \bigl( x_{i}^{*} \bigr) \quad \text{for }k=1,2,\ldots, i=1,2,\ldots,n,$$
(12)

hold.

Remark 3

Equality (11) is the main part of the definition of an equilibrium of any type of model with an RL fractional derivative. If the term \(\frac{1}{t^{q} \Gamma (1-q)}x_{i}^{*}\) is missing in the definition of equilibrium, then the definition is correct only for a zero equilibrium (see, for example, [30], where the definition does not contain the mentioned term). In [27] the equilibrium point is defined as a function depending on time.

Remark 4

In the case of zero external bias vector, if zero is a fixed point of the activation function and the impulsive functions, then for any \(c_{i}(t)\) the point 0 is an equilibrium point of RINN (9).

Remark 5

In the special case of a zero external bias vector and \(a_{i,j}(t)=c_{i}(t)+\frac{1}{t^{q} \Gamma (1-q)}\), \(i,j=1,2,\ldots,n\), any fixed points of the activation functions could form an equilibrium point, i.e., if \(x_{i}^{*}=f_{i}(x_{i}^{*})\), then \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots,x_{n}^{*})\) could be an equilibrium of RINN (9).

Remark 6

In the case of all constant coefficients in the model, if the external bias vector is zero, then 0 could be an equilibrium point, but if the external bias vector is a nonzero constant vector, then the model has no equilibrium point.

Remark 7

In [28] the RL derivative is used in a model of BAM neural networks with deterministic impulses. The equilibrium point is defined in Definition 7 of [28] with the term \(\frac{1}{t^{q} \Gamma (1-q)}x_{i}^{*}\) missing, but in all examples only a zero equilibrium and a zero external bias vector are considered.

The equilibrium point depends not only on the RL fractional equations but also on the impulsive conditions. We will illustrate it in the following example.

Example 1

Consider the model of a neural network with three neurons with impulses at random times:

$$ \begin{gathered} \begin{aligned} {} _{0}^{RL}D^{0.2}_{t}x_{1}(t)&=- \biggl( \frac{1}{t}+5- \frac{1}{t^{0.2}\Gamma (0.8)} \biggr)x_{1}(t)-0.1 \sin (t)\sin \bigl(x_{1}(t) \bigr) \\ &\quad {} + 0.4\sin \bigl(x_{2}(t) \bigr)+0.3\sin \bigl(x_{3}(t) \bigr)+ \biggl(\frac{1}{t}+5 \biggr) \pi \quad \text{for } \xi _{k}< t< \xi _{k+1}, \end{aligned} \\ \begin{aligned} {}_{0}^{RL}D^{0.2}_{t}x_{2}(t)&=- \biggl(\frac{1}{t}+5- \frac{1}{t^{0.2}\Gamma (0.8)} \biggr)x_{2}(t)- \frac{t^{2}}{5t^{2}+1} \sin \bigl(x_{1}(t) \bigr) \pi \\ &\quad {} + 0.3\sin \bigl(x_{2}(t) \bigr)+\frac{t}{5t+1}\sin \bigl(x_{3}(t) \bigr)+2 \biggl( \frac{1}{t}+5 \biggr) \pi \quad \text{for } \xi _{k}< t< \xi _{k+1}, \end{aligned} \\ \begin{aligned} {}_{0}^{RL}D^{0.2}_{t}x_{3}(t)&=- \biggl( \frac{1}{t}+5- \frac{1}{t^{0.2}\Gamma (0.8)} \biggr)x_{3}(t)+ \frac{t}{10t+1}\sin \bigl(x_{1}(t) \bigr) \pi \\ &\quad {} -0.2\cos (t)\sin \bigl(x_{2}(t) \bigr)-0.1 \sin (t)\sin \bigl(x_{3}(t) \bigr) \\ &\quad {}+3 \biggl( \frac{1}{t}+5 \biggr) \pi \quad \text{for } \xi _{k}< t< \xi _{k+1}, \end{aligned} \end{gathered} $$
(13)

and impulsive conditions

$$ \begin{aligned} &{}_{\xi _{k}}I^{1-q}_{t}x_{1}(t)\big| _{t=\xi _{k}}=\psi _{k,1} \bigl(x_{1}( \xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots, \\ &{}_{\xi _{k}}I^{1-q}_{t}x_{2}(t)\big| _{t=\xi _{k}}=\psi _{k,2} \bigl(x_{2}( \xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots, \\ &{}_{\xi _{k}}I^{1-q}_{t}x_{3}(t)\big| _{t=\xi _{k}}=\psi _{k,3} \bigl(x_{3}( \xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots .\end{aligned} $$
(14)

Note that the point \(x^{*}=(\pi , 2\pi , 3\pi )\) satisfies the system of RL fractional differential equations (13).

In the case \(\psi _{k,1}(u)=\psi _{k,3}(u)=\sin (k (u-\pi ))\), \(\psi _{k,2}(u) = \sin (k u)\), \(k=1,2,\ldots\) , the point \(x^{*}=(\pi , 2\pi , 3\pi )\) is an equilibrium point of RINN (13),(14).

In the case \(\psi _{k,1}(u)=\psi _{k,3}(u)=\sin (k (u-\pi ))\), \(\psi _{k,2}(u) =k(u-2 \pi )\), \(k=1,2,\ldots\) , the point \(x^{*}=(\pi , 2\pi , 3\pi )\) is an equilibrium point of RINN (13),(14).

In the case \(\psi _{k,1}(u)=\psi _{k,3}(u)=\cos (k (u-\pi ))\), \(\psi _{k,2}(u) = \sin (k u)\), \(k=1,2,\ldots\) , the point \(x^{*}=(\pi , 2\pi , 3\pi )\) is not an equilibrium point of RINN (13),(14). The model has no equilibrium point.
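These claims can be checked mechanically. In the sketch below (helper names are ours), note that for model (13) we have \(c_{i}(t)+\frac{1}{t^{0.2}\Gamma (0.8)}=\frac{1}{t}+5\), so condition (11) reduces to an elementary identity at \(x^{*}=(\pi ,2\pi ,3\pi )\):

```python
import math

PI = math.pi
XSTAR = (PI, 2 * PI, 3 * PI)

def residual_eq11(t, x):
    """Residual of equilibrium condition (11) for model (13).
    Since c_i(t) = 1/t + 5 - 1/(t^0.2 Gamma(0.8)), the bracket
    c_i(t) + 1/(t^0.2 Gamma(0.8)) collapses to 1/t + 5."""
    x1, x2, x3 = x
    c = 1 / t + 5
    return (
        -c * x1 - 0.1 * math.sin(t) * math.sin(x1)
        + 0.4 * math.sin(x2) + 0.3 * math.sin(x3) + c * PI,
        -c * x2 - t**2 / (5 * t**2 + 1) * math.sin(x1) * PI
        + 0.3 * math.sin(x2) + t / (5 * t + 1) * math.sin(x3) + 2 * c * PI,
        -c * x3 + t / (10 * t + 1) * math.sin(x1) * PI
        - 0.2 * math.cos(t) * math.sin(x2) - 0.1 * math.sin(t) * math.sin(x3)
        + 3 * c * PI,
    )

# (11) holds at x* for every t > 0 (up to rounding)
assert all(abs(g) < 1e-9 for t in (0.1, 1.0, 7.5, 100.0)
           for g in residual_eq11(t, XSTAR))

# (12): psi_{k,i}(x_i*) = 0 in the first two cases, but not in the third
for k in range(1, 6):
    assert abs(math.sin(k * (XSTAR[0] - PI))) < 1e-9   # psi_{k,1}, psi_{k,3}
    assert abs(math.sin(k * XSTAR[1])) < 1e-9          # psi_{k,2}, first case
    assert abs(k * (XSTAR[1] - 2 * PI)) < 1e-9         # psi_{k,2}, second case
    assert abs(math.cos(k * (XSTAR[0] - PI))) > 0.5    # third case: cos(0) = 1
```

The last assertion illustrates why the third choice of impulsive functions destroys the equilibrium: \(\psi _{k,1}(\pi )=\cos (0)=1\neq 0\), so (12) fails.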

We assume the following:

Assumption A1

Let RINN (9) have an equilibrium point \(x^{*}\in \mathbb{R}^{n}\).

If Assumption A1 is satisfied, then we can shift the equilibrium point \(x^{*}\) of system (9) to the origin. The transformation \(y(t) = x(t)- x^{*}\) is used to put system (9) in the following form:

$$ \begin{aligned} &{} _{0}^{RL}D^{q}_{t}y_{i}(t)=-c_{i}(t)y_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)F_{j} \bigl(y_{j}(t) \bigr) \\ &\quad \text{for } \xi _{k} < t< \xi _{k+1}, k=0,1,\ldots, i=1,2,\ldots, n, \\ &{}_{\xi _{k}}I^{1-q}_{t}y_{i}(t)\big| _{t=\xi _{k}}=\Phi _{k,i} \bigl( y_{i}( \xi _{k} ) \bigr) \quad \text{for } k=1,2,\ldots, \\ &{}_{0}I^{1-q}_{t}y_{i}(t)\big| _{t=0}=y_{i}^{0} , \end{aligned} $$
(15)

where \(F_{j}(u)=f_{j}(u+x_{j}^{*})-f_{j}(x_{j}^{*})\), \(j=1,2,\ldots,n\), and \(\Phi _{k,i}( u )=\psi _{k,i}(u+x_{i}^{*} )-\psi _{k,i}(x_{i}^{*} )= \psi _{k,i}(u+x_{i}^{*} )\), \(i=1,2,\ldots,n\), \(k=1,2,\ldots\) , \(y_{i}^{0}=x_{i}^{0}\).

Remark 8

RINN (15) has a zero external bias vector, zero is a fixed point of both the activation functions \(F_{j}(u)\), \(j=1,2,\ldots,n\), and the impulsive functions \(\Phi _{k,i}( u)\), \(i=1,2,\ldots,n\), \(k=1,2,\ldots\) , and according to Remark 4 the zero vector is an equilibrium of RINN (15).

Therefore, if we know the equilibrium point \(x^{*}\in \mathbb{R}^{n}\) of (9), then we can construct a model with a zero equilibrium point. Conversely, however, if the point \(y^{*}=0\) is an equilibrium of RINN (15), we are not able to recover the equilibrium point \(x^{*}\in \mathbb{R}^{n}\) of (9).

p-moment Mittag-Leffler stability in time for RL fractional model with random impulses

We define the mean-square Mittag-Leffler stability in time of the equilibrium point of RINN (9). This type of stability is deeply connected with the application of Mittag-Leffler functions with one parameter. Also, the presence of the RL fractional derivative and its singularity at the initial time leads to excluding this point from the interval of stability. This definition is similar to the definition of p-moment Mittag-Leffler stability in time, introduced for RL fractional delay differential equations by Agarwal et al. [5].

Definition 4

The equilibrium point \(x^{*}\) of RINN (9) is said to be mean-square Mittag-Leffler stable in time if, for any \(\epsilon >0\) and any initial value \(x_{0}\in {\mathbb{R}}^{n}\), there exists a constant \(\alpha >0 \) such that

$$ E \bigl[ \bigl\Vert x \bigl(t;x_{0},\{ \tau _{k}\} \bigr)-x^{*} \bigr\Vert ^{2} \bigr] < \alpha \bigl\Vert x_{0}-x^{*} \bigr\Vert ^{2} E_{q} \bigl(-t^{q} \bigr) \quad \text{for all } t > \epsilon , $$

where \(x(t; x_{0},\{ \tau _{k}\}) \) is the solution of RINN (9), \(E[\cdot ]\) is the expected value, and \(E_{q}(z)=\sum_{k=0}^{\infty }\frac{z^{k}}{\Gamma (qk+1)}\) is the Mittag-Leffler function with one parameter.
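The decay factor \(E_{q}(-t^{q})\) in this definition is easy to tabulate for the moderate arguments that appear in the bounds below; a direct series summation (our own helper, adequate only for moderate \(|z|\)) is:

```python
import math

def mittag_leffler(q, z, tol=1e-15, max_terms=300):
    """One-parameter Mittag-Leffler function E_q(z) = sum z^k / Gamma(qk+1)
    by direct summation.  Fine for moderate |z|; large negative arguments
    need the asymptotic expansion instead."""
    total = 0.0
    for k in range(max_terms):
        term = z ** k / math.gamma(q * k + 1)
        total += term
        if k > 0 and abs(term) < tol:
            break
    return total

# sanity checks: E_1(z) = e^z and E_{1/2}(-x) = e^{x^2} erfc(x) for x > 0
assert abs(mittag_leffler(1.0, 1.0) - math.e) < 1e-12
assert abs(mittag_leffler(0.5, -0.3) - math.exp(0.09) * math.erfc(0.3)) < 1e-10
```

Terminating on a small term keeps the loop short and avoids overflowing the gamma function for larger orders.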

Definition 5

The equilibrium point \(x^{*}\) of RINN (9) is said to be eventually mean-square Mittag-Leffler stable if there exists a number \(T>0\) such that, for any \(\epsilon >0\) and any initial value \(x_{0}\in {\mathbb{R}}^{n}\), there exists a constant \(\alpha >0 \) such that

$$ E \bigl[ \bigl\Vert x \bigl(t;x_{0},\{ \tau _{k}\} \bigr)-x^{*} \bigr\Vert ^{2} \bigr] < \alpha \bigl\Vert x_{0}-x^{*} \bigr\Vert ^{2} E_{q} \bigl(-t^{q} \bigr) \quad \text{for all } t > \max \{T,\epsilon \}. $$

Remark 9

The types of stability defined above are mainly characterized by the corresponding inequalities, giving bounds that the solutions satisfy outside a sufficiently small neighborhood of the initial time. Note that this differs from the case of the Caputo fractional derivative and ordinary derivatives, because of the type of the initial condition, which is deeply connected with the RL fractional derivative.

In the study of the stability properties of RINN (9) we use Lyapunov functions \(V(t,x):[0,\infty )\times \mathbb{R}^{n}\rightarrow \mathbb{R}_{+} \) from the class \(\Lambda ([0,\infty ),\mathbb{R}^{n}) \), i.e., functions continuous on \([0,\infty )\times \mathbb{R}^{n}\) and Lipschitzian with respect to their second argument.

We use the Dini fractional derivative of a Lyapunov function \(V(t,x)\), and we define it similarly to [2].

Definition 6

Let \(V\in \Lambda ([0,\infty ),\mathbb{R}^{n})\), \(T\leq \infty \). The fractional Dini derivatives along trajectories of solutions of RINN (9) are defined as follows:

$$\begin{aligned} \begin{aligned} D_{\text{(9)}}^{q}V(t,x)&= \limsup_{h\rightarrow 0^{+}} { \frac{1}{h^{q}}} \Biggl\{ V(t,x) \\ &\quad {} -\sum_{r=1}^{[\frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} V \Biggl(t-rh,x_{1}-h^{q} \Biggl(-c_{1}(t)x_{1}+\sum_{j=1}^{n} a_{1j}(t)f_{j}(x_{j})+I_{1}(t) \Biggr), \\ &\quad x_{2}-h^{q} \Biggl(-c_{2}(t)x_{2}+ \sum_{j=1}^{n} a_{2j}(t)f_{j}(x_{j})+I_{2}(t) \Biggr),\ldots, \\ &\quad x_{n}-h^{q} \Biggl(-c_{n}(t)x_{n}+ \sum_{j=1}^{n} a_{nj}(t)f_{j}(x_{j})+I_{n}(t) \Biggr) \Biggr) \Biggr\} , \quad t\geq 0, x \in \mathbb{R}^{n}, \end{aligned} \end{aligned}$$
(16)

where \(x=(x_{1},x_{2},\ldots,x_{n})\).

Example 2

Let \(V(t,x_{1},x_{2},\ldots,x_{n})=m(t)\sum_{i=1}^{n}x_{i}^{2}\), where \(m\in C^{1}(\mathbb{R}_{+},\mathbb{R}_{+})\). Use (16) to obtain the fractional Dini derivative of V along the trajectories of solutions of RINN (15):

$$\begin{aligned} &D_{\text{(15)}}^{q}V(t,x_{1},x_{2}, \ldots,x_{n}) \\ &\quad =\limsup_{h\rightarrow 0^{+}} {\frac{1}{h^{q}}} \Biggl\{ m(t)\sum_{i=1}^{n}x_{i}^{2} \\ &\quad\quad {} -\sum_{r=1}^{[\frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} m(t-rh)\sum_{i=1}^{n} \Biggl(x_{i}-h^{q} \Biggl(-c_{i}(t)x_{i}+ \sum_{j=1}^{n} a_{ij}(t)F_{j}(x_{j}) \Biggr) \Biggr)^{2} \Biggr\} \\ &\quad =\sum_{i=1}^{n}x_{i}^{2} \limsup_{h\rightarrow 0^{+}} { \frac{1}{h^{q}}}\sum_{r=0}^{[\frac{t}{h}]}(-1)^{r} {}_{q}C_{r} m(t-rh) \\ &\quad\quad {} +\limsup_{h\rightarrow 0^{+}}\sum_{i=1}^{n} (-2x_{i}) \Biggl(-c_{i}(t)x_{i}+ \sum_{j=1}^{n} a_{ij}(t)F_{j}(x_{j}) \Biggr)\sum_{r=1}^{[\frac{t}{h}]}(-1)^{r} {}_{q}C_{r} m(t-rh) \\ &\quad\quad {} +\limsup_{h\rightarrow 0^{+}} \sum_{i=1}^{n}h^{q} \Biggl(-c_{i}(t)x_{i}+ \sum_{j=1}^{n} a_{ij}(t)F_{j}(x_{j}) \Biggr)^{2}\sum_{r=1}^{[ \frac{t}{h}]}(-1)^{r} {}_{q}C_{r} m(t-rh) \\ &\quad ={}_{0}^{RL}D^{q}_{t} \bigl(m(t) \bigr)\sum_{i=1}^{n}x_{i}^{2} -2m(t)\sum_{i=1}^{n}c_{i}(t)x_{i}^{2}+2m(t) \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}(t)x_{i}F_{j}(x_{j}). \end{aligned}$$

Note that the fractional Dini derivative depends significantly not only on the order q of the fractional differential equation but also on the initial time (0 in our case).

We will introduce the following assumptions:

Assumption A2

The fractional order \(q\in (0,1)\) of the RL fractional derivative is such that, for any \(\epsilon >0\), the equation \(\frac{1}{t^{1-q}}=\frac{1}{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}}E_{q}(-t^{q})\) has only one solution for \(t>0\) (note that \(t=\epsilon \) is always a solution), where \(E_{q}(z)\) is the Mittag-Leffler function with one parameter.

Assumption A3

The neuron activation functions are Lipschitz, i.e., there exist positive numbers \(L_{i}>0 \), \(i=1,2,\ldots,n\), such that \(\vert f_{i}(u)-f_{i}(v) \vert \leq L_{i} \vert u-v \vert \), \(i=1,2,\ldots,n\) for \(u,v\in \mathbb{R}\).

Assumption A4

There exist positive numbers \(M_{i,j}\), \(i,j=1,2,\ldots,n\), such that \(\vert a_{i,j}(t) \vert \leq M_{i,j}\) for \(t\geq 0\).

Assumption A5

There exist numbers \(B_{i}>0\), \(i=1,2,\ldots,n\), such that the inequalities \(c_{i}(t)\geq B_{i}>0\), \(t\geq 0\) hold.

Assumption A6

There exist positive constants \(A<B\) and a positive bounded RL differentiable function \(m:\mathbb{R}_{+}\to [A,B]\) with \(\lim_{t\to 0+}[ t^{q-1}m(t)]=\alpha <\infty \) such that the inequality

$$ 2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}-\sum_{j=1}^{n} \Bigl( \max_{i\in \overline{1,n}}M_{ij} L_{j}+\max _{i\in \overline{1,n}} M_{ji} L_{i} \Bigr)>0,\quad t\geq 0,$$
(17)

holds.

Assumption A7

There exist positive constants T, B and a bounded RL differentiable function \(m:\mathbb{R}_{+}\to [0,B]\) with \(m(t)\neq 0\) for \(t>0\) and \(\lim_{t\to 0+}[ t^{q-1}m(t)]=\alpha <\infty \) such that the inequality

$$ 2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}-\sum_{j=1}^{n} \Bigl( \max_{i\in \overline{1,n}}M_{ij} L_{j}+\max _{i\in \overline{1,n}} M_{ji} L_{i} \Bigr)>0,\quad t> T,$$
(18)

holds.

Remark 10

If Assumption A3 is fulfilled, then the function F in RINN (15) satisfies \(\vert F_{j}(u) \vert \leq L_{j} \vert u \vert \), \(j=1,2,\ldots,n\), for any \(u\in \mathbb{R}\).

Remark 11

The number \(q=0.2\), for example, satisfies condition A2 (see Fig. 1 and Fig. 2 for different values of ϵ), but the number \(q=0.8\) does not satisfy Assumption A2 (see Fig. 3).

Figure 1
figure1

Graphs of the functions \(\frac{1}{t^{1-q}}\) and \(\frac{1}{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}}E_{q}(-t^{q})\) for \(q=0.2\) and \(\epsilon =0.5\)

Figure 2

Graphs of the functions \(\frac{1}{t^{1-q}}\) and \(\frac{1}{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}}E_{q}(-t^{q})\) for \(q=0.2\) and \(\epsilon =0.1\)

Figure 3

Graphs of the functions \(\frac{1}{t^{1-q}}\) and \(\frac{1}{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}}E_{q}(-t^{q})\) for \(q=0.8\) and \(\epsilon =0.1\)
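Remark 11 can also be probed numerically. The sketch below is an illustration only: it truncates the power series \(E_{q}(z)=\sum_{k=0}^{\infty }z^{k}/\Gamma (qk+1)\) and compares the two functions graphed in Figs. 1–3, assuming Assumption A2 amounts to the bound \(\frac{1}{t^{1-q}}\leq \frac{1}{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}}E_{q}(-t^{q})\) for \(t\geq \epsilon \); the function names are ours.

```python
import math

def mittag_leffler(q, z, terms=150):
    # Truncated power series E_q(z) = sum_k z^k / Gamma(q*k + 1).
    # Adequate for the moderate |z| used here; large |z| would need
    # an asymptotic expansion instead.
    return sum(z ** k / math.gamma(q * k + 1) for k in range(terms))

def a2_gap(q, eps, t):
    # Right-hand side minus left-hand side of the comparison in Figs. 1-3.
    lhs = 1.0 / t ** (1.0 - q)
    rhs = mittag_leffler(q, -t ** q) / (mittag_leffler(q, -eps ** q) * eps ** (1.0 - q))
    return rhs - lhs

# Both sides coincide at t = eps, so the gap vanishes there.
print(abs(a2_gap(0.2, 0.5, 0.5)) < 1e-9)   # True
print(abs(a2_gap(0.8, 0.1, 0.1)) < 1e-9)   # True
# For q = 0.8 the gap becomes negative away from eps, i.e., the bound fails.
print(a2_gap(0.8, 0.1, 1.0) < 0)           # True
```

Sampling `a2_gap` at further points for \(q=0.2\) reproduces the ordering of the two curves shown in Figs. 1 and 2.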

Theorem 4.1

Let Assumptions A1–A6 be satisfied.

Then the equilibrium point \(x^{*}\) of RINN (9) is mean-square Mittag-Leffler stable in time.

Proof

According to Remark 8, we study the behavior of the zero equilibrium of RINN (15). Consider the Lyapunov function \(V(t,x)=m(t)\sum_{i=1}^{n} x_{i}^{2}\), \(x=(x_{1},x_{2},\ldots,x_{n})\), where the function \(m(t)\) is defined in Assumption A6.

Choose a positive number C such that

$$ 2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}-\sum _{j=1}^{n} \Bigl( \max_{i\in \overline{1,n}}M_{ij} L_{j}+\max_{i\in \overline{1,n}} M_{ji} L_{i} \Bigr)\geq C>0,\quad t\geq 0. $$

According to Example 2, Remark 10, and Assumption A6, we have, for \(t\geq 0\),

$$ \begin{aligned} & D_{\text{(15)}}^{q}V(t,x) \\ &\quad ={}_{0}^{RL}D^{q}_{t} \bigl(m(t) \bigr) \sum_{i=1}^{n}x_{i}^{2} -2m(t) \sum_{i=1}^{n}c_{i}(t)x_{i}^{2}+2m(t) \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}(t)x_{i}F_{j}(x_{j}) \\ &\quad \leq {}_{0}^{RL}D^{q}_{t} \bigl(m(t) \bigr)\sum_{i=1}^{n}x_{i}^{2} -2m(t) \sum_{i=1}^{n}B_{i}x_{i}^{2}+2m(t) \sum_{i=1}^{n}\sum_{j=1}^{n} M_{ij} \vert x_{i} \vert L_{j} \vert x_{j} \vert \\ &\quad \leq - \biggl(2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)} \biggr)V(t,x)+m(t)\sum_{i=1}^{n} x_{i}^{2}\sum_{j=1}^{n} M_{ij} L_{j} \\ &\quad \quad {} +m(t)\sum_{i=1}^{n} \sum_{j=1}^{n} M_{ij} L_{j} x_{j}^{2} \\ &\quad \leq - \biggl(2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)} \biggr)V(t,x) \\ &\quad \quad {} +V(t,x) \Biggl(\sum_{j=1}^{n} \max_{i\in \overline{1,n}}M_{ij} L_{j}+\sum_{j=1}^{n} \max_{i\in \overline{1,n}} M_{ji} L_{i} \Biggr) \\ &\quad \leq - \Biggl(2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}-\sum_{j=1}^{n} \Bigl( \max_{i\in \overline{1,n}}M_{ij} L_{j}+\max_{i\in \overline{1,n}} M_{ji} L_{i} \Bigr) \Biggr)V(t,x) \\ &\quad \leq -C V(t,x), \end{aligned} $$
(19)

where \(\overline{1,n}=\{1,2,\ldots,n\}\).

Let \(x_{0} \in {\mathbb{R}}^{n}\) be an arbitrary initial value and the stochastic process \(x(t;x_{0},\{ \tau _{k}\}) \) be a solution of the initial value problem for RINN (9). Then the stochastic process \(y(t;x_{0}-x^{*},\{ \tau _{k}\}) \) is a solution of the initial value problem for RINN (15).

Let \(\epsilon >0\) be an arbitrary number and \(t_{k}\) be arbitrary values of the random variables \(\tau _{k}\), \(k=1,2,\ldots \). Then \(T_{k}=\sum_{i=1}^{k}t_{i}\), \(k=1,2,\ldots \), are values of the random variables \(\xi _{k}\). Thus the corresponding function \(y(t)=y(t;x_{0}-x^{*},\{T_{k}\})\) is a sample path solution of the IVP for RINN (15), i.e., it is a solution of the IVP for the INN with fixed points of impulses

$$ \begin{aligned} & {}_{0}^{RL}D^{q}_{t}y_{i}(t)=-c_{i}(t)y_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)F_{j} \bigl(y_{j}(t) \bigr),\quad t>0, t\neq T_{k}, i=1,2,\ldots, n, \\ &{}_{T_{k}}I^{1-q}_{t}y_{i}(t)\big| _{t=T_{k}}=\Phi _{k,i} \bigl( y_{i}(T_{k} ) \bigr) \quad \text{for } k=1,2,\ldots, \\ &{}_{0}I^{1-q}_{t}y_{i}(t)\big| _{t=0}=x_{i}^{0}-x_{i}^{*}. \end{aligned} $$
(20)

Define the function \(v(t)=V(t, y(t))=m(t)\sum_{i=1}^{n}y_{i}^{2}(t)\). Then, for \(t\in (T_{k},T_{k+1}]\), \(k=0,1,2,\ldots \) (here \(T_{0}=0\)), from the existence of the RL derivative of \(y(t;x_{0}-x^{*},\{ \tau _{k}\})\) on the whole interval \(t\geq 0\) and the link between the RL and Grünwald–Letnikov derivatives (see [16], p. 76), we obtain

$$ \begin{aligned} -c_{i}(t)y_{i}(t)+\sum _{j=1}^{n} a_{ij}(t)F_{j} \bigl(y_{j}(t) \bigr) &={}_{0}^{RL}D^{q}_{t} y_{i}(t) ={}_{0}^{GL}D^{q}_{+} y_{i}(t) \\ &=\limsup_{h\to 0+}\frac{1}{h^{q}} \Biggl[y_{i}(t)- \sum_{p=1}^{[ \frac{t}{h}]} (-1)^{p+1}{}_{q}C_{p} y_{i}(t-ph) \Biggr] \end{aligned} $$

or

$$ y_{i}(t)-h^{q} \Biggl(-c_{i}(t)y_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)F_{j} \bigl(y_{j}(t) \bigr) \Biggr)=\sum_{p=1}^{[\frac{t}{h}]}(-1)^{p+1}{}_{q}C_{p} y_{i}(t-ph)+ \Lambda _{i} \bigl(h^{q} \bigr)$$
(21)

with \(\frac{\Lambda _{i} (h^{q})}{h^{q}}\rightarrow 0\) as \(h\rightarrow 0\). Thus, from the definition of the function \(v(t)\) and Eq. (21), we get

$$ \begin{aligned} &v(t)- \sum _{r=1}^{[\frac{t}{h}]}(-1)^{r+1}{}_{q}C_{r} v(t-rh) \\ &\quad =m(t)\sum_{i=1}^{n}y_{i}^{2}(t)- \sum_{r=1}^{[\frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} m(t-rh)\sum_{i=1}^{n}y_{i}^{2}(t-rh) \\ &\quad =\mathcal{A}(t)+\sum_{r=1}^{[\frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} m(t-rh) \\ &\quad \quad {} \times \sum_{i=1}^{n} \Biggl\{ \Biggl(y_{i}(t)-h^{q} \Biggl(-c_{i}(t)y_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)F_{j} \bigl(y_{j}(t) \bigr) \Biggr) \Biggr)^{2}-y_{i}^{2}(t-rh) \Biggr\} \\ &\quad =\mathcal{A}(t) +\sum_{r=1}^{[\frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} m(t-rh) \\ &\quad\quad {} \times \sum_{i=1}^{n} \Biggl\{ \Biggl(\sum_{p=1}^{[\frac{t}{h}]}(-1)^{p+1} {}_{q}C_{p} y_{i}(t-ph)+\Lambda _{i} \bigl(h^{q} \bigr) \Biggr)^{2}-y_{i}^{2}(t-rh) \Biggr\} \\ &\quad \leq \mathcal{A}(t) +C\sum_{r=1}^{[\frac{t}{h}]} {}_{q}C_{r} \sum_{i=1}^{n} \Lambda _{i}^{2} \bigl(h^{q} \bigr) \\ &\quad\quad {} +2A\sum_{r=1}^{[\frac{t}{h}]} {}_{q}C_{r} \sum_{i=1}^{n} \Lambda _{i} \bigl(h^{q} \bigr) \Biggl\vert \sum _{p=1}^{[\frac{t}{h}]}(-1)^{r+1}{}_{q}C_{p} y_{i}(t-ph) \Biggr\vert \\ &\quad \quad {} +A \Biggl(\sum_{r=0}^{[\frac{t}{h}]}(-1)^{r} {}_{q}C_{r} \Biggr) \Biggl(\sum_{i=1}^{n} \Biggl\{ \sum_{p=0}^{[\frac{t}{h}]}(-1)^{p}{}_{q}C_{p} y_{i}^{2}(t-ph) \Biggr\} \Biggr), \end{aligned} $$
(22)

where \(\mathcal{A}(t)=m(t)\sum_{i=1}^{n}y_{i}^{2}(t) -\sum_{r=1}^{[ \frac{t}{h}]}(-1)^{r+1} {}_{q}C_{r} m(t-rh)\sum_{i=1}^{n} (y_{i}(t)-h^{q} (-c_{i}(t)y_{i}(t)+ \sum_{j=1}^{n} a_{ij}(t)F_{j}(y_{j}(t)) ) )^{2}\).

Dividing both sides of inequality (22) by \(h^{q}\), taking the limit as \(h\rightarrow 0^{+}\), using (19) and \(\sum_{r=0}^{\infty }{}_{q}C_{r} z^{r}=(1+z)^{q}\) if \(\vert z \vert \leq 1\), we have

$$ \begin{aligned} & {}_{0}^{GL}D_{+}^{q}v(t) \leq D_{\text{(15)}}^{q}V \bigl(t,y(t) \bigr) \leq -C V \bigl(t,y(t) \bigr)=-C v(t), \quad t\in (T_{k},T_{k+1}]. \end{aligned} $$
(23)

From (20) and Lemma 2.1, we have

$$ \lim_{t\to 0} \bigl[t^{1-q}y_{i}(t) \bigr]= \frac{x_{i}^{0}-x_{i}^{*}}{\Gamma (q)} $$

and

$$ \lim_{t\to T_{k}} \bigl[(t-T_{k})^{1-q}y_{i}(t) \bigr]= \frac{\psi _{k,i}(y_{i}(T_{k})+x_{i}^{*} )}{\Gamma (q)}. $$

Then, applying Assumption A6, we obtain

$$ \begin{aligned}\lim_{t\to 0} t^{1-q} v(t)&=\lim_{t\to 0} \Biggl[ t^{1-q}m(t) \sum_{i=1}^{n}y_{i}^{2}(t) \Biggr]=\lim_{t\to 0}\frac{m(t)}{t^{1-q}}\sum _{i=1}^{n} \Bigl(\lim_{t\to 0} \bigl[ t^{1-q}y_{i}(t) \bigr]^{2} \Bigr) \\ &=\alpha \sum_{i=1}^{n} \bigl(x_{i}^{0}-x_{i}^{*} \bigr)^{2}=\alpha \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2} \end{aligned} $$
(24)

and

$$ \begin{aligned} \lim_{t\to T_{k}}(t-T_{k})^{1-q} v(t)&=\lim_{t\to T_{k}} \Biggl[(t-T_{k})^{1-q} m(t) \sum_{i=1}^{n} y_{i}^{2}(t) \Biggr] \\ &=m(T_{k})\sum_{i=1}^{n} y_{i}^{2}(T_{k})\lim_{t\to T_{k}} \bigl[(t-T_{k})^{1-q} \bigr] =0. \end{aligned} $$
(25)

Therefore, from (23), (24), (25) it follows that the function \(v(t)\) satisfies the linear impulsive fractional differential inequalities with fixed points of impulses

$$ \begin{aligned} & {}_{0}^{RL}D_{t}^{q}v(t) \leq -C v(t)\quad \text{for } t>0, t\neq T_{k}, \\ &{}_{T_{k}}I^{1-q}_{t} v(t)\big| _{t=T_{k}}=0, \quad k=1,2,\ldots, \\ & {}_{0}I^{1-q}_{t} v(t)\big| _{t=0}= u_{0}, \end{aligned} $$
(26)

where \(u_{0}=\alpha \Vert x^{0}-x^{*} \Vert ^{2}\).

Consider (7) with \(a=-C\). According to Lemma 3.1, it has an exact solution \(u\in PC_{1-q}([0,\infty ),\mathbb{R})\) and \(v(t)\leq u(t)\). The function \(u(t)\) is a sample path solution of (10), whose solution \(u(t; u_{0},\{\tau _{k}\})\), according to Lemma 3.2, satisfies the inequality

$$ \begin{aligned} & E \bigl( \bigl\vert u \bigl(t; u_{0},\{\tau _{k}\} \bigr) \bigr\vert \bigr) \\&\quad \leq \vert u_{0} \vert \frac{\lambda }{t^{1-q}\Gamma (q)}\frac{\pi Csc(\pi q)}{\Gamma (1-q)} \biggl(1+ \frac{\pi Csc(\pi q) }{\Gamma (q)\Gamma (1-q)} \biggr) \\ &\quad = \alpha \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2}\frac{\lambda }{t^{1-q}\Gamma (q)} \frac{\pi Csc(\pi q)}{\Gamma (1-q)} \biggl(1+ \frac{\pi Csc(\pi q) }{\Gamma (q)\Gamma (1-q)} \biggr), \quad t\geq \epsilon . \end{aligned} $$
(27)

According to Assumption A2 and (27), we get

$$ \begin{aligned} & E \bigl( \bigl\vert u \bigl(t; u_{0},\{\tau _{k}\} \bigr) \bigr\vert \bigr)\leq \xi _{\lambda ,q, \epsilon } \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2} E_{q} \bigl(-t^{q} \bigr), \quad t\geq \epsilon , \end{aligned} $$
(28)

with \(\xi _{\lambda ,q,\epsilon }= \frac{\lambda \alpha }{E_{q}(-\epsilon ^{q})\epsilon ^{1-q}\Gamma (q)} \frac{\pi Csc(\pi q)}{\Gamma (1-q)} (1+ \frac{\pi Csc(\pi q) }{\Gamma (q)\Gamma (1-q)} )\).

Applying

$$ \begin{aligned} E \bigl( \bigl\vert v \bigl(t; u_{0},\{\tau _{k}\} \bigr) \bigr\vert \bigr)&=E \Biggl(m(t) \sum_{i=1}^{n} y_{i} \bigl(t; x^{0}-x^{*},\{\tau _{k}\} \bigr)^{2} \Biggr) \\ &=m(t)E \Biggl(\sum_{i=1}^{n} y_{i} \bigl(t; x^{0}-x^{*},\{\tau _{k}\} \bigr)^{2} \Biggr)\\&=m(t)E \Biggl( \sum_{i=1}^{n} \bigl(x_{i} \bigl(t; x^{0},\{\tau _{k}\} \bigr)-x_{i}^{*} \bigr)^{2} \Biggr) \\ &=m(t)E \bigl( \bigl\Vert x \bigl(t; x^{0},\{\tau _{k}\} \bigr)-x^{*} \bigr\Vert ^{2} \bigr) \\ &\leq E \bigl( \bigl\vert u \bigl(t; u_{0},\{\tau _{k}\} \bigr) \bigr\vert \bigr), \end{aligned} $$
(29)

we get

$$ E \bigl( \bigl\Vert x \bigl(t; x^{0},\{\tau _{k}\} \bigr)-x^{*} \bigr\Vert ^{2} \bigr)\leq \frac{\xi _{\lambda ,q,\epsilon }}{A} \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2} E_{q} \bigl(-t^{q} \bigr) , \quad t\geq \epsilon , $$
(30)

Inequality (30) proves the mean-square Mittag-Leffler stability in time. □

In the case when inequality (17) is not satisfied for all \(t\geq 0\) but inequality (18) holds for sufficiently large t, we can prove the eventual mean-square Mittag-Leffler stability in time of the equilibrium.

Theorem 4.2

Let Assumptions A1–A5 and A7 be satisfied.

Then the equilibrium point \(x^{*}\) of RINN (9) is eventually mean-square Mittag-Leffler stable in time.

Proof

According to Assumption A7, there exist a function \(m(t)\) and a number T such that \(m(t)>0\) for \(t>T\). Choose a constant \(C>0\) such that

$$ 2\min_{i\in \overline{1,n}}B_{i}- \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}-\sum _{j=1}^{n} \Bigl( \max_{i\in \overline{1,n}}M_{ij} L_{j}+\max_{i\in \overline{1,n}} M_{ji} L_{i} \Bigr)\geq C,\quad t>T. $$

Similar to the proof of Theorem 4.1, we consider the Lyapunov function \(V(t,x)=m(t)\sum_{i=1}^{n} x_{i}^{2}\), \(x=(x_{1},x_{2},\ldots,x_{n})\) and prove inequality (19) for \(t>T\). Let \(x_{0} \in {\mathbb{R}}^{n}\) be an arbitrary initial value and the stochastic process \(x(t;x_{0},\{ \tau _{k}\}) \) be a solution of the initial value problem for RINN (9) and \(y(t;x_{0}-x^{*},\{ \tau _{k}\}) \) be a solution of (15).

Let \(\epsilon >0\) be an arbitrary number and \(t_{k} \) be arbitrary values of the random variables \(\tau _{k}\), \(k=1,2,\ldots\) . Then \(T_{k}=\sum_{i=1}^{k}t_{i}\), \(k =1,2,\ldots\) , are values of the random variables \(\xi _{k} \). Thus the corresponding function \(y(t)=y(t;x_{0}-x^{*},\{T_{k}\})\) is a solution of (20). Define the function \(v(t)=V(t, y(t))\). As in the proof of Theorem 4.1, we prove \({}_{0}^{GL}D_{+}^{q}v(t) \leq -C v(t)\) for \(t>T\), \(t\in (T_{k},T_{k+1}]\), \(k=0,1,2,\ldots\) .

Define the function \(\tilde{v}(t):[0,\infty )\to \mathbb{R}_{+}\) such that \(\tilde{v}(t)\equiv v(t)\), \(t>T\), and \(\tilde{v}(t)\) is a solution of (26) for \(t\in [0,T]\).

Then, as in the proof of Theorem 4.1, we obtain for \(t>\max \{\epsilon ,T\}\) the inequality \(E( \vert \tilde{v}(t; u_{0},\{\tau _{k}\}) \vert )=E( \vert v(t; u_{0},\{\tau _{k}\}) \vert ) \leq E( \vert u(t; u_{0},\{\tau _{k}\}) \vert )\), which proves the claim of Theorem 4.2. □

Example 3

Let \(\tau _{k}\in Exp(1)\), \(k=1,2,\ldots\) , and consider the model of RL fractional neural network with impulses at random times (13) with impulsive conditions

$$ \begin{aligned} &{}_{\xi _{k}}I^{1-q}_{t}x_{1}(t)\big| _{t=\xi _{k}}= \frac{1}{k}\sin \bigl(x_{1}(\xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots, \\ &{}_{\xi _{k}}I^{1-q}_{t}x_{2}(t)\big| _{t=\xi _{k}}=\frac{1}{k}\cos \bigl(x_{2}( \xi _{k})-0.5\pi \bigr)\quad \text{for }k=1,2,\ldots, \\ &{}_{\xi _{k}}I^{1-q}_{t}x_{3}(t)\big| _{t=\xi _{k}}=\frac{1}{k}\sin \bigl(x_{3}( \xi _{k}) \bigr)\quad \text{for } k=1,2,\ldots \end{aligned} $$
(31)

and initial conditions

$$ {}_{0}I^{1-q}_{t}x_{i}(t)\big| _{t=0}=x_{i}^{0},\quad i=1,2,3. $$
(32)

According to Example 1, the point \((\pi ,2\pi ,3\pi )\) is an equilibrium point of RINN (13),(31), i.e., Assumption A1 is satisfied.
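As an illustrative sanity check (ours, using only the impulse functions in (31)): at the claimed equilibrium the impulsive displacements vanish, so the impulses do not move the equilibrium.

```python
import math

# At x* = (pi, 2*pi, 3*pi) each impulse function in (31) evaluates to zero
# for every k, up to floating-point rounding.
x_star = (math.pi, 2.0 * math.pi, 3.0 * math.pi)
for k in range(1, 6):
    jumps = (math.sin(x_star[0]) / k,
             math.cos(x_star[1] - 0.5 * math.pi) / k,
             math.sin(x_star[2]) / k)
    assert all(abs(j) < 1e-12 for j in jumps)
print("impulse displacements vanish at the equilibrium")
```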

According to Remark 11, Assumption A2 is fulfilled.

In this special case \(c_{i}(t)= (\frac{1}{t}-\frac{1}{t^{0.2}\Gamma (1-0.2)}+5 )>4.55\), \(f_{1}(u)=f_{2}(u)=f_{3}(u)=\sin (u)\), and the matrix of coefficients \(A(t)=\{a_{ij}(t)\}\) is given by

$$ A(t)= \begin{pmatrix} -0.1 \sin t & 0.4 & 0.3 \\ -\frac{t^{2}}{5t^{2}+1} & 0.3 & \frac{t}{5t+1} \\ \frac{t}{10t+1} & -0.2\cos t & -0.1 \sin t \end{pmatrix} . $$

Assumption A3 is satisfied with \(L_{i}=1\), \(i=1,2,3\).

Also Assumption A4 is satisfied with the matrix

$$ M= \begin{pmatrix} 0.1 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.2 \\ 0.1 & 0.2 & 0.1 \end{pmatrix} . $$

Assumption A5 is satisfied with \(B_{i}=4.55\), \(i=1,2,3\).
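The bounds in Assumptions A4 and A5 for this example can be confirmed numerically. The sketch below (an illustration with an arbitrarily chosen grid) samples \(c_{i}(t)\) and the time-varying entries of \(A(t)\):

```python
import math

def c(t):
    # c_i(t) = 1/t - 1/(t^0.2 * Gamma(0.8)) + 5 from Example 3.
    return 1.0 / t - 1.0 / (t ** 0.2 * math.gamma(0.8)) + 5.0

# Logarithmic grid on [1e-3, 1e5]; the minimum of c is attained near t = 9.
ts = [10.0 ** (k / 100.0) for k in range(-300, 501)]
c_min = min(c(t) for t in ts)
print(c_min > 4.55)  # True: B_i = 4.55 is a valid lower bound

# Time-varying entries of A(t) never exceed the corresponding entries of M.
assert all(t * t / (5.0 * t * t + 1.0) <= 0.2 for t in ts)  # |a_21(t)| <= 0.2
assert all(t / (5.0 * t + 1.0) <= 0.2 for t in ts)          # |a_23(t)| <= 0.2
assert all(t / (10.0 * t + 1.0) <= 0.1 for t in ts)         # |a_31(t)| <= 0.1
```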

Define the function \(m(t):\mathbb{R}_{+}\to [0,1)\) by \(m(t)=\frac{t^{0.8} }{t^{0.8} + 1}\). It satisfies \(\lim_{t\to 0}[t^{0.2-1}m(t)]= \alpha = 1\) (see Fig. 4).

Figure 4

Example 3. Graph of the fractional derivative \({}_{0}^{RL}D^{0.2} (m(t) )\)

Then there exists \(T=0.00006>0\) such that (see Fig. 5)

$$ \begin{aligned} &2\min_{i=\overline{1,3}}B_{i}- \sum_{j=1}^{3} \Bigl( \max _{i=\overline{1,3}}M_{ij} L_{j}+\max _{i=\overline{1,3}} M_{ji} L_{i} \Bigr) \\ &\quad =2(4.55)-1.8=9.1-1.8=7.3> \frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)},\quad t>T. \end{aligned} $$
(33)

Therefore, since inequality (17) is not satisfied for all \(t\geq 0\), Assumption A6 does not hold, but Assumption A7 is fulfilled.
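The constant \(7.3\) in (33) and the value \(\alpha =1\) can be reproduced directly; the snippet below is an illustrative check of the arithmetic, not part of the argument.

```python
# Entries of M, and L_i = 1, B_i = 4.55, from Example 3.
M = [[0.1, 0.4, 0.3],
     [0.2, 0.3, 0.2],
     [0.1, 0.2, 0.1]]
L = [1.0, 1.0, 1.0]
B = [4.55, 4.55, 4.55]

# 2 * min_i B_i - sum_j (max_i M_ij L_j + max_i M_ji L_i)
gap = 2.0 * min(B) - sum(
    max(M[i][j] * L[j] for i in range(3)) + max(M[j][i] * L[i] for i in range(3))
    for j in range(3)
)
print(round(gap, 10))  # 7.3 up to rounding

# t^{q-1} m(t) = 1 / (t^{0.8} + 1) -> 1 as t -> 0, i.e., alpha = 1.
alpha_approx = 1e-6 ** (0.2 - 1.0) * (1e-6 ** 0.8 / (1e-6 ** 0.8 + 1.0))
print(abs(alpha_approx - 1.0) < 1e-3)  # True
```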

Figure 5

Example 3. Graphs of bound 7.3 and the function \(\frac{{}_{0}^{RL}D^{q}_{t} (m(t) )}{m(t)}\)

Thus, according to Theorem 4.2, the equilibrium of RINN (13) is eventually mean-square Mittag-Leffler stable in time. In this particular case

$$ \xi _{1,0.2,\epsilon }= \frac{1}{E_{0.2}(-\epsilon ^{0.2})\epsilon ^{0.8}\Gamma (0.2)} \frac{\pi Csc(\pi 0.2)}{\Gamma (0.8)} \biggl(1+ \frac{\pi Csc(\pi 0.2) }{\Gamma (0.2)\Gamma (0.8)} \biggr)= \frac{2}{E_{0.2}(-\epsilon ^{0.2})\epsilon ^{0.8}} $$

and, for any \(\epsilon >0\) and \(t\geq \max \{0.00006,\epsilon \}\), the inequality

$$ E \bigl( \bigl\Vert x \bigl(t; x^{0},\{\tau _{k}\} \bigr)-x^{*} \bigr\Vert ^{2} \bigr)\leq \frac{2}{E_{0.2}(-\epsilon ^{0.2})\epsilon ^{0.8}} \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2} E_{q} \bigl(-t^{q} \bigr) $$

holds.
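The displayed simplification of \(\xi _{1,0.2,\epsilon }\) rests on the reflection formula \(\Gamma (q)\Gamma (1-q)=\pi /\sin (\pi q)\), so that \(\pi Csc(\pi q)/(\Gamma (q)\Gamma (1-q))=1\) and the factor \(\pi Csc(\pi q)/\Gamma (1-q)\) collapses to \(\Gamma (q)\). A quick numerical confirmation (illustrative only):

```python
import math

q = 0.2
csc = 1.0 / math.sin(math.pi * q)   # Csc(pi*q)
refl = math.pi * csc                # equals Gamma(q) * Gamma(1 - q)
print(abs(refl - math.gamma(q) * math.gamma(1.0 - q)) < 1e-9)  # True

# The coefficient xi * E_q(-eps^q) * eps^{1-q} reduces to
# [pi*csc / (Gamma(q)Gamma(1-q))] * (1 + pi*csc / (Gamma(q)Gamma(1-q))) = 1 * 2 = 2.
factor = refl / (math.gamma(1.0 - q) * math.gamma(q)) * (
    1.0 + refl / (math.gamma(q) * math.gamma(1.0 - q))
)
print(abs(factor - 2.0) < 1e-9)  # True
```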

Conclusions

In this paper the RL fractional generalization of the first-order Hopfield neural network is studied in the case when impulses occur at random times. We study the case when the waiting time between two consecutive impulses is exponentially distributed. In connection with the application of the RL fractional derivative in the model, we define in an appropriate way both the initial condition and the impulsive conditions. We define mean-square Mittag-Leffler stability in time of the equilibrium and obtain some sufficient conditions for it.

In further work we hope to consider a number of directions:

  1. (i)

Considering both fractional models, i.e., with the Caputo as well as the RL fractional derivative, and generalizing the waiting time between two consecutive impulses to other distributions such as the Erlang or the log-normal distribution.

  2. (ii)

Generalizing the RL fractional model to include various types of delays.

Availability of data and materials

Not applicable.

Abbreviations

\(\tau _{k}\) and \(\xi _{k}\), \(k=1,2,\ldots\) :

random variables

\(E(\tau )\) :

the expected value of the random variable τ

\(Csc(u)\) :

\(\frac{1}{\sin (u)}\)

\(\Gamma (u)\) :

the gamma function

RL:

Riemann–Liouville

IVP:

initial value problem

INN:

RL fractional Hopfield’s graded response neural networks with impulses occurring at fixed initially given times

RINN:

RL fractional Hopfield’s graded response neural networks with impulses occurring at random times \(\xi _{k}\), \(k=1,2,\ldots\)

References

  1. Agarwal, R., Hristova, S., O’Regan, D.: Exponential stability for differential equations with random impulses at random times. Adv. Differ. Equ. 2013, 372 (2013)

  2. Agarwal, R., Hristova, S., O’Regan, D.: A survey of Lyapunov functions, stability and impulsive Caputo fractional differential equations. Fract. Calc. Appl. Anal. 19(2), 290–318 (2016)

  3. Agarwal, R.P., Hristova, S., O’Regan, D., Kopanov, P.: p-moment exponential stability of differential equations with random impulses and the Erlang distribution. Mem. Differ. Equ. Math. Phys. 70, 99–106 (2017)

  4. Agarwal, R.P., Hristova, S., O’Regan, D., Kopanov, P.: Impulsive differential equations with gamma distributed moments of impulses and p-moment exponential stability. Acta Math. Sci. 37(4), 985–997 (2017)

  5. Agarwal, R.P., Hristova, S., O’Regan, D., Kopanov, P.: p-moment Mittag-Leffler stability of Riemann–Liouville fractional differential equations with random impulses. Mathematics 8, 1379 (2020). https://doi.org/10.3390/math8081379

  6. Alofi, A., Cao, J., Elaiw, A., Al-Mazrooei, A.: Delay-dependent stability criterion of Caputo fractional neural networks with distributed delay. Discrete Dyn. Nat. Soc. 2014, Article ID 529358 (2014)

  7. Das, Sh.: Functional Fractional Calculus. Springer, Berlin (2011)

  8. Diethelm, K.: The Analysis of Fractional Differential Equations. Springer, Berlin (2010)

  9. Gopalsamy, K.: Stability of artificial neural networks with impulses. Appl. Math. Comput. 154, 783–813 (2004)

  10. Gu, Y., Wang, H., Yu, Y.: Stability and synchronization for Riemann–Liouville fractional-order time-delayed inertial neural networks. Neurocomputing 340, 270–280 (2019)

  11. Gu, Y., Wang, H., Yu, Y.: Synchronization for commensurate Riemann–Liouville fractional-order memristor-based neural networks with unknown parameters. J. Franklin Inst. (2020). https://doi.org/10.1016/j.jfranklin.2020.06.025

  12. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554–2558 (1982)

  13. Kaslik, E., Sivasundaram, S.: Dynamics of fractional-order neural networks. In: Proc. Int. Joint Conf. Neural Netw., San Jose, CA, USA, Jul./Aug., pp. 611–618 (2011)

  14. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)

  15. Li, J.D., Wu, Z.B., Huang, N.J.: Asymptotical stability of Riemann–Liouville fractional-order neutral-type delayed projective neural networks. Neural Process. Lett. (2019). https://doi.org/10.1007/s11063-019-10050-8

  16. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)

  17. Pratap, A., Alzabut, J., Dianavinnarasi, J., Cao, J., Rajchakit, G.: Finite-time Mittag-Leffler stability of fractional-order quaternion-valued memristive neural networks with impulses. Neural Process. Lett. 51, 1485–1526 (2020). https://doi.org/10.1007/s11063-019-10154-1

  18. Pu, Y.F., Yi, Z., Zhou, J.L.: Fractional Hopfield neural networks: fractional dynamic associative recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(10), 2319–2333 (2017)

  19. Rakkiyappan, R., Balasubramaiam, P., Cao, J.: Global exponential stability of neutral-type impulsive neural networks. Nonlinear Anal., Real World Appl. 11, 122–130 (2010)

  20. Rifhat, R., Muhammadhaji, A., Teng, Z.: Global Mittag-Leffler synchronization for impulsive fractional-order neural networks with delays. Int. J. Nonlinear Sci. Numer. Simul. 19(2), 205–213 (2018)

  21. Scardapane, S., Wang, D.: Randomness in neural networks: an overview. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 7, e1200 (2017)

  22. Song, X., Zhao, P., Xing, Z., Peng, J.: Global asymptotic stability of CNNs with impulses and multi-proportional delays. Math. Methods Appl. Sci. 39, 722–733 (2016)

  23. Tavares, C.A., Santos, T.M.R., Lemes, N.H.T., dos Santos, J.P.C., Ferreira, J.C., Braga, J.P.: Solving ill-posed problems faster using fractional-order Hopfield neural network. J. Comput. Appl. Math. 381, 112984 (2021)

  24. Wang, F., Yang, Y., Hu, M.: Asymptotic stability of delayed fractional-order neural networks with impulsive effects. Neurocomputing 154(22), 239–244 (2015)

  25. Wu, Z., Li, Ch.: Exponential stability analysis of delayed neural networks with impulsive time window. In: Advanced Computational Intelligence (ICACI), 2017 Ninth International Conference on, pp. 37–42 (2017)

  26. Yang, Z., Xu, D.: Stability analysis of delay neural networks with impulsive effects. IEEE Trans. Circuits Syst. II, Express Briefs 52(8), 517–521 (2015)

  27. Zhang, H., Ye, M., Ye, R., Cao, J.: Synchronization stability of Riemann–Liouville fractional delay-coupled complex neural networks. Physica A 508, 155–165 (2018)

  28. Zhang, H., Ye, R., Cao, J., Alsaedi, A.: Existence and globally asymptotic stability of equilibrium solution for fractional-order hybrid BAM neural networks with distributed delays and impulses. Complexity 2017, Article ID 6875874 (2017). https://doi.org/10.1155/2017/6875874

  29. Zhang, H., Ye, R., Cao, J., Alsaedi, A.: Synchronization control of Riemann–Liouville fractional competitive network systems with time-varying delay and different time scales. Int. J. Control. Autom. Syst. 16(3), 1404–1414 (2018). https://doi.org/10.1007/s12555-017-0371-0

  30. Zhang, H., Ye, R., Cao, J., Alsaedi, A., Li, X., Wang, Y.: Lyapunov functional approach to stability analysis of Riemann–Liouville fractional neural networks with time-varying delays. Asian J. Control 20(5), 1938–1951 (2018). https://doi.org/10.1002/asjc.1675

  31. Zhang, R., Qi, D., Wang, Y.: Dynamics analysis of fractional order three-dimensional Hopfield neural network. In: Proc. 6th Int. Conf. Natural Comput., Yantai, China, pp. 3037–3039 (2010)

  32. Zhang, X., Niu, P., Ma, Y., Wei, Y., Li, G.: Global Mittag-Leffler stability analysis of fractional-order impulsive neural networks with one-side Lipschitz condition. Neural Netw. 94, 67–75 (2017)

  33. Zhou, Q.: Global exponential stability of BAM neural networks with distributed delays and impulses. Nonlinear Anal., Real World Appl. 10, 144–153 (2009)

Acknowledgements

Not applicable.

Funding

The research is supported by the Bulgarian National Science Fund under Project KP-06-N32/7.

Author information

Affiliations

Authors

Contributions

All authors contributed equally to the writing of this paper. Furthermore, all authors carefully read and approved the final manuscript.

Corresponding author

Correspondence to S. Hristova.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Agarwal, R., Hristova, S., O’Regan, D. et al. Mean-square stability of Riemann–Liouville fractional Hopfield’s graded response neural networks with random impulses. Adv Differ Equ 2021, 98 (2021). https://doi.org/10.1186/s13662-021-03237-8


MSC

  • 34A37
  • 34D20
  • 34F99
  • 92B20

Keywords

  • Hopfield’s graded response neural network
  • Riemann–Liouville fractional derivative
  • Impulses at random times
  • Lyapunov functions
  • Mean-square Mittag-Leffler stability in time