
Theory and Modern Applications

Stability of delayed impulsive stochastic differential equations driven by a fractional Brownian motion with time-varying delay

Abstract

We study the stability of mild solutions of impulsive stochastic differential equations driven by a fractional Brownian motion with finite time-varying delay. The Hurst parameter H of the fractional Brownian motion belongs to \((\frac{1}{2},1)\). Using fractional powers of operators and semigroup theory, we obtain sufficient conditions for the stability of the mild solution of such an equation in two cases: the impulses depend only on the current states of the system, and the impulses depend not only on the current states but also on historical states of the system. We give two examples illustrating the theorems.

1 Introduction

Fractional Brownian motions (fBms) play a central role in the modeling and analysis of many complex phenomena in applications where the systems are subject to “rough” external forcing. An fBm with Hurst parameter \(H\in(0,1)\) is a zero-mean Gaussian process denoted by \(W^{H}=\{W^{H}(t),t\geq0\}\). If \(H\in(0,\frac{1}{2})\), then it is regarded as a short-memory process; if \(H=\frac{1}{2}\), then it reduces to the standard Brownian motion; if \(H\in(\frac{1}{2},1)\), then it is regarded as a long-memory process. An fBm is thus a generalization of Brownian motion, but its behavior differs significantly from that of the standard Brownian motion. In particular, it is neither a semimartingale nor a Markov process. It is characterized by stationary increments and a memory property, which make an fBm a potential candidate to model noise in biology and finance (see, e.g., [1–5]), in geophysics [6], in communication networks [7], in electricity markets [8], and so on.

In recent years, stochastic differential equations driven by fBms have attracted increasing interest because of their applications in a variety of fields (see [4, 9–19] and references therein). However, most of the existing literature focuses on the existence and uniqueness of mild solutions for stochastic differential equations driven by fBms (see, e.g., [9–15]); results on the stability of mild solutions for such equations are relatively few, and we found only a handful in the literature [4, 16–19]. In [4], the authors provided sufficient conditions to guarantee the exponential asymptotic behavior of solutions of general linear stochastic differential equations driven by fBms with time-varying delays. In [16], the authors provided conditions for the existence, uniqueness, and exponential asymptotic behavior of mild solutions to stochastic delay evolution equations perturbed by an fBm. In [17], the authors provided conditions ensuring the exponential decay to zero in mean square of solutions of neutral stochastic differential equations driven by fBms in a Hilbert space. However, in [4, 16, 17], impulses are not considered in the systems. In [18], the author gave asymptotic stability conditions for mild solutions of neutral stochastic differential equations driven by fBms with finite delays and nonlinear impulsive effects. In [19], the authors gave mean-square exponential stability conditions for mild solutions of neutral stochastic differential equations driven by fBms with infinite delays and impulses. However, the impulses in the stochastic differential equations in [18, 19] depend only on the current states of the systems and are supposed to be of the form \(I_{k}(x(t_{k}))\) at impulsive moments \(t_{k}\) (\(k=1,2,\ldots\)). In contrast, the delayed impulses we consider describe impulsive transients that depend not only on the current states but also on historical states of the system.
Delayed impulses arise in many practical problems, for example, in communication security systems based on impulsive synchronization. During the information transmission process, the sampling delay created by sampling the impulses at discrete instants causes the impulsive transients to depend on historical states. There are some results ([20–25]) on delayed impulsive differential equations, where the delays in the impulsive perturbations are fixed constants or vary in a finite interval. However, most of these stability studies concern deterministic equations ([20–25]), whereas stochastic differential equations are rarely considered ([26]). In [26], the stochastic term was supposed to be of the form \(g(x_{t},t)\,dw(t)\), where \(w(t)\) is a standard Brownian motion, not a fractional one.

In view of the above discussion, we investigate the stability of impulsive stochastic delay differential equations driven by fBms with Hurst parameter \(H\in(\frac{1}{2},1)\). Firstly, we assume that the impulse in such an equation depends on the current states and give mean-square exponential stability conditions. Secondly, we assume that the impulse in such an equation depends on the historical states and give mean-square asymptotic stability conditions.

The rest of this paper is organized as follows. In Section 2, we introduce some notation, concepts, and lemmas. In Section 3, we give mean-square exponential stability conditions for stochastic differential equations driven by fBms with impulses and time-varying delays, where the impulses only depend on the current states of the system. In Section 4, we study a delayed impulsive differential equation driven by an fBm with impulses and time-varying delay, where the impulses depend not only on the current states but also on the historical states of the system, and provide mean-square asymptotic stability conditions for such an equation. In Section 5, we give two illustrative examples. Conclusions are given in Section 6.

2 Preliminaries

In the present paper, we study the stability behavior of mild solutions of an impulsive stochastic delay differential equation driven by an fBm in a Hilbert space of the form

$$ \textstyle\begin{cases} \mathrm{d}x(t)=[Ax(t)+F(t,x(t-\sigma(t)))]\,\mathrm{d}t+G(t)\,\mathrm{d}W^{H}(t),\quad t\geq t_{0}, t\neq t_{k}, \\ \Delta x(t_{k})=I_{k}(x(t_{k}^{-}),x(t_{k}^{-}-\delta)),\quad k\in\mathbb {N}, \\ x(t_{0}+\theta)=\phi(\theta),\quad \theta\in[-\alpha,0], \end{cases} $$
(1)

where A is the infinitesimal generator of an analytic semigroup \(\{ S(t)\}_{t\geq t_{0}}\) of bounded linear operators in a Hilbert space X, \(W^{H}(t)\) is an fBm with Hurst parameter \(H\in(\frac{1}{2},1)\) on a real separable Hilbert space Y, the delay \(\sigma (t):[t_{0},+\infty)\to[0,\alpha]\) (\(\alpha>0\)) is continuous, \(\mathbb{N}\) denotes the set of positive integers, the impulsive moments satisfy \(0 \leq t_{0}< t_{1}< t_{2}<\cdots<t_{k}<t_{k+1}<\cdots\) and \(\lim_{k\to\infty} t_{k}=\infty\), \(\Delta x(t_{k})=x(t^{+}_{k})-x(t^{-}_{k})\) represents the jump in the state x at \(t_{k}\), where \(x(t^{+}_{k})\) and \(x(t^{-}_{k})\) are respectively the right and left limits of \(x(t)\) at \(t=t_{k}\), δ is a constant delay in the impulses, \(I_{k}: X\times X\to X\) is continuous, the initial data \(\phi\in C([-\alpha,0],X)\) (the space of all continuous functions from \([-\alpha,0]\) to X) has finite second moments, and \(F:[t_{0},+\infty)\times X\to X\) and \(G:[t_{0},+\infty)\to \mathcal{L}^{0}_{2}(Y,X)\), where \(\mathcal{L}^{0}_{2}(Y,X)\) is the space of all Q-Hilbert-Schmidt operators \(\psi:Y\to X\).

In the rest of this section, we recall the definition of Wiener integrals with respect to fBms and two useful lemmas.

Let \((\Omega, \mathfrak{F},P)\) be a complete probability space. An fBm \(\{W^{H}(t),t \geq t_{0}\}\) with Hurst parameter \(H\in(\frac{1}{2},1)\) is a continuous centered Gaussian process with covariance function

$$R_{H}(t,s)=\mathbb{E} \bigl[W^{H}(t)W^{H}(s) \bigr]= \frac{1}{2} \bigl(\vert t\vert ^{2H}+\vert s\vert ^{2H}-\vert t-s\vert ^{2H} \bigr) $$

and has the following Wiener integral representation:

$$W^{H}(t)= \int^{t}_{t_{0}}K_{H}(t,s)\,\mathrm{d}W(s), $$

where \(W(t)=\{W(t):t\geq t_{0}\}\) is a standard Wiener process, \(K_{H}(t,s)\) is the kernel given by

$$K_{H}(t,s)=c_{H}s^{\frac{1}{2}-H} \int^{t}_{s}(u-s)^{H-\frac{3}{2}}u^{H-\frac{1}{2}} \, \mathrm{d}u,\quad t>s, $$

with \(c_{H}=\sqrt{\frac{H(2H-1)}{B(2-2H,H-\frac{1}{2})}}\), where B denotes the beta function. We set \(K_{H}(t,s)=0\) for \(t\leq s\). For a deterministic function \(\varphi\in L^{2}([t_{0},+\infty))\), it is known from [16] that the fractional Wiener integral with respect to \(W^{H}(t)\) can be defined by

$$\int^{\infty}_{t_{0}}\varphi(t) \,\mathrm{d}W^{H}(t)= \int^{\infty}_{t_{0}} \bigl(K^{*} _{H} \varphi \bigr) (t)\,\mathrm{d} W(t), $$

where \((K^{*} _{H}\varphi) (s)=\int^{\infty}_{s}\varphi(t)\frac{\partial K_{H}(t,s)}{\partial t}\,\mathrm{d}t\).

Next, we are interested in considering an fBm with values in a Hilbert space and giving the definition of the corresponding stochastic integral.

Let X and Y be real separable Hilbert spaces, and let \(\mathcal {L}(Y,X)\) denote the space of all bounded linear operators from Y to X. Let \(Q\in\mathcal{L}(Y,Y)\) be the operator defined by \(Qe_{n}=\lambda_{n}e_{n}\) with finite trace \(\operatorname{tr}Q=\sum_{n=1}^{\infty}\lambda_{n}<\infty\), where \(\lambda_{n}\), \(n=1,2,\ldots\) , are nonnegative real numbers, and \(\{e_{n},n=1,2,\ldots\}\) is a complete orthonormal basis in Y. Define a Y-valued Gaussian process as

$$W^{H}(t)=\sum_{n=1}^{\infty}\sqrt{ \lambda_{n}}e_{n}W^{H}_{n}(t), $$

where \(W^{H}_{n}(t)\) \((n=1,2,\ldots)\) are real independent fBms. It has the covariance

$$\mathbb{E} \bigl\langle W^{H}(t),x \bigr\rangle \bigl\langle W^{H}(s),y \bigr\rangle =R_{H}(t,s) \bigl\langle Q(x),y \bigr\rangle $$

for all \(x,y\in Y\) and \(t,s\geq t_{0}\). Let \(\mathcal {L}^{0}_{2}(Y,X)\) denote the space of all Q-Hilbert-Schmidt operators \(\psi :Y\to X\), equipped with the following norm and inner product:

$$\Vert \psi \Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}=\sum_{n=1}^{\infty} \Vert \sqrt{\lambda_{n}} \psi e_{n}\Vert ^{2} $$

and

$$\langle\phi, \psi\rangle_{\mathcal{L}^{0}_{2}(Y,X)}=\sum_{n=1}^{\infty} \langle\phi e_{n}, \psi e_{n}\rangle. $$

The space \(\mathcal{L}^{0}_{2}(Y,X)\) is a separable Hilbert space. Then (see [16]) the fractional Wiener integral of the function \(\psi :[t_{0},+\infty)\to\mathcal{L}^{0}_{2}(Y,X)\) with respect to the fBm is defined by

$$\int^{t}_{t_{0}}\psi(s)\,\mathrm{d} W^{H}(s)= \sum_{n=1}^{\infty} \int^{t}_{t_{0}}\sqrt{\lambda_{n}}\psi (s)e_{n}\,\mathrm{d}W^{H}_{n}(s)=\sum _{n=1}^{\infty} \int^{t}_{t_{0}}\sqrt{\lambda_{n}}K^{*}_{H}( \psi e_{n}) (s)\,\mathrm{d}W_{n}(s). $$

Lemma 1

[16]

For any \(\psi:[t_{0},\infty)\to\mathcal {L}^{0}_{2}(Y,X)\) satisfying \(\int^{\infty}_{t_{0}}\Vert \psi (s)\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,ds<+\infty\), the integral \(\int^{t}_{t_{0}}\psi(s)\,dW^{H}(s)\) is well defined as an X-valued random variable, and

$$\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}\psi(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2}\leq cH(2H-1) (t-t_{0})^{2H-1} \int^{t}_{t_{0}} \bigl\Vert \psi(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s,\quad t>t_{0}, $$

where \(c=c(H)\) is a constant.

Lemma 2

[19]

For any \(\mu>0\), assume that there exist positive constants \(\alpha_{1}\), \(\alpha_{2}\), \(\beta_{k}, k\in \mathbb{N}\), and a function \(\psi:[-\alpha,\infty)\to[0,\infty)\) such that

$$\psi(t)\leq\alpha_{1}e^{-\mu t},\quad t\in[-\alpha, t_{0}], $$

and, for each \(t\geq t_{0}\),

$$\psi(t) \leq\alpha_{1}e^{-\mu t}+\alpha_{2} \int^{t}_{t_{0}}e^{-\mu(t-s)}\sup _{\theta\in[-\alpha, 0]}{\psi(s+\theta)}\,\mathrm{d}s+\sum _{t_{k}< t}\beta_{k}e^{-\mu(t-t_{k})}\psi \bigl(t^{-}_{k} \bigr) $$

If \(\frac{\alpha_{2}}{\mu}+\sum_{k=1}^{\infty}\beta_{k}<1\), then

$$\psi(t)\leq M_{0}e^{-\lambda t},\quad t\geq-\alpha, $$

where \(\lambda>0\) is the unique solution of the equation \(\frac{\alpha _{2}e^{\lambda\alpha}}{\mu-\lambda}+\sum_{k=1}^{\infty}\beta _{k}=1\), and \(M_{0}=\max\{\alpha_{1},\frac{\alpha_{1}(\mu-\lambda )}{\alpha_{2}e^{\lambda\alpha}}\}\).
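The decay rate λ in Lemma 2 is defined only implicitly. Since the left-hand side of \(\frac{\alpha_{2}e^{\lambda\alpha}}{\mu-\lambda}+\sum_{k=1}^{\infty}\beta_{k}=1\) increases in λ from \(\frac{\alpha_{2}}{\mu}+\sum_{k}\beta_{k}<1\) at \(\lambda=0\) to +∞ as \(\lambda\to\mu^{-}\), the root can be located by bisection. A minimal Python sketch, with purely hypothetical parameter values:

```python
import math

def decay_rate(alpha2, mu, alpha, beta_sum, tol=1e-12):
    """Solve alpha2*exp(lam*alpha)/(mu - lam) + beta_sum = 1 for lam in (0, mu).

    Lemma 2 guarantees a unique root when alpha2/mu + beta_sum < 1, and the
    left-hand side is strictly increasing on (0, mu), so bisection applies.
    """
    assert alpha2 / mu + beta_sum < 1, "hypothesis of Lemma 2 violated"
    g = lambda lam: alpha2 * math.exp(lam * alpha) / (mu - lam) + beta_sum - 1
    lo, hi = 0.0, mu * (1 - 1e-9)       # g(lo) < 0 and g(hi) > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical constants: alpha2 = 0.2, mu = 1.0, alpha = 0.5, sum of beta_k = 0.3.
lam = decay_rate(alpha2=0.2, mu=1.0, alpha=0.5, beta_sum=0.3)
print(0.0 < lam < 1.0)   # True
```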

Definition 2.1

A mild solution of system (1) is said to be exponentially stable in mean square if there exists a pair of positive constants λ and \(M_{0}\) such that, for all \(t\geq t_{0}\),

$$\mathbb{E} \bigl\Vert x(t,t_{0},\phi) \bigr\Vert ^{2} \leq M_{0} e^{-\lambda(t-t_{0})}. $$

3 Mean-square exponential stability of the impulsive differential equation driven by an fBm with time-varying delay

In this section, we first assume that the impulses in Eq. (1) only depend on the current states of the system, that is, the impulses can be described as \(I_{k}(x(t_{k}^{-}),x(t_{k}^{-}-\delta ))=I_{k}(x(t_{k}^{-}))\). Then Eq. (1) reduces to the following equation:

$$ \textstyle\begin{cases} \mathrm{d}x(t)=[Ax(t)+F(t,x(t-\sigma(t)))]\,\mathrm{d}t+G(t)\,\mathrm{d}W^{H}(t),\quad t\geq t_{0}, t\neq t_{k}, \\ \Delta x(t_{k})=I_{k}(x(t_{k}^{-})),\quad k\in\mathbb{N,} \\ x(t_{0}+\theta)=\phi(\theta),\quad \theta\in[-\alpha,0]. \end{cases} $$
(2)

Next, we present the definition of a mild solution of Eq. (2).

Definition 3.1

An X-valued stochastic process \(\{x(t), t\in [-\alpha, \infty)\}\) is called a mild solution of Eq. (2) if \(x(t_{0}+\theta)=\phi(\theta)\) on the interval \([-\alpha,0]\) and the following conditions hold:

(i) \(x(\cdot)\) is continuous on \([t_{0},t_{1}]\) and on each interval \((t_{k},t_{k+1}]\), \(k\in\mathbb{N}\);

(ii) for each \(k\), the limits \(x(t^{+}_{k})\), \(x(t^{-}_{k})\) exist, and \(x(t^{-}_{k})=x(t_{k})\);

(iii) for each \(t\geq t_{0}\), we have a.s.

    $$ x(t)= \textstyle\begin{cases} \int^{t}_{t_{0}}S(t-s)F(s,x(s-\sigma(s)))\,\mathrm{d}s+\int ^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s),\\ \quad t\in[t_{0},t_{1}],\\ S(t-t_{k})[x(t^{-}_{k})+I_{k}(x(t^{-}_{k}))]+\int ^{t}_{t_{k}}S(t-s)F(s,x(s-\sigma(s)))\,\mathrm{d}s\\ \quad {}+\int^{t}_{t_{k}}S(t-s)G(s)\,\mathrm{d}W^{H}(s), \quad t\in(t_{k},t_{k+1}], k\in\mathbb{N.} \end{cases} $$
    (3)

Our aim in this section is to obtain mean-square exponential stability conditions for the mild solution of Eq. (2). For this, we need the following assumptions.

(\(H_{1}\)):

The strongly continuous semigroup \(\{S(t)\}_{t\geq t_{0}}\) is exponentially stable, that is, there exist a constant \(M>0\) and a real number \(r>0\) such that \(\Vert S(t)\Vert \leq Me^{-rt}\), \(t\geq t_{0}\).

(\(H_{2}\)):

There exists a constant \(R_{1}>0\) such that

$$\bigl\Vert F(t,x) \bigr\Vert \leq R_{1} \Vert x\Vert ,\quad t \geq t_{0}, x\in X. $$
(\(H_{3}\)):

There exist nonnegative real numbers \(d_{k}\), \(k\in\mathbb {N}\), such that

$$\bigl\Vert I_{k}(x)-I_{k}(y) \bigr\Vert \leq d_{k}\Vert x-y\Vert $$

for all \(x,y\in X\), with \(\sum^{\infty}_{k=1}d_{k}<\infty\) and \(I_{k}(0)=0\), \(k\in\mathbb{N}\) (so that \(\Vert I_{k}(x)\Vert \leq d_{k}\Vert x\Vert \)).

(\(H_{4}\)):

The function \(G:[t_{0},\infty)\to\mathcal{L}_{2}^{0}(Y,X)\) satisfies, with r the constant from (\(H_{1}\)),

$$\int^{\infty}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}_{2}^{0}(Y,X)}\,\mathrm{d}s< \infty. $$

Theorem 3.1

Suppose that (\(H_{1}\))-(\(H_{4}\)) hold and

$$ \frac{3M^{2}R_{1}^{2}}{r^{2}}+3M^{2} \Biggl(\sum _{i=1}^{\infty}d_{i} \Biggr)^{2}< 1. $$
(4)

Then, the mild solution of Eq. (2) is mean-square exponentially stable.
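Condition (4) is straightforward to verify numerically once M, \(R_{1}\), r, and \(\sum_{i}d_{i}\) are known. A small sketch with purely hypothetical parameter values, taking \(d_{k}\) to be a summable geometric sequence:

```python
def condition_4(M, R1, r, d_sum):
    """Left-hand side of condition (4): 3 M^2 R1^2 / r^2 + 3 M^2 (sum_i d_i)^2."""
    return 3 * M**2 * R1**2 / r**2 + 3 * M**2 * d_sum**2

# Hypothetical constants for (H1)-(H3).
M, R1, r = 1.0, 0.2, 2.0
d0, q = 0.05, 0.5
d_sum = d0 * q / (1 - q)            # sum_{k>=1} d0*q^k for the geometric choice d_k = d0*q^k
value = condition_4(M, R1, r, d_sum)
print(value < 1)                    # True: condition (4) holds for these parameters
```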

Proof

For \(t\in[t_{0},t_{1}]\), we have

$$ x(t)= \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s). $$
(5)

For \(t\in(t_{1},t_{2}]\), from (3) we get

$$\begin{aligned} x(t) = & S(t-t_{1}) \bigl( x \bigl(t^{-}_{1} \bigr)+I_{1} \bigl(x \bigl(t^{-}_{1} \bigr) \bigr) \bigr)+ \int^{t}_{t_{1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} + \int^{t}_{t_{1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s. \end{aligned}$$
(6)

By (ii) of Definition 3.1 we have \(x(t_{1}^{-})=x(t_{1})\); combining (5) with (6), for \(t\in(t_{1},t_{2}]\), we get

$$\begin{aligned} x(t) = &S(t-t_{1})I_{1} \bigl(x \bigl(t^{-}_{1} \bigr) \bigr)+S(t-t_{1}) \int^{t_{1}}_{t_{0}}S(t_{1}-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{} +S(t-t_{1}) \int^{t_{1}}_{t_{0}}S(t_{1}-s)G(s) \, \mathrm{d}W^{H}(s)+ \int^{t}_{t_{1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} + \int^{t}_{t_{1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+S(t-t_{1})I_{1} \bigl(x \bigl(t^{-}_{1} \bigr) \bigr) \\ &{} + \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s). \end{aligned}$$
(7)

In the same manner, for \(t\in(t_{2},t_{3}]\), we get

$$\begin{aligned} x(t) = & S(t-t_{2}) \bigl( x \bigl(t^{-}_{2} \bigr)+I_{2} \bigl(x \bigl(t^{-}_{2} \bigr) \bigr) \bigr) \\ &{}+ \int^{t}_{t_{2}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \int^{t}_{t_{2}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} +S(t-t_{1})I_{1} \bigl(x \bigl(t^{-}_{1} \bigr) \bigr)+S(t-t_{2})I_{2} \bigl(x \bigl(t^{-}_{2} \bigr) \bigr). \end{aligned}$$
(8)

Assume that, for \(t\in(t_{m},t_{m+1}]\), where \(m\in\mathbb{N}\), we have

$$\begin{aligned} x(t) =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} +\sum_{i=1}^{m}S(t-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr), \end{aligned}$$
(9)

Then, for \(t\in(t_{m+1},t_{m+2}]\), by (3) we have

$$\begin{aligned} x(t) = &S(t-t_{m+1}) \bigl[x \bigl(t_{m+1}^{-} \bigr)+I_{m+1} \bigl(x \bigl(t^{-}_{m+1} \bigr) \bigr) \bigr]+ \int^{t}_{t_{m+1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \int^{t}_{t_{m+1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s. \end{aligned}$$
(10)

Combining (9) with (10), for \(t\in(t_{m+1},t_{m+2}]\), we have

$$\begin{aligned} x(t) = &S(t-t_{m+1}) \Biggl[ \int^{t_{m+1}}_{t_{0}}S(t_{m+1}-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \int^{t_{m+1}}_{t_{0}}S(t_{m+1}-s)G(s) \, \mathrm{d}W^{H}(s)+\sum_{i=1}^{m}S(t_{m+1}-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr) \Biggr] \\ &{}+S(t-t_{m+1})I_{m+1} \bigl(x \bigl(t^{-}_{m+1} \bigr) \bigr)+ \int^{t}_{t_{m+1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \int^{t}_{t_{m+1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s. \end{aligned}$$
(11)

By the semigroup property and (11), for \(t\in (t_{m+1},t_{m+2}]\), we have

$$\begin{aligned} x(t) = & \int^{t_{m+1}}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \int^{t_{m+1}}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} +\sum_{i=1}^{m}S(t-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr)+S(t-t_{m+1})I_{m+1} \bigl(x \bigl(t^{-}_{m+1} \bigr) \bigr) \\ &{}+ \int^{t}_{t_{m+1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s)+ \int^{t}_{t_{m+1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s. \end{aligned}$$
(12)

Simplifying (12), we get that, for \(t\in(t_{m+1},t_{m+2}]\),

$$\begin{aligned} x(t) = & \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{} +\sum_{i=1}^{m+1}S(t-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr). \end{aligned}$$
(13)

From Eqs. (9) and (13), by mathematical induction, for all \(t\in (t_{k},t_{k+1}]\), we have

$$\begin{aligned} x(t) = & \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+\sum_{i=1}^{k}S(t-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr). \end{aligned}$$
(14)

Taking the mathematical expectation of (14) and using the \(C_{p}\) inequality, for \(t\in(t_{k},t_{k+1}]\), \(k\in\mathbb{N}\), we have

$$\begin{aligned} \mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2} \leq& 3 \mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ &{}+3\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ &{}+3\mathbb{E} \Biggl\Vert \sum_{i=1}^{k}S(t-t_{i})I_{i} \bigl(x \bigl(t^{-}_{i} \bigr) \bigr) \Biggr\Vert ^{2} \\ =& 3 \sum_{j=1}^{3}Q_{j}. \end{aligned}$$
(15)

Next, we estimate each term separately. Set \(\mu=r-\varepsilon\) for some \(\varepsilon\in(0,r)\). Using the Hölder inequality, (\(H_{1}\)), and (\(H_{2}\)), we have

$$\begin{aligned} Q_{1} =& \mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ \leq& \mathbb{E} \biggl( \int^{t}_{t_{0}} \bigl\Vert S(t-s) \bigr\Vert \bigl\Vert F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr) \bigr\Vert \,\mathrm{d}s \biggr)^{2} \\ \leq& M^{2} \mathbb{E} \biggl( \int^{t}_{t_{0}}e^{-\frac{r}{2}(t-s)}e^{-\frac{r}{2}(t-s)} \bigl\Vert F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr) \bigr\Vert \,\mathrm{d}s\biggr)^{2} \\ \leq& M^{2} \int^{t}_{t_{0}}e^{-r(t-s)}\,\mathrm{d}s \int^{t}_{t_{0}}e^{-r(t-s)}\mathbb{E} \bigl\Vert F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr) \bigr\Vert ^{2}\, \mathrm{d}s \\ \leq& M^{2}\times\frac{1}{r} \bigl(1-e^{-r(t-t_{0})} \bigr) \int^{t}_{t_{0}}e^{-r(t-s)}R_{1}^{2} \mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2} \, \mathrm{d}s \\ \leq& \frac{M^{2}R_{1}^{2}}{r} \int^{t}_{t_{0}}e^{-\mu(t-s)} \sup _{\theta\in[-\alpha,0]}\mathbb{E} \bigl\Vert x(s+\theta) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(16)

Using Lemma 1, (\(H_{1}\)), and (\(H_{4}\)), we have

$$\begin{aligned} Q_{2} =&\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ \leq& cH(2H-1) (t-t_{0})^{2H-1} \int^{t}_{t_{0}} \bigl\Vert S(t-s)G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1) (t-t_{0})^{2H-1}e^{-rt} \int^{t}_{t_{0}}e^{rs}e^{-r(t-s)} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1) (t-t_{0})^{2H-1}e^{-(\mu+\varepsilon)t} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& e^{-\mu t} \biggl(M^{2}cH(2H-1) (t-t_{0})^{2H-1}e^{-\varepsilon t} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \biggr). \end{aligned}$$
(17)

In view of (\(H_{4}\)), there exists a constant \(R_{2}>0\) such that

$$ M^{2}cH(2H-1) (t-t_{0})^{2H-1}e^{-\varepsilon t} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s\leq R_{2}. $$
(18)

Then, combining (17) and (18), we have

$$ Q_{2}\leq R_{2} e^{-\mu t}. $$
(19)

By the Hölder inequality, (\(H_{1}\)), and (\(H_{3}\)), we have

$$\begin{aligned} Q_{3} =&\mathbb{E} \Biggl\Vert \sum _{i=1}^{k}S(t-t_{i})I_{i} \bigl(x \bigl(t_{i}^{-} \bigr) \bigr) \Biggr\Vert ^{2} \\ \leq& \mathbb{E} \Biggl(\sum_{i=1}^{k} \bigl\Vert S(t-t_{i})I_{i} \bigl(x \bigl(t_{i}^{-} \bigr) \bigr) \bigr\Vert \Biggr)^{2} \\ \leq& M^{2} \mathbb{E} \Biggl(\sum_{i=1}^{k}(d_{i})^{\frac {1}{2}}(d_{i})^{\frac{1}{2}}e^{-r(t-t_{i})} \bigl\Vert x \bigl(t_{i}^{-} \bigr) \bigr\Vert \Biggr)^{2} \\ \leq& M^{2} \Biggl(\sum_{i=1}^{k}d_{i} \Biggr) \Biggl(\sum_{i=1}^{k}d_{i}e^{-2r(t-t_{i})} \mathbb{E} \bigl\Vert x \bigl(t_{i}^{-} \bigr) \bigr\Vert ^{2} \Biggr) \\ \leq& M^{2} \Biggl(\sum_{i=1}^{k}d_{i} \Biggr)\sum_{i=1}^{k}d_{i}e^{-\mu(t-t_{i})} \mathbb{E} \bigl\Vert x \bigl(t_{i}^{-} \bigr) \bigr\Vert ^{2}. \end{aligned}$$
(20)

By (15), (16), (19), and (20), for all \(t\geq t_{0}\), we have

$$\begin{aligned} \mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2} \leq&3 \frac{M^{2}R_{1}^{2}}{r} \int^{t}_{t_{0}}e^{-\mu(t-s)} \sup _{\theta\in[-\alpha,0]}\mathbb{E} \bigl\Vert x(s+\theta) \bigr\Vert ^{2}\,\mathrm{d}s+3R_{2} e^{-\mu t} \\ &{}+ 3M^{2} \Biggl(\sum_{i=1}^{\infty}d_{i} \Biggr)\sum_{t_{i}< t}d_{i}e^{-\mu(t-t_{i})} \mathbb{E} \bigl\Vert x \bigl(t_{i}^{-} \bigr) \bigr\Vert ^{2}, \end{aligned}$$
(21)

and it is easy to see that, for \(t\in[-\alpha,t_{0}]\),

$$ \mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2}\leq\alpha _{1}e^{-\mu t}, $$
(22)

where \(\alpha_{1}=\max (3R_{2}, \sup_{\theta\in[-\alpha,0]} \mathbb {E}\Vert \phi(\theta)\Vert ^{2} )\). By (21), (22), (4), and Lemma 2, for all \(t\geq-\alpha\), we have

$$ \mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2}\leq M_{0}e^{-\lambda t}, $$
(23)

where λ is the unique solution of the equation \(\frac{\alpha_{2}e^{\lambda\alpha}}{\mu-\lambda}+\sum_{k=1}^{\infty}\beta_{k}=1\), with \(\alpha_{2}=\frac {3M^{2}R_{1}^{2}}{r}\), \(\beta_{k}=3M^{2}d_{k} \sum_{i=1}^{\infty }d_{i}\), and \(M_{0}=\max\{\alpha_{1}, \frac{\alpha_{1}(\mu-\lambda)}{\alpha _{2}e^{\lambda\alpha}}\}\). This means that the mild solution of Eq. (2) is exponentially stable in mean square, and the proof is completed. □

Remark 3.1

We would like to mention that our method used in Theorem 3.1 can be extended to the neutral case.

Remark 3.2

If (\(H_{2}\)) is replaced by \((\overline{H}_{2})\), then Theorem 3.1 remains true.

(\(\overline{H}_{2}\)):

There exist two nonnegative real numbers \(R_{1}>0\) and \(l>0\) such that

$$\bigl\Vert F(t,x) \bigr\Vert ^{2} \leq R_{1} \Vert x \Vert ^{2}+le^{-rt},\quad \forall t \geq t_{0}, x \in X. $$

Note that \(\alpha_{1}\) in (22) should be \(\alpha_{1}=\max (3R_{2}+3\frac{M^{2}l}{r(r-\mu)}, \sup_{\theta\in[-\alpha,0]}\mathbb {E}\Vert \phi(\theta)\Vert ^{2} )\).

Remark 3.3

If the impulses in (2) are removed, that is, \(\Delta x(t_{k})=I_{k}(\cdot)=0\), \(k\in\mathbb{N}\), then Eq. (2) reduces to the following equation:

$$ \textstyle\begin{cases} \mathrm{d}x(t)=[Ax(t)+F(t,x(t-\sigma(t)))]\,\mathrm{d}t+G(t)\,\mathrm{d}W^{H}(t),\quad t\geq t_{0},\\ x(t_{0}+\theta)=\phi(\theta), \quad \theta\in[-\alpha,0]. \end{cases} $$
(24)

By using the same technique as in Theorem 3.1, we can easily deduce the following corollary.

Corollary 3.1

Suppose that (\(H_{1}\)), (\(H_{2}\)), (\(H_{4}\)) (or (\(H_{1}\)), \((\overline{H}_{2})\), (\(H_{4}\))) hold and \(2M^{2}R_{1}^{2}< r^{2}\). Then the mild solution of Eq. (24) is exponentially stable in mean square.

Remark 3.4

In [16], the authors studied Eq. (24) and proved the mean-square exponential stability of the mild solution of (24), assuming that the delay function \(\sigma(t):[0,+\infty)\to[0,r]\) is differentiable and \(\vert 1-\sigma^{\prime}(t)\vert ^{-1}<\rho ^{*}\), where \(\rho^{*}\) is a constant. It is obvious that Corollary 3.1 is less conservative than Theorem 4 in [16].

4 Mean-square asymptotic stability of the delayed impulsive differential equation driven by an fBm with time-varying delay

In this section, we assume that the impulses in Eq. (1) depend on the historical states of the system, that is, the impulses can be described as \(I_{k}(x(t_{k}^{-}), x(t_{k}^{-}-\delta))=I_{k}(x(t_{k}^{-}-\delta ))\). In addition, we assume that the impulses are linear. Then Eq. (1) reduces to the following equation:

$$ \textstyle\begin{cases} \mathrm{d}x(t)=[Ax(t)+F(t,x(t-\sigma(t)))]\,\mathrm{d}t+G(t)\,\mathrm{d}W^{H}(t),\quad t\geq t_{0}, t\neq t_{k}, \\ \Delta x(t_{k})=\rho_{k}x(t_{k}^{-}-\delta),\quad k\in\mathbb{N}, \\ x(t_{0}+\theta)=\phi(\theta),\quad \theta\in[-\alpha,0], \end{cases} $$
(25)

where the delay δ in the impulses is a constant with \(0\leq\delta <\underline{\tau}=\inf_{k\in\mathbb{N}}\{t_{k}-t_{k-1}\}\), and \(\rho _{k}>0\), \(k\in\mathbb{N} \), are constants.

Next, we give the definition of a mild solution of Eq. (25).

Definition 4.1

An X-valued stochastic process \(\{x(t),t\in [-\alpha, \infty)\}\) is called a mild solution of Eq. (25) if \(x(t_{0}+\theta)=\phi(\theta)\) on the interval \([-\alpha,0]\) and the following conditions hold:

(i) \(x(\cdot)\) is continuous on \([t_{0},t_{1}]\) and on each interval \((t_{k},t_{k+1}]\), \(k\in\mathbb{N}\);

(ii) for each \(k\), the limits \(x(t^{+}_{k})\), \(x(t^{-}_{k})\) exist, and \(x(t^{-}_{k})=x(t_{k})\);

(iii) for each \(t\geq t_{0}\), we have a.s.

    $$ x(t)= \textstyle\begin{cases} \int^{t}_{t_{0}}S(t-s)F(s,x(s-\sigma(s)))\,\mathrm{d}s+\int ^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s),\\ \quad t\in[t_{0},t_{1}],\\ S(t-t_{k})[x(t^{-}_{k})+\rho_{k}x(t^{-}_{k}-\delta)]+\int ^{t}_{t_{k}}S(t-s)F(s,x(s-\sigma(s)))\,\mathrm{d}s\\ \quad {}+\int^{t}_{t_{k}}S(t-s)G(s)\,\mathrm{d}W^{H}(s),\quad t\in(t_{k},t_{k+1}], k\in\mathbb{N}. \end{cases} $$
    (26)

In order to obtain mean-square asymptotic stability conditions for the mild solution of Eq. (25), we need the following assumption.

(\(H_{5}\)):

The impulses satisfy

$$\lim_{k\to\infty}\boldsymbol{\Sigma}< \infty, $$

where

$$\begin{aligned} \boldsymbol{\Sigma} = &k \Biggl(\mathrm{C}^{1}_{k} \sum_{i_{1}=1}^{k}\rho_{i_{1}}^{2}e^{-r(k-i_{1})\underline{\tau}}+ \mathrm{C}^{2}_{k}\sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}e^{-r((k-i_{1})\underline{\tau}-\delta)}+\cdots \\ &{}+\mathrm{C}^{k}_{k}\sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}\cdots\rho_{i_{k}}^{2}e^{-r((k-i_{1})\underline{\tau }-(k-1)\delta)} \Biggr) \end{aligned}$$

with binomial coefficients \(\mathrm{C}^{j}_{k}=\binom{k}{j}\), \(j=1,2,\ldots,k\).
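For small k, the quantity Σ in (\(H_{5}\)) can be evaluated directly by enumerating the index tuples; note that the exponent of each term depends only on the smallest index \(i_{1}\) and on the number j of factors. A brute-force Python sketch, with all parameter values hypothetical:

```python
from itertools import combinations
from math import comb, exp

def Sigma(k, rho, r, tau, delta):
    """Direct evaluation of the quantity in assumption (H5) for a given k.

    rho[i] plays the role of rho_{i+1}; tau is the minimal impulse gap
    (the underlined tau), and delta is the constant delay in the impulses.
    """
    total = 0.0
    for j in range(1, k + 1):                    # number of rho^2 factors in a term
        inner = 0.0
        for idx in combinations(range(1, k + 1), j):
            i1 = idx[0]                          # smallest index sets the exponent
            prod = 1.0
            for i in idx:
                prod *= rho[i - 1] ** 2
            inner += prod * exp(-r * ((k - i1) * tau - (j - 1) * delta))
        total += comb(k, j) * inner
    return k * total

rho = [1 / (i + 1) ** 2 for i in range(6)]        # rho_k = 1/k^2 as in Remark 4.1
print(Sigma(4, rho, r=1.0, tau=1.0, delta=0.5) > 0)   # True
```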

Remark 4.1

Assumption (\(H_{5}\)) looks complicated because of the delay in the impulses; however, it is not difficult to satisfy. We give an example here. If \(\rho_{k}=0\), \(k\in\mathbb{N}\), then trivially \(\boldsymbol{\Sigma}=0\). Moreover, since \(0\leq\delta<\underline{\tau}\), each exponential factor is at most 1: \(e^{-r(k-i_{1})\underline{\tau}}\leq1\), \(i_{1}=1,2,\ldots,k\); \(e^{-r((k-i_{1})\underline{\tau}-\delta)}\leq1\), \(i_{1}=1,2,\ldots ,k-1\); …; \(e^{-r((k-i_{1})\underline{\tau}-(k-1)\delta)}\leq1\), \(i_{1}=1\). If we choose \(\rho_{k}=\frac{1}{k^{2}}\), then

$$\boldsymbol{\Sigma} \leq k \sum_{j=1}^{k} \biggl(\mathrm{C}^{j}_{k}\frac{1}{k^{2j}} \biggr)^{2}\leq k \Biggl(\sum_{j=1}^{k} \mathrm{C}^{j}_{k}\frac{1}{k^{2j}} \Biggr)^{2}=k \biggl[ \biggl(1+\frac{1}{k^{2}} \biggr)^{k}-1 \biggr]^{2}. $$

Taking the limit on both sides of this inequality, we get

$$\lim_{k\to\infty}\boldsymbol{\Sigma} \leq\lim_{k\to\infty} \frac{\{[(1+\frac{1}{k^{2}})^{k^{2}}]^{\frac{1}{k}}-1\}^{2}}{\frac{1}{k}}=0, $$

so that \(\lim_{k\to\infty}\boldsymbol{\Sigma}=0\).
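As a purely illustrative numerical check (not part of the proof), the majorant \(k[(1+\frac{1}{k^{2}})^{k}-1]^{2}\) used above can be evaluated for growing k and is seen to decrease to zero:

```python
def bound(k):
    """The majorant k*((1 + 1/k^2)^k - 1)^2 from Remark 4.1."""
    return k * ((1 + 1 / k**2) ** k - 1) ** 2

for k in (10, 100, 1000, 10000):
    print(k, bound(k))      # the values decrease toward 0 as k grows
```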

Theorem 4.1

Suppose that (\(H_{1}\)), (\(H_{2}\)), (\(H_{4}\)), and (\(H_{5}\)) hold. Then, the mild solution of Eq. (25) is asymptotically stable in mean square.

Proof

Denote by \(\mathcal{B}\) the space of all stochastic processes \(x(t,\omega):[-\alpha,+\infty)\times\Omega\to X\) satisfying \(x(t_{0}+\theta)=\phi(\theta)\), \(\theta\in[-\alpha,0]\), conditions (i) and (ii) of Definition 4.1, and

$$ \lim_{t\to\infty}\mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2}=0. $$
(27)

Our goal is to estimate the limit of \(\mathbb{E}\Vert x(t)\Vert ^{2}\) as \(t\to\infty\). For \(t\in[t_{0},t_{1}]\), we have

$$ x(t)= \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s). $$
(28)

For \(t\in(t_{1},t_{2}]\), by (26), (28), and Definition 4.1 we have

$$\begin{aligned} x(t) = & S(t-t_{1}) \bigl( x \bigl(t^{-}_{1} \bigr)+\rho_{1}x(t_{1}- \delta) \bigr)+ \int^{t}_{t_{1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \int^{t}_{t_{1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+\rho_{1} \int^{t_{1}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\rho_{1} \int^{t_{1}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s). \end{aligned}$$
(29)

For \(t\in(t_{2},t_{3}]\), from (26) we have

$$\begin{aligned} x(t) = &S(t-t_{2}) \bigl( x \bigl(t^{-}_{2} \bigr)+\rho_{2}x(t_{2}- \delta) \bigr)+ \int^{t}_{t_{2}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \int^{t}_{t_{2}}S(t-s)G(s)\,\mathrm{d}W^{H}(s). \end{aligned}$$
(30)

By (29) and (30) we get, for \(t\in(t_{2},t_{3}]\),

$$\begin{aligned} x(t) =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+\rho_{1} \int^{t_{1}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\rho_{1} \int^{t_{1}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\rho_{2} \int^{t_{2}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\rho_{2} \int^{t_{2}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\rho_{1}\rho_{2} \int^{t_{1}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\rho_{1}\rho_{2} \int^{t_{1}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s). \end{aligned}$$
(31)

Assume that for some \(m\in\mathbb{N}\) the following holds for \(t\in(t_{m},t_{m+1}]\):

$$\begin{aligned} x(t) =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{m}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{m}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{m-1}\sum _{i_{2}>i_{1}}^{m}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{m-1}\sum _{i_{2}>i_{1}}^{m}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s)+\cdots \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{m}>i_{m-1}}^{m} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{m}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-m\delta-s) F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{m}>i_{m-1}}^{m} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{m}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-m\delta-s)G(s) \, \mathrm{d}W^{H}(s). \end{aligned}$$
(32)

Then, for \(t\in(t_{m+1},t_{m+2}]\), by (26) we have

$$\begin{aligned} x(t) =&S(t-t_{m+1}) \bigl( x \bigl(t^{-}_{m+1} \bigr)+\rho_{m+1}x \bigl(t_{m+1}^{-}-\delta \bigr) \bigr)+ \int^{t}_{t_{m+1}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \int^{t}_{t_{m+1}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s. \end{aligned}$$
(33)

Substituting \(x(t^{-}_{m+1})\) and \(x(t_{m+1}^{-}-\delta)\), which can be obtained from (32), into (33), by properties of semigroups and integrals, for \(t\in(t_{m+1},t_{m+2}]\), we have

$$\begin{aligned} x(t) =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{m+1}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{m+1}\rho _{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{m}\sum _{i_{2}>i_{1}}^{m+1}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{m}\sum _{i_{2}>i_{1}}^{m+1}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s)+\cdots \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{m+1}>i_{m}}^{m+1} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{m+1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S \bigl(t-(m+1)\delta-s \bigr) F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{m+1}>i_{m}}^{m+1} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{m+1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S \bigl(t-(m+1)\delta-s \bigr)G(s) \, \mathrm{d}W^{H}(s). \end{aligned}$$
(34)

By mathematical induction and (32)-(34), for all \(k\in\mathbb{N}\), \(t\in(t_{k},t_{k+1}]\), we have

$$\begin{aligned} x(t) =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \\ &{}+ \Bigg(\sum_{i_{1}=1}^{k}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{k}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s)+\cdots \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s) F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)G(s) \, \mathrm{d}W^{H}(s)\Bigg) \\ =& \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s+ \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s)+ \varXi. \end{aligned}$$
(35)
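As a side check of the combinatorics in (35): the level-\(j\) nested sums run over strictly increasing tuples \(i_{1}<i_{2}<\cdots<i_{j}\) drawn from \(\{1,\ldots,k\}\), that is, over the \(j\)-element subsets of \(\{1,\ldots,k\}\), so each level contains exactly \(\binom{k}{j}\) terms — the count written \(\mathrm{C}^{j}_{k}\) in the estimates that follow. A small Python sketch (illustrative only, not part of the proof):

```python
from itertools import combinations
from math import comb

# The level-j block of (35) sums over increasing index tuples
# i_1 < i_2 < ... < i_j taken from {1, ..., k}; each tuple is a
# j-element subset, so the number of terms equals C(k, j).
def count_terms(k: int, j: int) -> int:
    return sum(1 for _ in combinations(range(1, k + 1), j))

for k in (3, 5, 8):
    for j in range(1, k + 1):
        assert count_terms(k, j) == comb(k, j)
print("level-j term counts match C(k, j)")
```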

Now we estimate (35). For all \(k\in\mathbb{N}\), \(t\in (t_{k},t_{k+1}]\), we have

$$\begin{aligned} \mathbb{E} \bigl\Vert x(t) \bigr\Vert ^{2} \leq&3 \mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ &{}+3\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2} +3\mathbb{E}\Vert \varXi \Vert ^{2}. \end{aligned}$$
(36)

By (\(H_{1}\)), (\(H_{2}\)), and the Hölder inequality we have

$$\begin{aligned} T_{1} =&3\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ \leq&3\mathbb{E} \biggl( \int^{t}_{t_{0}}Me^{-r(t-s)} \bigl\Vert F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr) \bigr\Vert \,\mathrm{d}s \biggr)^{2} \\ \leq&3M^{2} \int^{t}_{t_{0}}e^{-r(t-s)}\,\mathrm{d}s \int^{t}_{t_{0}}e^{-r(t-s)}\mathbb{E} \bigl\Vert F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{3M^{2}R_{1}^{2}}{r} \bigl(1-e^{-r(t-t_{0})} \bigr) \int^{t}_{t_{0}}e^{-r(t-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(37)

For any \(x\in\mathfrak{F}\), by the definition of \(\mathfrak{F}\), for arbitrary \(\varepsilon>0\), there exists \(s^{*}>t_{0}\) such that \(\mathbb{E}\Vert x(s-\sigma(s))\Vert ^{2}<\varepsilon\) for all \(s>s^{*}\). So we have

$$\begin{aligned}& \int^{t}_{t_{0}}e^{-r(t-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\& \quad \leq \int^{s^{*}}_{t_{0}}e^{-r(t-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s+ \frac{\varepsilon}{r} \bigl(1-e^{-r(t-s^{*})} \bigr). \end{aligned}$$
(38)

Combining (37) with (38), we have

$$ \lim_{t\to\infty}T_{1}=\lim _{t\to\infty}3 \mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)F \bigl(s,x \bigl(s-\sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2}=0. $$
(39)

By Lemma 1 we get that

$$\begin{aligned} T_{2} =&3\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ \leq&3 cH(2H-1) (t-t_{0})^{2H-1} \int^{t}_{t_{0}} \bigl\Vert S(t-s)G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq&3 cH(2H-1) (t-t_{0})^{2H-1} \int^{t}_{t_{0}}e^{-2r(t-s)} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ =& 3 cH(2H-1) (t-t_{0})^{2H-1}e^{-rt} \int^{t}_{t_{0}}e^{rs}e^{-r(t-s)} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq&3 cH(2H-1) (t-t_{0})^{2H-1}e^{-rt} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s. \end{aligned}$$
(40)

It is easy to see that \(\lim_{t\to\infty}(t-t_{0})^{2H-1}e^{-rt}=0\), and by condition (\(H_{4}\)) we get

$$ \lim_{t\to\infty}T_{2}=\lim_{t\to\infty}3\mathbb{E} \biggl\Vert \int^{t}_{t_{0}}S(t-s)G(s)\,\mathrm{d}W^{H}(s) \biggr\Vert ^{2}=0. $$
(41)
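The claim \(\lim_{t\to\infty}(t-t_{0})^{2H-1}e^{-rt}=0\) holds because exponential decay dominates any polynomial growth. A quick numerical illustration with assumed values \(H=0.75\), \(r=1\), \(t_{0}=0\) (these are not values from the theorem, just a sanity check):

```python
import math

# Polynomial growth (t - t0)^(2H-1) is dominated by the exponential
# decay e^{-rt}; H = 0.75, r = 1.0, t0 = 0.0 are illustrative values.
H, r, t0 = 0.75, 1.0, 0.0

def factor(t: float) -> float:
    return (t - t0) ** (2 * H - 1) * math.exp(-r * t)

values = [factor(t) for t in (10.0, 50.0, 100.0)]
assert values[0] > values[1] > values[2]  # strictly decreasing
print(values)
```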

Next, we estimate \(\mathbb{E}\Vert \varXi \Vert ^{2}\):

$$\begin{aligned} T_{3} =&3\mathbb{E} \Biggl\Vert \sum _{i_{1}=1}^{k}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{k}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \\ &{}+\sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+\sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho_{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s)+\cdots \\ &{}+\sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s) F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \\ &{}+ \sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}\rho_{i_{2}}\cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)G(s) \, \mathrm{d}W^{H}(s) \Biggr\Vert ^{2} \\ =&3\mathbb{E}\Vert \varXi \Vert ^{2}. \end{aligned}$$
(42)

Using the \(C_{p}\) inequality, we get

$$\begin{aligned} T_{3} =&3\mathbb{E}\Vert \varXi \Vert ^{2} \\ \leq&3\times2k \Biggl\{ \mathbb{E} \Biggl\Vert \sum _{i_{1}=1}^{k}\rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \Biggr\Vert ^{2} \\ &{}+\mathbb{E} \Biggl\Vert \sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho _{i_{2}}\times \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \Biggr\Vert ^{2}+\cdots \\ &{}+\mathbb{E} \Biggl\Vert \sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}\rho _{i_{2}} \cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \Biggr\Vert ^{2} \\ &{}+\mathbb{E} \Biggl\Vert \sum_{i_{1}=1}^{k} \rho_{i_{1}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \Biggr\Vert ^{2} \\ &{}+\mathbb{E} \Biggl\Vert \sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}\rho _{i_{2}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s) \Biggr\Vert ^{2}+\cdots \\ &{}+\mathbb{E} \Biggl\Vert \sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}\rho _{i_{2}} \cdots\rho_{i_{k}} \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)G(s) \, \mathrm{d}W^{H}(s) \Biggr\Vert ^{2} \Biggr\} \\ =&6k \Biggl(\sum_{j=1}^{k}U_{j}+ \sum_{j=1}^{k}V_{j} \Biggr). \end{aligned}$$
(43)
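Here the \(C_{p}\) inequality is used in the form \(\Vert\sum_{i=1}^{n}a_{i}\Vert^{2}\leq n\sum_{i=1}^{n}\Vert a_{i}\Vert^{2}\), applied with \(n=2k\) groups of terms in Ξ. A numerical sanity check of this elementary inequality on random vectors (illustrative only, not part of the proof):

```python
import random

# C_p inequality for p = 2: ||a_1 + ... + a_n||^2 <= n * sum ||a_i||^2,
# checked here on random vectors in R^3 (illustrative values).
def sq_norm(v):
    return sum(x * x for x in v)

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    vecs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n)]
    total = [sum(col) for col in zip(*vecs)]
    assert sq_norm(total) <= n * sum(sq_norm(v) for v in vecs) + 1e-12
print("C_p inequality holds on all samples")
```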

Firstly, we estimate the terms \(U_{j}\) (\(j=1,2,\ldots,k\)). By (\(H_{1}\)), (\(H_{2}\)), the \(C_{p}\) inequality, and the Hölder inequality we have

$$\begin{aligned} U_{1} \leq&\mathrm{C}^{1}_{k} \sum_{i_{1}=1}^{k}\rho_{i_{1}}^{2} \mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)F \bigl(s,x \bigl(s-\sigma (s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ \leq& M^{2}R_{1}^{2}\mathrm{C}^{1}_{k} \sum_{i_{1}=1}^{k}\rho_{i_{1}}^{2} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-\delta-s)}\,\mathrm{d}s \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-\delta-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}e^{-2r(t-t_{i_{1}})} \bigl(1-e^{-r(t_{i_{1}}-\delta-t_{0})} \bigr) \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-\delta-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}e^{-2r(t-t_{i_{1}})} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(44)

In a similar way, we get

$$\begin{aligned} U_{2} \leq&\mathrm{C}^{2}_{k} \sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}\mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ \leq& M^{2}R_{1}^{2}\mathrm{C}^{2}_{k} \sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-2\delta-s)}\,\mathrm{d}s \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-2\delta-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{2}_{k}\sum _{i_{1}=1}^{k-1}\sum_{i_{2}>i_{1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}e^{-2r(t-t_{i_{1}}-\delta)} \bigl(1-e^{-r(t_{i_{1}}-\delta-t_{0})} \bigr) \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{2}_{k}\sum _{i_{1}=1}^{k-1}\sum_{i_{2}>i_{1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}e^{-2r(t-t_{i_{1}}-\delta)} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(45)

In the same way, we get

$$\begin{aligned} U_{k} \leq&\mathrm{C}^{k}_{k} \sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}\mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)F \bigl(s,x \bigl(s- \sigma(s) \bigr) \bigr)\,\mathrm{d}s \biggr\Vert ^{2} \\ \leq& M^{2}R_{1}^{2}\mathrm{C}^{k}_{k} \sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-k\delta-s)}\,\mathrm{d}s \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t-k\delta-s)}\mathbb{E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{k}_{k} \sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}e^{-2r(t-t_{i_{1}}-(k-1)\delta)} \\ &{} \times \bigl(1-e^{-r(t_{i_{1}}-\delta-t_{0})} \bigr) \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq&\frac{M^{2}R_{1}^{2}}{r}\mathrm{C}^{k}_{k}\sum _{i_{1}=1}^{1}\sum_{i_{2}>i_{1}}^{2} \cdots \sum_{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}\cdots\rho_{i_{k}}^{2}e^{-2r(t-t_{i_{1}}-(k-1)\delta)} \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(46)

By \(C_{p}\), Lemma 1, and (\(H_{1}\)) we have

$$\begin{aligned} V_{1} \leq&\mathrm{C}^{1}_{k} \sum_{i_{1}=1}^{k}\rho_{i_{1}}^{2} \mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-\delta-s)G(s) \, \mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ \leq&\mathrm{C}^{1}_{k}\sum_{i_{1}=1}^{k} \rho_{i_{1}}^{2}cH(2H-1) (t_{i_{1}}-\delta -t_{0})^{2H-1} \int^{t_{i_{1}}-\delta}_{t_{0}} \bigl\Vert S(t-\delta-s)G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1) \mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-2r(t-\delta-s)} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1)\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1}e^{-r(t-\delta)} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1)\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1}e^{-r(t-\delta)} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ \leq& M^{2}cH(2H-1)\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1}e^{-r(t_{i_{1}}-\delta-t_{0})}e^{-r(t-t_{i_{1}})} \\ &{} \times \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s. \end{aligned}$$
(47)

We can do the same calculation for \(V_{2}\):

$$\begin{aligned} V_{2} \leq&\mathrm{C}^{2}_{k} \sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}\mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-2\delta-s)G(s) \, \mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ \leq&\mathrm{C}^{2}_{k}\sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}cH(2H-1) (t_{i_{1}}-\delta -t_{0})^{2H-1} \int^{t_{i_{1}}-\delta}_{t_{0}} \bigl\Vert S(t-2\delta-s)G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{2}_{k}\sum _{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-2r(t-2\delta-s)} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{2}_{k}\sum _{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1}e^{-r(t-2\delta)} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{2}_{k} \sum_{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1}e^{-r(t-2\delta)} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{2}_{k}\sum _{i_{1}=1}^{k-1}\sum _{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1}e^{-r(t_{i_{1}}-\delta -t_{0})}e^{-r(t-t_{i_{1}}-\delta)} \\ &{} \times \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(48)

The rest can be done in the same manner as before:

$$\begin{aligned} V_{k} \leq&\mathrm{C}^{k}_{k} \sum_{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}\mathbb{E} \biggl\Vert \int^{t_{i_{1}}-\delta}_{t_{0}}S(t-k\delta-s)G(s) \, \mathrm{d}W^{H}(s) \biggr\Vert ^{2} \\ \leq&\mathrm{C}^{k}_{k}\sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}\cdots\rho_{i_{k}}^{2}cH(2H-1) (t_{i_{1}}-\delta-t_{0})^{2H-1} \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}} \bigl\Vert S(t-k\delta-s)G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{k}_{k}\sum _{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1} \\ &{} \times e^{-r(t-k\delta)} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{k}_{k}\sum _{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1} \\ &{} \times e^{-r(t-k\delta)} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s \\ \leq& cH(2H-1)M^{2}\mathrm{C}^{k}_{k}\sum _{i_{1}=1}^{1}\sum _{i_{2}>i_{1}}^{2}\cdots\sum_{i_{k}>i_{k-1}}^{k} \rho_{i_{1}}^{2}\rho_{i_{2}}^{2}\cdots\rho _{i_{k}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1} \\ &{} \times e^{-r(t_{i_{1}}-\delta-t_{0})}e^{-r(t-t_{i_{1}}-(k-1)\delta)} \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}\,\mathrm{d}s. \end{aligned}$$
(49)

By (43) to (49) we get

$$\begin{aligned} \sum_{j=1}^{k}U_{j} \leq&\frac{M^{2}R_{1}^{2}}{r} \Biggl(\mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}e^{-2r(t-t_{i_{1}})} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ &{}+ \mathrm{C}^{2}_{k}\sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}e^{-2r(t-t_{i_{1}}-\delta)} \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \\ &{}+\cdots+\mathrm{C}^{k}_{k}\sum _{i_{1}=1}^{1}\sum_{i_{2}>i_{1}}^{2} \cdots \sum_{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}\cdots\rho_{i_{k}}^{2}e^{-2r(t-t_{i_{1}}-(k-1)\delta)} \\ &{} \times \int^{t_{i_{1}}-\delta}_{t_{0}}e^{-r(t_{i_{1}}-\delta-s)}\mathbb {E} \bigl\Vert x \bigl(s-\sigma(s) \bigr) \bigr\Vert ^{2}\,\mathrm{d}s \Biggr). \end{aligned}$$
(50)

For any \(x(t)\in\mathfrak{F}\), by the definition of \(\mathfrak{F}\) and by (\(H_{5}\)) we get

$$ \lim_{k\to\infty}k\sum_{j=1}^{k}U_{j}=0 $$
(51)

and

$$\begin{aligned} \sum_{j=1}^{k}V_{j} \leq& M^{2}cH(2H-1) \int^{t}_{t_{0}}e^{rs} \bigl\Vert G(s) \bigr\Vert ^{2}_{\mathcal{L}^{0}_{2}(Y,X)}\,\mathrm{d}s \\ &{} \times \Biggl\{ \mathrm{C}^{1}_{k}\sum _{i_{1}=1}^{k}\rho_{i_{1}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1}e^{-r(t_{i_{1}}-\delta-t_{0})}e^{-r(t-t_{i_{1}})} \\ &{}+\mathrm{C}^{2}_{k}\sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\rho_{i_{1}}^{2} \rho_{i_{2}}^{2}(t_{i_{1}}-\delta-t_{0})^{2H-1}e^{-r(t_{i_{1}}-\delta -t_{0})}e^{-r(t-t_{i_{1}}-\delta)}+\cdots \\ &{}+\mathrm{C}^{k}_{k}\sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\rho_{i_{1}}^{2}\rho _{i_{2}}^{2}\cdots\rho_{i_{k}}^{2}(t_{i_{1}}- \delta-t_{0})^{2H-1} \\ &{} \times e^{-r(t_{i_{1}}-\delta-t_{0})}e^{-r(t-t_{i_{1}}-(k-1)\delta)} \Biggr\} . \end{aligned}$$
(52)

By (\(H_{4}\)) and (\(H_{5}\)) we get

$$ \lim_{k\to\infty}k\sum_{j=1}^{k}V_{j}=0. $$
(53)

By (43), (51), and (53) we get

$$ \lim_{t \to\infty}T_{3}=0. $$
(54)

By (36), (39), (41), and (54) we finally get \(\lim_{t\to\infty}\mathbb {E}\Vert x(t)\Vert ^{2}=0\), which completes the proof of the theorem. □

Remark 4.2

The delay in the impulses of Eq. (25) is assumed to satisfy \(0\leq\delta< \underline{\tau}=\inf_{k\in\mathbb{N}}\{ t_{k}-t_{k-1}\}\), which means that the impulsive effect at \(t_{k}\) depends on the state history within the preceding interval \((t_{k-1},t_{k}]\). In fact, we can assume that the impulsive states are determined by the history over several preceding intervals, and the method of Theorem 4.1 still applies, although the expression for the states of the system becomes much more complex.

Remark 4.3

The condition \(\Delta x(t_{k})=\rho_{k}x(t_{k}^{-}-\delta)\) in Eq. (25) describes impulses that are linear in structure and depend on delayed state variables. By the methods of Theorems 3.1 and 4.1 we can obtain corresponding stability conditions when the impulses take the form \(\Delta x(t_{k})=I_{k}(x(t_{k}^{-}))+\rho_{k}x(t_{k}^{-}-\delta)\). We omit the details.

Remark 4.4

In [16] and [19], the authors investigated the exponential stability of impulsive stochastic differential equations driven by an fBm. In [16], the impulses were not considered in the systems. In [19], the impulses were considered in the systems, but they only depended on the current states of the systems. However, in our paper, the impulses considered in the systems depend not only on the current states but also on the historical states of the systems.

5 Examples

In this section, we provide two examples to illustrate the obtained results.

Example 1

Let us consider the following impulsive stochastic differential equation driven by an fBm:

$$ \textstyle\begin{cases} \mathrm{d} Z(t,x) = [\frac{\partial^{2}Z(t,x)}{\partial x^{2}}+a(t)\frac{Z(t-\frac{\alpha}{2}(1+\sin t),x)}{1+[Z(t-\frac{\alpha }{2}(1+\sin t),x)]^{2}} ]\,\mathrm{d}t+e^{-\pi^{2}t}\,\mathrm{d}W^{H}(t),\\ \quad t\geq t_{0}, t\ne t_{k}, 0\leq x \leq\pi,\\ \Delta Z(t_{k},x)=\frac{b}{k^{2}}Z(t^{-}_{k},x),\quad k\in\mathbb{N}, \\ Z(t,0)=Z(t,\pi)=0, \\ Z(t_{0}+\theta,x)=\phi(\theta,x),\quad \theta\in[-\alpha, 0], \end{cases} $$
(55)

where \(a(t):[t_{0},\infty)\to\mathbb{R}^{+}\) is a continuous bounded function for \(t\geq t_{0}\), \(a_{0}\) denotes the least upper bound of \(a(t)\), and \(b>0\) is a constant. Let \(X=Y=L^{2}[0,\pi]\), and let the operator \(A:X\to X\) be defined by \(Ax=x^{\prime\prime}\) with domain \(\mathfrak{D}(A)=\{ x\in X: x, x^{\prime} \mbox{ are absolutely continuous}, x^{\prime\prime}\in X, x(0)=x(\pi)=0\}\). Then \(Ax=-\sum_{n=1}^{\infty}n^{2}\langle x,x_{n}\rangle x_{n}\), \(x\in\mathfrak{D}(A)\), where \(x_{n}(t)=\sqrt{\frac {2}{\pi}}\sin(nt)\), \(n=1,2,\ldots\) , is the orthonormal set of eigenvectors of A. It is easy to see that A is the infinitesimal generator of an analytic semigroup \(\{S(t)\}_{t\geq0}\) in X given by \(S(t)x=\sum_{n=1}^{\infty}e^{-n^{2}t}\langle x, x_{n} \rangle x_{n}\) for all \(x\in X\), \(t>0\), so that \(\Vert S(t)\Vert \leq e^{-\pi^{2} t}\). To define the operator \(Q: Y\to X\), we take a sequence \(\{\lambda_{n}\}_{n\geq1}\subset\mathbb{R}^{+}\) and set \(Q\omega _{n}=\lambda_{n}\omega_{n}\), where \(\{\omega_{n}\}\) is a complete orthonormal basis of Y. Also, assume that \(\operatorname{tr}(Q)=\sum_{n=1}^{\infty}\lambda_{n}^{1/2}<\infty \). Now we define the process \(W^{H}(t)\) by \(W^{H}(t)=\sum_{n=1}^{\infty } \sqrt{\lambda_{n}}\omega_{n}W^{H}_{n}(t)\), where \(H\in(\frac {1}{2},1)\) and \(\{W^{H}_{n}(t)\}\) is a sequence of independent two-sided one-dimensional fBms.
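For readers who want to experiment with the driving noise, a one-dimensional fBm path can be simulated from its covariance \(R(s,t)=\frac{1}{2}(s^{2H}+t^{2H}-\vert t-s\vert^{2H})\) via a Cholesky factorization. The following Python sketch uses illustrative values \(H=0.75\) and \(n=200\) (neither is prescribed by the example) and is not part of the verification:

```python
import numpy as np

# One-dimensional fBm sample path on (0, 1] via Cholesky factorization of
# the covariance R(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}).
# H = 0.75 and n = 200 are illustrative choices.
H, n = 0.75, 200
t = np.linspace(1.0 / n, 1.0, n)
R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
L = np.linalg.cholesky(R + 1e-12 * np.eye(n))  # jitter for numerical stability
rng = np.random.default_rng(0)
path = L @ rng.standard_normal(n)  # samples of W^H(t_1), ..., W^H(t_n)
print(path[:5])
```

Exact simulation by Cholesky costs \(O(n^{3})\); for long paths, circulant-embedding methods are the usual faster alternative.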

Now let us verify the conditions of Theorem 3.1. Since \(\Vert S(t)\Vert \leq e^{-\pi^{2} t}\), we can choose \(M=1\) and \(r=\pi ^{2}\) in (\(H_{1}\)), and \(\sigma(t)=\frac{\alpha}{2}(1+\sin t)\in[0,\alpha]\). The function F has the form

$$F(t,Z)=a(t)\frac{Z}{1+Z^{2}}. $$

So we can choose \(R_{1}=a_{0}\) in (\(H_{2}\)) and \(d_{k}=\frac {b}{k^{2}}\) in (\(H_{3}\)). Then \(\sum_{k=1}^{\infty}d_{k}=\frac{b\pi ^{2}}{6}<\infty\), so conditions (\(H_{2}\)) and (\(H_{3}\)) hold. In Eq. (55) the function \(G(t)=e^{-\pi^{2}t}\), so (\(H_{4}\)) holds. If \(a_{0}\) and b satisfy the inequality

$$\frac{3M^{2}R_{1}^{2}}{r^{2}}+3M^{2} \Biggl(\sum_{i=1}^{\infty}d_{i} \Biggr)^{2}=\frac{3a_{0}^{2}}{\pi^{4}}+3 \biggl(\frac{b\pi^{2}}{6} \biggr)^{2}< 1, $$

then all the assumptions in Theorem 3.1 are satisfied. By Theorem 3.1 the mild solution of Eq. (55) is exponentially stable in mean square.
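To see the condition in action, one can evaluate \(\frac{3M^{2}R_{1}^{2}}{r^{2}}+3M^{2}(\sum_{i=1}^{\infty}d_{i})^{2}\) numerically with \(M=1\), \(r=\pi^{2}\), \(R_{1}=a_{0}\), and \(\sum_{i}d_{i}=\frac{b\pi^{2}}{6}\). The values \(a_{0}=1\) and \(b=0.2\) below are illustrative choices, not prescribed by the example:

```python
import math

# Stability condition of Example 1:
#   3*M^2*R1^2 / r^2 + 3*M^2*(sum d_i)^2 < 1,
# with M = 1, r = pi^2, R1 = a0, and sum d_i = b*pi^2/6.
# a0 = 1.0 and b = 0.2 are illustrative values.
a0, b = 1.0, 0.2
lhs = 3 * a0 ** 2 / math.pi ** 4 + 3 * (b * math.pi ** 2 / 6) ** 2
print(lhs, lhs < 1)  # the condition holds for these sample values
```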

Example 2

Let us consider the following stochastic differential equation driven by an fBm with delayed impulses:

$$ \textstyle\begin{cases} \mathrm{d} Z(t,x) = [\frac{\partial^{2}Z(t,x)}{\partial x^{2}}+a(t)\frac{Z(t-\frac{\alpha}{2}(1+\sin t),x)}{1+[Z(t-\frac{\alpha }{2}(1+\sin t),x)]^{2}} ]\,\mathrm{d}t+e^{-\pi^{2}t}\,\mathrm{d}W^{H}(t),\\ \quad t\geq t_{0}, t\ne t_{k}, 0\leq x \leq\pi,\\ \Delta Z(t_{k},x)=\frac{b}{k^{2}}Z(t^{-}_{k}-\delta,x),\quad k\in\mathbb{N}, \\ Z(t,0)=Z(t,\pi)=0, \\ Z(t_{0}+\theta,x)=\phi(\theta,x), \quad \theta\in[-\alpha, 0], \end{cases} $$
(56)

where \(a(t)\), b, A, \(F(t,y)\) are the same as in Eq. (55). Conditions (\(H_{1}\)), (\(H_{2}\)), and (\(H_{4}\)) hold, so we only need to verify condition (\(H_{5}\)). We assume that \(t_{k+1}-t_{k}=1\), \(k\in\mathbb{N}\), and \(\delta=0.9\). Then

$$ \boldsymbol{\Sigma}\leq k \Biggl(\mathrm{C}^{1}_{k} \sum_{i_{1}=1}^{k}\frac{b^{2}}{k^{4}}+ \mathrm{C}^{2}_{k}\sum_{i_{1}=1}^{k-1} \sum_{i_{2}>i_{1}}^{k}\frac{b^{4}}{k^{8}}+\cdots+ \mathrm{C}^{k}_{k}\sum_{i_{1}=1}^{1} \sum_{i_{2}>i_{1}}^{2}\cdots\sum _{i_{k}>i_{k-1}}^{k}\frac{b^{2k}}{k^{4k}} \Biggr), $$
(57)

and by binomial expansion we get

$$\boldsymbol{\Sigma}\leq k\sum_{j=1}^{k} \biggl(\mathrm{C}^{j}_{k}\frac{b^{j}}{k^{2j}} \biggr)^{2}\leq k \Biggl(\sum_{j=1}^{k} \mathrm{C}^{j}_{k}\frac{b^{j}}{k^{2j}} \Biggr)^{2}=k \biggl( \biggl(1+\frac{b}{k^{2}} \biggr)^{k}-1 \biggr)^{2}. $$

Since \((1+\frac{b}{k^{2}})^{k}-1\sim\frac{b}{k}\) as \(k\to\infty\), the last bound tends to 0, and hence \(\lim_{k\to\infty}\boldsymbol{\Sigma}=0<\infty\). So all the conditions of Theorem 4.1 are satisfied, and the mild solution of Eq. (56) is asymptotically stable in mean square.
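The limit used here can be checked numerically: for any fixed \(x>0\), \((1+\frac{x}{k^{2}})^{k}-1\sim\frac{x}{k}\), so \(k((1+\frac{x}{k^{2}})^{k}-1)^{2}=O(\frac{1}{k})\to0\). A Python sketch with an illustrative constant \(b=0.5\):

```python
# Numerical check that k * ((1 + b/k^2)^k - 1)^2 -> 0 as k -> infinity,
# since (1 + b/k^2)^k - 1 ~ b/k.  b = 0.5 is an illustrative constant.
b = 0.5

def sigma_bound(k: int) -> float:
    return k * ((1 + b / k ** 2) ** k - 1) ** 2

vals = [sigma_bound(k) for k in (10, 100, 1000, 10000)]
assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
print(vals)
```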

6 Conclusion

In this article, we obtained two stability results for stochastic differential equations driven by an fBm (Hurst parameter \(H\in(\frac {1}{2},1)\)) with time-varying delays and impulses: mean-square exponential stability conditions for mild solutions when the impulses depend only on the current states of the system, and mean-square asymptotic stability conditions for mild solutions when the impulses depend not only on the current states but also on the historical states of the system. Two examples were given to illustrate the theorems.

References

  1. Kyrtsou, C, Terraza, M: Seasonal Mackey-Glass-GARCH process and short-term dynamics. Empir. Econ. 38, 325-345 (2010)


  2. Kyrtsou, S, Rougier, PR: Relation between postural control assessment with eyes open and centre of pressure visual feedback effects in healthy individuals. Exp. Brain Res. 195, 145-152 (2009)


  3. Fuente, DL, Samartin, ALP, Martinez, L, Garcia, MA, Lopez, AV: Long-range correlations in rabbit brain neural activity. Ann. Biomed. Eng. 34, 295-299 (2006)


  4. Nguyen, D: Asymptotic behavior of linear fractional stochastic differential equations with time varying delays. Commun. Nonlinear Sci. Numer. Simul. 19, 1-7 (2014)


  5. Comte, F, Renault, E: Long memory continuous time models. J. Econom. 73, 101-149 (1996)


  6. Tsai, CCP: Slip, stress drop and ground motion of earthquakes: a view from the perspective of fractional Brownian motion. Pure Appl. Geophys. 149, 689-706 (1997)


  7. Leland, WE, Taqqu, MS, Willinger, W, Wilson, DV: On self-similar nature of ethernet traffic (extended version). IEEE/ACM Trans. Netw. 2, 203-213 (1994)


  8. Simonsen, I: Measuring anti-correlations in the nordic electricity spot market by wavelets. Phys. A, Stat. Mech. Appl. 322, 579-606 (2003)


  9. Ferrante, M, Rovira, C: Stochastic delay differential equations driven by fractional Brownian motion with Hurst parameter \(H> \frac{1}{2}\). Bernoulli 12, 85-100 (2006)


  10. Ferrante, M, Rovira, C: Convergence of delay differential equations driven by fractional Brownian motion. J. Evol. Equ. 10, 761-783 (2010)


  11. Boufoussi, B, Hajji, S: Functional differential equations driven by a fractional Brownian motion. Comput. Math. Appl. 62, 746-754 (2011)


  12. Shuai, J: Nonlinear fractional stochastic PDEs and BDSDEs with Hurst parameter in \((\frac{1}{2},1)\). Syst. Control Lett. 61, 655-665 (2006)


  13. Diop, MA, Garrido-Atienza, MJ: Retarded evolution systems driven by fractional Brownian motion with Hurst parameter \(H> \frac{1}{2}\). Nonlinear Anal. 97, 15-29 (2014)


  14. Dung, NT: Stochastic Volterra integro-differential equations driven by fractional Brownian motion in a Hilbert space. Stoch. Int. J. Probab. Stoch. Process. 87, 142-159 (2015)


  15. Ren, Y, Cheng, X, Sakthivel, R: On time-dependent stochastic evolution equations driven by fractional Brownian motion in a Hilbert space with finite delay. Math. Methods Appl. Sci. 74, 3671-3684 (2011)


  16. Caraballo, T, Garrido-Atienza, MJ, Taniguchi, T: The existence and exponential behaviour of solutions to stochastic delay evolution equations with a fractional Brownian motion. Nonlinear Anal. 74, 3671-3684 (2011)


  17. Boufoussi, B, Hajji, S: Neutral stochastic functional differential equations driven by a fractional Brownian motion in a Hilbert space. Stat. Probab. Lett. 82, 1549-1558 (2012)


  18. Dung, NT: Neutral stochastic differential equations driven by a fractional Brownian motion with impulsive effects and varying time delays. J. Korean Stat. Soc. 43, 599-608 (2014)


  19. Arthi, G, Park, JH, Jung, HY: Existence and exponential stability for neutral stochastic integrodifferential equations with impulses driven by a fractional Brownian motion. Commun. Nonlinear Sci. Numer. Simul. 32, 145-157 (2016)


  20. Wang, HM, Duan, SK, Li, CD, Wang, LD, Huang, TW: Globally exponential stability of delayed impulsive functional differential systems with impulse time windows. Nonlinear Dyn. 84, 1655-1665 (2016)


  21. Khadra, A, Liu, XZ, Shen, XS: Analyzing the robustness of impulsive synchronization coupled by linear delayed impulses. IEEE Trans. Autom. Control 54, 923-928 (2009)


  22. Chen, WH, Zheng, WX: Exponential stability of nonlinear time-delay systems with delayed impulse effects. Automatica 47, 1075-1083 (2011)

  23. Li, XD, Wu, JH: Stability of nonlinear differential systems with state-dependent delayed impulses. Automatica 64, 63-69 (2016)

  24. Gao, LJ, Wang, DD, Wang, G: Further results on exponential stability for impulsive switched nonlinear time-delay systems with delayed impulse effects. Appl. Math. Comput. 268, 186-200 (2015)

  25. Wang, HM, Duan, SK, Li, CD, Wang, LD, Huang, TW: Stability of impulsive delayed linear differential systems with delayed impulses. J. Franklin Inst. 352, 3044-3068 (2015)

  26. Cheng, P, Deng, FQ, Yao, FQ: Exponential stability analysis of impulsive stochastic functional differential systems with delayed impulses. Commun. Nonlinear Sci. Numer. Simul. 19, 2104-2114 (2014)

Acknowledgements

The authors would like to thank the anonymous referees and editors for their careful comments and valuable suggestions on this work. This work is supported by the key Project of Anhui Province Universities and Colleges Natural Science Foundation (No. KJ2016A553) and the Foundation for Young Talents in College of Anhui Province (Project File No. WAN JIAO MI REN [2014] 181).

Author information

Corresponding author

Correspondence to Xia Zhou.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

This is the first attempt to investigate the stability of stochastic differential equations driven by an fBm with time-varying delays and impulses, where the stochastic disturbance is an fBm rather than a standard Brownian motion, and the impulsive transients depend not only on the current states but also on the historical states of the system.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Zhou, X., Liu, X. & Zhong, S. Stability of delayed impulsive stochastic differential equations driven by a fractional Brown motion with time-varying delay. Adv Differ Equ 2016, 328 (2016). https://doi.org/10.1186/s13662-016-1018-9


Keywords