Theory and Modern Applications

Linear impulsive Volterra integro-dynamic system on time scales

Abstract

This paper deals with the asymptotic stability and boundedness of the solution of a time-varying impulsive Volterra integro-dynamic system on time scales in which the coefficient matrix is not necessarily stable. We generalize to a time scale some known properties concerning the asymptotic behavior and boundedness from the continuous case.

1 Introduction

Impulsive differential systems provide a natural framework for the mathematical modelling of several processes in the applied sciences [1–4]. Basic qualitative and quantitative results on impulsive Volterra integro-differential equations have been studied in the literature (see [5–8]). Volterra-type equations (integral and integro-dynamic) on time scales were studied in [9–15]. In [16] the authors presented a theory for linear impulsive dynamic systems on time scales, and recently in [17] various results concerning the asymptotic stability and boundedness of Volterra integro-dynamic equations on time scales were developed. Motivated by these papers, we generalize these results to impulsive Volterra integro-dynamic systems on time scales.

2 Preliminaries

In this paper we assume that the reader is familiar with the basic calculus of time scales. Let ${\mathbb{R}}^{n}$ be the space of n-dimensional column vectors $x=col\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$ with a norm $\parallel \cdot \parallel$. With the same symbol $\parallel \cdot \parallel$ we denote the corresponding matrix norm in the space ${M}_{n}\left(\mathbb{R}\right)$ of $n×n$ matrices. If $A\in {M}_{n}\left(\mathbb{R}\right)$, then we denote by ${A}^{T}$ its transpose. We recall that $\parallel A\parallel :=sup\left\{\parallel Ax\parallel ;\parallel x\parallel \le 1\right\}$ and that the inequality $\parallel Ax\parallel \le \parallel A\parallel \parallel x\parallel$ holds for all $A\in {M}_{n}\left(\mathbb{R}\right)$ and $x\in {\mathbb{R}}^{n}$. A time scale $\mathbb{T}$ is a nonempty closed subset of $\mathbb{R}$. The set of all rd-continuous functions $f:\mathbb{T}\to {\mathbb{R}}^{n}$ will be denoted by ${C}_{\mathrm{rd}}\left(\mathbb{T},{\mathbb{R}}^{n}\right)$.

The notations $\left[a,b\right]$, $\left[a,b\right)$, and so on, denote time scale intervals such as $\left[a,b\right]:=\left\{t\in \mathbb{T};a\le t\le b\right\}$, where $a,b\in \mathbb{T}$. Also, for any $\tau \in \mathbb{T}$, let ${\mathbb{T}}_{\tau }:=\left[\tau ,\mathrm{\infty }\right)\cap \mathbb{T}$ and ${\mathbb{T}}_{0}:=\left[0,\mathrm{\infty }\right)\cap \mathbb{T}$.

We denote by $\mathcal{R}$ (respectively ${\mathcal{R}}^{+}$) the set of all regressive (respectively positively regressive) functions from $\mathbb{T}$ to $\mathbb{R}$. The space of all rd-continuous and regressive functions from $\mathbb{T}$ to $\mathbb{R}$ is denoted by ${C}_{\mathrm{rd}}\mathcal{R}\left(\mathbb{T},\mathbb{R}\right)$.

We denote by ${C}_{\mathrm{rd}}^{1}\left(\mathbb{T},{\mathbb{R}}^{n}\right)$ the set of all functions $f:\mathbb{T}\to {\mathbb{R}}^{n}$ that are differentiable on $\mathbb{T}$ with delta-derivative ${f}^{\mathrm{\Delta }}\left(t\right)\in {C}_{\mathrm{rd}}\left(\mathbb{T},{\mathbb{R}}^{n}\right)$. The set of rd-continuous (respectively rd-continuous and regressive) functions $A:\mathbb{T}\to {M}_{n}\left(\mathbb{R}\right)$ is denoted by ${C}_{\mathrm{rd}}\left(\mathbb{T},{M}_{n}\left(\mathbb{R}\right)\right)$ (respectively by ${C}_{\mathrm{rd}}\mathcal{R}\left(\mathbb{T},{M}_{n}\left(\mathbb{R}\right)\right)$). We recall that a matrix-valued function A is said to be regressive if $I+\mu \left(t\right)A\left(t\right)$ is invertible for all $t\in \mathbb{T}$, where I is the $n×n$ identity matrix. For a comprehensive review on time scales, we refer the reader to [18] and [19].

Lemma 2.1 ([[18], Theorem 2.38])

Let $p,q\in {C}_{\mathrm{rd}}\mathcal{R}\left(\mathbb{T},\mathbb{R}\right)$. Then ${e}_{p\ominus q}^{\mathrm{\Delta }}\left(\cdot ,{t}_{0}\right)=\left(p-q\right)\frac{{e}_{p}\left(\cdot ,{t}_{0}\right)}{{e}_{q}^{\sigma }\left(\cdot ,{t}_{0}\right)}$.

Lemma 2.2 ([[18], Theorem 6.2])

Let $\alpha \in \mathbb{R}$ with $\alpha \in {C}_{\mathrm{rd}}^{+}\mathcal{R}\left(\mathbb{T},\mathbb{R}\right)$. Then

${e}_{\alpha }\left(t,{t}_{0}\right)\ge 1+\alpha \left(t-{t}_{0}\right)$ for all $t\in {\mathbb{T}}_{{t}_{0}}$.
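Lemma 2.2 is applied later in the equivalent form ${e}_{\alpha }\left({t}_{0},t\right)\le 1/\left(1+\alpha \left(t-{t}_{0}\right)\right)$: the time-scale exponential with a positively regressive constant coefficient is bounded below by a linear function. On $\mathbb{T}=\mathbb{Z}$, where ${e}_{\alpha }\left(t,0\right)={\left(1+\alpha \right)}^{t}$, this is Bernoulli's inequality; the following check is purely illustrative and not part of the source:

```python
# Illustrative check (not from the paper) of Lemma 2.2 on T = Z,
# where the time-scale exponential with constant coefficient alpha is
# e_alpha(t, t0) = (1 + alpha)**(t - t0).
def e_exp_Z(alpha, t, t0):
    """Time-scale exponential on T = Z for a constant coefficient."""
    return (1.0 + alpha) ** (t - t0)

alpha = 0.3  # any alpha > -1 is positively regressive on Z
for t in range(0, 60):
    # lower bound of Lemma 2.2 (Bernoulli's inequality on Z)
    assert e_exp_Z(alpha, t, 0) >= 1.0 + alpha * t
    # equivalent upper bound in the form used in the proofs below
    assert e_exp_Z(alpha, 0, t) <= 1.0 / (1.0 + alpha * t)
print("Lemma 2.2 bounds hold on T = Z for alpha =", alpha)
```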

Theorem 2.3 ([[13], Theorem 7])

Let $a,b\in \mathbb{T}$ with $b>a$ and assume that $f:\mathbb{T}×\mathbb{T}\to \mathbb{R}$ is integrable on $\left\{\left(t,s\right)\in \mathbb{T}×\mathbb{T}:b>t>s\ge a\right\}$. Then

${\int }_{a}^{b}{\int }_{a}^{\eta }f\left(\eta ,\xi \right)\mathrm{\Delta }\xi \mathrm{\Delta }\eta ={\int }_{a}^{b}{\int }_{\sigma \left(\xi \right)}^{b}f\left(\eta ,\xi \right)\mathrm{\Delta }\eta \mathrm{\Delta }\xi .$

It is easy to verify that the above result holds for $f\in {C}_{\mathrm{rd}}\left(\mathbb{T}×\mathbb{T},{\mathbb{R}}^{n}\right)$.
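On $\mathbb{T}=\mathbb{Z}$ the $\mathrm{\Delta }$-integral over $\left[a,b\right)$ is a sum over $a,\dots ,b-1$ and $\sigma \left(\xi \right)=\xi +1$, so Theorem 2.3 reduces to reordering a finite double sum. A quick illustrative check (the integrand is an arbitrary choice, not from the paper):

```python
# Illustrative check (not from the paper): on T = Z, sigma(xi) = xi + 1 and
# the Delta-integral over [a, b) is a sum over a, ..., b - 1, so the
# interchange formula of Theorem 2.3 is a reordering of a finite double sum.
def f(eta, xi):
    return (eta + 1) * (xi + 2) ** 2  # arbitrary test integrand

a, b = 2, 9
lhs = sum(f(eta, xi) for eta in range(a, b) for xi in range(a, eta))
rhs = sum(f(eta, xi) for xi in range(a, b) for eta in range(xi + 1, b))
assert lhs == rhs  # both sums run over a <= xi < eta <= b - 1
print("interchange verified, common value:", lhs)
```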

Lemma 2.4 ([[16], Lemma 2.1])

Let ${t}_{0}\in {\mathbb{T}}_{0}$, $y\in {C}_{\mathrm{rd}}\mathcal{R}\left({\mathbb{T}}_{0},\mathbb{R}\right)$, $p\in {C}_{\mathrm{rd}}^{+}\mathcal{R}\left({\mathbb{T}}_{0},\mathbb{R}\right)$ and $c,{b}_{k}\in {\mathbb{R}}_{+}$, $k=1,2,\dots$ . Then

$y\left(t\right)\le c+{\int }_{{t}_{0}}^{t}p\left(s\right)y\left(s\right)\mathrm{\Delta }s+\sum _{{t}_{0}<{t}_{k}<t}{b}_{k}y\left({t}_{k}\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{{t}_{0}},$

implies

$y\left(t\right)\le c\prod _{{t}_{0}<{t}_{k}<t}\left(1+{b}_{k}\right){e}_{p}\left(t,{t}_{0}\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{{t}_{0}}.$
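Lemma 2.4 is an impulsive Gronwall inequality. On $\mathbb{T}=\mathbb{Z}$, where ${e}_{p}\left(t,{t}_{0}\right)=\prod _{s={t}_{0}}^{t-1}\left(1+p\left(s\right)\right)$, it can be illustrated by building y from the premise with equality and comparing it with the stated bound (all constants below are arbitrary test data, not from the paper):

```python
# Illustrative check (arbitrary test data, not from the paper) of the
# impulsive Gronwall inequality of Lemma 2.4 on T = Z, where
# e_p(t, t0) = prod_{s=t0}^{t-1} (1 + p(s)).
c, t0, T = 1.0, 0, 40
p = lambda s: 0.05                # constant p in R^+
impulses = {10: 0.3, 25: 0.2}     # t_k -> b_k

# build y satisfying the premise with equality, one step at a time
y = {t0: c}
for t in range(t0, T):
    y[t + 1] = (c + sum(p(s) * y[s] for s in range(t0, t + 1))
                  + sum(b * y[tk] for tk, b in impulses.items() if tk < t + 1))

# compare with the stated bound c * prod_{t_k < t}(1 + b_k) * e_p(t, t0)
bound = {}
for t in range(t0, T + 1):
    e_p = 1.0
    for s in range(t0, t):
        e_p *= 1.0 + p(s)
    prod_b = 1.0
    for tk, b in impulses.items():
        if tk < t:
            prod_b *= 1.0 + b
    bound[t] = c * prod_b * e_p
    assert y[t] <= bound[t] + 1e-9
print("Gronwall bound verified for t = 0, ...,", T)
```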

Consider the Volterra time-varying impulsive integro-dynamic system

$\left\{\begin{array}{c}{x}^{\mathrm{\Delta }}\left(t\right)=A\left(t\right)x\left(t\right)+{\int }_{{t}_{0}}^{t}K\left(t,s\right)x\left(s\right)\mathrm{\Delta }s+F\left(t\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{0}\mathrm{\setminus }\left\{{t}_{k}\right\},\hfill \\ x\left({t}_{k}^{+}\right)=\left(I+{C}_{k}\right)x\left({t}_{k}\right),\phantom{\rule{1em}{0ex}}t={t}_{k},k=1,2,\dots ,\hfill \\ x\left({t}_{0}\right)={x}_{0},\hfill \end{array}$
(1)

where A (not necessarily stable) is an $n×n$ matrix function and F is an n-vector function, which is piecewise continuous on ${\mathbb{T}}_{0}$, K is an $n×n$ matrix function, which is piecewise continuous on $\mathrm{\Omega }:=\left\{\left(t,s\right)\in {\mathbb{T}}_{0}×{\mathbb{T}}_{0}:{t}_{0}\le s\le t<\mathrm{\infty }\right\}$, ${C}_{k}\in {M}_{n}\left({\mathbb{R}}_{+}\right)$, $0\le {t}_{0}<{t}_{1}<{t}_{2}<\cdots <{t}_{k}<\cdots$ , with ${lim}_{k\to \mathrm{\infty }}{t}_{k}=\mathrm{\infty }$ and the impulsive points ${t}_{k}$ are right dense. Note that $x\left({t}_{k}^{-}\right)$ represents the left limit of $x\left(t\right)$ at $t={t}_{k}$ and $x\left({t}_{k}^{+}\right)$ represents the right limit of $x\left(t\right)$ at $t={t}_{k}$.
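To see how system (1) behaves on a concrete time scale, take $\mathbb{T}=\mathbb{Z}$, where ${x}^{\mathrm{\Delta }}\left(t\right)=x\left(t+1\right)-x\left(t\right)$ and the $\mathrm{\Delta }$-integral is a finite sum; on ℤ every point is right-scattered, so the right-dense impulse condition is vacuous. The scalar coefficients below are illustrative choices, not taken from the paper:

```python
import math

# Minimal sketch (illustrative coefficients, not from the paper) of stepping
# system (1) on T = Z, where x^Delta(t) = x(t+1) - x(t) and the
# Delta-integral is a finite sum. On Z every point is right-scattered, so
# the right-dense impulse condition is vacuous and no jumps occur.
def solve_on_Z(A, K, F, x0, t0, T):
    x = {t0: x0}
    for t in range(t0, T):
        conv = sum(K(t, s) * x[s] for s in range(t0, t))  # int_{t0}^{t} K x
        x[t + 1] = x[t] + A(t) * x[t] + conv + F(t)
    return x

x = solve_on_Z(A=lambda t: -0.5,                      # stable scalar choice
               K=lambda t, s: 0.1 * math.exp(-(t - s)),
               F=lambda t: 0.0,
               x0=1.0, t0=0, T=60)
assert 0.0 < x[60] < x[0]  # the solution decays for this stable choice
```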

The rest of the paper is organized as follows. In Section 3, we investigate the asymptotic behavior of solutions of system (1), which generalizes the continuous version ($\mathbb{T}=\mathbb{R}$) of [[8], Theorem 2.5]. In Section 4 we discuss the uniform boundedness of solutions of (1) by constructing a Lyapunov functional. Further results for boundedness, uniform boundedness and stability of solutions will also be developed.

3 Asymptotic stability

Our first result in this section presents a system equivalent to (1) which involves an arbitrary function.

Theorem 3.1 Let $L\left(t,s\right)$ be an $n×n$ matrix function, continuously differentiable with respect to s on ${t}_{k-1}<s\le {t}_{k}$, with $L\left(t,{t}_{k}^{+}\right)={\left(I+{C}_{k}\right)}^{-1}L\left(t,{t}_{k}\right)$ for each $k=1,2,\dots$ . Then (1) is equivalent to the system

$\left\{\begin{array}{c}{y}^{\mathrm{\Delta }}\left(t\right)=B\left(t\right)y\left(t\right)+{\int }_{{t}_{0}}^{t}G\left(t,s\right)y\left(s\right)\mathrm{\Delta }s+H\left(t\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{0}\mathrm{\setminus }\left\{{t}_{k}\right\},\hfill \\ y\left({t}_{k}^{+}\right)=\left(I+{C}_{k}\right)y\left({t}_{k}\right),\phantom{\rule{1em}{0ex}}t={t}_{k},k=1,2,\dots ,\hfill \\ y\left({t}_{0}\right)={y}_{0},\hfill \end{array}$
(2)

where

$\begin{array}{r}B\left(t\right)=A\left(t\right)-L\left(t,t\right),\\ H\left(t\right)=F\left(t\right)+L\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s,\end{array}$
(3)

and

$\begin{array}{rl}G\left(t,s\right)=& K\left(t,s\right)+{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\\ +{\int }_{\sigma \left(s\right)}^{t}L\left(t,\sigma \left(\tau \right)\right)K\left(\tau ,s\right)\mathrm{\Delta }\tau ,\phantom{\rule{1em}{0ex}}s,t\ne {t}_{k}.\end{array}$
(4)

Proof Let $x\left(t\right)$ be any solution of (1) on ${\mathbb{T}}_{0}$. If we take $p\left(s\right)=L\left(t,s\right)x\left(s\right)$, then for ${t}_{k-1}<s\le {t}_{k}$ we have

${p}^{\mathrm{\Delta }}\left(s\right)={\mathrm{\Delta }}_{s}L\left(t,s\right)x\left(s\right)+L\left(t,\sigma \left(s\right)\right){x}^{\mathrm{\Delta }}\left(s\right)$

and by (1) it follows that

$\begin{array}{rcl}{p}^{\mathrm{\Delta }}\left(s\right)& =& {\mathrm{\Delta }}_{s}L\left(t,s\right)x\left(s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)x\left(s\right)\\ +L\left(t,\sigma \left(s\right)\right){\int }_{{t}_{0}}^{s}K\left(s,\tau \right)x\left(\tau \right)\mathrm{\Delta }\tau +L\left(t,\sigma \left(s\right)\right)F\left(s\right).\end{array}$

Integration from ${t}_{0}$ to t yields

$p\left(t\right)-p\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }p\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}\left[{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\right]x\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right){\int }_{{t}_{0}}^{s}K\left(s,\tau \right)x\left(\tau \right)\mathrm{\Delta }\tau \mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s.$

Using Theorem 2.3, we obtain

$p\left(t\right)-p\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }p\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}\left[{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\right]x\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}\left[{\int }_{\sigma \left(\tau \right)}^{t}L\left(t,\sigma \left(s\right)\right)K\left(s,\tau \right)\mathrm{\Delta }s\right]x\left(\tau \right)\mathrm{\Delta }\tau +{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s.$

By a change of variables in the double integral term, we have

$p\left(t\right)-p\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }p\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}\left[{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)+{\int }_{\sigma \left(s\right)}^{t}L\left(t,\sigma \left(\tau \right)\right)K\left(\tau ,s\right)\mathrm{\Delta }\tau \right]x\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s.$

Using (3) and (4), we obtain

$\left(A\left(t\right)-B\left(t\right)\right)x\left(t\right)={\int }_{{t}_{0}}^{t}\left(G\left(t,s\right)-K\left(t,s\right)\right)x\left(s\right)\mathrm{\Delta }s+H\left(t\right)-F\left(t\right)+\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }p\left({t}_{k}\right).$

From (1), we have

${x}^{\mathrm{\Delta }}\left(t\right)=B\left(t\right)x\left(t\right)+{\int }_{{t}_{0}}^{t}G\left(t,s\right)x\left(s\right)\mathrm{\Delta }s+H\left(t\right)+\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }p\left({t}_{k}\right).$

For ${t}_{0}<{t}_{k}<t$, we obtain

$\begin{array}{rcl}\mathrm{\Delta }p\left({t}_{k}\right)& =& L\left(t,{t}_{k}^{+}\right)x\left({t}_{k}^{+}\right)-L\left(t,{t}_{k}\right)x\left({t}_{k}\right)\\ =& \left[L\left(t,{t}_{k}^{+}\right)\left(I+{C}_{k}\right)-L\left(t,{t}_{k}\right)\right]x\left({t}_{k}\right)=0.\end{array}$

Hence, $x\left(t\right)$ is a solution of (2).

Conversely, let $y\left(t\right)$ be any solution of (2) on ${\mathbb{T}}_{0}$. We shall show that it satisfies (1). Consider

$Z\left(t\right)={y}^{\mathrm{\Delta }}\left(t\right)-F\left(t\right)-A\left(t\right)y\left(t\right)-{\int }_{{t}_{0}}^{t}K\left(t,s\right)y\left(s\right)\mathrm{\Delta }s.$

Then by (2) and (3) we have

$\begin{array}{rcl}Z\left(t\right)& =& -L\left(t,t\right)y\left(t\right)+L\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}G\left(t,s\right)y\left(s\right)\mathrm{\Delta }s\\ +{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s-{\int }_{{t}_{0}}^{t}K\left(t,s\right)y\left(s\right)\mathrm{\Delta }s.\end{array}$

Using (4), we obtain

$\begin{array}{rcl}Z\left(t\right)& =& -L\left(t,t\right)y\left(t\right)+L\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s\\ -{\int }_{{t}_{0}}^{t}K\left(t,s\right)y\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}\left[K\left(t,s\right)+{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\\ +{\int }_{\sigma \left(s\right)}^{t}L\left(t,\sigma \left(\tau \right)\right)K\left(\tau ,s\right)\mathrm{\Delta }\tau \right]y\left(s\right)\mathrm{\Delta }s.\end{array}$

Again by Theorem 2.3, we have

$\begin{array}{rl}Z\left(t\right)=& -L\left(t,t\right)y\left(t\right)+{\int }_{{t}_{0}}^{t}\left[{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\right]y\left(s\right)\mathrm{\Delta }s\\ +{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)\left[{\int }_{{t}_{0}}^{s}K\left(s,\tau \right)y\left(\tau \right)\mathrm{\Delta }\tau \right]\mathrm{\Delta }s\\ +L\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s.\end{array}$
(5)

Now, by setting $q\left(s\right)=L\left(t,s\right)y\left(s\right)$, for ${t}_{k-1}<s\le {t}_{k}$ we get

${q}^{\mathrm{\Delta }}\left(s\right)={\mathrm{\Delta }}_{s}L\left(t,s\right)y\left(s\right)+L\left(t,\sigma \left(s\right)\right){y}^{\mathrm{\Delta }}\left(s\right).$
(6)

Integrating (6) from ${t}_{0}$ to t yields

$q\left(t\right)-q\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }q\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}\left[{\mathrm{\Delta }}_{s}L\left(t,s\right)y\left(s\right)+L\left(t,\sigma \left(s\right)\right){y}^{\mathrm{\Delta }}\left(s\right)\right]\mathrm{\Delta }s,$

and therefore, we have

$L\left(t,t\right)y\left(t\right)-L\left(t,{t}_{0}\right){x}_{0}-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }q\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}{\mathrm{\Delta }}_{s}L\left(t,s\right)y\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right){y}^{\mathrm{\Delta }}\left(s\right)\mathrm{\Delta }s.$
(7)

Since $\mathrm{\Delta }q\left({t}_{k}\right)=0$, substituting (7) in (5), we obtain

$\begin{array}{rcl}Z\left(t\right)& =& -{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right){y}^{\mathrm{\Delta }}\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)A\left(s\right)y\left(s\right)\mathrm{\Delta }s\\ +{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)\left[{\int }_{{t}_{0}}^{s}K\left(s,\tau \right)y\left(\tau \right)\mathrm{\Delta }\tau \right]\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s\\ =& -{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)Z\left(s\right)\mathrm{\Delta }s,\end{array}$

which implies $Z\left(t\right)\equiv 0$ by the uniqueness of solutions of Volterra integral equations [12] and the fact that $\mathrm{\Delta }Z\left({t}_{k}\right)=0$. Hence $y\left(t\right)$ is a solution of (1). □

For our next result we assume that the matrix B commutes with its integral, so that B commutes with its matrix exponential, that is, $B\left(t\right){e}_{B}\left(t,s\right)={e}_{B}\left(t,s\right)B\left(t\right)$ [20].

Theorem 3.2 Let $B\in C\left(\mathbb{T},{M}_{n}\left(\mathbb{R}\right)\right)$ and $M,\alpha >0$. Assume that matrix B commutes with its integral. If

$\parallel {e}_{B}\left(t,s\right)\parallel \le M{e}_{\alpha }\left(s,t\right),\phantom{\rule{1em}{0ex}}\left(t,s\right)\in \mathrm{\Omega },$
(8)

then every solution $x\left(t\right)$ of (1) satisfies

$\begin{array}{rl}\parallel x\left(t\right)\parallel \le & M\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},t\right)+M{\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),t\right)\parallel H\left(s\right)\parallel \mathrm{\Delta }s\\ & +M{\int }_{{t}_{0}}^{t}\left[{\int }_{\sigma \left(s\right)}^{t}{e}_{\alpha }\left(\sigma \left(\tau \right),t\right)\parallel G\left(\tau ,s\right)\parallel \mathrm{\Delta }\tau \right]\parallel x\left(s\right)\parallel \mathrm{\Delta }s\\ & +M{e}_{\alpha }\left({t}_{0},t\right)\sum _{{t}_{0}<{t}_{k}<t}\parallel {\beta }_{k}\parallel \parallel x\left({t}_{k}\right)\parallel ,\end{array}$
(9)

where ${\beta }_{k}=\left[{e}_{B}\left({t}_{0},{t}_{k}^{+}\right)\left(I+{C}_{k}\right)-{e}_{B}\left({t}_{0},{t}_{k}\right)\right]$.

Proof Let $x\left(t\right)$ be the solution of (2) and define $q\left(t\right)={e}_{B}\left({t}_{0},t\right)x\left(t\right)$. Then

${q}^{\mathrm{\Delta }}\left(t\right)=-B\left(t\right){e}_{B}\left({t}_{0},\sigma \left(t\right)\right)x\left(t\right)+{e}_{B}\left({t}_{0},\sigma \left(t\right)\right){x}^{\mathrm{\Delta }}\left(t\right).$

Substituting for ${x}^{\mathrm{\Delta }}\left(t\right)$ from (2) and integrating from ${t}_{0}$ to t, we obtain

$q\left(t\right)-q\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }q\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}{e}_{B}\left({t}_{0},\sigma \left(s\right)\right)\left[H\left(s\right)+{\int }_{{t}_{0}}^{s}G\left(s,\tau \right)x\left(\tau \right)\mathrm{\Delta }\tau \right]\mathrm{\Delta }s.$

Using Theorem 2.3 and applying the semigroup property of exponential functions [[18], Theorem 2.36], we obtain

$\begin{array}{rl}x\left(t\right)=& {e}_{B}\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}{e}_{B}\left(t,\sigma \left(s\right)\right)H\left(s\right)\mathrm{\Delta }s+{e}_{B}\left(t,{t}_{0}\right)\sum _{{t}_{0}<{t}_{k}<t}{\beta }_{k}x\left({t}_{k}\right)\\ & +{\int }_{{t}_{0}}^{t}\left[{\int }_{\sigma \left(s\right)}^{t}{e}_{B}\left(t,\sigma \left(\tau \right)\right)G\left(\tau ,s\right)\mathrm{\Delta }\tau \right]x\left(s\right)\mathrm{\Delta }s.\end{array}$
(10)

For ${t}_{0}<{t}_{k}<t$, we have

$\begin{array}{rcl}\mathrm{\Delta }q\left({t}_{k}\right)& =& {e}_{B}\left({t}_{0},{t}_{k}^{+}\right)x\left({t}_{k}^{+}\right)-{e}_{B}\left({t}_{0},{t}_{k}\right)x\left({t}_{k}\right)\\ =& \left[{e}_{B}\left({t}_{0},{t}_{k}^{+}\right)\left(I+{C}_{k}\right)-{e}_{B}\left({t}_{0},{t}_{k}\right)\right]x\left({t}_{k}\right)\\ =& {\beta }_{k}x\left({t}_{k}\right).\end{array}$

Hence, using (8) and applying the norm on (10), we obtain (9), which completes the proof. □

In the next theorem we present sufficient conditions for asymptotic stability.

Theorem 3.3 Let $L\left(t,s\right)$ be an $n×n$ continuously differentiable matrix function with respect to s on Ω such that

1. (a)

the assumptions of Theorem  3.2 hold,

2. (b)

$\parallel L\left(t,s\right)\parallel \le \frac{{L}_{0}{e}_{\gamma }\left(s,t\right)}{\left(1+\mu \left(t\right)\alpha \right)\left(1+\mu \left(t\right)\gamma \right)}$,

3. (c)

${sup}_{{t}_{0}\le s\le t<\mathrm{\infty }}{\int }_{\sigma \left(s\right)}^{t}{e}_{\alpha }\left(\sigma \left(\tau \right),t\right)\parallel G\left(\tau ,s\right)\parallel \mathrm{\Delta }\tau \le {\alpha }_{0}$,

4. (d)

$F\left(t\right)\equiv 0$ and

5. (e)

${\prod }_{{t}_{0}<{t}_{k}<t}\left(1+{M}^{2}\left(1+{d}_{k}\right)\right)\le {e}_{\lambda }\left(t,{t}_{0}\right)$, where ${d}_{k}=\parallel I+{C}_{k}\parallel$ and ${d}_{k}\to 0$ as $k\to \mathrm{\infty }$,

where ${L}_{0}$, ${\alpha }_{0}$, λ are positive real constants and $\gamma >\alpha$.

If $\alpha \ominus M{\alpha }_{0}\ominus \lambda >0$, then every solution $x\left(t\right)$ of (1) tends to zero exponentially as $t\to +\mathrm{\infty }$.

Proof In view of Theorem 3.1 and the fact that $L\left(t,s\right)$ satisfies (a), it is enough to show that every solution of (2) tends to zero as $t\to +\mathrm{\infty }$. From (a) and using (9), we obtain

$\begin{array}{rl}{e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel \le & M\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},0\right)+M{\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),0\right)\parallel H\left(s\right)\parallel \mathrm{\Delta }s\\ & +M{\int }_{{t}_{0}}^{t}\left[{\int }_{\sigma \left(s\right)}^{t}{e}_{\alpha }\left(\sigma \left(\tau \right),0\right)\parallel G\left(\tau ,s\right)\parallel \mathrm{\Delta }\tau \right]\parallel x\left(s\right)\parallel \mathrm{\Delta }s\\ & +M{e}_{\alpha }\left({t}_{0},0\right)\sum _{{t}_{0}<{t}_{k}<t}\parallel {\beta }_{k}\parallel \parallel x\left({t}_{k}\right)\parallel .\end{array}$
(11)

Since

${\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),0\right)\parallel H\left(s\right)\parallel \mathrm{\Delta }s\le {L}_{0}\parallel {x}_{0}\parallel {e}_{\gamma }\left({t}_{0},0\right){\int }_{{t}_{0}}^{t}\frac{{e}_{\alpha }\left(\sigma \left(s\right),0\right){e}_{\gamma }\left(0,s\right)}{\left(1+\mu \left(s\right)\alpha \right)\left(1+\mu \left(s\right)\gamma \right)}\mathrm{\Delta }s,$

then by Lemma 2.1 and the fact that $\gamma >\alpha$, we obtain

${\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),0\right)\parallel H\left(s\right)\parallel \mathrm{\Delta }s\le \frac{{L}_{0}\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},0\right)}{\gamma -\alpha }.$

Using (11), (b), (c) and (d), we have

$\begin{array}{rcl}{e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel & \le & M\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},0\right)+M{L}_{0}\parallel {x}_{0}\parallel \frac{{e}_{\alpha }\left({t}_{0},0\right)}{\gamma -\alpha }+M{\int }_{{t}_{0}}^{t}{\alpha }_{0}{e}_{\alpha }\left(s,0\right)\parallel x\left(s\right)\parallel \mathrm{\Delta }s\\ & & +M{e}_{\alpha }\left({t}_{0},0\right)\sum _{{t}_{0}<{t}_{k}<t}\parallel {\beta }_{k}\parallel \parallel x\left({t}_{k}\right)\parallel .\end{array}$

From Theorem 3.2, we have

$\parallel {\beta }_{k}\parallel \le \parallel {e}_{B}\left({t}_{0},{t}_{k}^{+}\right)\parallel \parallel \left(I+{C}_{k}\right)\parallel +\parallel {e}_{B}\left({t}_{0},{t}_{k}\right)\parallel \le M{e}_{\alpha }\left({t}_{k},{t}_{0}\right)\left(1+{d}_{k}\right),$
(12)

which implies

$\begin{array}{rl}{e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel \le & M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha }\left({t}_{0},0\right)+M{\int }_{{t}_{0}}^{t}{\alpha }_{0}{e}_{\alpha }\left(s,0\right)\parallel x\left(s\right)\parallel \mathrm{\Delta }s\\ & +\sum _{{t}_{0}<{t}_{k}<t}{M}^{2}\left(1+{d}_{k}\right){e}_{\alpha }\left({t}_{k},0\right)\parallel x\left({t}_{k}\right)\parallel .\end{array}$
(13)

Lemma 2.4 yields that

${e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel \le M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha }\left({t}_{0},0\right)\prod _{{t}_{0}<{t}_{k}<t}\left(1+{M}^{2}\left(1+{d}_{k}\right)\right){e}_{M{\alpha }_{0}}\left(t,{t}_{0}\right).$

Using [[18], Theorem 2.36], (e) and the fact that ${t}_{0}<{t}_{k}<t$, we obtain

$\parallel x\left(t\right)\parallel \le M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha \ominus M{\alpha }_{0}}\left({t}_{0},0\right){e}_{\alpha \ominus M{\alpha }_{0}\ominus \lambda }\left(0,t\right),$

where $\alpha \ominus M{\alpha }_{0}\ominus \lambda =\frac{\alpha -M{\alpha }_{0}-\lambda \left(1+\mu \left(t\right)M{\alpha }_{0}\right)}{\left(1+\mu \left(t\right)M{\alpha }_{0}\right)\left(1+\mu \left(t\right)\lambda \right)}$ [[18], Exercise 2.28]. By Lemma 2.2, we have ${e}_{\alpha \ominus M{\alpha }_{0}\ominus \lambda }\left(0,t\right)\le \frac{1}{1+\left(\alpha \ominus M{\alpha }_{0}\ominus \lambda \right)t}$, so we obtain

$\parallel x\left(t\right)\parallel \le \frac{M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha \ominus M{\alpha }_{0}}\left({t}_{0},0\right)}{1+\left(\alpha \ominus M{\alpha }_{0}\ominus \lambda \right)t}.$

Hence, in view of (e) and the fact $\alpha \ominus M{\alpha }_{0}\ominus \lambda >0$, we obtain the required result. □

Example 3.4 Let us consider the Volterra integro-dynamic equation

$\left\{\begin{array}{c}{x}^{\mathrm{\Delta }}\left(t\right)=\ominus 2x\left(t\right)+{\int }_{0}^{t}{e}_{\ominus 2}\left(t,s\right)x\left(s\right)\mathrm{\Delta }s,\hfill \\ x\left({t}_{k}^{+}\right)=\frac{1}{4k}x\left({t}_{k}\right),\hfill \\ x\left(0\right)=1,\hfill \end{array}$
(14)

where $A\left(t\right)=\ominus 2$, $K\left(t,s\right)={e}_{\ominus 2}\left(t,s\right)$, $1+{C}_{k}=\frac{1}{4k}$ and the impulsive points are ${t}_{k}=2k$. Now take $L\left(t,s\right)=0$, so that $B\left(t\right)=\ominus 2$. The function $G\left(t,s\right)$ given in (4) becomes

$G\left(t,s\right)={e}_{\ominus 2}\left(t,s\right).$
(15)

In the following, we check the assumptions of Theorem 3.3 when $\mathbb{T}=\mathbb{R}$.

Let $\mathbb{T}=\mathbb{R}$. Then we have

$|{e}_{B}\left(t,s\right)|=|{e}_{-2}\left(t,s\right)|={e}^{2\left(s-t\right)}\le M{e}^{2\left(s-t\right)},\phantom{\rule{1em}{0ex}}M=2,$

and

$0=|L\left(t,s\right)|<{L}_{0}{e}^{3\left(s-t\right)},\phantom{\rule{1em}{0ex}}{L}_{0}=1.$

Here the constants are $\alpha =2$ and $\gamma =3$. From (15) it follows that

$G\left(t,s\right)={e}^{-2\left(t-s\right)}.$
(16)

Then from (16) we obtain that $G\left(t,s\right)$ is a positive function, and

$\begin{array}{rcl}{\int }_{s}^{t}{e}^{2\left(\tau -t\right)}|G\left(\tau ,s\right)|\phantom{\rule{0.2em}{0ex}}d\tau & =& {\int }_{s}^{t}{e}^{2\left(\tau -t\right)}{e}^{-2\left(\tau -s\right)}\phantom{\rule{0.2em}{0ex}}d\tau \\ =& {e}^{2\left(s-t\right)}\left(t-s\right)\\ \le & \frac{\left(t-s\right)}{1+2\left(t-s\right)}\\ <& \frac{1}{2},\end{array}$

from which it follows that

$\underset{0\le s\le t<\mathrm{\infty }}{sup}{\int }_{s}^{t}{e}^{2\left(\tau -t\right)}|G\left(\tau ,s\right)|\phantom{\rule{0.2em}{0ex}}d\tau \le \frac{1}{2}$

and

$\prod _{{t}_{0}<{t}_{k}<t}\left(1+{M}^{2}\left(1+{d}_{k}\right)\right)=\prod _{{t}_{0}<{t}_{k}<t}\left(5+\frac{1}{k}\right)\le {e}^{\lambda \left(t-{t}_{0}\right)},\phantom{\rule{1em}{0ex}}\lambda =\frac{19}{20}.$

Since ${\alpha }_{0}=\frac{1}{2}$ and $\lambda =\frac{19}{20}$, we have $\alpha -M{\alpha }_{0}-\lambda =2-1-\frac{19}{20}=\frac{1}{20}>0$. Therefore, since all the assumptions of Theorem 3.3 hold for system (14), it follows that the solution of (14) tends to zero exponentially as $t\to +\mathrm{\infty }$.
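The constants of Example 3.4 can be double-checked numerically; the grid search below merely illustrates the estimates already proved above and is not part of the source:

```python
import math

# Numerical double-check (illustrative only) of the constants in
# Example 3.4 with T = R: alpha = 2, M = 2, alpha0 = 1/2, lambda = 19/20.
alpha, M, alpha0, lam = 2.0, 2.0, 0.5, 19.0 / 20.0

# int_s^t e^{2(tau-t)} e^{-2(tau-s)} dtau = (t - s) e^{-2(t-s)}; its
# supremum over t >= s is attained at t - s = 1/2 and stays below 1/2
sup_val = max(x * 0.01 * math.exp(-2.0 * x * 0.01) for x in range(0, 2001))
assert sup_val < alpha0

# stability margin of Theorem 3.3 (on R, "circle minus" is ordinary minus)
margin = alpha - M * alpha0 - lam
assert margin > 0
print("sup of the integral ~", round(sup_val, 4), "; margin =", margin)
```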

If $\mathbb{T}=\mathbb{N}$, then all points are right scattered and there is no impulse condition. So, from [[17], Example 3.7] it follows that the solution of (14) tends to zero exponentially as $t\to +\mathrm{\infty }$.

Theorem 3.5 Let $L\in C\left(\mathrm{\Omega },{M}_{n}\left(\mathbb{R}\right)\right)$ such that ${\mathrm{\Delta }}_{s}L\left(t,s\right)\in C\left(\mathrm{\Omega },{M}_{n}\left(\mathbb{R}\right)\right)$ for $\left(t,s\right)\in \mathrm{\Omega }$ and

1. (i)

assumptions (a), (b), (d) and (e) of Theorem  3.3 hold,

2. (ii)

$\parallel {\mathrm{\Delta }}_{s}L\left(t,s\right)\parallel \le {N}_{0}{e}_{\delta }\left(s,t\right)$ and $\parallel K\left(t,s\right)\parallel \le {K}_{0}{e}_{\theta }\left(s,t\right)$,

3. (iii)

$\parallel A\left(t\right)\parallel \le {A}_{0}$ for ${t}_{0}\le t<\mathrm{\infty }$,

4. (iv)

${sup}_{{t}_{0}\le s\le t<\mathrm{\infty }}{\int }_{\sigma \left(s\right)}^{t}\left[\left({K}_{0}+{N}_{0}\right)\left(1+\mu \left(\tau \right)\alpha \right)+\frac{{A}_{0}{L}_{0}+\left(\tau -\sigma \left(s\right)\right){L}_{0}{K}_{0}}{\mu \left(\tau \right)\alpha }\right]\mathrm{\Delta }\tau \le {\alpha }_{0}^{\star }$ for some ${\alpha }_{0}^{\star }>0$,

where ${A}_{0}$, ${N}_{0}$, ${K}_{0}$, δ and θ are positive real numbers such that $\gamma >\delta >\alpha$, $\theta >\alpha$.

If $\alpha \ominus M{\alpha }_{0}^{\star }\ominus \lambda >0$, then every solution $x\left(t\right)$ of (1) tends to zero exponentially as $t\to +\mathrm{\infty }$.

Proof From (4), we obtain

$\begin{array}{rcl}\parallel G\left(t,s\right)\parallel & \le & \parallel K\left(t,s\right)\parallel +\parallel {\mathrm{\Delta }}_{s}L\left(t,s\right)\parallel +\parallel L\left(t,\sigma \left(s\right)\right)\parallel \parallel A\left(s\right)\parallel \\ +{\int }_{\sigma \left(s\right)}^{t}\parallel L\left(t,\sigma \left(u\right)\right)\parallel \parallel K\left(u,s\right)\parallel \mathrm{\Delta }u,\end{array}$

which implies

$\begin{array}{rl}\parallel G\left(t,s\right)\parallel \le & {K}_{0}{e}_{\theta }\left(s,t\right)+{N}_{0}{e}_{\delta }\left(s,t\right)+\frac{{L}_{0}{e}_{\gamma }\left(s,t\right)}{\left(1+\mu \left(t\right)\alpha \right)\left(1+\mu \left(t\right)\gamma \right)}{A}_{0}\\ +{\int }_{\sigma \left(s\right)}^{t}\frac{{L}_{0}{K}_{0}{e}_{\gamma }\left(u,t\right){e}_{\theta }\left(s,u\right)}{\left(1+\mu \left(t\right)\alpha \right)\left(1+\mu \left(t\right)\gamma \right)}\mathrm{\Delta }u.\end{array}$
(17)

Since $\gamma >\delta >\alpha$, $\theta >\alpha$, then from (i), (ii) and (iii), (17) becomes

$\begin{array}{rl}\parallel G\left(t,s\right)\parallel \le & {K}_{0}{e}_{\alpha }\left(s,t\right)+{N}_{0}{e}_{\alpha }\left(s,t\right)\\ +\frac{{L}_{0}{e}_{\alpha }\left(s,t\right)}{\left(1+\mu \left(t\right)\alpha \right)\left(1+\mu \left(t\right)\gamma \right)}{A}_{0}+\frac{\left(t-\sigma \left(s\right)\right){L}_{0}{K}_{0}{e}_{\alpha }\left(s,t\right)}{\left(1+\mu \left(t\right)\alpha \right)\left(1+\mu \left(t\right)\gamma \right)}\end{array}$
(18)

and

${e}_{\alpha }\left(\sigma \left(t\right),0\right)\parallel G\left(t,s\right)\parallel \le \left[\left({K}_{0}+{N}_{0}\right)\left(1+\mu \left(t\right)\alpha \right)+\frac{{A}_{0}{L}_{0}+\left(t-\sigma \left(s\right)\right){L}_{0}{K}_{0}}{\mu \left(t\right)\alpha }\right]{e}_{\alpha }\left(s,0\right).$

Integrating the above inequality and using (iv), we obtain

${\int }_{\sigma \left(s\right)}^{t}{e}_{\alpha }\left(\sigma \left(\tau \right),0\right)\parallel G\left(\tau ,s\right)\parallel \mathrm{\Delta }\tau \le {\alpha }_{0}^{\star }{e}_{\alpha }\left(s,0\right).$
(19)

Substituting (19) in (11), we obtain

$\begin{array}{rcl}{e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel & \le & M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha }\left({t}_{0},0\right)+M{\int }_{{t}_{0}}^{t}{\alpha }_{0}^{\star }{e}_{\alpha }\left(s,0\right)\parallel x\left(s\right)\parallel \mathrm{\Delta }s\\ & & +\sum _{{t}_{0}<{t}_{k}<t}{M}^{2}\left(1+{d}_{k}\right){e}_{\alpha }\left({t}_{k},0\right)\parallel x\left({t}_{k}\right)\parallel .\end{array}$

Lemma 2.4 yields that

${e}_{\alpha }\left(t,0\right)\parallel x\left(t\right)\parallel \le M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha }\left({t}_{0},0\right)\prod _{{t}_{0}<{t}_{k}<t}\left(1+{M}^{2}\left(1+{d}_{k}\right)\right){e}_{M{\alpha }_{0}^{\star }}\left(t,{t}_{0}\right).$

Using [[18], Theorem 2.36], (e) and the fact that ${t}_{0}<{t}_{k}<t$, we obtain

$\parallel x\left(t\right)\parallel \le M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha \ominus M{\alpha }_{0}^{\star }}\left({t}_{0},0\right){e}_{\alpha \ominus M{\alpha }_{0}^{\star }\ominus \lambda }\left(0,t\right).$

Then by Lemma 2.2, we have

$\parallel x\left(t\right)\parallel \le \frac{M\parallel {x}_{0}\parallel \left(1+\frac{{L}_{0}}{\gamma -\alpha }\right){e}_{\alpha \ominus M{\alpha }_{0}^{\star }}\left({t}_{0},0\right)}{1+\left(\alpha \ominus M{\alpha }_{0}^{\star }\ominus \lambda \right)t}.$

Hence, in view of (i) and $\alpha \ominus M{\alpha }_{0}^{\star }\ominus \lambda >0$, we obtain the required result. □

Corollary 3.6 Let $L\left(t,s\right)$ be an $n×n$ matrix function, continuously differentiable with respect to s on ${t}_{k-1}<s\le {t}_{k}$, with $L\left(t,{t}_{k}^{+}\right)={\left(I+{C}_{k}\right)}^{-1}L\left(t,{t}_{k}\right)$ for each $k=1,2,\dots$ . Then (1) is equivalent to the impulsive dynamic system

$\left\{\begin{array}{c}{y}^{\mathrm{\Delta }}\left(t\right)=B\left(t\right)y\left(t\right)+H\left(t\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{0}\mathrm{\setminus }\left\{{t}_{k}\right\},\hfill \\ y\left({t}_{k}^{+}\right)=\left(I+{C}_{k}\right)y\left({t}_{k}\right),\phantom{\rule{1em}{0ex}}t={t}_{k},k=1,2,\dots ,\hfill \\ y\left({t}_{0}\right)={y}_{0},\hfill \end{array}$
(20)

where

$\begin{array}{r}B\left(t\right)=A\left(t\right)-L\left(t,t\right),\\ H\left(t\right)=F\left(t\right)+L\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}L\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s,\end{array}$
(21)

and

$\begin{array}{r}K\left(t,s\right)+{\mathrm{\Delta }}_{s}L\left(t,s\right)+L\left(t,\sigma \left(s\right)\right)A\left(s\right)\\ \phantom{\rule{1em}{0ex}}+{\int }_{\sigma \left(s\right)}^{t}L\left(t,\sigma \left(u\right)\right)K\left(u,s\right)\mathrm{\Delta }u=0,\phantom{\rule{1em}{0ex}}s,t\ne {t}_{k}.\end{array}$
(22)

Proof The proof follows an argument similar to that in Theorem 3.1 with $G\left(t,s\right)=0$. □

4 Boundedness

In the first result of this section, we give sufficient conditions to ensure that (1) has bounded solutions. Our results apply to (1) whether $A\left(t\right)$ is stable, identically zero, or completely unstable, and require neither $A\left(t\right)$ to be constant nor $K\left(t,s\right)$ to be a convolution kernel. Let $C\left(t\right)$ and $D\left(t,s\right)$ be continuous $n×n$ matrices, ${t}_{0}\le s\le t<\mathrm{\infty }$. Let $s\in \left[{t}_{0},\mathrm{\infty }\right)$ and assume that $C\left(t\right)$ is an $n×n$ regressive matrix. The unique matrix solution of the initial value problem

${Y}^{\mathrm{\Delta }}=C\left(t\right)Y,\phantom{\rule{2em}{0ex}}Y\left({t}_{k}^{+}\right)=\left(I+{C}_{k}\right)Y\left({t}_{k}\right),\phantom{\rule{2em}{0ex}}Y\left(s\right)=I,$
(23)

is called the impulsive transition matrix (at s) and it is denoted by ${S}_{C}\left(t,s\right)$ (see [[16], Corollary 3.1]). Also, if $H\left(t,s\right)$ is an $n×n$ regressive matrix satisfying

$\left\{\begin{array}{c}{\mathrm{\Delta }}_{t}H\left(t,s\right)=C\left(t\right)H\left(t,s\right)+D\left(t,s\right),\hfill \\ H\left({t}_{k}^{+},s\right)=\left(I+{C}_{k}\right)H\left({t}_{k},s\right),\hfill \\ H\left(s,s\right)=A\left(s\right)-C\left(s\right),\hfill \end{array}$
(24)

then

$H\left(t,s\right)={S}_{C}\left(t,s\right)\left[A\left(s\right)-C\left(s\right)\right]+{\int }_{s}^{t}{S}_{C}\left(t,\sigma \left(\tau \right)\right)D\left(\tau ,s\right)\mathrm{\Delta }\tau .$
(25)
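The representation (25) is a variation-of-constants formula; in the scalar, impulse-free case on $\mathbb{T}=\mathbb{Z}$ it can be checked directly, since there ${S}_{C}\left(t,s\right)={\left(1+C\right)}^{t-s}$ and $\sigma \left(\tau \right)=\tau +1$. The data C, A, D below are arbitrary test values, not from the paper:

```python
# Illustrative check (scalar, impulse-free, arbitrary test data) of the
# variation-of-constants representation (25) on T = Z, where
# S_C(t, s) = (1 + C)**(t - s) and sigma(tau) = tau + 1.
C, A = 0.2, 0.7

def D(tau, s):
    return 0.3 * (tau - s + 1)

def S(t, u):
    return (1.0 + C) ** (t - u)

s, T = 3, 15
# step the defining recursion (24): H(t+1, s) - H(t, s) = C H(t, s) + D(t, s)
H = {s: A - C}
for t in range(s, T):
    H[t + 1] = H[t] + C * H[t] + D(t, s)

# compare with the closed form (25)
for t in range(s, T + 1):
    closed = S(t, s) * (A - C) + sum(S(t, tau + 1) * D(tau, s)
                                     for tau in range(s, t))
    assert abs(H[t] - closed) < 1e-9
print("representation (25) verified on T = Z for t =", s, "...", T)
```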

Theorem 4.1 Let ${S}_{C}\left(t,s\right)$ be the solution of (23), and suppose that there are positive constants N, J and M such that

1. (i)

$\parallel {S}_{C}\left(t,{t}_{0}\right)\parallel \le N$,

2. (ii)

${\int }_{{t}_{0}}^{t}\parallel {S}_{C}\left(t,s\right)\left[A\left(s\right)-C\left(s\right)\right]+{\int }_{s}^{t}{S}_{C}\left(t,\sigma \left(\tau \right)\right)K\left(\tau ,s\right)\mathrm{\Delta }\tau \parallel \mathrm{\Delta }s\le J<1$,

3. (iii)

$\parallel {\int }_{{t}_{0}}^{t}{S}_{C}\left(t,\sigma \left(u\right)\right)\left[F\left(u\right)-G\left(t\right)x\left(t\right)\right]\mathrm{\Delta }u\parallel \le M$.

Then all the solutions of (1) are uniformly bounded, and the zero solution of the corresponding homogeneous equation of (1) is uniformly stable with the initial condition $x\left({t}_{0}\right)=0$.

Proof Consider the following functional

$V\left(t,x\left(\cdot \right)\right)=x\left(t\right)-{\int }_{{t}_{0}}^{t}H\left(t,s\right)x\left(s\right)\mathrm{\Delta }s.$
(26)

The derivative of $V\left(t,x\left(\cdot \right)\right)$ along a solution $x\left(t\right)=x\left(t,{t}_{0},{x}_{0}\right)$ of (1) satisfies

${V}^{\mathrm{\Delta }}\left(t,x\left(\cdot \right)\right)={x}^{\mathrm{\Delta }}\left(t\right)-{\mathrm{\Delta }}_{t}{\int }_{{t}_{0}}^{t}H\left(t,s\right)x\left(s\right)\mathrm{\Delta }s.$

From [[18], Theorem 1.117], we obtain

$\begin{array}{rcl}{V}^{\mathrm{\Delta }}\left(t,x\left(\cdot \right)\right)& =& {x}^{\mathrm{\Delta }}\left(t\right)-H\left(\sigma \left(t\right),t\right)x\left(t\right)-{\int }_{{t}_{0}}^{t}{\mathrm{\Delta }}_{t}H\left(t,s\right)x\left(s\right)\mathrm{\Delta }s\\ =& A\left(t\right)x\left(t\right)-H\left(\sigma \left(t\right),t\right)x\left(t\right)+{\int }_{{t}_{0}}^{t}K\left(t,s\right)x\left(s\right)\mathrm{\Delta }s\\ -{\int }_{{t}_{0}}^{t}{\mathrm{\Delta }}_{t}H\left(t,s\right)x\left(s\right)\mathrm{\Delta }s+F\left(t\right)\end{array}$

or

$\begin{array}{rcl}{V}^{\mathrm{\Delta }}\left(t,x\left(\cdot \right)\right)& =& \left[A\left(t\right)-H\left(\sigma \left(t\right),t\right)\right]x\left(t\right)+F\left(t\right)\\ +{\int }_{{t}_{0}}^{t}\left[K\left(t,s\right)-{\mathrm{\Delta }}_{t}H\left(t,s\right)\right]x\left(s\right)\mathrm{\Delta }s.\end{array}$
(27)

By using (25), [[18], Theorem 1.75] and [[16], Theorem 3.4], we have the following expression:

$\begin{array}{rcl}H\left(\sigma \left(t\right),t\right)& =& {S}_{C}\left(\sigma \left(t\right),t\right)\left[A\left(t\right)-C\left(t\right)\right]+{\int }_{t}^{\sigma \left(t\right)}{S}_{C}\left(\sigma \left(t\right),\sigma \left(\tau \right)\right)D\left(\tau ,t\right)\mathrm{\Delta }\tau \\ =& \left(I+\mu \left(t\right)C\left(t\right)\right){S}_{C}\left(t,t\right)\left[A\left(t\right)-C\left(t\right)\right]+\mu \left(t\right){S}_{C}\left(\sigma \left(t\right),\sigma \left(t\right)\right)D\left(t,t\right)\\ =& \left(I+\mu \left(t\right)C\left(t\right)\right)\left[A\left(t\right)-C\left(t\right)\right]+\mu \left(t\right)D\left(t,t\right)\\ =& \left[A\left(t\right)-C\left(t\right)\right]+\mu \left(t\right)\left[C\left(t\right)A\left(t\right)-{C}^{2}\left(t\right)+D\left(t,t\right)\right],\end{array}$

which implies that

$H\left(\sigma \left(t\right),t\right)=\left[A\left(t\right)-C\left(t\right)\right]+G\left(t\right),$
(28)

where $G\left(t\right)=\mu \left(t\right)\left[C\left(t\right)A\left(t\right)-{C}^{2}\left(t\right)+D\left(t,t\right)\right]$. Substituting (28) in (27) yields

${V}^{\mathrm{\Delta }}\left(t,x\left(\cdot \right)\right)=C\left(t\right)x\left(t\right)-G\left(t\right)x\left(t\right)+{\int }_{{t}_{0}}^{t}\left[K\left(t,s\right)-{\mathrm{\Delta }}_{t}H\left(t,s\right)\right]x\left(s\right)\mathrm{\Delta }s+F\left(t\right).$

From (24) and (26), we have

${V}^{\mathrm{\Delta }}\left(t,x\left(\cdot \right)\right)=C\left(t\right)V\left(t,x\left(\cdot \right)\right)+{\int }_{{t}_{0}}^{t}\left[K\left(t,s\right)-D\left(t,s\right)\right]x\left(s\right)\mathrm{\Delta }s+F\left(t\right)-G\left(t\right)x\left(t\right),$

and it is easy to see that

$V\left({t}_{k}^{+},x\left(\cdot \right)\right)=\left(I+{C}_{k}\right)V\left({t}_{k},x\left(\cdot \right)\right).$

Thus

$V\left(t,x\left(\cdot \right)\right)={S}_{C}\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}{S}_{C}\left(t,\sigma \left(u\right)\right)g\left(u,x\left(\cdot \right)\right)\mathrm{\Delta }u,$
(29)

where

$g\left(t,x\left(\cdot \right)\right)={\int }_{{t}_{0}}^{t}\left[K\left(t,s\right)-D\left(t,s\right)\right]x\left(s\right)\mathrm{\Delta }s+F\left(t\right)-G\left(t\right)x\left(t\right).$

Let $D\left(t,s\right)=K\left(t,s\right)$. Then by (25), (ii) is precisely ${\int }_{{t}_{0}}^{t}\parallel H\left(t,s\right)\parallel \mathrm{\Delta }s\le J<1$. By (29) and (i)-(iii),

$\begin{array}{rcl}\parallel V\left(t,x\left(\cdot \right)\right)\parallel & =& \parallel {S}_{C}\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}{S}_{C}\left(t,\sigma \left(u\right)\right)\left[F\left(u\right)-G\left(u\right)x\left(u\right)\right]\mathrm{\Delta }u\parallel \\ \le & \parallel {S}_{C}\left(t,{t}_{0}\right)\parallel \parallel {x}_{0}\parallel +\parallel {\int }_{{t}_{0}}^{t}{S}_{C}\left(t,\sigma \left(u\right)\right)\left[F\left(u\right)-G\left(u\right)x\left(u\right)\right]\mathrm{\Delta }u\parallel \\ \le & N\parallel {x}_{0}\parallel +M.\end{array}$

If $\parallel {x}_{0}\parallel <{B}_{1}$ for some constant ${B}_{1}>0$, and if $Q=N{B}_{1}+M$, then by (26) we obtain

$\parallel x\left(t\right)\parallel -{\int }_{{t}_{0}}^{t}\parallel H\left(t,s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s\le \parallel V\left(t,x\left(\cdot \right)\right)\parallel \le Q.$
(30)

Now, either there exists ${B}_{2}>0$ such that $\parallel x\left(t\right)\parallel <{B}_{2}$ for all $t\ge {t}_{0}$, and thus $x\left(t\right)$ is uniformly bounded, or there exists a monotone sequence $\left\{{t}_{n}\right\}$ tending to infinity such that $\parallel x\left({t}_{n}\right)\parallel ={max}_{{t}_{0}\le t\le {t}_{n}}\parallel x\left(t\right)\parallel$ and $\parallel x\left({t}_{n}\right)\parallel \to \mathrm{\infty }$ as ${t}_{n}\to \mathrm{\infty }$, and by (ii) and (30) we have

$\parallel x\left({t}_{n}\right)\parallel \left(1-J\right)\le \parallel x\left({t}_{n}\right)\parallel -{\int }_{{t}_{0}}^{{t}_{n}}\parallel H\left({t}_{n},s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s\le Q,$

a contradiction. This completes the proof. □

In the second part of this section, we consider system (1) with $F\left(t\right)$ bounded and suppose that

$C\left(t,s\right)=-{\int }_{t}^{\mathrm{\infty }}K\left(u,s\right)\mathrm{\Delta }u$
(31)

is defined and continuous on Ω. The matrix $E\left(t\right)$ on $\left[{t}_{0},\mathrm{\infty }\right)$ is defined by

$E\left(t\right)=A\left(t\right)-C\left(\sigma \left(t\right),t\right).$
(32)

Then (1) is equivalent to the system

$\left\{\begin{array}{c}{x}^{\mathrm{\Delta }}\left(t\right)=E\left(t\right)x\left(t\right)+{\mathrm{\Delta }}_{t}{\int }_{{t}_{0}}^{t}C\left(t,s\right)x\left(s\right)\mathrm{\Delta }s+F\left(t\right),\phantom{\rule{1em}{0ex}}t\in {\mathbb{T}}_{0}\mathrm{\setminus }\left\{{t}_{k}\right\},\hfill \\ x\left({t}_{k}^{+}\right)=\left(I+{C}_{k}\right)x\left({t}_{k}\right),\phantom{\rule{1em}{0ex}}t={t}_{k},k=1,2,\dots ,\hfill \\ x\left({t}_{0}\right)={x}_{0}.\hfill \end{array}$
(33)
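The equivalence of (1) and (33) rests on one differentiation step, which may be worth recording. Since ${\mathrm{\Delta }}_{t}C\left(t,s\right)=K\left(t,s\right)$ by (31), differentiating the integral term with [[18], Theorem 1.117] gives

```latex
\Delta_t \int_{t_0}^{t} C(t,s)\,x(s)\,\Delta s
  = C\bigl(\sigma(t),t\bigr)\,x(t) + \int_{t_0}^{t} \Delta_t C(t,s)\,x(s)\,\Delta s
  = C\bigl(\sigma(t),t\bigr)\,x(t) + \int_{t_0}^{t} K(t,s)\,x(s)\,\Delta s ,
```

so, with $E\left(t\right)=A\left(t\right)-C\left(\sigma \left(t\right),t\right)$ as in (32), the right-hand side of the first equation in (33) equals $A\left(t\right)x\left(t\right)+{\int }_{{t}_{0}}^{t}K\left(t,s\right)x\left(s\right)\mathrm{\Delta }s+F\left(t\right)$, i.e., the right-hand side of (1).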

Theorem 4.2 Let $E\in C\left(\mathbb{T},{M}_{n}\left(\mathbb{R}\right)\right)$ and $M,\alpha >0$. Assume that $E\left(t\right)$ commutes with its integral. If

$\parallel {e}_{E}\left(t,s\right)\parallel \le M{e}_{\alpha }\left(s,t\right),\phantom{\rule{1em}{0ex}}t,s\in \mathrm{\Omega },$
(34)

then every solution $x\left(t\right)$ of (1) with $x\left({t}_{0}\right)={x}_{0}$ satisfies

$\begin{array}{rl}\parallel x\left(t\right)\parallel \le & M\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},t\right)+M{\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),t\right)\parallel F\left(s\right)\parallel \mathrm{\Delta }s\\ +M{\int }_{{t}_{0}}^{t}\parallel E\left(u\right)\parallel {e}_{\alpha }\left(\sigma \left(u\right),t\right)\left[{\int }_{{t}_{0}}^{u}\parallel C\left(u,s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s\right]\mathrm{\Delta }u\\ +{\int }_{{t}_{0}}^{t}\parallel C\left(t,s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s+M{e}_{\alpha }\left({t}_{0},t\right)\sum _{{t}_{0}<{t}_{k}<t}\parallel {\beta }_{k}\parallel \parallel x\left({t}_{k}\right)\parallel ,\end{array}$
(35)

where ${\beta }_{k}=\left[{e}_{E}\left({t}_{0},{t}_{k}^{+}\right)\left(I+{C}_{k}\right)-{e}_{E}\left({t}_{0},{t}_{k}\right)\right]$.

Proof Let $x\left(t\right)$ be the solution of (1) and define $q\left(t\right)={e}_{E}\left({t}_{0},t\right)x\left(t\right)$. Then

${q}^{\mathrm{\Delta }}\left(t\right)=-E\left(t\right){e}_{E}\left({t}_{0},\sigma \left(t\right)\right)x\left(t\right)+{e}_{E}\left({t}_{0},\sigma \left(t\right)\right){x}^{\mathrm{\Delta }}\left(t\right).$

Substituting for ${x}^{\mathrm{\Delta }}\left(t\right)$ from (33) and integrating from ${t}_{0}$ to t yields

$q\left(t\right)-q\left({t}_{0}\right)-\sum _{{t}_{0}<{t}_{k}<t}\mathrm{\Delta }q\left({t}_{k}\right)={\int }_{{t}_{0}}^{t}{e}_{E}\left({t}_{0},\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}{e}_{E}\left({t}_{0},\sigma \left(s\right)\right){\mathrm{\Delta }}_{s}\left[{\int }_{{t}_{0}}^{s}C\left(s,u\right)x\left(u\right)\mathrm{\Delta }u\right]\mathrm{\Delta }s.$

Applying integration by parts [[18], Theorem 1.77] to the second term on the right-hand side, together with the semigroup property of exponential functions [[18], Theorem 2.36], we obtain

$\begin{array}{rl}x\left(t\right)=& {e}_{E}\left(t,{t}_{0}\right){x}_{0}+{\int }_{{t}_{0}}^{t}{e}_{E}\left(t,\sigma \left(s\right)\right)F\left(s\right)\mathrm{\Delta }s+{\int }_{{t}_{0}}^{t}C\left(t,s\right)x\left(s\right)\mathrm{\Delta }s\\ +{\int }_{{t}_{0}}^{t}E\left(u\right){e}_{E}\left(t,\sigma \left(u\right)\right)\left[{\int }_{{t}_{0}}^{u}C\left(u,s\right)x\left(s\right)\mathrm{\Delta }s\right]\mathrm{\Delta }u+{e}_{E}\left(t,{t}_{0}\right)\sum _{{t}_{0}<{t}_{k}<t}{\beta }_{k}x\left({t}_{k}\right).\end{array}$
(36)

For ${t}_{0}<{t}_{k}<t$, we have

$\begin{array}{rcl}\mathrm{\Delta }q\left({t}_{k}\right)& =& {e}_{E}\left({t}_{0},{t}_{k}^{+}\right)x\left({t}_{k}^{+}\right)-{e}_{E}\left({t}_{0},{t}_{k}\right)x\left({t}_{k}\right)\\ =& \left[{e}_{E}\left({t}_{0},{t}_{k}^{+}\right)\left(I+{C}_{k}\right)-{e}_{E}\left({t}_{0},{t}_{k}\right)\right]x\left({t}_{k}\right)\\ =& {\beta }_{k}x\left({t}_{k}\right).\end{array}$

Hence, using (34) and applying the norm on (36), we obtain (35), which completes the proof. □
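For orientation, on $\mathbb{T}=\mathbb{R}$ the exponential ${e}_{E}\left({t}_{0},\cdot \right)$ is continuous, so ${e}_{E}\left({t}_{0},{t}_{k}^{+}\right)={e}_{E}\left({t}_{0},{t}_{k}\right)$ and the jump coefficients simplify:

```latex
\beta_k = e_E\bigl(t_0,t_k^{+}\bigr)(I+C_k) - e_E(t_0,t_k)
        = e_E(t_0,t_k)\,C_k ,
```

so in the continuous case the impulse sum in (35) is controlled directly by the sizes of the impulse operators ${C}_{k}$.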

For the next results, assume that the hypotheses of Theorem 4.2 hold.

Theorem 4.3 Let $x\left(t\right)$ be a solution of (1). If $\parallel E\left(t\right)\parallel \le d$ on $\left[{t}_{0},\mathrm{\infty }\right)$ for some $d>0$, $F\left(t\right)$ is bounded, and ${sup}_{{t}_{0}\le t<\mathrm{\infty }}{\int }_{{t}_{0}}^{t}\parallel C\left(t,s\right)\parallel \mathrm{\Delta }s+\frac{1}{d}{\sum }_{{t}_{0}<{t}_{k}<\mathrm{\infty }}\parallel {\beta }_{k}\parallel \le \beta$, with β sufficiently small, then $x\left(t\right)$ is bounded.

Proof For the given ${t}_{0}$ and bounded $F\left(t\right)$, there is ${C}_{1}>0$ with

$M\parallel {x}_{0}\parallel {e}_{\alpha }\left({t}_{0},t\right)+M\underset{{t}_{0}\le t<\mathrm{\infty }}{sup}{\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(s\right),t\right)\parallel F\left(s\right)\parallel \mathrm{\Delta }s<{C}_{1}.$
(37)

Substituting (37) in (35), we obtain

$\begin{array}{rcl}\parallel x\left(t\right)\parallel & \le & {C}_{1}+Md{\int }_{{t}_{0}}^{t}{e}_{\alpha }\left(\sigma \left(u\right),t\right)\left[{\int }_{{t}_{0}}^{u}\parallel C\left(u,s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s\right]\mathrm{\Delta }u\\ +{\int }_{{t}_{0}}^{t}\parallel C\left(t,s\right)\parallel \parallel x\left(s\right)\parallel \mathrm{\Delta }s+M{e}_{\alpha }\left({t}_{0},t\right)\sum _{{t}_{0}<{t}_{k}<t}\parallel {\beta }_{k}\parallel \parallel x\left({t}_{k}\right)\parallel .\end{array}$

Let β be chosen so that $\beta \left[1+\frac{Md}{\alpha }\right]=m<1$. Then

$\parallel x\left(t\right)\parallel \le {C}_{1}+m\underset{{t}_{0}\le s\le t}{sup}\parallel x\left(s\right)\parallel .$

Let ${C}_{2}>\parallel {x}_{0}\parallel$ and ${C}_{1}+m{C}_{2}<{C}_{2}$. If $\parallel x\left(t\right)\parallel$ is not bounded, then there exists a first ${t}_{1}>{t}_{0}$ with $\parallel x\left({t}_{1}\right)\parallel ={C}_{2}$, and then

${C}_{2}=\parallel x\left({t}_{1}\right)\parallel \le {C}_{1}+m{C}_{2}<{C}_{2},$

a contradiction. This completes the proof. □

Theorem 4.4 If $F\left(t\right)=0$ in (1), $\parallel E\left(t\right)\parallel \le d$ on $\left[{t}_{0},\mathrm{\infty }\right)$ for some $d>0$, and ${\int }_{{t}_{0}}^{t}\parallel C\left(t,s\right)\parallel \mathrm{\Delta }s+\frac{1}{d}{\sum }_{{t}_{0}<{t}_{k}<\mathrm{\infty }}\parallel {\beta }_{k}\parallel \le \beta$ for β sufficiently small, then the zero solution of (1) is uniformly stable.

Proof Let $ϵ>0$ be given. We wish to find $\delta >0$ such that ${t}_{0}\ge 0$, $\parallel {x}_{0}\parallel <\delta$, and $t\ge {t}_{0}$ imply $\parallel x\left(t,{x}_{0}\right)\parallel <ϵ$. Let $\delta <ϵ$, with δ to be determined. If $\parallel {x}_{0}\parallel <\delta$, then $M\parallel {x}_{0}\parallel \le M\delta$. From (35) with $F\left(t\right)=0$,

$\parallel x\left(t\right)\parallel \le M\delta +\frac{Md}{\alpha }\beta \underset{{t}_{0}\le s\le t}{sup}\parallel x\left(s\right)\parallel +\beta \underset{{t}_{0}\le s\le t}{sup}\parallel x\left(s\right)\parallel =M\delta +\beta \left[1+\frac{Md}{\alpha }\right]\underset{{t}_{0}\le s\le t}{sup}\parallel x\left(s\right)\parallel .$

First take β so that $\beta \left[1+\frac{Md}{\alpha }\right]\le \frac{3}{4}$ and δ so that $M\delta +\frac{3}{4}ϵ<ϵ$. If $\parallel {x}_{0}\parallel <\delta$ and if there exists ${t}_{1}>{t}_{0}$ with $\parallel x\left({t}_{1}\right)\parallel =ϵ$, we have

$ϵ=\parallel x\left({t}_{1}\right)\parallel \le M\delta +\frac{3}{4}ϵ<ϵ,$

a contradiction. Thus the zero solution is uniformly stable. The proof is complete. □

Example 4.5 Let us consider the following system:

$\left\{\begin{array}{c}{x}^{\mathrm{\Delta }}\left(t\right)=-\frac{1}{{\left(\sigma \left(t\right)\right)}^{2}}x\left(t\right)+{\int }_{0}^{t}\left(\frac{1}{{t}^{2}\sigma \left(t\right)}+\frac{1}{t{\left(\sigma \left(t\right)\right)}^{2}}\right)x\left(s\right)\mathrm{\Delta }s,\hfill \\ x\left({t}_{k}^{+}\right)=\left(1+{C}_{k}\right)x\left({t}_{k}\right),\hfill \\ x\left(0\right)=0,\hfill \end{array}$
(38)

where $A\left(t\right)=-\frac{1}{{\left(\sigma \left(t\right)\right)}^{2}}$ and $K\left(t,s\right)=\left(\frac{1}{{t}^{2}\sigma \left(t\right)}+\frac{1}{t{\left(\sigma \left(t\right)\right)}^{2}}\right)$. It is easy to check that

${\left(\frac{1}{{u}^{2}}\right)}^{\mathrm{\Delta }}=-\frac{1}{{u}^{2}\sigma \left(u\right)}-\frac{1}{u{\left(\sigma \left(u\right)\right)}^{2}}.$
(39)

By using (31) and (39), we obtain

$\begin{array}{rcl}C\left(t,s\right)& =& {\int }_{t}^{\mathrm{\infty }}\left(-\frac{1}{{u}^{2}\sigma \left(u\right)}-\frac{1}{u{\left(\sigma \left(u\right)\right)}^{2}}\right)\mathrm{\Delta }u\\ =& {\int }_{t}^{\mathrm{\infty }}{\left(\frac{1}{{u}^{2}}\right)}^{\mathrm{\Delta }}\mathrm{\Delta }u\\ =& -\frac{1}{{t}^{2}}.\end{array}$

This implies that $E\left(t\right)=0$ and

${\int }_{0}^{t}\parallel C\left(t,s\right)\parallel \mathrm{\Delta }s={\int }_{0}^{t}\frac{1}{{t}^{2}}\mathrm{\Delta }s=\frac{1}{t}.$
(40)

Finally, the right-hand side of (40) satisfies $\frac{1}{t}\le \frac{1}{{t}_{0}}$ for $t\ge {t}_{0}>0$, so the integral term in the hypothesis of Theorem 4.3 can be made as small as desired.

Since ${\beta }_{k}={C}_{k}$, we can choose ${C}_{k}$ such that $\frac{1}{d}{\sum }_{{t}_{0}<{t}_{k}<\mathrm{\infty }}\parallel {\beta }_{k}\parallel \le \beta$ for β sufficiently small. It follows that all the assumptions of Theorem 4.3 are satisfied; hence all the solutions of (38) are bounded. Moreover, Theorem 4.4 yields that the zero solution of (38) is uniformly stable on an arbitrary time scale.
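As a numerical illustration (not part of the proof), one can iterate (38) on $\mathbb{T}=\mathbb{Z}$. Since $K\left(t,s\right)$ is singular at $t=0$, the iteration starts at ${t}_{0}=1$, and a nonzero initial value is used so the trajectory is not trivially zero; the impulse instants and sizes ${C}_{k}$ below are illustrative assumptions, with each impulse modeled by replacing $x\left({t}_{k}\right)$ with $\left(1+{C}_{k}\right)x\left({t}_{k}\right)$ before the next step.

```python
# Iterate system (38) on T = Z starting at t0 = 1 (K(t, s) is singular at t = 0).
# Assumed data: x(1) = 1, impulses C_k = 0.05 at t = 10 and t = 20.

def K(t):  # kernel of (38); it does not depend on s, and sigma(t) = t + 1 on Z
    return 1.0 / (t**2 * (t + 1)) + 1.0 / (t * (t + 1) ** 2)

impulses = {10: 0.05, 20: 0.05}

x = {1: 1.0}
for t in range(1, 400):
    if t in impulses:                                  # modeling choice: jump, then step
        x[t] = (1 + impulses[t]) * x[t]
    memory = sum(K(t) * x[s] for s in range(1, t))     # Delta-integral on Z is a sum
    x[t + 1] = x[t] - x[t] / (t + 1) ** 2 + memory     # A(t) = -1/(sigma(t))^2

peak = max(abs(v) for v in x.values())
print(f"max |x(t)| over t <= 400: {peak:.4f}")
assert peak < 2.0   # the trajectory stays bounded, as Theorem 4.3 predicts
```

The memory term contributes roughly $K\left(t\right)\cdot O\left(t\right)=O\left({t}^{-2}\right)$ per step, which is summable, so the iterates settle near a finite limit.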

References

1. Bainov DD, Kostadinov SI: Abstract Impulsive Differential Equations. World Scientific, New Jersey; 1995.

2. Bainov DD, Simeonov PS: Impulsive Differential Equations: Periodic Solutions and Applications. Longman, New York; 1993.

3. Lakshmikantham V, Bainov DD, Simeonov PS: Theory of Impulsive Differential Equations. World Scientific, Singapore; 1989.

4. Samoilenko AM, Perestyuk NA: Impulsive Differential Equations. World Scientific, Singapore; 1995.

5. Akca H, Berezansky L, Braverman E: On linear integro-differential equations with integral impulsive conditions. Z. Anal. Anwend. 1996, 15: 709-727. 10.4171/ZAA/724

6. Akhmetov MU, Zafer A, Sejilova RD: The control of boundary value problems for quasilinear impulsive integro-differential equations. Nonlinear Anal. 2002, 48: 271-286. 10.1016/S0362-546X(00)00186-3

7. Grossman SI, Miller RK: Perturbation theory for Volterra integrodifferential system. J. Differ. Equ. 1970, 8: 457-474. 10.1016/0022-0396(70)90018-5

8. Rao MRM, Sathananthan S, Sivasundaram S: Asymptotic behavior of solutions of impulsive integrodifferential systems. Appl. Math. Comput. 1989, 34(3): 195-211. 10.1016/0096-3003(89)90104-5

9. Adivar M: Principal matrix solutions and variation of parameters for Volterra integro-dynamic equations on time scales. Glasg. Math. J. 2011, 53(3): 463-480. 10.1017/S0017089511000073

10. Adivar M: Function bounds for solutions of Volterra integro dynamic equations on time scales. Electron. J. Qual. Theory Differ. Equ. 2010, 7: 1-22.

11. Grace SR, Graef JR, Zafer A: Oscillation of integro-dynamic equations on time scales. Appl. Math. Lett. 2013, 26(4): 383-386. 10.1016/j.aml.2012.10.001

12. Kulik T, Tisdell CC: Volterra integral equations on time scales: basic qualitative and quantitative results with applications to initial value problems on unbounded domains. Int. J. Differ. Equ. 2008, 3(1): 103-133.

13. Karpuz B: Basics of Volterra integral equations on time scales. arXiv:1102.5588v1 [math.CA]; 2011.

14. Xing Y, Han M, Zheng G: Initial value problem for first order integro-differential equation of Volterra type on time scales. Nonlinear Anal. 2005, 60(3): 429-442.

15. Li Y, Zhang H: Extremal solutions of periodic boundary value problems for first-order impulsive integrodifferential equations of mixed-type on time scales. Bound. Value Probl. 2007, 2007: Article ID 073176.

16. Lupulescu V, Zada A: Linear impulsive dynamic systems on time scales. Electron. J. Qual. Theory Differ. Equ. 2010, 11: 1-30.

17. Lupulescu V, Ntouyas SK, Younus A: Qualitative aspects of a Volterra integro-dynamic system on time scales. Electron. J. Qual. Theory Differ. Equ. 2013, 5: 1-35.

18. Bohner M, Peterson A: Dynamic Equations on Time Scales. An Introduction with Applications. Birkhäuser, Boston; 2001.

19. Bohner M, Peterson A: Advances in Dynamic Equations on Time Scales. Birkhäuser, Boston; 2003.

20. DaCunha JJ: Transition matrix and generalized matrix exponential via the Peano-Baker series. J. Differ. Equ. Appl. 2005, 11(15): 1245-1264. 10.1080/10236190500272798

Acknowledgements

The authors would like to thank the anonymous referees of this paper for very helpful comments and suggestions.

Author information

Corresponding author

Correspondence to Awais Younus.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Agarwal, R.P., Awan, A.S., O’Regan, D. et al. Linear impulsive Volterra integro-dynamic system on time scales. Adv Differ Equ 2014, 6 (2014). https://doi.org/10.1186/1687-1847-2014-6