
Theory and Modern Applications

Linear impulsive Volterra integro-dynamic system on time scales

Abstract

This paper deals with the asymptotic stability and boundedness of the solution of a time-varying impulsive Volterra integro-dynamic system on time scales in which the coefficient matrix is not necessarily stable. We generalize to a time scale some known properties concerning the asymptotic behavior and boundedness from the continuous case.

1 Introduction

Impulsive differential systems represent a natural framework for the mathematical modelling of several processes in the applied sciences [1–4]. Basic qualitative and quantitative results on impulsive Volterra integro-differential equations have been established in the literature (see [5–8]). Volterra-type equations (integral and integro-dynamic) on time scales were studied in [9–15]. In [16] the authors presented a theory for linear impulsive dynamic systems on time scales, and recently in [17] various results concerning the asymptotic stability and boundedness of Volterra integro-dynamic equations on time scales were developed. Motivated by these papers, we generalize these results to impulsive integro-dynamic systems on time scales.

2 Preliminaries

In this paper we assume that the reader is familiar with the basic calculus of time scales. Let ℝⁿ be the space of n-dimensional column vectors x = col(x₁, x₂, …, xₙ) with a norm ‖·‖; the same symbol ‖·‖ also denotes the corresponding matrix norm on the space Mₙ(ℝ) of n×n matrices. If A ∈ Mₙ(ℝ), then Aᵀ denotes its conjugate transpose. We recall that ‖A‖ := sup{‖Ax‖ : ‖x‖ ≤ 1} and that the inequality ‖Ax‖ ≤ ‖A‖ ‖x‖ holds for all A ∈ Mₙ(ℝ) and x ∈ ℝⁿ. A time scale 𝕋 is a nonempty closed subset of ℝ. The set of all rd-continuous functions f : 𝕋 → ℝⁿ is denoted by C_rd(𝕋, ℝⁿ).

The notations [a,b], [a,b), and so on denote time scale intervals, for example [a,b] := {t ∈ 𝕋 : a ≤ t ≤ b}, where a, b ∈ 𝕋. Also, for any τ ∈ 𝕋, let 𝕋_τ := [τ, ∞) ∩ 𝕋 and 𝕋₀ := [0, ∞) ∩ 𝕋.

We denote by ℛ (respectively ℛ⁺) the set of all regressive (respectively positively regressive) functions from 𝕋 to ℝ. The space of all rd-continuous and regressive functions from 𝕋 to ℝ is denoted by C_rd ℛ(𝕋, ℝ). Also,

C_rd⁺ℛ(𝕋, ℝ) := {p ∈ C_rd ℛ(𝕋, ℝ) : 1 + μ(t)p(t) > 0 for all t ∈ 𝕋}.

We denote by C¹_rd(𝕋, ℝⁿ) the set of all functions f : 𝕋 → ℝⁿ that are delta-differentiable on 𝕋 with f^Δ ∈ C_rd(𝕋, ℝⁿ). The set of rd-continuous (respectively rd-continuous and regressive) matrix-valued functions A : 𝕋 → Mₙ(ℝ) is denoted by C_rd(𝕋, Mₙ(ℝ)) (respectively C_rd ℛ(𝕋, Mₙ(ℝ))). We recall that a matrix-valued function A is said to be regressive if I + μ(t)A(t) is invertible for all t ∈ 𝕋, where I is the n×n identity matrix. For a comprehensive review of time scales, we refer the reader to [18] and [19].
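Throughout the paper the time-scale exponential e_p(t, s) plays the role of the classical exponential. As an illustration only (the function name and the representation of an isolated time scale as a sorted list of points are assumptions made for this sketch), on an isolated time scale e_p reduces to the finite product e_p(t, s) = ∏_{τ ∈ [s, t)} (1 + μ(τ) p(τ)):

```python
def ts_exponential(points, p, s_idx, t_idx):
    """Time-scale exponential e_p(t, s) on an isolated time scale.

    points       : sorted list of time-scale points (all right-scattered)
    p            : regressive function tau -> p(tau), i.e. 1 + mu*p != 0
    s_idx, t_idx : indices of s and t in `points`, with s_idx <= t_idx
    """
    prod = 1.0
    for i in range(s_idx, t_idx):
        mu = points[i + 1] - points[i]      # graininess mu(tau)
        prod *= 1.0 + mu * p(points[i])     # one factor per step
    return prod

# On T = Z with constant p, e_p(t, 0) reduces to (1 + p)**t.
pts = list(range(11))
print(ts_exponential(pts, lambda t: 0.5, 0, 10))  # 1.5**10 ≈ 57.665
```

On 𝕋 = hℤ with constant p this gives (1 + hp)^{(t−s)/h}, which converges to the classical e^{p(t−s)} as h → 0.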

Lemma 2.1 ([[18], Theorem 2.38])

Let p, q ∈ C_rd ℛ(𝕋, ℝ). Then e_{p⊖q}^Δ(·, t₀) = (p − q) e_p(·, t₀) / e_q^σ(·, t₀).

Lemma 2.2 ([[18], Theorem 6.2])

Let α ∈ ℝ with α ∈ C_rd⁺ℛ(𝕋, ℝ). Then

e_α(t, s) ≥ 1 + α(t − s) for all t ≥ s.
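As a quick numerical sanity check (an illustration, not part of the paper), on 𝕋 = hℤ the exponential of a positive constant α is e_α(t, s) = (1 + hα)^{(t−s)/h}, and the inequality of Lemma 2.2 can be verified pointwise:

```python
def e_alpha_hz(alpha, h, t, s):
    """e_alpha(t, s) for constant alpha on T = h*Z (t, s grid points, t >= s)."""
    steps = round((t - s) / h)
    return (1.0 + h * alpha) ** steps

# Check e_alpha(t, s) >= 1 + alpha*(t - s) on a range of grid points.
h, alpha, s = 0.5, 2.0, 0.0
for n in range(30):
    t = s + n * h
    assert e_alpha_hz(alpha, h, t, s) >= 1.0 + alpha * (t - s)
print("Lemma 2.2 bound holds at all sampled grid points")
```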

Theorem 2.3 ([[13], Theorem 7])

Let a, b ∈ 𝕋 with b > a, and assume that f : 𝕋 × 𝕋 → ℝ is integrable on {(t, s) ∈ 𝕋 × 𝕋 : b > t > s ≥ a}. Then

∫_a^b ∫_a^η f(η, ξ) Δξ Δη = ∫_a^b ∫_{σ(ξ)}^b f(η, ξ) Δη Δξ.

It is easy to verify that the above result also holds for f ∈ C_rd(𝕋 × 𝕋, ℝⁿ).

Lemma 2.4 ([[16], Lemma 2.1])

Let t₀ ∈ 𝕋₀, y ∈ C_rd ℛ(𝕋₀, ℝ), p ∈ C_rd⁺ℛ(𝕋₀, ℝ) and c, b_k ∈ ℝ⁺, k = 1, 2, … . Then

y(t) ≤ c + ∫_{t₀}^t p(s) y(s) Δs + Σ_{t₀<t_k<t} b_k y(t_k), t ∈ 𝕋₀,

implies

y(t) ≤ c ∏_{t₀<t_k<t} (1 + b_k) e_p(t, t₀), t ≥ t₀.
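To see Lemma 2.4 at work, the sketch below (an illustration with assumed constants, not part of the paper) takes 𝕋 = ℤ, where the integral becomes a sum and e_p(t, t₀) = (1 + p)^{t−t₀} for constant p, builds the sequence that satisfies the hypothesis with equality, and checks it against the stated bound:

```python
# Numerical illustration of Lemma 2.4 on T = Z (assumed data):
# if y(t) <= c + sum_{s=t0}^{t-1} p*y(s) + sum_{t0 < t_k < t} b_k*y(t_k),
# then y(t) <= c * prod_{t0 < t_k < t} (1 + b_k) * (1 + p)**(t - t0).

c, p, t0, T = 1.0, 0.1, 0, 30
impulses = {5: 0.3, 12: 0.2}              # t_k -> b_k

# Build y saturating the hypothesis (worst case): equality at every step.
y = [c]
for t in range(t0 + 1, T + 1):
    val = c + p * sum(y)                  # c + sum_{s < t} p*y(s)
    val += sum(b * y[tk] for tk, b in impulses.items() if tk < t)
    y.append(val)

# Compare against the Gronwall-type bound of Lemma 2.4.
for t in range(t0, T + 1):
    prod = 1.0
    for tk, b in impulses.items():
        if t0 < tk < t:
            prod *= 1.0 + b
    bound = c * prod * (1.0 + p) ** (t - t0)
    assert y[t - t0] <= bound + 1e-12
print("impulsive Gronwall bound verified on T = Z")
```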

Consider the Volterra time-varying impulsive integro-dynamic system

x^Δ(t) = A(t) x(t) + ∫_{t₀}^t K(t, s) x(s) Δs + F(t), t ∈ 𝕋₀ ∖ {t_k},
x(t_k⁺) = (I + C_k) x(t_k), t = t_k, k = 1, 2, …,
x(t₀) = x₀,
(1)

where A (not necessarily stable) is an n×n matrix function and F is an n-vector function, both piecewise continuous on 𝕋₀; K is an n×n matrix function, piecewise continuous on Ω := {(t, s) ∈ 𝕋₀ × 𝕋₀ : t₀ ≤ s ≤ t < ∞}; C_k ∈ Mₙ(ℝ⁺); and 0 ≤ t₀ < t₁ < t₂ < ⋯ < t_k < ⋯, with lim_{k→∞} t_k = ∞, where the impulsive points t_k are right-dense. Note that x(t_k) represents the left limit of x(t) at t = t_k and x(t_k⁺) represents the right limit of x(t) at t = t_k.
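On 𝕋 = ℤ the delta derivative is the forward difference and the integral is a finite sum, so a system of the form (1) becomes an explicit recursion. The following scalar sketch is an illustration only; all concrete data are assumptions, and the way the reset is applied at a step is a modelling choice:

```python
# Scalar illustration of a system of type (1) on T = Z (assumed data):
# the delta derivative x^Delta(t) = x(t+1) - x(t) turns (1) into
#   x(t+1) = x(t) + A*x(t) + sum_{s=0}^{t-1} K(t, s)*x(s) + F(t),
# with a multiplicative reset applied at the impulse points.

A = -0.5                                  # stable coefficient (assumption)
K = lambda t, s: 0.1 * 0.5 ** (t - s)     # decaying kernel (assumption)
F = lambda t: 0.0                         # no forcing
resets = {5: 0.5}                         # t_k -> (1 + C_k), assumed

x = [1.0]                                 # x(t0) = x0 = 1, t0 = 0
for t in range(0, 20):
    delta = A * x[t] + sum(K(t, s) * x[s] for s in range(t)) + F(t)
    nxt = x[t] + delta
    if t + 1 in resets:                   # impulsive reset at t_k
        nxt *= resets[t + 1]
    x.append(nxt)
print(abs(x[-1]))                         # small: the solution decays
```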

The rest of the paper is organized as follows. In Section 3, we investigate the asymptotic behavior of solutions of system (1), which generalizes the continuous version (T=R) of [[8], Theorem 2.5]. In Section 4 we discuss the uniform boundedness of solutions of (1) by constructing a Lyapunov functional. Further results for boundedness, uniform boundedness and stability of solutions will also be developed.

3 Asymptotic stability

Our first result in this section presents a system equivalent to (1) that involves an arbitrary function.

Theorem 3.1 Let L(t, s) be an n×n matrix function, continuously differentiable with respect to s on t_{k−1} < s ≤ t_k < t, with L(t, t_k⁺) = (I + C_k)^{−1} L(t, t_k) for each k = 1, 2, … . Then (1) is equivalent to the system

y^Δ(t) = B(t) y(t) + ∫_{t₀}^t G(t, s) y(s) Δs + H(t), t ∈ 𝕋₀ ∖ {t_k},
y(t_k⁺) = (I + C_k) y(t_k), t = t_k, k = 1, 2, …,
y(t₀) = y₀,
(2)

where

B(t) = A(t) − L(t, t), H(t) = F(t) + L(t, t₀) x₀ + ∫_{t₀}^t L(t, σ(s)) F(s) Δs,
(3)

and

G(t, s) = K(t, s) + Δ_s L(t, s) + L(t, σ(s)) A(s) + ∫_{σ(s)}^t L(t, σ(τ)) K(τ, s) Δτ, s, t ≠ t_k.
(4)

Proof Let x(t) be any solution of (1) on 𝕋₀. If we set p(s) = L(t, s) x(s), then for t_{k−1} < s ≤ t_k < t we have

p^Δ(s) = Δ_s L(t, s) x(s) + L(t, σ(s)) x^Δ(s),

and by (1) it follows that

p^Δ(s) = Δ_s L(t, s) x(s) + L(t, σ(s)) A(s) x(s) + L(t, σ(s)) ∫_{t₀}^s K(s, τ) x(τ) Δτ + L(t, σ(s)) F(s).

Integration from t 0 to t yields

p(t) − p(t₀) − Σ_{t₀<t_k<t} Δp(t_k) = ∫_{t₀}^t Δ_s L(t, s) x(s) Δs + ∫_{t₀}^t L(t, σ(s)) A(s) x(s) Δs + ∫_{t₀}^t L(t, σ(s)) [∫_{t₀}^s K(s, τ) x(τ) Δτ] Δs + ∫_{t₀}^t L(t, σ(s)) F(s) Δs.

Using Theorem 2.3, we obtain

p(t) − p(t₀) − Σ_{t₀<t_k<t} Δp(t_k) = ∫_{t₀}^t Δ_s L(t, s) x(s) Δs + ∫_{t₀}^t L(t, σ(s)) A(s) x(s) Δs + ∫_{t₀}^t [∫_{σ(τ)}^t L(t, σ(s)) K(s, τ) Δs] x(τ) Δτ + ∫_{t₀}^t L(t, σ(s)) F(s) Δs.

By a change of variables in the double integral term, we have

p(t) − p(t₀) − Σ_{t₀<t_k<t} Δp(t_k) = ∫_{t₀}^t [Δ_s L(t, s) + L(t, σ(s)) A(s) + ∫_{σ(s)}^t L(t, σ(u)) K(u, s) Δu] x(s) Δs + ∫_{t₀}^t L(t, σ(s)) F(s) Δs.

Using (3) and (4), we obtain

(A(t) − B(t)) x(t) = ∫_{t₀}^t (G(t, s) − K(t, s)) x(s) Δs + H(t) − F(t) + Σ_{t₀<t_k<t} Δp(t_k).

From (1), we have

x^Δ(t) = B(t) x(t) + ∫_{t₀}^t G(t, s) x(s) Δs + H(t) + Σ_{t₀<t_k<t} Δp(t_k).

For t₀ < s ≤ t_k < t, we obtain

Δp(t_k) = L(t, t_k⁺) x(t_k⁺) − L(t, t_k) x(t_k) = [L(t, t_k⁺)(I + C_k) − L(t, t_k)] x(t_k) = 0.

Hence, x(t) is a solution of (2).

Conversely, let y(t) be any solution of (2) on T 0 . We shall show that it satisfies (1). Consider

Z(t) = y^Δ(t) − F(t) − A(t) y(t) − ∫_{t₀}^t K(t, s) y(s) Δs.

Then by (2) and (3) we have

Z(t) = −L(t, t) y(t) + L(t, t₀) x₀ + ∫_{t₀}^t G(t, s) y(s) Δs + ∫_{t₀}^t L(t, σ(s)) F(s) Δs − ∫_{t₀}^t K(t, s) y(s) Δs.

Using (4), we obtain

Z(t) = −L(t, t) y(t) + L(t, t₀) x₀ + ∫_{t₀}^t L(t, σ(s)) F(s) Δs − ∫_{t₀}^t K(t, s) y(s) Δs + ∫_{t₀}^t [K(t, s) + Δ_s L(t, s) + L(t, σ(s)) A(s) + ∫_{σ(s)}^t L(t, σ(τ)) K(τ, s) Δτ] y(s) Δs.

Again by Theorem 2.3, we have

Z(t) = −L(t, t) y(t) + ∫_{t₀}^t [Δ_s L(t, s) + L(t, σ(s)) A(s)] y(s) Δs + ∫_{t₀}^t L(t, σ(s)) [∫_{t₀}^s K(s, τ) y(τ) Δτ] Δs + L(t, t₀) x₀ + ∫_{t₀}^t L(t, σ(s)) F(s) Δs.
(5)

Now, setting q(s) = L(t, s) y(s), for t_{k−1} < s < t_k < t we get

q^Δ(s) = Δ_s L(t, s) y(s) + L(t, σ(s)) y^Δ(s).
(6)

Integrating (6) from t 0 to t yields

q(t) − q(t₀) − Σ_{t₀<t_k<t} Δq(t_k) = ∫_{t₀}^t [Δ_s L(t, s) y(s) + L(t, σ(s)) y^Δ(s)] Δs,

and therefore, we have

L(t, t) y(t) − L(t, t₀) x₀ − Σ_{t₀<t_k<t} Δq(t_k) = ∫_{t₀}^t [Δ_s L(t, s) y(s) + L(t, σ(s)) y^Δ(s)] Δs.
(7)

Since Δq( t k )=0, substituting (7) in (5), we obtain

Z(t) = −∫_{t₀}^t L(t, σ(s)) y^Δ(s) Δs + ∫_{t₀}^t L(t, σ(s)) A(s) y(s) Δs + ∫_{t₀}^t L(t, σ(s)) [∫_{t₀}^s K(s, τ) y(τ) Δτ] Δs + ∫_{t₀}^t L(t, σ(s)) F(s) Δs = −∫_{t₀}^t L(t, σ(s)) Z(s) Δs,

which implies Z(t) ≡ 0, by the uniqueness of solutions of Volterra integral equations [12] and the fact that ΔZ(t_k) = 0. Hence y(t) is a solution of (1). □

For our next result we assume that the matrix B commutes with its integral; then B also commutes with its matrix exponential, that is, B(t) e_B(t, s) = e_B(t, s) B(t) [20].

Theorem 3.2 Let B ∈ C(𝕋, Mₙ(ℝ)) and M, α > 0. Assume that the matrix B commutes with its integral. If

‖e_B(t, s)‖ ≤ M e_α(s, t), (t, s) ∈ Ω,
(8)

then every solution x(t) of (1) satisfies

‖x(t)‖ ≤ M ‖x₀‖ e_α(t₀, t) + M ∫_{t₀}^t e_α(σ(s), t) ‖H(s)‖ Δs + M ∫_{t₀}^t [∫_{σ(s)}^t e_α(σ(τ), t) ‖G(τ, s)‖ Δτ] ‖x(s)‖ Δs + M e_α(t₀, t) Σ_{t₀<t_k<t} ‖β_k‖ ‖x(t_k)‖,
(9)

where β_k = e_B(t₀, t_k⁺)(I + C_k) − e_B(t₀, t_k).

Proof Let x(t) be the solution of (2) and define q(t) = e_B(t₀, t) x(t). Then

q^Δ(t) = −B(t) e_B(t₀, σ(t)) x(t) + e_B(t₀, σ(t)) x^Δ(t).

Substituting for x Δ (t) from (2) and integrating from t 0 to t, we obtain

q(t) − q(t₀) − Σ_{t₀<t_k<t} Δq(t_k) = ∫_{t₀}^t e_B(t₀, σ(s)) H(s) Δs + ∫_{t₀}^t e_B(t₀, σ(s)) [∫_{t₀}^s G(s, τ) x(τ) Δτ] Δs.

Using Theorem 2.3 and applying the semigroup property of exponential functions [[18], Theorem 2.36], we obtain

x(t) = e_B(t, t₀) x₀ + ∫_{t₀}^t e_B(t, σ(s)) H(s) Δs + e_B(t, t₀) Σ_{t₀<t_k<t} Δq(t_k) + ∫_{t₀}^t [∫_{σ(s)}^t e_B(t, σ(τ)) G(τ, s) Δτ] x(s) Δs.
(10)

For t₀ < t_k < t, we have

Δq(t_k) = e_B(t₀, t_k⁺) x(t_k⁺) − e_B(t₀, t_k) x(t_k) = [e_B(t₀, t_k⁺)(I + C_k) − e_B(t₀, t_k)] x(t_k) = β_k x(t_k).

Hence, using (8) and applying the norm on (10), we obtain (9), which completes the proof. □

In the next theorem we present sufficient conditions for asymptotic stability.

Theorem 3.3 Let L(t, s) be an n×n matrix function, continuously differentiable with respect to s on Ω, such that

  1. (a)

    the assumptions of Theorem 3.2 hold,

  2. (b)

    ‖L(t, s)‖ ≤ L₀ e_γ(s, t) / [(1 + μ(t)α)(1 + μ(t)γ)],

  3. (c)

    sup_{t₀≤s≤t<∞} ∫_{σ(s)}^t e_α(σ(τ), t) ‖G(τ, s)‖ Δτ ≤ α₀,

  4. (d)

    F(t) ≡ 0, and

  5. (e)

    ∏_{t₀<t_k<t} [1 + M²(d_k + 1)] ≤ e_λ(t_k, 0), where d_k = ‖I + C_k‖ and d_k → 0 as k → ∞,

where L₀, α₀, λ are positive real constants and γ > α.

If α ⊖ Mα₀ ⊖ λ > 0, then every solution x(t) of (1) tends to zero exponentially as t → +∞.

Proof In view of Theorem 3.1 and the fact that L(t, s) satisfies (a), it is enough to show that every solution of (2) tends to zero as t → +∞. From (a) and (9), we obtain

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ e_α(t₀, 0) + M ∫_{t₀}^t e_α(σ(s), 0) ‖H(s)‖ Δs + M ∫_{t₀}^t [∫_{σ(s)}^t e_α(σ(τ), 0) ‖G(τ, s)‖ Δτ] ‖x(s)‖ Δs + M e_α(t₀, 0) Σ_{t₀<t_k<t} ‖β_k‖ ‖x(t_k)‖.
(11)

Since

∫_{t₀}^t e_α(σ(s), 0) ‖H(s)‖ Δs ≤ L₀ ‖x₀‖ e_γ(t₀, 0) ∫_{t₀}^t [e_α(σ(s), 0) e_γ(0, s) / ((1 + μ(s)α)(1 + μ(s)γ))] Δs,

then by Lemma 2.1 and the fact that γ > α, we obtain

∫_{t₀}^t e_α(σ(s), 0) ‖H(s)‖ Δs ≤ L₀ ‖x₀‖ e_α(t₀, 0) / (γ − α).

Using (11), (b), (c) and (d), we have

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ e_α(t₀, 0) + M L₀ ‖x₀‖ e_α(t₀, 0) / (γ − α) + M ∫_{t₀}^t α₀ e_α(s, 0) ‖x(s)‖ Δs + M e_α(t₀, 0) Σ_{t₀<t_k<t} ‖β_k‖ ‖x(t_k)‖.

From Theorem 3.2, we have

‖β_k‖ ≤ ‖e_B(t₀, t_k⁺)‖ ‖I + C_k‖ + ‖e_B(t₀, t_k)‖ ≤ M e_α(t_k, t₀)(1 + d_k),
(12)

which implies

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_α(t₀, 0) + M ∫_{t₀}^t α₀ e_α(s, 0) ‖x(s)‖ Δs + Σ_{t₀<t_k<t} M²(1 + d_k) e_α(t_k, 0) ‖x(t_k)‖.
(13)

Lemma 2.4 yields that

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_α(t₀, 0) ∏_{t₀<t_k<t} [1 + M²(1 + d_k)] e_{Mα₀}(t, t₀).

Using [[18], Theorem 2.36], (e) and the fact that t₀ < t_k < t, we obtain

‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_{α⊖Mα₀}(t₀, 0) e_{α⊖Mα₀⊖λ}(0, t),

where α ⊖ Mα₀ ⊖ λ = [α − Mα₀ − λ(1 + μ(t)Mα₀)] / [(1 + μ(t)Mα₀)(1 + μ(t)λ)] [[18], Exercise 2.28]. By Lemma 2.2, we have e_{α⊖Mα₀⊖λ}(0, t) ≤ 1/(1 + (α ⊖ Mα₀ ⊖ λ)t), so we obtain

‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_{α⊖Mα₀}(t₀, 0) / (1 + (α ⊖ Mα₀ ⊖ λ)t).

Hence, in view of (e) and the fact that α ⊖ Mα₀ ⊖ λ > 0, we obtain the required result. □

Example 3.4 Let us consider the Volterra integro-dynamic equation

x^Δ(t) = −2 x(t) + ∫_0^t e_{⊖2}(t, s) x(s) Δs,
x(t_k⁺) = (1/4^k) x(t_k),
x(0) = 1,
(14)

where A(t) = −2, K(t, s) = e_{⊖2}(t, s), 1 + C_k = 1/4^k, and the impulsive points are t_k = 2k. Now take L(t, s) ≡ 0, so that B(t) = −2. The matrix function G(t, s) given in (4) becomes

G(t, s) = e_{⊖2}(t, s).
(15)

In the following, we check the assumptions of Theorem 3.3 when T = R .

Let T = R . Then we have

|e_B(t, s)| = |e_{−2}(t, s)| = e^{−2(t−s)} ≤ M e^{−2(t−s)}, M = 2,

and

0 = |L(t, s)| < L₀ e^{−3(t−s)}, L₀ = 1.

Here the constants are α=2 and γ=3. From (15) it follows that

G(t, s) = e^{−2(t−s)}.
(16)

Then from (16) we obtain that G(t,s) is a positive function, and

∫_s^t e^{2(τ−t)} |G(τ, s)| dτ = ∫_s^t e^{2(τ−t)} e^{−2(τ−s)} dτ = e^{2(s−t)} (t − s) ≤ (t − s)/(1 + 2(t − s)) < 1/2,

from which it follows that

sup_{0≤s≤t<∞} ∫_s^t e^{2(τ−t)} |G(τ, s)| dτ ≤ 1/2

and

∏_{t₀<t_k<t} [1 + M²(d_k + 1)] = ∏_{t₀<t_k<t} (5 + 4^{1−k}) ≤ e^{19k/10}.

Since α₀ = 1/2 and λ = 19/20, we have α − Mα₀ − λ = 2 − 1 − 19/20 > 0 (on ℝ the operation ⊖ reduces to ordinary subtraction). Therefore, since all the assumptions of Theorem 3.3 hold for system (14), the solution of (14) tends to zero exponentially as t → +∞.

If 𝕋 = ℕ, then all points are right-scattered and there is no impulse condition. So, from [[17], Example 3.7] it follows that the solution of (14) tends to zero exponentially as t → +∞.
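For 𝕋 = ℝ, the decay in Example 3.4 can also be observed numerically. The sketch below is an illustration only (the step size, the horizon, and the reduction of the convolution term to the auxiliary ODE z′ = x − 2z are choices made for this example); it integrates the equation by the forward Euler method and applies the jumps x(t_k⁺) = x(t_k)/4^k at t_k = 2k:

```python
# Forward-Euler simulation (illustrative only) of Example 3.4 on T = R:
#   x'(t) = -2 x(t) + int_0^t exp(-2(t - s)) x(s) ds,   x(0) = 1,
#   x(t_k+) = x(t_k) / 4**k at t_k = 2k.
# The memory term z(t) = int_0^t exp(-2(t-s)) x(s) ds satisfies
# z' = x - 2 z, so no history sum is needed.

h, T = 1e-3, 6.0
x, z, t = 1.0, 0.0, 0.0
k = 1
while t < T - 1e-12:
    x, z = x + h * (-2.0 * x + z), z + h * (x - 2.0 * z)
    t += h
    if k <= 2 and abs(t - 2.0 * k) < h / 2:   # impulse points t_1 = 2, t_2 = 4
        x /= 4.0 ** k                          # jump factor 1/4**k
        k += 1
print(x)   # decays far below the initial value x(0) = 1
```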

Theorem 3.5 Let L ∈ C(Ω, Mₙ(ℝ)) be such that Δ_s L(t, s) ∈ C(Ω, Mₙ(ℝ)) for (t, s) ∈ Ω and

  1. (i)

    assumptions (a), (b), (d) and (e) of Theorem 3.3 hold,

  2. (ii)

    ‖Δ_s L(t, s)‖ ≤ N₀ e_δ(s, t) and ‖K(t, s)‖ ≤ K₀ e_θ(s, t),

  3. (iii)

    ‖A(t)‖ ≤ A₀ for t₀ ≤ t < ∞,

  4. (iv)

    sup_{t₀≤s≤t<∞} ∫_{σ(s)}^t [(K₀ + N₀)(1 + μ(τ)α) + A₀L₀ + (τ − σ(s)) L₀K₀ / (1 + μ(τ)α)] Δτ ≤ α₀ for some α₀ > 0,

where A₀, N₀, K₀, δ and θ are positive real numbers such that γ > δ > α and θ > α.

If α ⊖ Mα₀ ⊖ λ > 0, then every solution x(t) of (1) tends to zero exponentially as t → +∞.

Proof From (4), we obtain

‖G(t, s)‖ ≤ ‖K(t, s)‖ + ‖Δ_s L(t, s)‖ + ‖L(t, σ(s))‖ ‖A(s)‖ + ∫_{σ(s)}^t ‖L(t, σ(u))‖ ‖K(u, s)‖ Δu,

which implies

‖G(t, s)‖ ≤ K₀ e_θ(s, t) + N₀ e_δ(s, t) + [L₀ e_γ(s, t) / ((1 + μ(t)α)(1 + μ(t)γ))] A₀ + ∫_{σ(s)}^t [L₀ K₀ e_γ(u, t) e_θ(s, u) / ((1 + μ(t)α)(1 + μ(t)γ))] Δu.
(17)

Since γ > δ > α and θ > α, from (i), (ii) and (iii), (17) becomes

‖G(t, s)‖ ≤ K₀ e_α(s, t) + N₀ e_α(s, t) + [L₀ e_α(s, t) / ((1 + μ(t)α)(1 + μ(t)γ))] A₀ + (t − σ(s)) L₀ K₀ e_α(s, t) / ((1 + μ(t)α)(1 + μ(t)γ))
(18)

and

e_α(σ(t), 0) ‖G(t, s)‖ ≤ [(K₀ + N₀)(1 + μ(t)α) + A₀L₀ + (t − σ(s)) L₀K₀ / (1 + μ(t)α)] e_α(s, 0).

Integrating the above inequality and using (iv), we obtain

∫_{σ(s)}^t e_α(σ(τ), 0) ‖G(τ, s)‖ Δτ ≤ α₀ e_α(s, 0).
(19)

Substituting (19) in (11), we obtain

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_α(t₀, 0) + M ∫_{t₀}^t α₀ e_α(s, 0) ‖x(s)‖ Δs + Σ_{t₀<t_k<t} M²(1 + d_k) e_α(t_k, 0) ‖x(t_k)‖.

Lemma 2.4 yields that

e_α(t, 0) ‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_α(t₀, 0) × ∏_{t₀<t_k<t} [1 + M²(1 + d_k)] e_{Mα₀}(t, t₀).

Using [[18], Theorem 2.36], (e) and the fact that t 0 < t k <t, we obtain

‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_{α⊖Mα₀}(t₀, 0) e_{α⊖Mα₀⊖λ}(0, t).

Then by Lemma 2.2, we have

‖x(t)‖ ≤ M ‖x₀‖ (1 + L₀/(γ − α)) e_{α⊖Mα₀}(t₀, 0) / (1 + (α ⊖ Mα₀ ⊖ λ)t).

Hence, in view of (i) and the fact that α ⊖ Mα₀ ⊖ λ > 0, we obtain the required result. □

Corollary 3.6 Let L(t, s) be an n×n matrix function, continuously differentiable with respect to s on t_{k−1} < s ≤ t_k < t, with L(t, t_k⁺) = (I + C_k)^{−1} L(t, t_k) for each k = 1, 2, … . Then (1) is equivalent to the impulsive dynamic system

y^Δ(t) = B(t) y(t) + H(t), t ∈ 𝕋₀ ∖ {t_k},
y(t_k⁺) = (I + C_k) y(t_k), t = t_k, k = 1, 2, …,
y(t₀) = y₀,
(20)

where

B(t) = A(t) − L(t, t), H(t) = F(t) + L(t, t₀) x₀ + ∫_{t₀}^t L(t, σ(s)) F(s) Δs,
(21)

and

K(t, s) + Δ_s L(t, s) + L(t, σ(s)) A(s) + ∫_{σ(s)}^t L(t, σ(u)) K(u, s) Δu = 0, s, t ≠ t_k.
(22)

Proof The proof follows an argument similar to that in Theorem 3.1 with G(t,s)=0. □

4 Boundedness

In the first result of this section, we give sufficient conditions to ensure that (1) has bounded solutions. Our results apply to (1) whether A(t) is stable, identically zero, or completely unstable, and require neither A(t) to be constant nor K(t, s) to be a convolution kernel. Let C(t) and D(t, s) be continuous n×n matrices, t₀ ≤ s ≤ t < ∞. Let s ∈ [t₀, ∞) and assume that C(t) is an n×n regressive matrix. The unique matrix solution of the initial value problem

Y^Δ = C(t) Y, Y(t_k⁺) = (I + C_k) Y(t_k), Y(s) = I,
(23)

is called the impulsive transition matrix (at s) and it is denoted by S C (t,s) (see [[16], Corollary 3.1]). Also, if H(t,s) is an n×n regressive matrix satisfying

Δ_t H(t, s) = C(t) H(t, s) + D(t, s),
H(t_k⁺, s) = (I + C_k) H(t_k, s),
H(s, s) = A(s) − C(s),
(24)

then

H(t, s) = S_C(t, s)[A(s) − C(s)] + ∫_s^t S_C(t, σ(τ)) D(τ, s) Δτ.
(25)
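On an isolated time scale, the impulsive transition matrix S_C(t, s) of (23) is a finite product: each step from τ to σ(τ) contributes a factor I + μ(τ)C(τ), and each impulse point crossed contributes the extra factor I + C_k. The scalar sketch below is an illustration with assumed names and data; note that it places impulses at right-scattered points purely for computability, whereas the paper takes the t_k right-dense:

```python
# Scalar sketch (assumed data) of the impulsive transition matrix S_C(t, s)
# of (23) on an isolated time scale: a step from tau to sigma(tau)
# contributes the factor 1 + mu(tau)*C(tau), and crossing an impulse
# point t_k contributes the extra factor 1 + C_k.

def impulsive_transition(points, C, impulses, s_idx, t_idx):
    """S_C(t, s) for scalar C on the isolated time scale `points`.

    impulses : dict mapping an impulse point t_k to the scalar C_k
    """
    prod = 1.0
    for i in range(s_idx, t_idx):
        tau = points[i]
        mu = points[i + 1] - points[i]
        prod *= 1.0 + mu * C(tau)              # ordinary evolution factor
        if points[i + 1] in impulses:          # impulse crossed at t_k
            prod *= 1.0 + impulses[points[i + 1]]
    return prod

# On T = Z with C = 0.1 and one impulse C_5 = 0.5 at t_5 = 5:
pts = list(range(11))
val = impulsive_transition(pts, lambda t: 0.1, {5: 0.5}, 0, 10)
# val equals 1.1**10 * 1.5
```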

Theorem 4.1 Let S_C(t, s) be the solution of (23), and suppose that there are positive constants N, J and M such that

  1. (i)

    ‖S_C(t, t₀)‖ ≤ N,

  2. (ii)

    ∫_{t₀}^t ‖S_C(t, s)[A(s) − C(s)] + ∫_s^t S_C(t, σ(τ)) K(τ, s) Δτ‖ Δs ≤ J < 1,

  3. (iii)

    ∫_{t₀}^t ‖S_C(t, σ(u))[F(u) − G(u)x(u)]‖ Δu ≤ M.

Then all the solutions of (1) are uniformly bounded, and the zero solution of the corresponding homogeneous equation of (1) with the initial condition x(t₀) = 0 is uniformly stable.

Proof Consider the following functional

V(t, x(·)) = x(t) − ∫_{t₀}^t H(t, s) x(s) Δs.
(26)

The derivative of V(t,x()) along a solution x(t)=x(t, t 0 , x 0 ) of (1) satisfies

V^Δ(t, x(·)) = x^Δ(t) − Δ_t ∫_{t₀}^t H(t, s) x(s) Δs.

From [[18], Theorem 1.117], we obtain

V^Δ(t, x(·)) = x^Δ(t) − H(σ(t), t) x(t) − ∫_{t₀}^t Δ_t H(t, s) x(s) Δs = A(t) x(t) − H(σ(t), t) x(t) + ∫_{t₀}^t K(t, s) x(s) Δs − ∫_{t₀}^t Δ_t H(t, s) x(s) Δs + F(t)

or

V^Δ(t, x(·)) = [A(t) − H(σ(t), t)] x(t) + F(t) + ∫_{t₀}^t [K(t, s) − Δ_t H(t, s)] x(s) Δs.
(27)

By using (25), [[18], Theorems 1.75] and [[16], Theorem 3.4], we have the following expression:

H(σ(t), t) = S_C(σ(t), t)[A(t) − C(t)] + ∫_t^{σ(t)} S_C(σ(t), σ(τ)) D(τ, t) Δτ = (I + μ(t)C(t)) S_C(t, t)[A(t) − C(t)] + μ(t) S_C(σ(t), σ(t)) D(t, t) = (I + μ(t)C(t))[A(t) − C(t)] + μ(t) D(t, t) = [A(t) − C(t)] + μ(t)[C(t)A(t) − C²(t) + D(t, t)],

which implies that

H(σ(t), t) = [A(t) − C(t)] + G(t),
(28)

where G(t) = μ(t)[C(t)A(t) − C²(t) + D(t, t)]. Substituting (28) in (27) yields

V^Δ(t, x(·)) = C(t) x(t) − G(t) x(t) + ∫_{t₀}^t [K(t, s) − Δ_t H(t, s)] x(s) Δs + F(t).

From (24) and (26), we have

V^Δ(t, x(·)) = C(t) V(t, x(·)) + ∫_{t₀}^t [K(t, s) − D(t, s)] x(s) Δs + F(t) − G(t) x(t),

and it is easy to see that

V(t_k⁺, x(·)) = (I + C_k) V(t_k, x(·)).

Thus

V(t, x(·)) = S_C(t, t₀) x₀ + ∫_{t₀}^t S_C(t, σ(u)) g(u, x(·)) Δu,
(29)

where

g(t, x(·)) = ∫_{t₀}^t [K(t, s) − D(t, s)] x(s) Δs + F(t) − G(t) x(t).

Let D(t, s) = K(t, s). Then, by (25), condition (ii) is precisely ∫_{t₀}^t ‖H(t, s)‖ Δs ≤ J < 1. By (29) and (i)-(iii),

‖V(t, x(·))‖ = ‖S_C(t, t₀) x₀ + ∫_{t₀}^t S_C(t, σ(u))[F(u) − G(u)x(u)] Δu‖ ≤ ‖S_C(t, t₀)‖ ‖x₀‖ + ∫_{t₀}^t ‖S_C(t, σ(u))[F(u) − G(u)x(u)]‖ Δu ≤ N ‖x₀‖ + M.

If ‖x₀‖ < B₁ for some constant B₁, and if Q = NB₁ + M, then by (26) we obtain

‖x(t)‖ − ∫_{t₀}^t ‖H(t, s)‖ ‖x(s)‖ Δs ≤ ‖V(t, x(·))‖ ≤ Q.
(30)

Now, either there exists B₂ > 0 such that ‖x(t)‖ < B₂ for all t ≥ t₀, in which case x(t) is uniformly bounded, or there exists a monotone sequence {t_n} tending to infinity such that ‖x(t_n)‖ = max_{t₀≤t≤t_n} ‖x(t)‖ and ‖x(t_n)‖ → ∞ as t_n → ∞; by (ii) and (30) we then have

‖x(t_n)‖ (1 − J) ≤ ‖x(t_n)‖ − ∫_{t₀}^{t_n} ‖H(t_n, s)‖ ‖x(s)‖ Δs ≤ Q,

a contradiction. This completes the proof. □

In the second part of this section, we consider system (1) with F(t) bounded and suppose that

C(t, s) = −∫_t^∞ K(u, s) Δu
(31)

is defined and continuous on Ω. The matrix E(t) on [ t 0 ,) is defined by

E(t) = A(t) − C(σ(t), t).
(32)

Then (1) is equivalent to the system

x^Δ(t) = E(t) x(t) + Δ_t ∫_{t₀}^t C(t, s) x(s) Δs + F(t), t ∈ 𝕋₀ ∖ {t_k},
x(t_k⁺) = (I + C_k) x(t_k), t = t_k, k = 1, 2, …,
x(t₀) = x₀.
(33)

Theorem 4.2 Let E ∈ C(𝕋, Mₙ(ℝ)) and M, α > 0. Assume that E(t) commutes with its integral. If

‖e_E(t, s)‖ ≤ M e_α(s, t), (t, s) ∈ Ω,
(34)

then every solution x(t) of (1) with x( t 0 )= x 0 satisfies

‖x(t)‖ ≤ M ‖x₀‖ e_α(t₀, t) + M ∫_{t₀}^t e_α(σ(s), t) ‖F(s)‖ Δs + M ∫_{t₀}^t ‖E(u)‖ e_α(σ(u), t) [∫_{t₀}^u ‖C(u, s)‖ ‖x(s)‖ Δs] Δu + ∫_{t₀}^t ‖C(t, s)‖ ‖x(s)‖ Δs + M e_α(t₀, t) Σ_{t₀<t_k<t} ‖β_k‖ ‖x(t_k)‖,
(35)

where β_k = e_E(t₀, t_k⁺)(I + C_k) − e_E(t₀, t_k).

Proof Let x(t) be the solution of (1) and define q(t)= e E ( t 0 ,t)x(t). Then

q^Δ(t) = −E(t) e_E(t₀, σ(t)) x(t) + e_E(t₀, σ(t)) x^Δ(t).

Substituting for x Δ (t) from (33) and integrating from t 0 to t, yields

q(t) − q(t₀) − Σ_{t₀<t_k<t} Δq(t_k) = ∫_{t₀}^t e_E(t₀, σ(s)) F(s) Δs + ∫_{t₀}^t e_E(t₀, σ(u)) [Δ_u ∫_{t₀}^u C(u, s) x(s) Δs] Δu.

Applying the integration by parts on the second term of the right-hand side [[18], Theorem 1.77] and the semigroup property of exponential functions [[18], Theorem 2.36], we obtain

x(t) = e_E(t, t₀) x₀ + ∫_{t₀}^t e_E(t, σ(s)) F(s) Δs + ∫_{t₀}^t C(t, s) x(s) Δs + ∫_{t₀}^t E(u) e_E(t, σ(u)) [∫_{t₀}^u C(u, s) x(s) Δs] Δu + e_E(t, t₀) Σ_{t₀<t_k<t} Δq(t_k).
(36)

For t 0 <s< t k <t, we have

Δq(t_k) = e_E(t₀, t_k⁺) x(t_k⁺) − e_E(t₀, t_k) x(t_k) = [e_E(t₀, t_k⁺)(I + C_k) − e_E(t₀, t_k)] x(t_k) = β_k x(t_k).

Hence, using (34) and applying the norm on (36), we obtain (35), which completes the proof. □

Assume that the hypotheses of Theorem 4.2 hold for the next results.

Theorem 4.3 Let x(t) be a solution of (1). If ‖E(t)‖ ≤ d on [t₀, ∞) for some d > 0, F(t) is bounded, and sup_{t₀≤t<∞} [∫_{t₀}^t ‖C(t, s)‖ Δs + (1/d) Σ_{t₀<t_k<t} ‖β_k‖] ≤ β with β sufficiently small, then x(t) is bounded.

Proof For the given t 0 and bounded F(t), there is C 1 >0 with

M ‖x₀‖ e_α(t₀, t) + M sup_{t₀≤t<∞} ∫_{t₀}^t e_α(σ(s), t) ‖F(s)‖ Δs < C₁.
(37)

Substituting (37) in (35), we obtain

‖x(t)‖ ≤ C₁ + M d ∫_{t₀}^t e_α(σ(u), t) [∫_{t₀}^u ‖C(u, s)‖ ‖x(s)‖ Δs] Δu + ∫_{t₀}^t ‖C(t, s)‖ ‖x(s)‖ Δs + M e_α(t₀, t) Σ_{t₀<t_k<t} ‖β_k‖ ‖x(t_k)‖ ≤ C₁ + (Md/α) β sup_{t₀≤s<∞} ‖x(s)‖ + β sup_{t₀≤s<∞} ‖x(s)‖ = C₁ + β [1 + Md/α] sup_{t₀≤s<∞} ‖x(s)‖.

Let β be chosen so that β[1 + Md/α] = m < 1. Then

‖x(t)‖ ≤ C₁ + m sup_{t₀≤s<t} ‖x(s)‖.

Let C₂ > ‖x₀‖ be such that C₁ + mC₂ < C₂. If x(t) is not bounded, then there exists a first t₁ > t₀ with ‖x(t₁)‖ = C₂, and then

C₂ = ‖x(t₁)‖ ≤ C₁ + mC₂ < C₂,

a contradiction. This completes the proof. □

Theorem 4.4 If F(t) ≡ 0 in (1), ‖E(t)‖ ≤ d on [t₀, ∞) for some d > 0, and ∫_{t₀}^t ‖C(t, s)‖ Δs + (1/d) Σ_{t₀<t_k<t} ‖β_k‖ ≤ β for β sufficiently small, then the zero solution of (1) with the initial condition x(t₀) = 0 is uniformly stable.

Proof Let ε > 0 be given. We wish to find δ > 0 such that t₀ ≥ 0, ‖x₀‖ < δ and t ≥ t₀ imply ‖x(t, x₀)‖ < ε. Let δ < ε, with δ yet to be determined. If ‖x₀‖ < δ, then M‖x₀‖ ≤ Mδ. From (35) with F(t) ≡ 0,

‖x(t)‖ ≤ Mδ + (Md/α) β sup_{t₀≤s<t} ‖x(s)‖ + β sup_{t₀≤s<t} ‖x(s)‖ = Mδ + β [1 + Md/α] sup_{t₀≤s<t} ‖x(s)‖.

First take β so that β[1 + Md/α] ≤ 3/4, and then δ so that Mδ + (3/4)ε < ε. If ‖x₀‖ < δ and there exists t₁ > t₀ with ‖x(t₁)‖ = ε, we have

ε = ‖x(t₁)‖ < Mδ + (3/4)ε < ε,

a contradiction. Thus the zero solution is uniformly stable. The proof is complete. □

Example 4.5 Let us consider the following system:

x^Δ(t) = (1/(σ(t))²) x(t) − ∫_0^t [1/(t²σ(t)) + 1/(t(σ(t))²)] x(s) Δs,
x(t_k⁺) = (1 + C_k) x(t_k),
x(0) = 0,
(38)

where A(t) = 1/(σ(t))² and K(t, s) = −[1/(t²σ(t)) + 1/(t(σ(t))²)]. It is easy to check that

(1/u²)^Δ = −[1/(u²σ(u)) + 1/(u(σ(u))²)].
(39)

By using (31) and (39), we obtain

C(t, s) = ∫_t^∞ [1/(u²σ(u)) + 1/(u(σ(u))²)] Δu = −∫_t^∞ (1/u²)^Δ Δu = 1/t².

This implies that E(t)=0 and

∫_0^t ‖C(t, s)‖ Δs = ∫_0^t (1/t²) Δs = 1/t.
(40)

Finally, by taking the supremum over t ∈ [0, ∞) ∩ 𝕋 in (40), we see that ∫_0^t ‖C(t, s)‖ Δs = 1/t tends to 0 as t → ∞ and, in particular, stays below any prescribed bound β for all large t.

Since β_k = C_k, we can choose the C_k so that (1/d) Σ_{t₀<t_k<t} ‖β_k‖ ≤ β with β sufficiently small. It follows that all the assumptions of Theorem 4.3 are satisfied; hence all the solutions of (38) are bounded. Moreover, Theorem 4.4 yields that the zero solution of (38) is uniformly stable on an arbitrary time scale.

References

  1. Bainov DD, Kostadinov SI: Abstract Impulsive Differential Equations. World Scientific, New Jersey; 1995.

  2. Bainov DD, Simeonov PS: Impulsive Differential Equations: Periodic Solutions and Applications. Longman, New York; 1993.

  3. Lakshmikantham V, Bainov DD, Simeonov PS: Theory of Impulsive Differential Equations. World Scientific, Singapore; 1989.

  4. Samoilenko AM, Perestyuk NA: Impulsive Differential Equations. World Scientific, Singapore; 1995.

  5. Akca H, Berezansky L, Braverman E: On linear integro-differential equations with integral impulsive conditions. Z. Anal. Anwend. 1996, 15: 709-727. 10.4171/ZAA/724

  6. Akhmetov MU, Zafer A, Sejilova RD: The control of boundary value problems for quasilinear impulsive integro-differential equations. Nonlinear Anal. 2002, 48: 271-286. 10.1016/S0362-546X(00)00186-3

  7. Grossman SI, Miller RK: Perturbation theory for Volterra integrodifferential system. J. Differ. Equ. 1970, 8: 457-474. 10.1016/0022-0396(70)90018-5

  8. Rao MRM, Sathananthan S, Sivasundaram S: Asymptotic behavior of solutions of impulsive integrodifferential systems. Appl. Math. Comput. 1989, 34(3): 195-211. 10.1016/0096-3003(89)90104-5

  9. Adivar M: Principal matrix solutions and variation of parameters for Volterra integro-dynamic equations on time scales. Glasg. Math. J. 2011, 53(3): 463-480. 10.1017/S0017089511000073

  10. Adivar M: Function bounds for solutions of Volterra integro dynamic equations on time scales. Electron. J. Qual. Theory Differ. Equ. 2010, 7: 1-22.

  11. Grace SR, Graef JR, Zafer A: Oscillation of integro-dynamic equations on time scales. Appl. Math. Lett. 2013, 26(4): 383-386. 10.1016/j.aml.2012.10.001

  12. Kulik T, Tisdell CC: Volterra integral equations on time scales: basic qualitative and quantitative results with applications to initial value problems on unbounded domains. Int. J. Differ. Equ. 2008, 3(1): 103-133.

  13. Karpuz B: Basics of Volterra integral equations on time scales. arXiv:1102.5588v1 [math.CA]; 2011.

  14. Xing Y, Han M, Zheng G: Initial value problem for first order integro-differential equation of Volterra type on time scales. Nonlinear Anal. 2005, 60(3): 429-442.

  15. Li Y, Zhang H: Extremal solutions of periodic boundary value problems for first-order impulsive integrodifferential equations of mixed-type on time scales. Bound. Value Probl. 2007, 2007: Article ID 073176.

  16. Lupulescu V, Zada A: Linear impulsive dynamic systems on time scales. Electron. J. Qual. Theory Differ. Equ. 2010, 11: 1-30.

  17. Lupulescu V, Ntouyas SK, Younus A: Qualitative aspects of a Volterra integro-dynamic system on time scales. Electron. J. Qual. Theory Differ. Equ. 2013, 5: 1-35.

  18. Bohner M, Peterson A: Dynamic Equations on Time Scales: An Introduction with Applications. Birkhäuser, Boston; 2001.

  19. Bohner M, Peterson A: Advances in Dynamic Equations on Time Scales. Birkhäuser, Boston; 2003.

  20. DaCunha JJ: Transition matrix and generalized matrix exponential via the Peano-Baker series. J. Differ. Equ. Appl. 2005, 11(15): 1245-1264. 10.1080/10236190500272798


Acknowledgements

The authors would like to thank the anonymous referees of this paper for very helpful comments and suggestions.

Author information

Correspondence to Awais Younus.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Agarwal, R.P., Awan, A.S., O’Regan, D. et al. Linear impulsive Volterra integro-dynamic system on time scales. Adv Differ Equ 2014, 6 (2014). https://doi.org/10.1186/1687-1847-2014-6