
Stability analysis of discrete-time systems with variable delays via some new summation inequalities

Abstract

This paper proposes an improved stability condition for discrete-time systems with variable delays. Based on some mathematical techniques, a series of new summation inequalities is obtained. These new inequalities are less conservative than the Jensen inequality. Based on these new summation inequalities and the reciprocally convex combination inequality, a novel sufficient criterion on asymptotical stability of discrete-time systems with variable delays is obtained by constructing a new Lyapunov-Krasovskii functional. The advantage of the proposed inequalities is demonstrated by a classical example from the literature.

1 Introduction

Time delay is usually encountered in many practical situations such as signal processing, image processing, etc. There has been increasing research activity on time-delay systems during the past years [1–16]. The problem of delay-dependent stability analysis of time-delay systems has become a hot research topic in the control community [17, 18], since stability criteria can provide a maximum admissible upper bound of the time delay. This maximum admissible upper bound can be regarded as an important index of the conservatism of stability criteria [19–23]. To our knowledge, Jensen’s inequality has mostly been used as a powerful mathematical tool in the stability analysis of time-delay systems. However, Jensen’s inequality neglects some terms, which unavoidably introduces some conservatism. In order to investigate the stability of linear discrete systems with constant delay, Zhang and Han [24] established the following Abel lemma-based finite-sum inequality, which improves the Jensen inequality to some extent.

Theorem A

[24]

For a constant matrix \(R\in R^{n\times n}\) with \(R=R^{T}>0\), and two integers \(r_{1}\) and \(r_{2}\) with \(r_{2}-r_{1}>1\), the following inequality holds:

$$ \sum_{j=r_{1}}^{r_{2}-1} \eta^{T}(j)R\eta(j)\geq \frac{1}{\rho_{1}}\nu^{T}_{1}R \nu_{1}+\frac{3\rho_{2}}{\rho_{1}\rho _{3}}\nu^{T}_{2}R \nu_{2}, $$
(1)

where \(\eta(j)=x(j+1)-x(j)\), \(\nu_{1}=x(r_{2})-x(r_{1})\), \(\nu_{2}=x(r_{2})+x(r_{1})-\frac{2}{r_{2}-r_{1}-1}\sum_{j=r_{1}+1}^{r_{2}-1}x(j)\), \(\rho_{1}=r_{2}-r_{1}\), \(\rho_{2}=r_{2}-r_{1}-1\), \(\rho_{3}=r_{2}-r_{1}+1\).

Seuret et al. [25] also obtained a new stability criterion for discrete-time systems with time-varying delay via the following novel summation inequality.

Theorem B

[25]

For a given symmetric positive definite matrix \(R \in R^{n\times n}\) and any sequence of discrete-time variables \(z:[-h, 0]\cap Z\rightarrow R^{n}\), where \(h\geq1\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}y^{T}(i)Ry(i) \geq\frac{1}{h}\left ( \textstyle\begin{array}{@{}c@{}} \Theta_{0} \\ \Theta_{1} \end{array}\displaystyle \right )^{T}\left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} R & 0 \\ 0 & 3(\frac{h+1}{h-1})R \end{array}\displaystyle \right )\left ( \textstyle\begin{array}{@{}c@{}} \Theta_{0} \\ \Theta_{1} \end{array}\displaystyle \right ), $$
(2)

where \(y(i)=z(i)-z(i-1)\), \(\Theta_{0}=z(0)-z(-h)\), \(\Theta_{1}=z(0)+z(-h)-\frac{2}{h+1}\sum_{i=-h}^{0}z(i)\).

In fact, Theorem A is equivalent to Theorem B. These two summation inequalities encompass the Jensen inequality. It is worth mentioning that Theorem A and Theorem B can be regarded as discrete-time versions of the Wirtinger-based integral inequality, which was proved in [26].
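For readers who wish to verify such bounds numerically, the following short NumPy sketch (an illustrative addition to the paper; the dimension n, horizon h, random seed, and data are arbitrary choices) evaluates both sides of inequality (2) for a randomly generated sequence.

```python
# Illustrative numerical check of Theorem B (inequality (2)); all data are random.
import numpy as np

rng = np.random.default_rng(0)
n, h = 3, 7                              # state dimension and delay length (h >= 2)

M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)              # random symmetric positive definite matrix
x = rng.standard_normal((h + 1, n))      # rows are z(-h), ..., z(0)

y = np.diff(x, axis=0)                   # y(i) = z(i) - z(i-1), i = -h+1, ..., 0
lhs = sum(yi @ R @ yi for yi in y)

theta0 = x[-1] - x[0]                                  # z(0) - z(-h)
theta1 = x[-1] + x[0] - 2.0 / (h + 1) * x.sum(axis=0)  # z(0) + z(-h) - (2/(h+1)) sum z(i)
rhs = (theta0 @ R @ theta0 + 3.0 * (h + 1) / (h - 1) * theta1 @ R @ theta1) / h

print(lhs >= rhs - 1e-9)                 # expected: True
```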

Recently, Park et al. [27] developed a novel class of integral inequalities for quadratic functions via some intermediate terms called auxiliary functions, which improve the Wirtinger-based integral inequality. Based on these novel inequalities, some new stability criteria are presented in [27] for systems with time-varying delays by constructing appropriate Lyapunov-Krasovskii functionals.

The Lyapunov-Krasovskii functional method is the most commonly used method in the investigation of the stability of delayed systems. The conservativeness of this approach stems mainly from the construction of the Lyapunov-Krasovskii functional and the estimation of its time derivative. In order to obtain less conservative results, Jensen’s integral inequality, Wirtinger’s integral inequality, and the free-matrix-based integral inequality have been proposed to obtain tighter upper bounds of the integrals occurring in the time derivative of the Lyapunov-Krasovskii functional. Many papers have focused on integral inequalities and their applications in the stability analysis of continuous-time delayed systems. However, only a few papers have studied summation inequalities and their application in the stability analysis of discrete-time systems with variable delays. The summation inequalities in Theorem A and Theorem B are used to bound \(\sum_{j=r_{1}}^{r_{2}-1}\eta^{T}(j)R\eta(j)\) or \(\sum_{i=-h+1}^{0}y^{T}(i)Ry(i)\).

Motivated by the above works, in order to provide a tighter bound for \(\sum_{j=r_{1}}^{r_{2}-1}\eta^{T}(j)R\eta(j)\) or \(\sum_{i=-h+1}^{0}y^{T}(i)Ry(i)\), this paper aims at establishing some novel summation inequalities as discrete-time versions of the integral inequalities obtained in [27]. In this paper, we extend the two summation inequalities given in [24, 25]. Some new summation inequalities are proposed that provide sharper bounds than the summation inequalities in [24, 25]; the inequalities in Theorem A and Theorem B are special cases of Corollary 6 in our paper. Moreover, a novel estimation of the double summation \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0}\Delta x(k)^{T}R \Delta x(k)\) is also given in this paper. Based on these new summation inequalities, the reciprocally convex combination inequality, and a new Lyapunov-Krasovskii functional, a less conservative sufficient criterion on asymptotical stability of discrete-time systems with variable delays is obtained.

Notations

Throughout this paper, \(R^{n}\) and \(R^{n\times m}\) denote, respectively, the n-dimensional Euclidean space and the set of all \(n\times m\) real matrices. For real symmetric matrices X and Y, the notation \(X\geq Y\) (or \(X>Y\)) means that the matrix \(X-Y\) is positive semi-definite (or positive definite). The symbol \(\ast\) within a matrix represents the symmetric term of the matrix.

2 Novel summation inequalities

Theorem 1

For a positive definite matrix \(R>0\), any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), and any sequence of discrete-time variables \(p:[-h, 0]\cap Z\rightarrow R\) satisfying \(\sum_{k=-h+1}^{0}p(k)=0\), the following inequality holds:

$$\begin{aligned}& \sum_{k=-h+1}^{0}p^{2}(k) \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0}\sum_{k=-h+1}^{0}p^{2}(k) +\Biggl[\sum_{i=-h+1}^{0}y(i)p(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p(i) \Biggr], \end{aligned}$$
(3)

where \(\Theta_{0}= \sum_{k=-h+1}^{0}y(k)\).

Proof

Let \(z(i)=y(i)-\frac{1}{h}\Theta_{0}-p(i)v \), where \(v\in R^{n}\) is to be determined. We then seek a vector v that minimizes the following energy function \(J(v)\):

$$ J(v)=\sum_{i=-h+1}^{0}z(i)^{T}Rz(i). $$
(4)

Obviously,

$$\begin{aligned} J'(v) =&-2\sum_{i=-h+1}^{0} \biggl[y(i)p(i)-\frac{1}{h}\Theta _{0}p(i)-p^{2}(i)v \biggr]^{T}R \\ =&-2\sum_{i=-h+1}^{0}\bigl[y(i)p(i)-p^{2}(i)v \bigr]^{T}R+\frac {2}{h}\Theta_{0}^{T}\sum _{i=-h+1}^{0}p(i)R \\ =&-2\sum_{i=-h+1}^{0}y(i)^{T}p(i)R+2v^{T}R \sum_{i=-h+1}^{0}p^{2}(i). \end{aligned}$$
(5)

If \(\sum_{k=-h+1}^{0}p^{2}(k)>0\), solving the equation \(J'(v)=0\) gives

$$ \hat{v}=\sum_{i=-h+1}^{0}y(i)p(i) \Biggl[\sum_{i=-h+1}^{0}p^{2}(i) \Biggr]^{-1}. $$
(6)

Substituting \(\hat{v}\) for v in \(J(v)\), we get

$$\begin{aligned} J(\hat{v}) =&\sum_{i=-h+1}^{0} \biggl[y(i)-\frac{1}{h}\Theta _{0}-p(i)\hat{v}\biggr] ^{T}R\biggl[y(i)-\frac{1}{h}\Theta_{0}-p(i)\hat{v} \biggr] \\ =&\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R\biggl[y(i)- \frac {1}{h}\Theta_{0}\biggr] \\ &{}-2\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R p(i)\hat{v} +\sum _{i=-h+1}^{0}p^{2}(i) \hat{v}^{T} R\hat{v} \\ =&\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R\biggl[y(i)- \frac {1}{h}\Theta_{0}\biggr] -2\sum_{i=-h+1}^{0}y(i)^{T}R p(i)\hat{v} \\ &{}+\sum_{i=-h+1}^{0}p^{2}(i) \hat{v}^{T} R\hat{v} \\ =&\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R\biggl[y(i)- \frac {1}{h}\Theta_{0}\biggr] -2\sum_{i=-h+1}^{0}p(i)y(i)^{T}R \hat{v} \\ &{}+\sum_{i=-h+1}^{0}p^{2}(i) \hat{v}^{T} R\hat{v} \\ =&\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R\biggl[y(i)- \frac {1}{h}\Theta_{0}\biggr] -2\sum_{i=-h+1}^{0}p^{2}(i) \hat{v}^{T}R \hat{v} \\ &{}+\sum_{i=-h+1}^{0}p^{2}(i) \hat{v}^{T} R\hat{v} \\ =&\sum_{i=-h+1}^{0}\biggl[y(i)- \frac{1}{h}\Theta_{0}\biggr]^{T}R\biggl[y(i)- \frac {1}{h}\Theta_{0}\biggr] -\sum_{i=-h+1}^{0}p^{2}(i) \hat{v}^{T}R \hat{v} \\ =&\sum_{i=-h+1}^{0}y(i)^{T}R y(i)-2 \sum_{i=-h+1}^{0}y(i)^{T}R \frac{1}{h}\Theta_{0} +\frac{1}{h^{2}}\sum _{i=-h+1}^{0}\Theta_{0}^{T}R \Theta_{0} \\ &{}-\sum_{i=-h+1}^{0}p(i)^{2} \hat{v}^{T}R \hat{v} \\ =&\sum_{i=-h+1}^{0}y(i)^{T}R y(i)- \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} -\sum _{i=-h+1}^{0}p^{2}(i) \hat{v}^{T}R \hat{v}. \end{aligned}$$
(7)

Since the energy function satisfies \(J(\hat{v})\geq0\), we have

$$ \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p^{2}(i)}\Biggl[\sum_{i=-h+1}^{0}y(i)p(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p(i) \Biggr]. $$
(8)

If \(\sum_{k=-h+1}^{0}p^{2}(k)=0\), then \(p(k)\equiv0\) and both sides of inequality (3) vanish, so inequality (3) holds trivially.

This completes the proof of Theorem 1. □
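The following sketch (illustrative only, not part of the proof; the data and the zero-sum weight sequence p are randomly generated) checks inequality (3) of Theorem 1 numerically.

```python
# Illustrative numerical check of Theorem 1 (inequality (3)).
import numpy as np

rng = np.random.default_rng(1)
n, h = 3, 6
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)              # symmetric positive definite

y = rng.standard_normal((h, n))          # rows are y(-h+1), ..., y(0)
p = rng.standard_normal(h)
p -= p.mean()                            # enforce sum_k p(k) = 0

theta0 = y.sum(axis=0)
s = (p[:, None] * y).sum(axis=0)         # sum_i p(i) y(i)

lhs = (p**2).sum() * sum(yi @ R @ yi for yi in y)
rhs = (p**2).sum() / h * theta0 @ R @ theta0 + s @ R @ s
print(lhs >= rhs - 1e-9)                 # expected: True
```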

By choosing an appropriate sequence \(p(k)\), we get the following corollaries.

Corollary 1

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}y(i)^{T}R y(i)\geq \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{3}{h}\frac{h+1}{h-1}\Omega_{1}^{T}R \Omega_{1}, $$
(9)

where \(\Theta_{0}=\sum_{k=-h+1}^{0}y(k)\), \(\Omega_{1}=\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum_{s=-h+1}^{k}y(s)\).

Proof

Let \(p(k)=h-1+2k\). Then \(\sum_{k=-h+1}^{0}p(k)=0\), \(\sum_{i=-h+1}^{0}p^{2}(i)=\frac{(h-1)h(h+1)}{3}\), and

$$\begin{aligned} \sum_{i=-h+1}^{0}y(i)p(i) =& \sum_{i=-h+1}^{0}(h-1+2i)y(i) \\ =&-2\sum_{k=-h+1}^{-1}\sum _{s=-h+1}^{k}y(s) +(h-1)\sum _{s=-h+1}^{0}y(s) \\ =&-2\sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s) +(h+1)\sum _{s=-h+1}^{0}y(s) \\ =&(h+1)\Biggl[\sum_{s=-h+1}^{0}y(s) - \frac{2}{h+1}\sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s)\Biggr] \\ =&(h+1)\Omega_{1}. \end{aligned}$$
(10)

By using Theorem 1, inequality (9) holds. □

Let \(\Omega_{1}^{*}=\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum_{s=k}^{0}y(s)\). Due to

$$ \sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s) +\sum_{k=-h+1}^{0} \sum_{s=k}^{0}y(s)=(h+1)\sum _{s=-h+1}^{0}y(s), $$
(11)

we have

$$\begin{aligned} \Omega_{1} =&\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s) \\ =&-\sum_{s=-h+1}^{0}y(s) +\frac{2}{h+1} \sum_{k=-h+1}^{0}\sum _{s=k}^{0}y(s) \\ =&-\Omega_{1}^{*}. \end{aligned}$$
(12)

Hence, Corollary 1 is equivalent to Corollary 2.

Corollary 2

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}y(i)^{T}R y(i)\geq \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{3}{h}\frac{h+1}{h-1}{\Omega_{1}^{*}}^{T}R \Omega_{1}^{*}, $$
(13)

where \(\Theta_{0}= \sum_{k=-h+1}^{0}y(k)\), \(\Omega_{1}^{*}=\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum_{s=k}^{0}y(s)\).

Corollary 3

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(x:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}\Delta x(i)^{T}R \Delta x(i)\geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{1}{h}\Theta_{1}^{T} \frac{3(h+1)}{h-1}R \Theta_{1}, $$
(14)

where \(\Delta x(i)=x(i)-x(i-1)\), \(\Theta_{0}=x(0)-x(-h)\), \(\Theta_{1}=x(0)+x(-h)-\frac{2}{h+1}\sum_{k=-h}^{0}x(k)\).

Proof

Let \(y(i)=\Delta x(i)=x(i)-x(i-1)\) in Corollary 1. Then we have

$$ \Theta_{0}= \sum_{k=-h+1}^{0}y(k)=x(0)-x(-h) $$
(15)

and

$$\begin{aligned} \Omega_{1} =&\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s) \\ =&x(0)-x(-h)-\frac{2}{h+1}\sum_{k=-h+1}^{0} \bigl[x(k)-x(-h)\bigr] \\ =&x(0)-x(-h)-\frac{2}{h+1}\sum_{k=-h+1}^{0}x(k)+ \frac {2h}{h+1}x(-h) \\ =&x(0)+x(-h)-\frac{2}{h+1}\sum_{k=-h}^{0}x(k) \\ =&\Theta_{1}. \end{aligned}$$
(16)

Using Corollary 1, we have completed the proof of Corollary 3. □

Remark 1

Corollary 1 (or Corollary 2) in this paper can be regarded as a discrete version of the Wirtinger-based integral inequality proved in [26]. Corollary 3 is a special case of Corollary 1 (or Corollary 2). In fact, Corollary 3 is equivalent to Theorem A and Theorem B. So Corollary 1 (or Corollary 2) in this paper implies Theorem A and Theorem B.

More generally, we have the following result, which includes Corollary 3 as a special case.

Corollary 4

For a positive definite matrix \(R>0\), any sequence of discrete-time variables \(x:[-h, 0]\cap Z\rightarrow R^{n}\), and any sequence of discrete-time variables \(p:[-h, 0]\cap Z\rightarrow R\) satisfying \(\sum_{k=-h+1}^{0}p(k)=0\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}\Delta x(i)^{T}R \Delta x(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p^{2}(i)}\Biggl[\sum_{i=-h+1}^{0}p(i) \Delta x(i)\Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}p(i) \Delta x(i)\Biggr], \end{aligned}$$
(17)

where \(\Theta_{0}=x(0)-x(-h)\), \(\Delta x(i)=x(i)-x(i-1)\).

To go a step further, suppose that \(p_{1}(i)=h-1+2i\) and \(p_{2}(i)=i^{2}+(h-1)i+\frac{(h-1)(h-2)}{6}\). Then (see the numerical check after this list):

(1) \(\sum_{i=-h+1}^{0}p_{m}(i)=0\), \(m=1, 2\);

(2) \(\sum_{i=-h+1}^{0}p_{1}(i)p_{2}(i)=0\);

(3) \(\sum_{i=-h+1}^{0}p_{2}^{2}(i)=\frac{(h-2)(h-1)h(h+1)(h+2)}{180}\).
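The following sketch (an illustrative addition, not required for the development) verifies properties (1)-(3), together with the value of \(\sum_{i=-h+1}^{0}p_{1}^{2}(i)\) used in the proof of Corollary 1, over a range of horizons h.

```python
# Illustrative numerical check of properties (1)-(3) of p1 and p2.
import numpy as np

for h in range(3, 30):
    i = np.arange(-h + 1, 1)                         # i = -h+1, ..., 0
    p1 = h - 1 + 2 * i
    p2 = i**2 + (h - 1) * i + (h - 1) * (h - 2) / 6.0

    assert abs(p1.sum()) < 1e-8                      # property (1), m = 1
    assert abs(p2.sum()) < 1e-8                      # property (1), m = 2
    assert abs((p1 * p2).sum()) < 1e-8               # property (2): orthogonality
    assert np.isclose((p2**2).sum(),
                      (h - 2) * (h - 1) * h * (h + 1) * (h + 2) / 180.0)   # property (3)
    assert np.isclose((p1**2).sum(), (h - 1) * h * (h + 1) / 3.0)          # used in Corollary 1
print("all checks passed")
```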

Note that

$$ \sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)=\sum_{i=-h+1}^{0}(h+i)y(i) $$
(18)

and

$$ \sum_{i=-h+1}^{0}\sum _{k=i}^{0}\sum_{m=k}^{0}y(m) =\sum_{i=-h+1}^{0}\frac{(h+i)(h+i+1)}{2}y(i). $$
(19)

Then we get

$$\begin{aligned}& \sum_{i=-h+1}^{0}p_{2}(i)y(i) \\& \quad = 2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\sum_{m=k}^{0}y(m) +\frac{(h+1)(h+2)}{6}\sum_{i=-h+1}^{0}y(i) -(h+2) \sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k) \\& \quad = \frac{(h+1)(h+2)}{6}\Biggl[\sum_{i=-h+1}^{0}y(i) -\frac{6}{h+1}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k) \\& \qquad {} +\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0} \sum_{k=i}^{0}\sum _{m=k}^{0}y(m)\Biggr]. \end{aligned}$$
(20)

Letting \(p(k)=p_{2}(k)\) in Theorem 1, we have the following theorem.

Theorem 2

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}y(i)^{T}R y(i)\geq \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{5(h+1)(h+2)}{(h-2)(h-1)h}\Omega_{2}^{T}R \Omega_{2}, $$
(21)

where \(\Omega_{2}=\sum_{i=-h+1}^{0}y(i)-\frac{6}{h+1}\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k) +\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum_{k=i}^{0} \sum_{m=k}^{0}y(m)\), \(\Theta_{0}=\sum_{k=-h+1}^{0}y(k)\).

Remark 2

Theorem 2 gives a new form of summation inequality, and the idea behind it motivates us to establish a combined summation inequality built from several auxiliary sequences. Based on Theorem 1 and Theorem 2, an improved summation inequality can be obtained as follows.

Theorem 3

For a positive definite matrix \(R>0\), any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), and any two sequences of discrete-time variables \(p_{i}:[-h, 0]\cap Z\rightarrow R\) satisfying \(\sum_{k=-h+1}^{0}p_{i}(k)=0\), \(i=1, 2\), and \(\sum_{k=-h+1}^{0}p_{1}(k)p_{2}(k)=0\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p_{1}(i)^{2}} \Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr] \\& \qquad {} +\frac{1}{\sum_{i=-h+1}^{0}p_{2}(i)^{2}} \Biggl[\sum_{i=-h+1}^{0}y(i)p_{2}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p_{2}(i) \Biggr], \end{aligned}$$
(22)

where \(\Theta_{0}= \sum_{k=-h+1}^{0}y(k)\).

Proof

Let \(z(i)=y(i)-\frac{1}{h}\Theta_{0} -\frac{p_{1}(i)}{\sum_{i=-h+1}^{0}p_{1}(i)^{2}}\sum_{i=-h+1}^{0}y(i)p_{1}(i)\). Based on the proof of Theorem 1, we have

$$\begin{aligned}& \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \\& \quad = \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p_{1}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr] \\& \qquad {} +\sum_{i=-h+1}^{0}z(i)^{T}R z(i). \end{aligned}$$
(23)

Let \(x(i)=z(i)-\frac{p_{2}(i)}{\sum_{i=-h+1}^{0}p_{2}(i)^{2}}\sum_{i=-h+1}^{0}z(i)p_{2}(i)\). Similarly, we have

$$\begin{aligned}& \sum_{i=-h+1}^{0}z(i)^{T}R z(i) \\& \quad = \frac{1}{\sum_{i=-h+1}^{0}p_{2}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr]+\sum_{i=-h+1}^{0}x(i)^{T}R x(i). \end{aligned}$$
(24)

So

$$\begin{aligned}& \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \\& \quad = \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p_{1}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr] \\& \qquad {} +\frac{1}{\sum_{i=-h+1}^{0}p_{2}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr] \\& \qquad {}+\sum_{i=-h+1}^{0}x(i)^{T}R x(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{1}{\sum_{i=-h+1}^{0}p_{1}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}y(i)p_{1}(i) \Biggr] \\& \qquad {} +\frac{1}{\sum_{i=-h+1}^{0}p_{2}(i)^{2}}\Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0}z(i)p_{2}(i) \Biggr]. \end{aligned}$$
(25)

Since \(\sum_{k=-h+1}^{0}p_{i}(k)=0\) (\(i=1, 2\)) and \(\sum_{k=-h+1}^{0}p_{1}(k)p_{2}(k)=0\), we obtain

$$\begin{aligned}& \sum_{i=-h+1}^{0}z(i)p_{2}(i) \\& \quad = \sum_{i=-h+1}^{0}\Biggl[y(i) - \frac{1}{h}\Theta_{0}-\frac{p_{1}(i)}{\sum_{j=-h+1}^{0}p_{1}(j)^{2}} \sum _{j=-h+1}^{0}y(j)p_{1}(j)\Biggr]p_{2}(i) \\& \quad = \sum_{i=-h+1}^{0}y(i)p_{2}(i)- \frac{\Theta_{0}}{h}\sum_{i=-h+1}^{0}p_{2}(i) -\frac{\sum_{i=-h+1}^{0}p_{1}(i)p_{2}(i)}{ \sum_{i=-h+1}^{0}p_{1}(i)^{2}}\sum_{i=-h+1}^{0}y(i)p_{1}(i) \\& \quad = \sum_{i=-h+1}^{0}y(i)p_{2}(i). \end{aligned}$$
(26)

This completes the proof of Theorem 3. □
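As a numerical illustration of Theorem 3 (not part of the original argument), the sketch below builds two zero-sum, mutually orthogonal weight sequences by a Gram-Schmidt step and checks inequality (22) on random data.

```python
# Illustrative numerical check of Theorem 3 (inequality (22)).
import numpy as np

rng = np.random.default_rng(7)
n, h = 3, 6
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
y = rng.standard_normal((h, n))                       # rows are y(-h+1), ..., y(0)

p1 = rng.standard_normal(h); p1 -= p1.mean()          # sum p1 = 0
p2 = rng.standard_normal(h); p2 -= p2.mean()          # sum p2 = 0
p2 -= (p1 @ p2) / (p1 @ p1) * p1                      # sum p1 p2 = 0 (zero-sum is preserved)

theta0 = y.sum(axis=0)
s1 = (p1[:, None] * y).sum(axis=0)
s2 = (p2[:, None] * y).sum(axis=0)

lhs = sum(yi @ R @ yi for yi in y)
rhs = (theta0 @ R @ theta0 / h
       + s1 @ R @ s1 / (p1 @ p1)
       + s2 @ R @ s2 / (p2 @ p2))
print(lhs >= rhs - 1e-9)                              # expected: True
```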

Noting that \(\sum_{i=-h+1}^{0}p_{m}(i)=0\) (\(m=1, 2\)) and \(\sum_{i=-h+1}^{0}p_{1}(i)p_{2}(i)=0\), combining Theorem 3 with Corollary 2 and Theorem 2 gives the following result.

Corollary 5

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}y(i)^{T}R y(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R\Theta_{0} +\frac{3}{h}\frac{h+1}{h-1}{\Omega_{1}^{*}}^{T}R \Omega_{1}^{*}+\frac{5(h+1)(h+2)}{(h-2)(h-1)h}\Omega_{2}^{T}R \Omega_{2}, \end{aligned}$$
(27)

where \(\Theta_{0}= \sum_{k=-h+1}^{0}y(k)\), \(\Omega_{1}^{*}=\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum_{s=k}^{0}y(s)\), \(\Omega_{2}=\sum_{i=-h+1}^{0}y(i) -\frac{6}{h+1}\sum_{i=-h+1}^{0}\sum_{k=i}^{0}y(k) +\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum_{k=i}^{0}\sum_{m=k}^{0}y(m)\).
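The sketch below (illustrative; all data are randomly generated) evaluates both sides of the three-term bound (27) of Corollary 5.

```python
# Illustrative numerical check of Corollary 5 (inequality (27)).
import numpy as np

rng = np.random.default_rng(2)
n, h = 2, 8
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
y = rng.standard_normal((h, n))          # rows are y(-h+1), ..., y(0)

def tail_sum(v, start):
    """sum_{s=start}^{0} v(s), with indices -h+1..0 stored in rows 0..h-1."""
    return v[start + h - 1:].sum(axis=0)

theta0 = y.sum(axis=0)
omega1s = theta0 - 2.0 / (h + 1) * sum(tail_sum(y, k) for k in range(-h + 1, 1))
omega2 = (theta0
          - 6.0 / (h + 1) * sum(tail_sum(y, i) for i in range(-h + 1, 1))
          + 12.0 / ((h + 1) * (h + 2))
          * sum(tail_sum(y, k) for i in range(-h + 1, 1) for k in range(i, 1)))

lhs = sum(yi @ R @ yi for yi in y)
rhs = (theta0 @ R @ theta0 / h
       + 3.0 * (h + 1) / (h * (h - 1)) * omega1s @ R @ omega1s
       + 5.0 * (h + 1) * (h + 2) / ((h - 2) * (h - 1) * h) * omega2 @ R @ omega2)
print(lhs >= rhs - 1e-9)                 # expected: True
```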

Corollary 6

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(x:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}\Delta x(i)^{T}R \Delta x(i) \\& \quad \geq \frac{1}{h}\Theta_{0}^{T}R \Theta_{0} +\frac{3}{h}\frac{h+1}{h-1}{\Omega_{1}}^{T}R \Omega_{1}+\frac{5(h+1)(h+2)}{(h-2)(h-1)h}\Omega_{2}^{T}R \Omega_{2}, \end{aligned}$$
(28)

where \(\Theta_{0}= x(0)-x(-h)\), \(\Omega_{1}=x(0)+x(-h)-\frac{2}{h+1}\sum_{k=-h}^{0}x(k)\), \(\Omega_{2}=x(0)-x(-h)+\frac{6h}{(h+1)(h+2)} \sum_{i=-h}^{0}x(i) -\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum_{k=i}^{0}x(k)\).

Proof

Let \(y(i)=\Delta x(i)=x(i)-x(i-1)\) in Corollary 5 and recall that \(\Omega_{1}=-\Omega_{1}^{*}\). Then \(\Theta_{0}=\sum_{k=-h+1}^{0}y(k)=x(0)-x(-h)\). Simple computation leads to

$$\begin{aligned} \Omega_{1} =&\sum_{s=-h+1}^{0}y(s) -\frac{2}{h+1}\sum_{k=-h+1}^{0}\sum _{s=-h+1}^{k}y(s) \\ =&x(0)-x(-h)-\frac{2}{h+1}\sum_{k=-h+1}^{0} \bigl[x(k)-x(-h)\bigr] \\ =&x(0)+x(-h)-\frac{2}{h+1}\sum_{k=-h+1}^{0}x(k)- \frac {2}{h+1}x(-h) \\ =&x(0)+x(-h)-\frac{2}{h+1}\sum_{k=-h}^{0}x(k) \end{aligned}$$
(29)

and

$$\begin{aligned} \Omega_{2} =&\sum_{i=-h+1}^{0}y(i)- \frac{6}{h+1}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k) +\frac{12}{(h+1)(h+2)}\sum _{i=-h+1}^{0}\sum_{k=i}^{0} \sum_{m=k}^{0}y(m) \\ =&\sum_{i=-h+1}^{0}\bigl[x(i)-x(i-1)\bigr]- \frac{6}{h+1}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\bigl[x(k)-x(k-1)\bigr] \\ &{}+\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\sum_{m=k}^{0} \bigl[x(m)-x(m-1)\bigr] \\ =&x(0)-x(-h)-\frac{6}{h+1}\sum_{i=-h+1}^{0} \bigl[x(0)-x(i-1)\bigr] \\ &{}+\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\bigl[x(0)-x(k-1)\bigr] \\ =&x(0)-x(-h)+\frac{6}{h+1}\sum_{i=-h+1}^{0}x(i-1)- \frac {6h}{h+1}x(0) \\ &{}+\frac{12}{(h+1)(h+2)}x(0)\sum_{i=-h+1}^{0}(-i+1) -\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}x(k-1) \\ =&x(0)-x(-h)+\frac{6}{h+1}\sum_{i=-h+1}^{0}x(i-1) \\ &{}-\frac{6h}{(h+1)(h+2)}x(0)-\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0} \sum_{k=i}^{0}x(k-1). \end{aligned}$$
(30)

An identical transformation leads to

$$\begin{aligned} \Omega_{2} =&x(0)-x(-h)+\frac{6}{h+1}\sum _{i=-h}^{0}x(i)-\frac {6}{h+1}x(0) \\ &{}-\frac{6h}{(h+1)(h+2)}x(0)-\frac{12}{(h+1)(h+2)}\sum_{i=-h+1}^{0} \sum_{k=i}^{0}x(k-1) \\ =&x(0)-x(-h)+\frac{6}{h+1}\sum_{i=-h}^{0}x(i) \\ &{}-\frac{12}{(h+1)(h+2)}\Biggl[(h+1)x(0)+\sum_{i=-h+1}^{0} \sum_{k=i}^{0}x(k-1)\Biggr] \\ =&x(0)-x(-h)+\frac{6h}{(h+1)(h+2)}\sum_{i=-h}^{0}x(i)- \frac {12}{(h+1)(h+2)}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}x(k). \end{aligned}$$
(31)

This completes the proof of Corollary 6. □

Remark 3

The right-hand side of the summation inequality in Corollary 5 (or Corollary 6) contains the term \(\frac{5(h+1)(h+2)}{(h-2)(h-1)h}\Omega _{2}^{T}R\Omega_{2}\). However, the summation inequalities in Theorem A and Theorem B neglect this term. If \(h>2\) and \(\Omega_{2}\neq0\), then \(\frac {5(h+1)(h+2)}{(h-2)(h-1)h}\Omega_{2}^{T}R\Omega_{2}>0\). Since a positive quantity is added to the right-hand side of the inequality, the summation inequality in Corollary 5 (or Corollary 6) provides a sharper bound for \(\sum_{i=-h+1}^{0}y^{T}(i)Ry(i)\) than the summation inequalities in [24, 25].
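The following sketch illustrates Remark 3 numerically (the data are arbitrary, illustrative choices): for the same random sequence, the Jensen bound, the bound of Theorem B (Corollary 3), and the bound of Corollary 6 are computed and compared.

```python
# Illustrative comparison of the Jensen, Theorem B, and Corollary 6 lower bounds.
import numpy as np

rng = np.random.default_rng(3)
n, h = 2, 10
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
x = rng.standard_normal((h + 1, n))       # rows are x(-h), ..., x(0)
dx = np.diff(x, axis=0)                   # Delta x(-h+1), ..., Delta x(0)

lhs = sum(d @ R @ d for d in dx)

theta0 = x[-1] - x[0]
omega1 = x[-1] + x[0] - 2.0 / (h + 1) * x.sum(axis=0)
double = sum(x[r:].sum(axis=0) for r in range(1, h + 1))   # sum_{i=-h+1}^{0} sum_{k=i}^{0} x(k)
omega2 = (theta0 + 6.0 * h / ((h + 1) * (h + 2)) * x.sum(axis=0)
          - 12.0 / ((h + 1) * (h + 2)) * double)

jensen = theta0 @ R @ theta0 / h
thm_b = jensen + 3.0 * (h + 1) / (h * (h - 1)) * omega1 @ R @ omega1
cor_6 = thm_b + 5.0 * (h + 1) * (h + 2) / ((h - 2) * (h - 1) * h) * omega2 @ R @ omega2

print(jensen <= thm_b <= cor_6 <= lhs + 1e-9)   # expected: True
```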

As we have mentioned before, Jensen’s inequality has mostly been used as a powerful mathematical tool in dealing with the difference of Lyapunov-Krasovskii functionals, whether the summations involved are single or double. In the case of a double summation, such as \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0} y(k)^{T}Ry(k)\), Jensen’s inequality may neglect some terms, which unavoidably introduces conservatism. We now give some improved double summation inequalities.

Theorem 4

For a positive definite matrix \(R>0\), any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), and any nonzero sequence of discrete-time variables \(p:[-h, 0]\cap Z\rightarrow R\) satisfying \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0}p(k)=0\), the following inequality holds:

$$ \sum_{i=-h+1}^{0}\sum _{k=i}^{0} y(k)^{T}Ry(k)\geq \frac{2}{h(h+1)}E_{0}^{T}RE_{0} + \frac{1}{\sum_{i=-h+1}^{0}\sum_{k=i}^{0}p(k)^{2}}E_{1}^{T} RE_{1}, $$
(32)

where \(E_{0}=\sum_{i=-h+1}^{0}\sum_{k=i}^{0}y(k)\), \(E_{1}=\sum_{i=-h+1}^{0}\sum_{k=i}^{0}y(k)p(k)\).

Proof

Define the energy function \(J(v)=\sum_{i=-h+1}^{0}\sum_{k=i}^{0}z(k)^{T}Rz(k)\) with \(z(i)=y(i)-\frac{2}{h(h+1)}E_{0}-p(i)v \). Similar to the proof of Theorem 1, we now proceed to find a vector v that minimizes the energy function \(J(v)\).

If \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0}p(k)^{2}>0\) and \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0}p(k)=0\), then

$$\begin{aligned} J'(v) =&-2\sum_{i=-h+1}^{0} \sum_{k=i}^{0}\biggl[y(k)p(k)- \frac {2}{h(h+1)}E_{0}p(k)-p(k)^{2}v \biggr]^{T}R \\ = & \Biggl[-2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)p(k)+2v\sum _{i=-h+1}^{0}\sum_{k=i}^{0}p(k)^{2} \Biggr]^{T}R +2\frac{2}{h(h+1)}E_{0}^{T}\sum _{i=-h+1}^{0}\sum_{k=i}^{0}p(k)R \\ =& \Biggl[-2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)p(k)+2v\sum _{i=-h+1}^{0}\sum_{k=i}^{0}p(k)^{2} \Biggr]^{T}R. \end{aligned}$$
(33)

The solution of \(J'(v )=0\) can be found as

$$ \hat{v}=\Biggl[\sum_{i=-h+1}^{0} \sum_{k=i}^{0}p(k)^{2} \Biggr]^{-1} \sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)p(k). $$
(34)

In this case, we have

$$\begin{aligned} J(\hat{v}) =&\sum_{i=-h+1}^{0} \sum_{k=i}^{0}\biggl[y(k)- \frac {2}{h(h+1)}E_{0} -p(k)\hat{v}\biggr] ^{T}R\biggl[y(k)- \frac{2}{h(h+1)}E_{0}-p(k)\hat{v}\biggr] \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\biggl[y(k)-\frac {2}{h(h+1)}E_{0} \biggr]^{T}R \biggl[y(k)-\frac{2}{h(h+1)}E_{0}\biggr] \\ &{}-2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\biggl[y(k)-\frac {2}{h(h+1)}E_{0} \biggr]^{T}R p(k)\hat{v} +\sum_{i=-h+1}^{0} \sum_{k=i}^{0}p(k)^{2} \hat{v}^{T} R\hat {v} \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\biggl[y(k)-\frac {2}{h(h+1)}E_{0} \biggr]^{T}R \biggl[y(k)-\frac{2}{h(h+1)}E_{0}\biggr] \\ &{}+\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat{v}^{T} R \hat{v}-2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)R p(k)\hat{v} \\ &{}+2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\frac {2}{h(h+1)}E_{0}^{T}R p(k)\hat{v} \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\biggl[y(k)-\frac {2}{h(h+1)}E_{0} \biggr]^{T}R\biggl[y(k)-\frac{2}{h(h+1)}E_{0}\biggr] \\ &{}-2\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat{v}^{T} R \hat{v}+\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat {v}^{T} R \hat{v} \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\biggl[y(k)-\frac {2}{h(h+1)}E_{0} \biggr]^{T}R\biggl[y(k)-\frac{2}{h(h+1)}E_{0}\biggr] \\ &{}-\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat{v}^{T} R \hat{v} \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0} y(k)^{T}Ry(k) - 2\sum _{i=-h+1}^{0}\sum_{k=i}^{0} y(k)^{T}R\frac {2}{h(h+1)}E_{0} \\ &{}+\frac{4}{h^{2}(h+1)^{2}}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}E_{0}^{T}RE_{0} -\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat{v}^{T} R \hat {v} \\ =&\sum_{i=-h+1}^{0}\sum _{k=i}^{0} y(k)^{T}Ry(k) - 2E_{0}^{T}R\frac{2}{h(h+1)}E_{0} \\ &{}+\frac{4}{h^{2}(h+1)^{2}}\frac{h(h+1)}{2}E_{0}^{T}RE_{0} -\sum_{i=-h+1}^{0}\sum _{k=i}^{0}p(k)^{2}\hat{v}^{T} R \hat {v} \\ \geq& 0. \end{aligned}$$
(35)

Rearranging the last expression yields inequality (32). This completes the proof of Theorem 4. □

In particular, choosing \(p(k)\) in Theorem 4 as \(p_{3}(k)=3k+h-1\), which satisfies

$$\sum_{i=-h+1}^{0}\sum_{k=i}^{0} p_{3}(k)=0 $$

yields

$$ \sum_{i=-h+1}^{0}\sum _{k=i}^{0} p_{3}(k)^{2}= \frac{(h-1)h(h+1)(h+2)}{4} $$
(36)

and

$$ \sum_{i=-h+1}^{0}\sum _{k=i}^{0} p_{3}(k)y(k) =-2(h+2)\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) +6\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\sum_{m=k}^{0}y(m). $$
(37)

Let \(\Omega_{3}=\sum_{i=-h+1}^{0}\sum_{k=i}^{0}y(k) -\frac{3}{h+2}\sum_{i=-h+1}^{0}\sum_{k=i}^{0}\sum_{m=k}^{0}y(m)\). Then the following inequality based on Theorem 4 holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)^{T}R y(k) \\ & \quad \geq \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)\Biggr)^{T}R\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) \\ & \qquad {} +\frac{1}{\sum_{i=-h+1}^{0}\sum_{k=i}^{0}p_{3}(k)^{2}}\Biggl[\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)p_{3}(k) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)p_{3}(k)\Biggr] \\ & \quad = \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)\Biggr)^{T}R\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) \\ & \qquad {} +\frac{4}{(h-1)h(h+1)(h+2)}\Biggl[\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)p_{3}(k) \Biggr]^{T}R \Biggl[\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)p_{3}(k)\Biggr] \\ & \quad = \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)\Biggr)^{T}R\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) +\frac{16(h+2)}{(h-1)h(h+1)}\Omega_{3}^{T}R \Omega_{3}. \end{aligned}$$
(38)

Furthermore, we have the following corollary.

Corollary 7

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(y:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}\sum _{k=i}^{0}y(k)^{T}Ry(k) \\ & \quad \geq \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)\Biggr)^{T}R\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) +\frac{16(h+2)}{(h-1)h(h+1)}\Omega_{3}^{T}R\Omega_{3}, \end{aligned}$$
(39)

where \(\Omega_{3}=\sum_{i=-h+1}^{0}\sum_{k=i}^{0}y(k)-\frac {3}{h+2}\sum_{i=-h+1}^{0}\sum_{k=i}^{0}\sum_{m=k}^{0}y(m)\).
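The sketch below (illustrative only) checks the identities (36)-(37) associated with \(p_{3}\) and the double summation bound (39) of Corollary 7 on random data.

```python
# Illustrative numerical check of (36) and Corollary 7 (inequality (39)).
import numpy as np

rng = np.random.default_rng(4)
n, h = 2, 7
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
y = rng.standard_normal((h, n))                  # rows are y(-h+1), ..., y(0)

k = np.arange(-h + 1, 1)
w = k + h                                        # multiplicity of y(k) in the double sum, cf. (18)
p3 = 3 * k + h - 1

assert abs((w * p3).sum()) < 1e-9                                            # sum sum p3 = 0
assert np.isclose((w * p3**2).sum(), (h - 1) * h * (h + 1) * (h + 2) / 4.0)  # identity (36)

lhs = sum(wi * (yi @ R @ yi) for wi, yi in zip(w, y))     # sum_i sum_{k>=i} y^T R y
E0 = (w[:, None] * y).sum(axis=0)                         # sum_i sum_{k>=i} y(k)
triple = ((w * (w + 1) / 2)[:, None] * y).sum(axis=0)     # sum_i sum_k sum_m y(m), cf. (19)
omega3 = E0 - 3.0 / (h + 2) * triple

rhs = (2.0 / (h * (h + 1)) * E0 @ R @ E0
       + 16.0 * (h + 2) / ((h - 1) * h * (h + 1)) * omega3 @ R @ omega3)
print(lhs >= rhs - 1e-9)                                  # expected: True
```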

Corollary 8

For a positive definite matrix \(R>0\) and any sequence of discrete-time variables \(x:[-h, 0]\cap Z\rightarrow R^{n}\), the following inequality holds:

$$\begin{aligned}& \sum_{i=-h+1}^{0}\sum _{k=i}^{0}\Delta x(k)^{T}R \Delta x(k) \\ & \quad \geq \frac{2(h+1)}{h}\Biggl[x(0)-\frac{1}{(h+1)}\sum _{i=-h}^{0}x(i)\Biggr]^{T}R\Biggl[x(0)- \frac{1}{(h+1)}\sum_{i=-h}^{0}x(i)\Biggr] \\ & \qquad {} +\frac{4(h+1)(h+2)}{h(h-1)}\Omega_{4}^{T}R \Omega_{4}, \end{aligned}$$
(40)

where \(\Omega_{4}=[x(0)+\frac{2}{(h+1)}\sum_{i=-h}^{0}x(i)-\frac {6}{(h+1)(h+2)}\sum_{i=-h}^{0}\sum_{k=i}^{0}x(k)]\).

Proof

Letting \(y(i)=\Delta x(i)=x(i)-x(i-1)\) in Corollary 7, we have

$$\begin{aligned}& \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)\Biggr)^{T}R\sum _{i=-h+1}^{0}\sum_{k=i}^{0}y(k) \\& \quad = \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \bigl[x(0)-x(i-1)\bigr]\Biggr)^{T}R\sum_{i=-h+1}^{0} \bigl[x(0)-x(i-1)\bigr] \\& \quad = \frac{2}{h(h+1)}\Biggl[hx(0)-\sum_{i=-h+1}^{0}x(i-1) \Biggr]^{T}R\Biggl[hx(0)-\sum_{i=-h+1}^{0}x(i-1) \Biggr] \\& \quad = \frac{2}{h(h+1)}\Biggl[(h+1)x(0)-\sum_{i=-h}^{0}x(i) \Biggr]^{T}R\Biggl[(h+1)x(0)-\sum_{i=-h}^{0}x(i) \Biggr] \\& \quad = \frac{2(h+1)}{h}\Biggl[x(0)-\frac{1}{(h+1)}\sum _{i=-h}^{0}x(i)\Biggr]^{T}R \Biggl[x(0)- \frac{1}{(h+1)}\sum_{i=-h}^{0}x(i)\Biggr] \end{aligned}$$
(41)

and

$$\begin{aligned} \Omega_{3} =&\sum_{i=-h+1}^{0} \sum_{k=i}^{0}y(k)-\frac {3}{h+2}\sum _{i=-h+1}^{0}\sum_{k=i}^{0} \sum_{m=k}^{0}y(m) \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3}{h+2}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\sum_{m=k}^{0} \bigl[x(m)-x(m-1)\bigr] \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3}{h+2}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\bigl[x(0)-x(k-1)\bigr] \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3}{h+2}\sum_{i=-h+1}^{0} \Biggl[(-i+1)x(0)-\sum_{k=i}^{0}x(k-1)\Biggr] \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3}{h+2}\frac{h(h+1)}{2}x(0) +\frac{3}{h+2}\sum _{i=-h+1}^{0}\sum_{k=i-1}^{-1}x(k) \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3}{h+2}\frac {h(h+1)}{2}x(0) \\ &{}+\frac{3}{h+2}\sum_{i=-h+1}^{0} \Biggl[-x(0)+x(i-1)+\sum_{k=i}^{0}x(k)\Biggr] \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3h(h+1)}{2(h+2)}x(0) \\ &{}-\frac{3h}{h+2}x(0)+\frac{3}{h+2}\sum_{i=-h+1}^{0}x(i-1)+ \frac {3}{h+2}\sum_{i=-h+1}^{0}\sum _{k=i}^{0}x(k) \\ =&(h+1)x(0)-\sum_{i=-h}^{0}x(i)- \frac{3h(h+1)}{2(h+2)}x(0) \\ &{}-\frac{3h}{h+2}x(0)-\frac{3}{h+2}x(0)+\frac{3}{h+2}\sum _{i=-h}^{0}\sum_{k=i}^{0}x(k). \end{aligned}$$
(42)

So

$$\begin{aligned} \Omega_{3} =&(h+1)x(0)-\sum _{i=-h}^{0}x(i)-\frac {3h(h+1)}{2(h+2)}x(0) \\ &{}- \frac{3(h+1)}{h+2}x(0) +\frac{3}{h+2}\sum_{i=-h}^{0} \sum_{k=i}^{0}x(k) \\ =&-\frac{(h+1)}{2}x(0)-\sum_{i=-h}^{0}x(i)+ \frac{3}{h+2}\sum_{i=-h}^{0}\sum _{k=i}^{0}x(k) \\ =&-\frac{(h+1)}{2}\Biggl[x(0)+\frac{2}{(h+1)}\sum _{i=-h}^{0}x(i) -\frac{6}{(h+1)(h+2)}\sum _{i=-h}^{0}\sum_{k=i}^{0}x(k) \Biggr]. \end{aligned}$$
(43)

Replacing \(y(i)\) by \(\Delta x(i)\) in Corollary 7 leads to

$$\begin{aligned}& \sum_{i=-h+1}^{0}\sum _{k=i}^{0}\Delta x(k)^{T}R \Delta x(k) \\& \quad \geq \frac{2}{h(h+1)}\Biggl(\sum_{i=-h+1}^{0} \sum_{k=i}^{0}\Delta x(k) \Biggr)^{T}R\sum_{i=-h+1}^{0}\sum _{k=i}^{0}\Delta x(k) \\& \qquad {} +\frac{16(h+2)}{h(h+1)(h-1)}\Omega_{3}^{T}R \Omega_{3} \\& \quad = \frac{2(h+1)}{h}\Biggl[x(0)-\frac{1}{(h+1)}\sum _{i=-h}^{0}x(i)\Biggr]^{T}R\Biggl[x(0)- \frac{1}{(h+1)}\sum_{i=-h}^{0}x(i)\Biggr] \\& \qquad {} +\frac{4(h+1)(h+2)}{h(h-1)}\Biggl[x(0)+\frac{2}{(h+1)}\sum _{i=-h}^{0}x(i) -\frac{6}{(h+1)(h+2)}\sum _{i=-h}^{0}\sum_{k=i}^{0}x(k) \Biggr]^{T}R \\& \qquad {}\times \Biggl[x(0)+\frac{2}{(h+1)}\sum_{i=-h}^{0}x(i)- \frac {6}{(h+1)(h+2)}\sum_{i=-h}^{0}\sum _{k=i}^{0}x(k)\Biggr] \\& \quad = \frac{2(h+1)}{h}\Biggl[x(0)-\frac{1}{(h+1)}\sum _{i=-h}^{0}x(i)\Biggr]^{T}R\Biggl[x(0)- \frac{1}{(h+1)}\sum_{i=-h}^{0}x(i)\Biggr] \\& \qquad {} +\frac{4(h+1)(h+2)}{h(h-1)}\Omega_{4}^{T}R \Omega_{4}, \end{aligned}$$
(44)

where \(\Omega_{4}=[x(0)+\frac{2}{(h+1)}\sum_{i=-h}^{0}x(i) -\frac{6}{(h+1)(h+2)}\sum_{i=-h}^{0}\sum_{k=i}^{0}x(k)]\).

This completes the proof of Corollary 8. □

Remark 4

The double Jensen inequality is often used to estimate an upper bound of \(-\sum_{i=-h+1}^{0}\sum_{k=i}^{0} y(k)^{T}Ry(k)\) in the difference of Lyapunov-Krasovskii functionals. In this paper, we have extended the double Jensen inequality. Some improved double summation inequalities are presented in Corollary 7 (or Corollary 8). Since these improved double summation inequalities contain the extra term \(\frac {16(h+2)}{(h-1)h(h+1)}\Omega_{3}^{T}R\Omega_{3}\), they provide a tighter bound for \(\sum_{i=-h+1}^{0}\sum_{k=i}^{0} y(k)^{T}Ry(k)\). Therefore, these improved double summation inequalities can be used to establish less conservative stability conditions for discrete-time systems with variable delays.
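The following sketch illustrates Remark 4 (the data are arbitrary, illustrative choices): it compares the double Jensen bound with the sharper bound of Corollary 8 for a random sequence x.

```python
# Illustrative comparison of the double Jensen bound and the Corollary 8 bound.
import numpy as np

rng = np.random.default_rng(5)
n, h = 2, 6
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)
x = rng.standard_normal((h + 1, n))              # rows are x(-h), ..., x(0)
dx = np.diff(x, axis=0)                          # Delta x(-h+1), ..., Delta x(0)

w = np.arange(1, h + 1)                          # multiplicities in the double sum
lhs = sum(wi * (d @ R @ d) for wi, d in zip(w, dx))

E0 = (w[:, None] * dx).sum(axis=0)               # = (h+1) x(0) - sum_{i=-h}^{0} x(i)
jensen2 = 2.0 / (h * (h + 1)) * E0 @ R @ E0      # double Jensen bound

double_x = sum(x[r:].sum(axis=0) for r in range(0, h + 1))   # sum_{i=-h}^{0} sum_{k=i}^{0} x(k)
omega4 = (x[-1] + 2.0 / (h + 1) * x.sum(axis=0)
          - 6.0 / ((h + 1) * (h + 2)) * double_x)
cor8 = jensen2 + 4.0 * (h + 1) * (h + 2) / (h * (h - 1)) * omega4 @ R @ omega4

print(jensen2 <= cor8 <= lhs + 1e-9)             # expected: True
```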

3 Application in stability analysis

In this section, we will consider the following linear discrete system with time-varying delay:

$$ \left \{ \textstyle\begin{array}{l} x(k+1) = Ax(k)+Bx(k-h(k)),\quad k\geq0, \\ x(k) = \varphi(k),\quad k\in[-h_{2},0], \end{array}\displaystyle \right . $$
(45)

where \(x(k)\in R^{n}\) is the state vector, φ is the initial condition, and A and B are \(n\times n\) constant matrices. The delay \(h(k)\) is assumed to be a positive integer-valued function satisfying \(h(k)\in[h_{1},h_{2}]\) for all \(k\geq0\), where \(h_{1}\) and \(h_{2}\) are integers with \(h_{2}\geq h_{1}>1\).

Based on the above summation inequalities, we will establish a new criterion on asymptotical stability for system (45).

First, the following notations are needed:

$$\begin{aligned}& h_{12}=h_{2}-h_{1}, \\& e_{i}=\bigl[\underbrace{0,0,\ldots, \overbrace{I}^{i}, \ldots, 0}_{8}\bigr]^{T}_{8n\times n},\quad i=1,2, \ldots,8, \\& y(k)=x(k)-x(k-1), \\& \xi(k)=\Biggl[x^{T}(k),x^{T}(k-h_{1}),x^{T} \bigl(k-h(k)\bigr),x^{T}(k-h_{2}), \\& \hphantom{\xi(k)={}}{}\frac{1}{h_{1}+1}\sum^{k}_{i=k-h_{1}}x^{T}(i), \frac{1}{h(k)-h_{1}+1}\sum^{k-h_{1}}_{i=k-h(k)}x^{T}(i), \\& \hphantom{\xi(k)={}}{}\frac{1}{h_{2}-h(k)+1}\sum^{k-h(k)}_{i=k-h_{2}}x^{T}(i), \sum^{0}_{i=-h_{1}+1}\sum ^{k}_{j=k+i}x^{T}(j)\Biggr]^{T}, \\& \alpha(k)=\Biggl[x^{T}(k),\sum^{k-1}_{i=k-h_{1}}x^{T}(i), \sum^{k-h_{1}-1}_{i=k-h_{2}}x^{T}(i),\sum ^{0}_{i=-h_{1}+1}\sum^{k}_{j=k+i}x^{T}(j) \Biggr]^{T}, \\& Z_{10}=\operatorname{diag}\biggl\{ Z_{1}, \frac{3(h_{1}+1)}{h_{1}-1}Z_{1},\frac {5(h_{1}+1)(h_{1}+2)}{(h_{1}-2)(h_{1}-1)}Z_{1}\biggr\} , \\& Z_{2}^{*}=\left (\textstyle\begin{array}{@{}c@{\quad}c@{}} Z_{2}& 0\\ 0 &3Z_{2} \end{array}\displaystyle \right ),\qquad Z_{20}=\left (\textstyle\begin{array}{@{}c@{\quad}c@{}} Z_{2}^{*} &X\\ \ast& Z_{2}^{*} \end{array}\displaystyle \right ), \\& \Pi_{0}=[A,0,B,0,0,0,0,0]^{T}, \\& \Pi_{1}=\bigl[\Pi_{0},(h_{1}+1)e_{5}-e_{2}, \bigl(h(k)-h_{1}+1\bigr)e_{6}+\bigl(h_{2}-h(k)+1 \bigr)e_{7}-e_{3}-e_{4}, \\& \hphantom{\Pi_{1}={}}{}e_{8}+h_{1}\Pi_{0}-(h_{1}+1)e_{5}+e_{2} \bigr], \\& \Pi_{2}=\bigl[e_{1},(h_{1}+1)e_{5}-e_{1}, \bigl(h(k)-h_{1}+1\bigr)e_{6}+\bigl(h_{2}-h(k)+1 \bigr)e_{7}-e_{3}-e_{2},e_{8}\bigr], \\& \Pi_{3}=[A-I,0,B,0,0,0,0,0]^{T}, \\& \Pi_{4}=\biggl[e_{1}-e_{2},e_{1}+e_{2}-2e_{5},e_{1}-e_{2}+ \frac{6h_{1}}{h_{1}+2}e_{5}-\frac {12}{(h_{1}+1)(h_{1}+2)}e_{8}\biggr], \\& \Pi_{5}=[e_{3}-e_{4},e_{3}+e_{4}-2e_{7},e_{2}-e_{3},e_{2}+e_{3}-2e_{6}], \\& \Pi_{6}=e_{1}-e_{5}, \\& \Pi_{7}=e_{1}+\biggl(2-\frac{6}{(h_{1}+2)} \biggr)e_{5}-\frac{6}{(h_{1}+1)(h_{1}+2)}e_{8}, \\& \Xi_{1}=\Pi_{1}P\Pi_{1}^{T}- \Pi_{2}P\Pi_{2}^{T}, \\& \Xi_{2}=e_{1}Q_{1}e_{1}^{T}-e_{2}Q_{1}e_{2}^{T}+e_{2}Q_{2}e_{2}^{T}-e_{4}Q_{2}e_{4}^{T}, \\& \Xi_{3}=\Pi_{3}\bigl(h_{1}^{2}Z_{1}+h_{12}^{2}Z_{2} \bigr)\Pi_{3}^{T}-\Pi_{4}Z_{10} \Pi_{4}^{T}-\Pi _{5}Z_{20} \Pi_{5}^{T}, \\& \Xi_{4}=\frac{h_{1}(h_{1}+1)}{2}\Pi_{3}Z_{3} \Pi_{3}^{T}-\frac{2(h_{1}+1)}{h_{1}}\Pi _{6}Z_{3} \Pi_{6}^{T}- \frac{4(h_{1}^{2}-1)}{h_{1}(h_{1}+2)}\Pi_{7}Z_{3} \Pi_{7}^{T}, \\& \Xi=\sum_{i=1}^{4}\Xi_{i}. \end{aligned}$$
(46)
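For readers implementing Theorem 5, the block selector matrices \(e_{i}\) in (46) can be formed as in the short sketch below. This is an assumed implementation detail, not part of the paper, and the function name selector is our own.

```python
# Sketch of the block selector matrices e_i of (46): e_i is the 8n x n matrix
# whose i-th n x n block is the identity.
import numpy as np

def selector(i, n, blocks=8):
    """Return e_i as a (blocks*n) x n matrix, 1-indexed as in the paper."""
    e = np.zeros((blocks * n, n))
    e[(i - 1) * n:i * n, :] = np.eye(n)
    return e

n = 2
e = [selector(i, n) for i in range(1, 9)]
# e.g. Pi_6 = e1 - e5, so Pi_6^T xi(k) = x(k) - (1/(h1+1)) * sum_{i=k-h1}^{k} x(i)
print((e[0] - e[4]).shape)    # (16, 2)
```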

Theorem 5

For given integers \(h_{1}\), \(h_{2}\) satisfying \(1< h_{1}\leq h_{2}\), system (45) is asymptotically stable for \(h_{1}\leq h(k)\leq h_{2}\) if there exist positive definite matrices \(P\in R^{4n\times4n}\), \(Z_{1}\in R^{n\times n}\), \(Z_{2}\in R^{n\times n}\), \(Z_{3}\in R^{n\times n}\), \(Q_{1}\in R^{n\times n}\), \(Q_{2}\in R^{n\times n}\), and any matrix \(X\in R^{2n\times2n}\) such that the following LMIs are satisfied:

$$ \Xi< 0,\qquad Z_{20}\geq0. $$
(47)

Proof

Choose a Lyapunov functional candidate as follows:

$$ V(k)=\sum^{4}_{j=1}V_{j}(k), $$
(48)

where

$$ \begin{aligned} &V_{1}(k)=\alpha^{T}(k)P \alpha(k), \\ &V_{2}(k)=\sum^{k-1}_{i=k-h_{1}}x^{T}(i)Q_{1}x(i) +\sum^{k-h_{1}-1}_{i=k-h_{2}}x^{T}(i)Q_{2}x(i), \\ &V_{3}(k)=h_{1}\sum^{0}_{i=-h_{1}+1} \sum^{k}_{j=k+i}y^{T}(j)Z_{1}y(j) +h_{12}\sum^{-h_{1}}_{i=-h_{2}+1}\sum ^{k}_{j=k+i}y^{T}(j)Z_{2}y(j), \\ &V_{4}(k)=\sum^{0}_{i=-h_{1}+1}\sum ^{0}_{j=i} \sum^{k}_{u=k+j}y^{T}(u)Z_{3}y(u). \end{aligned} $$
(49)

Next, we calculate the difference of \(V(k)\). For \(V_{1}(k)\) and \(V_{2}(k)\), we have

$$ \Delta V_{1}(k)=\xi^{T}(k)\Xi_{1} \xi(k) $$
(50)

and

$$ \Delta V_{2}(k)=\xi^{T}(k)\Xi_{2} \xi(k). $$
(51)

Calculating \(\Delta V_{3}(k)\) gives

$$\begin{aligned} \Delta V_{3}(k) =&h_{1}^{2}y_{k+1}^{T}Z_{1}y_{k+1}+h_{12}^{2}y_{k+1}^{T}Z_{2}y_{k+1} \\ &{}-h_{1}\sum^{k}_{i=k-h_{1}+1}y^{T}(i)Z_{1}y(i) -h_{12}\sum^{k-h_{1}}_{i=k-h_{2}+1}y^{T}(i)Z_{2}y(i). \end{aligned}$$
(52)

By Corollary 6, we get

$$ -h_{1}\sum^{k}_{i=k-h_{1}+1}y^{T}(i)Z_{1}y(i) \leq-\xi^{T}(k)\Pi_{4}Z_{10}\Pi _{4}^{T}\xi(k). $$
(53)

Under the condition \(Z_{20}\geq0\), by Corollary 6 and the reciprocally convex combination inequality, we get

$$ -h_{12}\sum^{k-h_{1}}_{i=k-h_{2}+1}y^{T}(i)Z_{2}y(i) \leq-\xi^{T}(k)\Pi _{5}Z_{20}\Pi_{5}^{T} \xi(k). $$
(54)

Then we have

$$ \Delta V_{3}(k)\leq\xi^{T}(k) \Xi_{3}\xi(k). $$
(55)

Calculating \(\Delta V_{4}(k)\) gives

$$ \Delta V_{4}(k) = \frac{h_{1}(h_{1}+1)}{2}y_{k+1}^{T}Z_{3}y_{k+1} -\sum^{0}_{i=-h_{1}+1}\sum ^{k}_{j=k+i}y^{T}(j)Z_{3}y(j). $$
(56)

By Corollary 8, we have

$$\begin{aligned}& -\sum^{0}_{i=-h_{1}+1}\sum ^{k}_{j=k+i}y^{T}(j)Z_{3}y(j) \\& \quad \leq \xi^{T}(k) \biggl(-\frac{2(h_{1}+1)}{h_{1}}\Pi_{6}Z_{3} \Pi_{6}^{T} -\frac{4(h_{1}+1)(h_{1}+2)}{h_{1}(h_{1}-1)}\Pi_{7}Z_{3} \Pi_{7}^{T}\biggr)\xi(k). \end{aligned}$$
(57)

Then we have

$$ \Delta V_{4}(k)\leq\xi^{T}(k) \Xi_{4}\xi(k). $$
(58)

Hence

$$ \Delta V(k)\leq\xi^{T}(k)\sum _{i=1}^{4}\Xi_{i} \xi(k)= \xi^{T}(k)\Xi\xi(k). $$
(59)

If \(\Xi<0\), then \(\Delta V(k)<0\) for all \(\xi(k)\neq0\), which implies that system (45) is asymptotically stable.

This completes the proof of Theorem 5. □

Remark 5

Theorem 5 gives a sufficient condition for the asymptotical stability of the discrete-time system (45) with variable delay. The free-weighting matrix method was developed and applied to the stability analysis of systems with time-varying delays [18]. However, the computational burden increases because of the introduction of free-weighting matrices. Different from the free-weighting matrix method, some new sharper summation inequalities are developed here via auxiliary functions. By employing these improved inequalities and the reciprocally convex combination inequality, a less conservative result is derived. The conditions in Theorem 5 are described in terms of two matrix inequalities, which can be checked by using the linear matrix inequality algorithms proposed in [28].

4 Numerical example

In this section, to demonstrate the effectiveness of our proposed method, we consider the following example, which is widely used in the delay-dependent stability analysis of discrete-time systems with time delay.

Example 1

Consider the discrete-time system

$$x(k+1)=\left (\textstyle\begin{array}{@{}c@{\quad}c@{}} 0.8&0 \\ 0.05&0.9 \end{array}\displaystyle \right )x(k) + \left (\textstyle\begin{array}{@{}c@{\quad}c@{}} -0.1&0 \\ -0.2&-0.1 \end{array}\displaystyle \right )x\bigl(k-h(k)\bigr). $$

Since the system addressed in [24] is a discrete-time system with constant delay, the stability criterion obtained there cannot be applied to this system. For different \(h_{1}\), the maximum allowable upper bounds of \(h(k)\) guaranteeing that this system is asymptotically stable are given in Table 1 [18–23, 25]. From Table 1, Theorem 5 in our paper provides a larger feasible region than those of [18–21]. For the same \(h_{1}\), the maximum allowable upper bound of \(h(k)\) obtained in this paper is the same as that in [25]. Although more decision variables are needed in our stability criterion, the new summation inequality in Corollary 6 is sharper than that in [25].

Table 1 Maximum bound \(h_{2}\) for different \(h_{1}\) (Example 1)
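As an additional illustration (not part of Table 1), the following simulation sketch propagates the system of Example 1 with a randomly varying delay. The delay interval \([h_{1}, h_{2}]=[3, 9]\) and the simulation horizon are illustrative choices assumed to lie within the admissible range; they are not the maximum bounds reported in Table 1.

```python
# Simulation sketch for Example 1 with a time-varying delay h(k) in [h1, h2].
import numpy as np

A = np.array([[0.8, 0.0], [0.05, 0.9]])
B = np.array([[-0.1, 0.0], [-0.2, -0.1]])
h1, h2 = 3, 9                                    # assumed admissible delay interval

rng = np.random.default_rng(6)
N = 400
x = np.zeros((N + 1, 2))
x[:h2 + 1] = rng.standard_normal((h2 + 1, 2))    # initial condition phi on [-h2, 0]

for k in range(h2, N):                           # array index k corresponds to time k - h2
    hk = rng.integers(h1, h2 + 1)                # h(k) in [h1, h2]
    x[k + 1] = A @ x[k] + B @ x[k - hk]

print(np.linalg.norm(x[-1]))                     # close to 0 for a stable system
```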

5 Conclusions

In this paper, by constructing appropriate auxiliary functions, some new summation inequalities are established. As an application of these summation inequalities, the asymptotic stability analysis of discrete linear systems with time delay is carried out. Finally, a numerical example is provided to illustrate the usefulness of the theoretical results.

References

  1. Song, CW, Gao, HJ, Zheng, WX: A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay. Neurocomputing 72, 2563-2568 (2009)


  2. Wang, Q, Dai, BX: Almost periodic solution for n-species Lotka-Volterra competitive system with delay and feedback controls. Appl. Math. Comput. 200, 133-146 (2008)


  3. Wang, Q, Dai, BX: Existence of positive periodic solutions for neutral population model with delays. Int. J. Biomath. 1, 107-120 (2012)


  4. Liu, XG, Wu, M, Martin, R, Tang, ML: Stability analysis for neutral systems with mixed delays. J. Comput. Appl. Math. 202, 478-497 (2007)


  5. Liu, XG, Martin, R, Wu, M, Tang, ML: Global exponential stability of bidirectional associative memory neural network with time delays. IEEE Trans. Neural Netw. 19, 397-407 (2008)


  6. Guo, S, Tan, XH, Huang, L: Stability and bifurcation in a discrete system of two neurons with delays. Nonlinear Anal., Real World Appl. 9, 1323-1335 (2008)


  7. Chen, P, Tang, XH: Existence and multiplicity of solutions for second-order impulsive differential equations with Dirichlet problems. Appl. Math. Comput. 218, 11775-11789 (2012)


8. Zang, YC, Li, JP: Stability in distribution of neutral stochastic partial differential delay equations driven by α-stable process. Adv. Differ. Equ. 2014, 13 (2014)


  9. Tang, ML, Liu, XG: Positive periodic solution of higher order functional difference equation. Adv. Differ. Equ. 2011, 56 (2011)


  10. Qiu, SB, Liu, XG, Shu, YJ: New approach to state estimator for discrete-time BAM neural networks with time-varying delay. Adv. Differ. Equ. 2015, 189 (2015)


  11. Zhang, XM, Han, QL: Global asymptotic stability for a class of generalized neural networks with interval time-varying delays. IEEE Trans. Neural Netw. 22, 1180-1192 (2011)


  12. Shu, YJ, Liu, XG, Liu, YJ: Stability and passivity analysis for uncertain discrete-time neural networks with time-varying delay. Neurocomputing 173, 1706-1714 (2016)


  13. Zeng, HB, Park, JH, Zhang, CF, Wang, W: Stability and dissipativity analysis of static neural networks with interval time-varying delay. J. Franklin Inst. 352, 1284-1295 (2015)


  14. Ratchagit, K, Phat, VN: Stability criterion for discrete-time systems. J. Inequal. Appl. 2010, Article ID 201459 (2010)


  15. Phat, VN, Ratchagit, K: Stability and stabilization of switched linear discrete-time systems with interval time-varying delay. Nonlinear Anal. Hybrid Syst. 5, 605-612 (2011)


  16. Liu, XG, Tang, ML, Martin, RR, Liu, XB: Discrete-time BAM neural networks with variable delays. Phys. Lett. A 367, 322-330 (2007)


  17. Park, P, Ko, JW, Jeong, C: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47, 235-238 (2011)


  18. Fridman, E, Shaked, U: Stability and guaranteed cost control of uncertain discrete delay systems. Int. J. Control 78, 235-246 (2005)


  19. Zhang, B, Xu, S, Zou, Y: Improved stability criterion and its applications in delayed controller design for discrete-time systems. Automatica 44, 2963-2967 (2008)


  20. He, Y, Wu, M, Liu, GP, She, JH: Output feedback stabilization for a discrete-time system with a time-varying delay. IEEE Trans. Autom. Control 53, 2372-2377 (2008)


  21. Shao, H, Han, QL: New stability criterion for linear discrete-time systems with interval-like time-varying delays. IEEE Trans. Autom. Control 56, 619-625 (2011)


  22. Kao, CY: On stability of discrete-time LTI systems with varying time delays. IEEE Trans. Autom. Control 57, 1243-1248 (2012)


  23. Kwon, OM, Park, MJ, Park, JH, Lee, SM, Cha, EJ: Stability and stabilization for discrete-time systems with time-varying delays via augmented Lyapunov-Krasovskii functional. J. Franklin Inst. 350, 521-540 (2013)


  24. Zhang, XM, Han, QL: Abel lemma-based finite-sum inequality and its application to stability analysis for linear discrete time-delay systems. Automatica 57, 199-202 (2015)


25. Seuret, A, Gouaisbaut, F, Fridman, E: Stability of discrete-time systems with time-varying delay via a novel summation inequality. IEEE Trans. Autom. Control 60, 2740-2745 (2015)


  26. Seuret, A, Gouaisbaut, F: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860-2866 (2013)


  27. Park, P, Lee, WI, Lee, SY: Auxiliary function-based integral inequalities for quadratic functions and their applications to time-delay systems. J. Franklin Inst. 352, 1378-1396 (2015)


28. Boyd, S, El Ghaoui, L, Feron, E, Balakrishnan, V: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994)



Acknowledgements

This work is partly supported by NSFC under grant nos. 61271355 and 61375063 and the ZNDXYJSJGXM under grant no. 2015JGB21.

Author information

Corresponding author

Correspondence to Xin-Ge Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wang, FX., Liu, XG., Tang, ML. et al. Stability analysis of discrete-time systems with variable delays via some new summation inequalities. Adv Differ Equ 2016, 95 (2016). https://doi.org/10.1186/s13662-016-0829-z
