Open Access

LMI-based robust exponential stability criterion of impulsive integro-differential equations with uncertain parameters via contraction mapping theory

Advances in Difference Equations 2017, 2017:19

https://doi.org/10.1186/s13662-016-1059-0

Received: 20 August 2016

Accepted: 6 December 2016

Published: 18 January 2017

Abstract

By formulating a new contraction mapping on a product space, the authors employ the Banach fixed point theorem, for the first time, to derive an LMI-based robust exponential stability criterion for impulsive BAM neural networks with distributed delays and uncertain parameters. A numerical example shows that the new criterion is no more conservative than existing results derived by the Lyapunov functional method. Hence both the method and the results of this paper are novel to a certain extent.

Keywords

contraction mapping theory; integro-differential equations; impulsive BAM neural networks; robust exponential stability; parameter uncertainty

1 Introduction

In [1], Kosko originally proposed the following time-delay differential equations:
$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx_{i}(t)}{dt} = - a_{i}x_{i}(t)+\sum_{j=1}^{n}c_{ij}f_{j}(y_{j}(t-\tau_{j}))+I_{i},\\ \frac{dy_{i}(t)}{dt} = - b_{i}y_{i}(t)+\sum_{j=1}^{n}d_{ij}g_{j}(x_{j}(t-\rho_{j}))+J_{i}, \end{array}\displaystyle \quad i=1,2,\ldots,n, t>0, \right . $$
(1.1)
which belong to the class of so-called bidirectional associative memory (BAM) neural networks. Here, the parameters \(a_{i}>0, b_{i}>0\) denote the time scales of the respective layers of the network. The first term on each right side of system (1.1) corresponds to a stabilizing negative feedback of the system which acts instantaneously without time delay; these terms are known as forgetting or leakage terms [2, 3]. Since then, various generalized BAM neural networks have become a hot research topic owing to their potential applications in associative memory, parallel computation, artificial intelligence, and signal and image processing [4, 5]. However, such applications often depend on the stability of the system, and robust stability plays a particularly important role. In recent decades, stability analyses of neural networks have attracted the attention of many researchers (see, e.g., [6-30]). The Lyapunov function method has usually been employed in the existing literature to obtain stability criteria, but any method has its limitations. As one of the alternative methods, fixed point theorems have played a positive role in the stability analysis of BAM neural networks (see, e.g., [31, 32]), where some stability criteria for BAM neural networks without impulses were derived; the impulsive BAM model, however, has rarely been investigated by fixed point methods. Besides, parameter errors are unavoidable in real systems owing to the aging of electronic components, external disturbances, and parameter perturbations, so it is very important to ensure that the system remains stable under such uncertainties. Hence, in this paper, the contraction mapping principle and the linear matrix inequality (LMI) technique are applied to derive an LMI-based robust exponential stability criterion for an impulsive BAM neural network model with distributed delays and uncertain parameters. Finally, a numerical example and two comparison tables are presented to show the novelty and effectiveness of the derived result.
For the sake of convenience, we introduce the following standard notations.
  • \(L=(l_{ij})_{n\times n}>0\) (<0): a positive (negative) definite matrix, i.e., \(y^{T}Ly>0\) (<0) for any \(0\neq y\in R^{n}\).

  • \(L=(l_{ij})_{n\times n}\geqslant0\) (\(\leqslant0\)): a positive (negative) semi-definite matrix, i.e., \(y^{T}Ly\geqslant 0\) (\(\leqslant0\)) for any \(y\in R^{n}\).

  • \(L\in[-L^{*},L^{*}]\) implies that \(|l_{ij}|\leqslant l_{ij}^{*}\) for all \(i,j\) with \(L=(l_{ij})_{n\times n}\) and \(L^{*}=(l_{ij}^{*})_{n\times n}\).

  • \(L_{1}\geqslant L_{2}\) (\(L_{1}\leqslant L_{2}\)): the matrix \((L_{1}-L_{2})\) is positive (negative) semi-definite.

  • \(L_{1}> L_{2}\) (\(L_{1}< L_{2}\)): this means matrix \((L_{1}-L_{2})\) is a positive (negative) definite matrix.

  • \(\lambda_{\max}(\Phi)\) and \(\lambda_{\min}(\Phi)\) denote the largest and smallest eigenvalues of matrix Φ, respectively.

  • Denote \(|L|=(|l_{ij}|)_{n\times n}\) for any matrix \(L=(l_{ij})_{n\times n}\).

  • \(|u |=(|u_{1} |,|u_{2} |,\ldots,|u_{n} |)^{T}\) for any vector \(u =(u_{1} ,u_{2} ,\ldots,u_{n} )^{T}\in R^{n}\).

  • \(u\leqslant(\geqslant) v\) means that \(u_{i}\leqslant(\geqslant) v_{i}\) for all i, and \(u< (>) v\) means that \(u_{i}< (>) v_{i}\) for all i, for any vectors \(u =(u_{1} ,u_{2} ,\ldots,u_{n} )^{T}\in R^{n}\) and \(v =(v_{1} ,v_{2} ,\ldots,v_{n} )^{T}\in R^{n}\).

  • I: the identity matrix with compatible dimension.

  • Denote vector \(\mu=(1,1,\ldots,1)^{T}\in R^{n}\).
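
The entrywise notation above is used throughout the paper, for instance in (2.3) and in the LMI conditions of Theorem 3.2. As a quick illustration, the following minimal NumPy sketch (with hypothetical 2×2 matrices; the paper itself works in MATLAB) checks the membership \(L\in[-L^{*},L^{*}]\):

```python
import numpy as np

# Hypothetical matrices for illustration only.
L = np.array([[-0.1, 0.01],
              [0.0, 0.2]])
L_star = np.array([[0.3, 0.05],
                   [0.1, 0.25]])

abs_L = np.abs(L)                            # |L| = (|l_ij|), entrywise
in_interval = bool(np.all(abs_L <= L_star))  # L in [-L*, L*] iff |l_ij| <= l*_ij
print(abs_L, in_interval)
```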

Remark 1

The main purpose of this paper is to obtain LMI-based robust stability criteria for BAM neural networks with uncertain parameters by means of the Banach fixed point theorem. To overcome the attendant mathematical difficulties, it is necessary to formulate a novel contraction mapping. The main task is therefore to construct a new contraction mapping on a suitable space and to prove that the fixed point of this mapping is the robustly stable solution of the BAM neural network.

2 Preliminaries

The following integro-differential equations have their physical background in BAM neural networks (see, e.g., [33, 34]):
$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt} = -(A+\Delta A(t))x(t)+(C+\Delta C(t))f(y(t))\\ \hphantom{\frac{dx(t)}{dt} =}{}+(M+\Delta M(t))\int_{t-\tau(t)}^{t}f(y(s))\,ds, \quad t\geqslant0, t\neq t_{k},\\ \frac{dy(t)}{dt} = -(B+\Delta B(t))y(t)+(D+\Delta D(t))g(x(t))\\ \hphantom{\frac{dy(t)}{dt} =}{}+(W+\Delta W(t))\int_{t-\rho(t)}^{t}g(x(s))\,ds,\quad t\geqslant0, t\neq t_{k}, \\ x(t_{k}^{+})-x(t_{k}^{-}) =\phi(x(t_{k})), \qquad y(t_{k}^{+})-y(t_{k}^{-}) =\varphi (y(t_{k})), \quad k=1,2,\ldots, \end{array}\displaystyle \right . $$
(2.1)
with the initial condition
$$ x(s)=\xi(s),\qquad y(s)=\eta(s),\quad s\in[-\tau,0], $$
(2.2)
where \(x=(x_{1},x_{2},\ldots,x_{n}), y=(y_{1},y_{2},\ldots,y_{n})\in R^{n}\) with \(x_{i}(t),y_{j}(t)\) being the state variables of the ith neuron and the jth neuron at time t, respectively. Also \(f(x) = (f_{1}(x_{1}(t)), f_{2}(x_{2}(t)), \ldots, f_{n}(x_{n}(t)))^{T}\) and \(g(x) = (g_{1}(x_{1}(t)), g_{2}(x_{2}(t)), \ldots, g_{n}(x_{n}(t)))^{T} \in R^{n}\) are the neuron activation functions. Both \(A=\operatorname{diag}(a_{1},a_{2},\ldots,a_{n})\) and \(B=\operatorname{diag}(b_{1},b_{2},\ldots,b_{n})\) are \((n\times n)\)-dimensional positive definite matrices, with \(a_{i}\) and \(b_{j}\) denoting the rates at which the ith neuron and the jth neuron reset their potential to the resting state in isolation when disconnected from the network and the external inputs, respectively. C and D denote the \((n\times n)\) connection weight matrices, and M and W are the \((n\times n)\) distributed-delay connection weight matrices. The parameter uncertainties considered here are norm bounded and of the following form:
$$ \begin{aligned} &\Delta A(t) \in\bigl[-A^{*}, A^{*}\bigr], \qquad \Delta B(t) \in\bigl[-B^{*}, B^{*}\bigr],\qquad \Delta C(t) \in\bigl[-C^{*}, C^{*}\bigr], \\ &\Delta D(t) \in\bigl[-D^{*}, D^{*}\bigr], \qquad \Delta M(t) \in\bigl[-M^{*}, M^{*}\bigr], \qquad \Delta W(t) \in\bigl[-W^{*}, W^{*}\bigr], \end{aligned} $$
(2.3)
where \(A^{*},B^{*},C^{*},D^{*},M^{*},W^{*}\) are all nonnegative matrices.
Assume, in addition, that the distributed delays satisfy \(\tau(t),\rho(t)\in[0,\tau]\). The fixed impulsive moments \(t_{k}\) (\(k=1,2,\ldots\)) satisfy \(0< t_{1}< t_{2}<\cdots\) with \(\lim_{k\to+\infty}t_{k}=+\infty\), and \(x(t_{k}^{+})\) and \(x(t_{k}^{-})\) stand for the right-hand and left-hand limits of \(x(t)\) at time \(t_{k}\), respectively. Further, suppose that
$$ x\bigl(t_{k}^{-}\bigr)=\lim_{t\to t_{k}^{-}}x(t)=x(t_{k}),\qquad y\bigl(t_{k}^{-}\bigr)= \lim_{t\to t_{k}^{-}}y(t)=y(t_{k}), \quad k=1,2,\ldots. $$
(2.4)
Throughout this paper, we assume that \(f(0)=g(0)=\phi(0)=\varphi (0)=0\in R^{n}\), and \(F=\operatorname{diag}(F_{1},F_{2},\ldots,F_{n})\), \(G=\operatorname{diag}(G_{1},G_{2},\ldots,G_{n})\), \(H=\operatorname{diag}(H_{1},H_{2},\ldots,H_{n})\), and \(\mathcal{H}=\operatorname{diag}(\mathcal{H}_{1}, \mathcal{H}_{2},\ldots,\mathcal{H}_{n})\) are positive definite diagonal matrices, satisfying
(H1) \(|f(x)-f(y)|\leqslant F|x-y|\) for all \(x,y\in R^{n}\);

(H2) \(|g(x)-g(y)|\leqslant G|x-y|\) for all \(x,y\in R^{n}\);

(H3) \(|\phi(x)-\phi(y)|\leqslant H|x-y|\) for all \(x,y\in R^{n}\);

(H4) \(|\varphi(x)-\varphi(y)|\leqslant\mathcal{H}|x-y|\) for all \(x,y\in R^{n}\);

(H5) there exist nonnegative matrices \(A^{*},B^{*},C^{*},D^{*},M^{*},W^{*}\) satisfying (2.3).

Definition 2.1

System (2.1) with initial condition (2.2) is said to be globally exponentially robustly stable for all admissible uncertainties if for any initial condition \(\bigl( {\scriptsize\begin{matrix}{} \xi(s)\cr \eta(s) \end{matrix}} \bigr)\in C([-\tau,0],R^{2n})\) there exist positive constants a and b such that
$$\left\| \begin{pmatrix} x(t;s,\xi,\eta)\\ y(t;s,\xi,\eta) \end{pmatrix} \right\|\leqslant be^{-at},\quad \mbox{for all } t>0, $$
for all admissible uncertainties in (2.3), where the norm \(\bigl\| \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr) \bigr\|= (\sum_{i=1}^{n}|x_{i}(t)|^{2}+ \sum_{i=1}^{n}|y_{i}(t)|^{2} )^{\frac{1}{2}}\), and \(x=(x_{1},\ldots,x_{n}), y=(y_{1},\ldots,y_{n})\in R^{n}\).

Lemma 2.2

Contraction mapping theorem [35]

Let P be a contraction operator on a complete metric space Θ. Then there exists a unique point \(\theta\in\Theta\) for which \(P(\theta)=\theta\).
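
To make the lemma concrete, here is a minimal sketch of Picard iteration for a toy contraction on R (the map \(P(\theta)=\frac{1}{2}\cos\theta\) has Lipschitz constant 1/2; it is a hypothetical example, not the operator constructed in Section 3):

```python
import math

# Toy contraction on the complete metric space (R, |.|):
# |P(s) - P(t)| <= 0.5 |s - t|, so Lemma 2.2 guarantees a unique fixed point.
P = lambda theta: 0.5 * math.cos(theta)

theta = 0.0
for _ in range(60):          # Picard iteration: theta_{n+1} = P(theta_n)
    theta = P(theta)

print(theta, abs(P(theta) - theta))  # approximate fixed point, tiny residual
```

The same iteration scheme, applied to the operator P constructed in Section 3, underlies the existence part of Theorem 3.2.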

3 Global exponential robust stability via contraction mapping

For convenience, we may denote
$$C_{t}=C+\Delta C(t), \qquad D_{t}=D+\Delta D(t),\qquad M_{t}=M+\Delta M(t),\qquad W_{t}=W+\Delta W(t). $$

Before giving the LMI-based robust stability criterion, we first present the following fact.

Lemma 3.1

The impulsive system (2.1) with initial condition (2.2) is equivalent to the following integral equations with initial condition (2.2):
$$\begin{aligned} {{\begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} e^{-At} \{\xi(0)+\int_{0}^{t} e^{As} [-\Delta A(s)x(s)+C_{s}f(y(s))+M_{s}\int_{s-\tau (s)}^{s}f(y(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \}\\ e^{-Bt} \{\eta(0)+\int_{0}^{t} e^{Bs} [-\Delta B(s)y(s)+D_{s}g(x(s))+W_{s}\int_{s-\rho (s)}^{s}g(x(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \} \end{pmatrix},}} \end{aligned}$$
(3.1)
for all \(t\geqslant0\), and \(x(s)=\xi(s), y(s)=\eta(s), s\in[-\tau,0]\).

Proof

Indeed, we only need to prove that each solution of system (3.1) with initial condition (2.2) is a solution of the impulsive system (2.1) with initial condition (2.2), and the converse is also true.

On the one hand, suppose that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of (3.1) with initial condition (2.2). Then we have
$$\begin{aligned} e^{At}x(t)={}&\xi(0)+ \int_{0}^{t} e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau (s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+\sum _{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}). \end{aligned}$$
For \(t\geqslant0\), \(t\neq t_{k}\), taking the derivative of both sides of the above equation results in
$$e^{At}\frac{dx(t)}{dt}+Ae^{At}x(t)=\frac{d}{dt} \bigl(e^{At}x(t) \bigr) =e^{At} \biggl[-\Delta A(t)x(t)+C_{t}f\bigl(y(t)\bigr)+M_{t} \int_{t-\tau (t)}^{t}f\bigl(y(r)\bigr)\,dr \biggr], $$
or
$$\frac{dx(t)}{dt}+Ax(t) =-\Delta A(t)x(t)+C_{t}f\bigl(y(t) \bigr)+M_{t} \int_{t-\tau(t)}^{t}f\bigl(y(r)\bigr)\,dr, $$
which is the first equation of system (2.1). Similarly, we can also derive the second equation of (2.1).
Moreover, as \(t\to t_{j}^{-}\), we can get by (3.1)
$$x\bigl(t_{j}^{-}\bigr)=\lim_{\varepsilon\to0^{+}}x(t_{j}- \varepsilon)=x(t_{j}),\qquad y\bigl(t_{j}^{-}\bigr)=\lim _{\varepsilon\to0^{+}}y(t_{j}-\varepsilon)=y(t_{j}),\quad j=1,2,\ldots, $$
and
$$\begin{aligned}& x\bigl(t_{j}^{+}\bigr)=\lim_{\varepsilon\to0^{+}}x(t_{j}+ \varepsilon)=x(t_{j})+\phi \bigl(x(t_{j})\bigr),\\& y \bigl(t_{j}^{+}\bigr)=\lim_{\varepsilon\to0^{+}}y(t_{j}+ \varepsilon)=y(t_{j})+\varphi \bigl(y(t_{j})\bigr),\quad j=1,2, \ldots. \end{aligned}$$
Hence, we have proved that each solution of (3.1) with initial condition (2.2) is a solution of (2.1) with initial condition (2.2).
On the other hand, suppose that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of (2.1) with initial condition (2.2). Then multiplying both sides of the first equation of system (2.1) by \(e^{At}\) results in
$$e^{At}\frac{dx(t)}{dt}+Ae^{At}x(t) =e^{At} \biggl[-\Delta A(t)x(t)+C_{t}f\bigl(y(t)\bigr)+M_{t} \int_{t-\tau (t)}^{t}f\bigl(y(s)\bigr)\,ds \biggr],\quad t \geqslant0, t\neq t_{k}. $$
Moreover, integrating from \(t_{k-1}+\varepsilon\) to \(t\in(t_{k-1},t_{k})\) gives
$$\begin{aligned} e^{At}x(t)={}&e^{A(t_{k-1}+\varepsilon)}x(t_{k-1}+\varepsilon)\\ &{}+ \int _{t_{k-1}+\varepsilon}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int _{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \end{aligned}$$
which yields, after letting \(\varepsilon\to0^{+}\),
$$\begin{aligned} &e^{At}x(t)=e^{A(t_{k-1})}x\bigl(t_{k-1}^{+}\bigr)+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \\ &\quad t \in(t_{k-1},t_{k}). \end{aligned}$$
(3.2)
Throughout this paper, we assume that ε is a sufficiently small positive number. Now, taking \(t=t_{k}-\varepsilon\) in (3.2), one obtains
$$\begin{aligned} e^{A(t_{k}-\varepsilon)}x(t_{k}-\varepsilon)={}&e^{At_{k-1}}x \bigl(t_{k-1}^{+}\bigr)\\ &{}+ \int _{t_{k-1}}^{t_{k}-\varepsilon}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds, \end{aligned}$$
which yields by (2.4) and letting \(\varepsilon\to0^{+}\)
$$e^{At_{k}}x(t_{k})=e^{At_{k-1}}x\bigl(t_{k-1}^{+} \bigr)+ \int_{t_{k-1}}^{t_{k}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds. $$
Combining (3.2) and the above equation yields
$$\begin{aligned} e^{At}x(t)={}&e^{At_{k-1}}x\bigl(t_{k-1}^{+}\bigr)+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds \\ ={}&e^{At_{k-1}}x(t_{k-1})+ \int_{t_{k-1}}^{t}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+e^{At_{k-1}}\phi\bigl(x(t_{k-1})\bigr), \end{aligned}$$
for all \(t\in(t_{k-1},t_{k}]\), \(k=1,2,\ldots \) . Thereby, we have
$$\begin{aligned}& \begin{aligned}[b] e^{At_{k-1}}x(t_{k-1}) ={}&e^{At_{k-2}}x(t_{k-2})+ \int_{t_{k-2}}^{t_{k-1}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)\\ &{}+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds +e^{At_{k-2}}\phi\bigl(x(t_{k-2})\bigr), \end{aligned} \\& \vdots \\& \begin{aligned}[b] e^{At_{2}}x(t_{2}) ={}&e^{At_{1}}x(t_{1})+ \int_{t_{1}}^{t_{2}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds\\ &{}+e^{At_{1}}\phi \bigl(x(t_{1})\bigr), \end{aligned}\\& e^{At_{1}}x(t_{1}) =\xi(0)+ \int_{0}^{t_{1}}e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int _{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds . \end{aligned}$$
Synthesizing the above analysis results in the first equation of system (3.1). Similarly, the second equation of system (3.1) can also be derived from system (2.1) with initial condition (2.2). Hence, we have proved that each solution of (2.1) with initial condition (2.2) is also a solution of (3.1) with initial condition (2.2). This completes the proof. □

Theorem 3.2

The impulsive system (2.1) with initial condition (2.2) is globally exponentially robustly stable for all admissible uncertainties if there exists a positive number \(\alpha<1\) satisfying the following two LMI conditions:
$$\begin{aligned}& A^{*}+\bigl(|C|+C^{*}\bigr)F+\tau\bigl(|M|+M^{*}\bigr)F +\frac{1}{\delta}H+AH -\alpha A < 0, \\& B^{*}+\bigl(|D|+D^{*}\bigr)G +\tau\bigl(|W|+W^{*}\bigr)G +\frac{1}{\delta}\mathcal{H}+B\mathcal {H}-\alpha B< 0, \end{aligned}$$
where \(\delta=\inf_{k=1,2,\ldots}(t_{k+1}-t_{k})>0\), and \(A^{*}, B^{*}, C^{*}, D^{*}, M^{*}, W^{*}\) are the nonnegative matrices defined in (2.3).

Proof

To apply the contraction mapping theorem, we first define the complete metric space \(\Omega=\Omega_{1}\times\Omega_{2}\) as follows.

Let \(\Omega_{i}\) (\(i=1,2\)) be the space consisting of functions \(q_{i}(t): [-\tau,\infty)\to R^{n}\), satisfying
(a) \(q_{i}(t)\) is continuous on \(t\in[0,+\infty)\backslash\{t_{k}\}_{k=1}^{\infty}\);

(b) \(q_{1}(t)=\xi(t)\) and \(q_{2}(t)=\eta(t)\) for \(t\in[-\tau,0]\);

(c) \(\lim_{t\to t_{k}^{-}}q_{i}(t)=q_{i}(t_{k})\) and \(\lim_{t\to t_{k}^{+}}q_{i}(t)\) exists, for all \(k=1,2,\ldots\);

(d) \(e^{\gamma t}q_{i}(t)\to0\in R^{n}\) as \(t\to+\infty\), where \(\gamma>0\) is a positive constant satisfying \(\gamma<\min\{\lambda_{\min}(A),\lambda_{\min}(B)\}\).

It is not difficult to verify that the product space Ω is a complete metric space if it is equipped with the following metric:
$$ \operatorname{dist}(\overline{q},\widetilde{q})=\max_{i=1,2,\ldots,2n-1,2n} \Bigl(\sup _{t\geqslant-\tau}\bigl|\overline{q}^{(i)}(t)-\widetilde{q}^{(i)}(t)\bigr| \Bigr), $$
(3.3)
where
$$\begin{aligned}& \overline{q}=\overline{q}(t)= \begin{pmatrix} \overline{q}_{1}(t)\\ \overline{q}_{2}(t) \end{pmatrix} =\bigl(\overline{q}^{(1)}(t), \overline{q}^{(2)}(t),\ldots,\overline {q}^{(2n)}(t) \bigr)^{T}\in\Omega,\\& \widetilde{q}=\widetilde{q}(t)= \begin{pmatrix} \widetilde{q}_{1}(t)\\ \widetilde{q}_{2}(t) \end{pmatrix}= \bigl(\widetilde{q}^{(1)}(t),\ldots,\widetilde{q}^{(2n)}(t) \bigr)^{T}\in\Omega, \end{aligned}$$
and \(\overline{q}_{i}\in\Omega_{i}, \widetilde{q}_{i}\in\Omega_{i}, i=1,2\).
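
For trajectories sampled on a common time grid, the metric (3.3) reduces to a maximum over components and sample times. A minimal sketch with hypothetical sampled data, the grid truncated to finitely many points:

```python
import numpy as np

# qbar, qtil: arrays of shape (num_times, 2n) holding sampled values of two
# elements of Omega on a common grid over [-tau, T] (a finite truncation).
def dist(qbar, qtil):
    # max over the 2n components of the sup over t, approximated on the grid
    return np.max(np.abs(qbar - qtil))

qbar = np.array([[0.0, 1.0], [0.1, 0.9]])   # hypothetical samples, n = 1
qtil = np.array([[0.0, 1.2], [0.0, 1.0]])
print(dist(qbar, qtil))                     # 0.2
```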
Next, we want to formulate the contraction mapping \(P: \Omega\to \Omega\) as follows:
$$\begin{aligned} {{P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} e^{-At} \{\xi(0)+\int_{0}^{t} e^{As} [-\Delta A(s)x(s)+C_{s}f(y(s))+M_{s}\int_{s-\tau (s)}^{s}f(y(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \}\\ e^{-Bt} \{\eta(0)+\int_{0}^{t} e^{Bs} [-\Delta B(s)y(s)+D_{s}g(x(s))+W_{s}\int_{s-\rho (s)}^{s}g(x(r))\,dr ]\,ds+\sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \} \end{pmatrix},}} \end{aligned}$$
(3.4)
for all \(t\geqslant0\), and
$$ P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} = \begin{pmatrix} \xi(t)\\ \eta(t) \end{pmatrix}, \quad \mbox{for all } t\in[-\tau,0]. $$
(3.5)

From Lemma 3.1, it is obvious that each fixed point of P is a solution of system (2.1) with initial condition (2.2), and each solution of system (2.1) with initial condition (2.2) is a fixed point of P.

Below, we only need to prove that the mapping P defined by (3.4)-(3.5) is indeed a contraction mapping from Ω into Ω; the proof is divided into two steps.

Step 1. We claim that \(P(\Omega)\subset\Omega\). That is, for any \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\in \Omega\), we shall prove that \(P \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t)\end{matrix}} \bigr)\) satisfies the conditions (a)-(d) of Ω.

Indeed, it follows by the definition of P that \(P \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t)\end{matrix}} \bigr)\) satisfies the conditions (a)-(b). Besides, because of
$$\lim_{\varepsilon\to0^{+}} \begin{pmatrix} \sum_{0< t_{k}< t_{j}-\varepsilon}e^{At_{k}}\phi (x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}-\varepsilon}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix}= \begin{pmatrix} \sum_{0< t_{k}< t_{j}}e^{At_{k}}\phi(x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} $$
and
$$\lim_{\varepsilon\to0^{+}} \begin{pmatrix} \sum_{0< t_{k}< t_{j}+\varepsilon}e^{At_{k}}\phi (x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}+\varepsilon}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} = \begin{pmatrix} \sum_{0< t_{k}< t_{j}}e^{At_{k}}\phi(x_{t_{k}})\\ \sum_{0< t_{k}< t_{j}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} + \begin{pmatrix} e^{At_{j}}\phi(x_{t_{j}})\\ e^{Bt_{j}}\varphi(y_{t_{j}}) \end{pmatrix}, $$
we can conclude directly from (3.4) that
$$\lim_{\varepsilon\to0^{+}} P \begin{pmatrix} x(t_{j}-\varepsilon)\\ y(t_{j}-\varepsilon) \end{pmatrix} =P \begin{pmatrix} x(t_{j})\\ y(t_{j}) \end{pmatrix} $$
and
$$\lim_{\varepsilon\to0^{+}} P \begin{pmatrix} x(t_{j}+\varepsilon)\\ y(t_{j}+\varepsilon) \end{pmatrix} =P \begin{pmatrix} x(t_{j})\\ y(t_{j}) \end{pmatrix} + \begin{pmatrix} \phi(x(t_{j}))\\ \varphi(y(t_{j})) \end{pmatrix}, $$
which implies that \(P(\cdot)\) satisfies the condition (c).
Finally, we verify that \(P(\cdot)\) satisfies the condition (d). In fact, we can conclude from \(e^{\gamma t}x(t)\to0\) and \(e^{\gamma t}y(t)\to0\) that, for any given \(\varepsilon>0\), there exists a corresponding constant \(t^{*}>\tau\) such that
$$\bigl\vert e^{\gamma t}x(t)\bigr\vert +\bigl\vert e^{\gamma t}y(t) \bigr\vert < \varepsilon\mu, \quad\forall t\geqslant t^{*}, \mbox{where } \mu=(1,1, \ldots,1)^{T}\in R^{n}. $$
Next, we get by (H1)
$$ \begin{aligned}[b] & \biggl|e^{\gamma t}e^{-At} \int_{0}^{t}e^{As}C_{s}f \bigl(y(s)\bigr)\,ds \biggr|\\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t^{*}}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds+e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds. \end{aligned} $$
(3.6)
On the one hand, the boundedness assumption (2.3) produces \(|C_{s}|\mu \leqslant(|C|+C^{*})\mu\), and then
$$\begin{aligned} &e^{-(A-\gamma I)t} \int_{0}^{t^{*}}e^{As}|C_{s}|F\bigl\vert y(s)\bigr\vert \,ds \\ &\quad\leqslant t^{*}e^{-(A-\gamma I)t}e^{At^{*}}\bigl( \vert C\vert +C^{*}\bigr)F \Bigl[\max_{i} \Bigl(\sup _{s\in [0,t^{*}]}\bigl\vert y_{i}(s)\bigr\vert \Bigr) \Bigr]\mu \to0\in R^{n},\quad t\to\infty. \end{aligned}$$
(3.7)
Note that the convergence in (3.7) is understood in the sense of the metric (3.3); the same applies to all subsequent convergence statements, and we shall not repeat this remark.
Due to (2.3), obviously there exists a positive number \(a_{0}\) such that
$$\bigl(\vert C_{s}\vert +\vert M_{s}\vert \bigr)F \mu \leqslant a_{0}\mu. $$
So we have
$$\begin{aligned} &e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{As}|C_{s}|F\bigl|y(s)\bigr|\,ds \\ &\quad\leqslant\varepsilon e^{-(A-\gamma I)t} \int_{t^{*}}^{t}e^{(A-\gamma I)s}|C_{s}|F \mu \,ds \\ &\quad\leqslant \varepsilon a_{0}e^{-(A-\gamma I)t} \begin{pmatrix} \frac{e^{(a_{1}-\gamma)t}}{a_{1}-\gamma}& 0& \cdots& 0& 0\\ 0&\frac{e^{(a_{2}-\gamma)t}}{a_{2}-\gamma}& 0& \cdots& 0\\ & &\ddots\\ 0&0&\cdots&0&\frac{e^{(a_{n}-\gamma)t}}{a_{n}-\gamma} \end{pmatrix} \mu \\ &\quad=\varepsilon a_{0}\biggl(\frac{1}{a_{1}-\gamma},\frac{1}{a_{2}-\gamma}, \ldots,\frac {1}{a_{n}-\gamma}\biggr)^{T}. \end{aligned}$$
(3.8)
Now, the arbitrariness of ε together with (3.6)-(3.8) implies that
$$ e^{\gamma t}e^{-At} \int_{0}^{t}e^{As}C_{s}f \bigl(y(s)\bigr)\,ds\to0\in R^{n},\quad t\to +\infty. $$
(3.9)
Similarly to the proof of (3.9), we can deduce from (2.3) that
$$ e^{\gamma t}e^{-At} \int_{0}^{t} e^{As} \bigl(-\Delta A(s)x(s) \bigr)\,ds\to0\in R^{n},\quad t\to+\infty. $$
(3.10)
Further, the definition of γ gives
$$ e^{\gamma t}e^{-At} \xi(0)\to0\in R^{n}, \quad t\to+ \infty. $$
(3.11)
Below, we estimate
$$\begin{aligned} & \biggl|e^{\gamma t}e^{-At} \int_{0}^{t} e^{As}M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr\,ds \biggr| \\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t} e^{As}|M_{s}| \int_{s-\tau(s)}^{s}\bigl|f\bigl(y(r)\bigr)\bigr|\,dr\,ds \\ &\quad\leqslant e^{-(A-\gamma I)t} \int_{0}^{t} e^{As}|M_{s}| \int_{s-\tau}^{s}F\bigl|y(r)\bigr|\,dr\,ds \\ &\quad\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\qquad{}+e^{\gamma \tau}e^{-(A-\gamma I)t} \int_{t^{*}+\tau}^{t} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds. \end{aligned}$$
(3.12)
It follows by the definitions of \(a_{0}\), \(t^{*}\), and τ that
$$\begin{aligned} 0&\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\leqslant e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \int_{-\tau}^{t^{*}+\tau}F\max_{i} \Bigl( \sup_{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)\mu \,dr\,ds \\ &= e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{0}^{t^{*}+\tau} e^{(A-\gamma I) s}|M_{s}| \bigl(t^{*}+\tau+\tau\bigr)F\max_{i} \Bigl(\sup _{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)\mu \,ds \\ &\leqslant e^{-(A-\gamma I)t} \biggl[\bigl(t^{*}+2\tau\bigr)e^{\gamma\tau}\max _{i} \Bigl(\sup_{r\in[-\tau,t^{*}+\tau]}e^{\gamma r}\bigl|y_{i}(r)\bigr| \Bigr)e^{(A-\gamma I)(t^{*}+\tau)} \\ &\qquad{}\times \int_{0}^{t^{*}+\tau} |M_{s}|F\mu \,ds \biggr]\to0 \in R^{n},\quad t\to+\infty, \end{aligned}$$
(3.13)
and
$$\begin{aligned} &e^{\gamma\tau}e^{-(A-\gamma I)t} \int_{t^{*}+\tau}^{t} e^{(A-\gamma I) s}|M_{s}| \int_{s-\tau}^{s}Fe^{\gamma r}\bigl|y(r)\bigr|\,dr\,ds \\ &\quad\leqslant\varepsilon\tau a_{0}e^{\gamma\tau}e^{-(A-\gamma I)t} \biggl( \int _{t^{*}+\tau}^{t} e^{(A-\gamma I) s}\,ds \biggr)\mu \\ &\quad\leqslant\varepsilon\tau a_{0}e^{\gamma\tau}e^{-(A-\gamma I)t} \begin{pmatrix} \frac{e^{(a_{1}-\gamma)t}}{a_{1}-\gamma}& 0& \cdots& 0& 0\\ 0&\frac{e^{(a_{2}-\gamma)t}}{a_{2}-\gamma}& 0& \cdots& 0\\ & &\ddots\\ 0&0&\cdots&0&\frac{e^{(a_{n}-\gamma)t}}{a_{n}-\gamma} \end{pmatrix} \mu \\ &\quad=\varepsilon \biggl[\tau a_{0}e^{\gamma\tau}\biggl( \frac{1}{a_{1}-\gamma},\frac {1}{a_{2}-\gamma},\ldots,\frac{1}{a_{n}-\gamma} \biggr)^{T} \biggr]. \end{aligned}$$
(3.14)
Combining (3.12)-(3.14) yields
$$ e^{\gamma t}e^{-At} \int_{0}^{t} e^{As}M_{s} \int_{s-\tau(s)}^{s}f\bigl(y(r)\bigr)\,dr\,ds\to0\in R^{n}, \quad t\to+\infty . $$
(3.15)
Combining (3.9)-(3.11) and (3.15) results in
$$\begin{aligned} &e^{\gamma t}e^{-At} \biggl\{ \xi(0)+ \int_{0}^{t} e^{As} \biggl[-\Delta A(s)x(s)+C_{s}f\bigl(y(s)\bigr)+M_{s} \int_{s-\tau (s)}^{s}f\bigl(y(r)\bigr)\,dr \biggr]\,ds \biggr\} \to0\in R^{n}, \\ &\quad t\to+\infty. \end{aligned}$$
(3.16)
Similarly, we can obtain
$$\begin{aligned} &e^{\gamma t}e^{-Bt} \biggl\{ \eta(0)+ \int_{0}^{t} e^{Bs} \biggl[-\Delta B(s)y(s)+D_{s}g\bigl(x(s)\bigr)+W_{s} \int_{s-\rho (s)}^{s}g\bigl(x(r)\bigr)\,dr \biggr]\,ds \biggr\} \to0\in R^{n}, \\ &\quad t\to+\infty. \end{aligned}$$
(3.17)
In addition, we claim that, as \(t\to+\infty\),
$$\begin{aligned} e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{0< t_{k}< t}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} ={}&e^{\gamma t} \left[ \begin{pmatrix} e^{-At} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \right.\\ &{}\left.+\begin{pmatrix} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}}\phi (x_{t_{k}}) \\ e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \right] \to0\in R^{2n} . \end{aligned}$$
(3.18)
In fact, on the one hand,
$$\begin{aligned} &e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{-Bt} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} = \begin{pmatrix} e^{(\gamma I -A)t} \sum_{0< t_{k}\leqslant t^{*}}e^{At_{k}}\phi(x_{t_{k}}) \\ e^{(\gamma I -B)t} \sum_{0< t_{k}\leqslant t^{*}}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix} \to0\in R^{2n}, \\ &\quad t\to+ \infty. \end{aligned}$$
(3.19)
Below we shall prove
$$ e^{\gamma t} \begin{pmatrix} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}}\phi (x_{t_{k}}) \\ e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}}\varphi(y_{t_{k}}) \end{pmatrix}\to0\in R^{2n}, \quad t\to+ \infty. $$
(3.20)
First, we may assume that \(t_{m-1}< t^{*}\leqslant t_{m}\) and \(t_{j}< t\leqslant t_{j+1}\) for any given \(t>t^{*}\). Then
$$\begin{aligned} & \biggl|e^{\gamma t} e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}} \phi (x_{t_{k}}) \biggr| \\ &\quad\leqslant\frac{\varepsilon}{\delta} \begin{pmatrix} e^{-(a_{1}-\gamma)t} (\delta e^{(a_{1}-\gamma)t_{j+1}}+ \sum_{t_{m}\leqslant t_{k}\leqslant t_{j}}(t_{k+1}-t_{k})e^{(a_{1}-\gamma)t_{k}} )H_{1}\\ \vdots\\ e^{-(a_{n}-\gamma)t} (\delta e^{(a_{n}-\gamma)t_{j+1}}+ \sum_{t_{m}\leqslant t_{k}\leqslant t_{j}}(t_{k+1}-t_{k})e^{(a_{n}-\gamma)t_{k}} )H_{n} \end{pmatrix} \\ &\quad\leqslant\frac{\varepsilon}{\delta} \begin{pmatrix} e^{-(a_{1}-\gamma)t} (\delta e^{(a_{1}-\gamma)t_{j+1}}+ \frac{1}{a_{1}-\gamma}e^{(a_{1}-\gamma)t} )H_{1}\\ \vdots\\ e^{-(a_{n}-\gamma)t} (\delta e^{(a_{n}-\gamma)t_{j+1}}+ \frac{1}{a_{n}-\gamma}e^{(a_{n}-\gamma)t} )H_{n} \end{pmatrix}, \end{aligned}$$
which together with the arbitrariness of the positive number ε implies that
$$e^{\gamma t}e^{-At} \sum_{t^{*}< t_{k}< t}e^{At_{k}} \phi(x_{t_{k}})\to0\in R^{n},\quad t\to+\infty. $$
Similarly, we can also obtain
$$e^{\gamma t}e^{-Bt} \sum_{t^{*}< t_{k}< t}e^{Bt_{k}} \varphi(y_{t_{k}})\to0\in R^{n},\quad t\to+\infty. $$
So we have proved (3.20), and hence (3.18) holds. Combining (3.16)-(3.18) implies that
$$e^{\gamma t}P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} \to0\in R^{2n}, \quad t\to+\infty. $$
Hence \(P(\cdot)\) satisfies the condition (d). So we have proved that \(P(\Omega)\subset\Omega\).

Step 2. We now prove that the operator \(P: \Omega\to\Omega\) is a contraction mapping.

Indeed, for any \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr), \bigl( {\scriptsize\begin{matrix}{}\overline{x}(t)\cr \overline{y}(t) \end{matrix}} \bigr)\in \Omega\), we can get by the LMI conditions of Theorem 3.2
$$\begin{aligned} & \left|P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix} -P \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix}\right| \\ &\quad \leqslant \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|\Delta A(s)||x(s)-\overline{x}(s)|\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|\Delta B(s)||y(s)-\overline{y}(s)|\,ds \end{pmatrix} + \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|C_{s}||f(y(s))-f(\overline{y}(s))|\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|D_{s}||g(x(s))-g(\overline{x}(s))|\,ds \end{pmatrix} \\ &\qquad{}+ \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}|M_{s}|\int_{s-\tau(s)}^{s}|f(y(r))-f(\overline{y}(r))|\,dr\,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}|W_{s}|\int_{s-\rho(s)}^{s}|g(x(r))-g(\overline{x}(r))|\,dr\,ds \end{pmatrix} \\ &\qquad{}+ \begin{pmatrix} e^{-At} \sum_{0< t_{k}< t}e^{At_{k}}|\phi (x_{t_{k}})-\phi(\overline{x}_{t_{k}})| \\ e^{-Bt} \sum_{0< t_{k}< t}e^{Bt_{k}}|\varphi(y_{t_{k}})-\varphi(\overline{y}_{t_{k}})| \end{pmatrix} \\ &\quad\leqslant \left[ \begin{pmatrix} A^{-1}A^{*}\mu\\ B^{-1}B^{*}\mu \end{pmatrix} + \begin{pmatrix} A^{-1}(|C|+C^{*})F\mu \\ B^{-1}(|D|+D^{*})G\mu \end{pmatrix} +\tau \begin{pmatrix} e^{-At}\int_{0}^{t} e^{As}(|M|+M^{*})F\mu \,ds\\ e^{-Bt}\int_{0}^{t} e^{Bs}(|W|+W^{*})G\mu \,dr \end{pmatrix}\right. \\ &\qquad{}\left.+\frac{1}{\delta}\begin{pmatrix} e^{-At} ( \sum_{t_{1}\leqslant t_{k}\leqslant t_{j-1}}(t_{k+1}-t_{k})e^{At_{k}}+\delta e^{At_{j}} )H\mu\\ e^{-Bt} ( \sum_{t_{1}\leqslant t_{k}\leqslant t_{j-1}}(t_{k+1}-t_{k})e^{Bt_{k}}+\delta e^{Bt_{j}} )\mathcal{H}\mu \end{pmatrix} \right] \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad\leqslant \biggl[ \begin{pmatrix} A^{-1}A^{*}\mu\\ B^{-1}B^{*}\mu \end{pmatrix}+ \begin{pmatrix} A^{-1}(|C|+C^{*})F\mu \\ B^{-1}(|D|+D^{*})G\mu \end{pmatrix} +\tau \begin{pmatrix} A^{-1}(|M|+M^{*})F\mu \\ B^{-1}(|W|+W^{*})G\mu \end{pmatrix} \\ &\qquad{}+\frac{1}{\delta}\begin{pmatrix} e^{-At} (\int_{0}^{t}e^{As}\,ds+\delta e^{At} )H\mu\\ e^{-Bt} (\int_{0}^{t}e^{Bs}\,ds+\delta e^{Bt} )\mathcal{H}\mu \end{pmatrix} \biggr] \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad\leqslant \left[ \begin{pmatrix} (A^{-1}A^{*}+A^{-1}(|C|+C^{*})F+\tau A^{-1}(|M|+M^{*})F +\frac{1}{\delta}A^{-1}H+H )\mu \\ ( B^{-1}B^{*}+B^{-1}(|D|+D^{*})G +\tau B^{-1}(|W|+W^{*})G +\frac{1}{\delta}B^{-1}\mathcal{H}+\mathcal{H} )\mu \end{pmatrix} \right]\\ &\qquad{}\times \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right) \\ &\quad< \alpha \begin{pmatrix} \mu\\ \mu \end{pmatrix} \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right), \end{aligned}$$
where we assume \(t_{j}< t\leqslant t_{j+1}\) with \(j=0,1,2,\ldots\) and \(t_{0}=0\), and where \(A^{-1}\) and \(B^{-1}\) are the inverse matrices of A and B, respectively. Hence
$$\operatorname{dist} \left(P \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, P \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right)\leqslant \alpha \operatorname{dist} \left( \begin{pmatrix} x(t)\\ y(t) \end{pmatrix}, \begin{pmatrix} \overline{x}(t)\\ \overline{y}(t) \end{pmatrix} \right), $$

Therefore, \(P: \Omega\to\Omega\) is a contraction mapping, and so there exists a unique fixed point \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) of P in Ω, which implies that \(\bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr)\) is a solution of the impulsive dynamic equations (2.1) with the initial condition (2.2), satisfying \(e^{\gamma t} \bigl\| \bigl( {\scriptsize\begin{matrix}{}x(t)\cr y(t) \end{matrix}} \bigr) \bigr\|\to0\) as \(t\to+\infty\). Therefore, the impulsive system (2.1)-(2.2) is globally exponentially robustly stable for all admissible uncertainties. □
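
For a fixed \(\alpha\in(0,1)\), the two conditions of Theorem 3.2 are ordinary matrix inequalities and can be checked numerically. The sketch below tests "\(<0\)" via the largest eigenvalue of the symmetric part of each left-hand side (an interpretation on our part, since these matrices need not be symmetric); the `_star` arguments stand for the starred matrices of (2.3), and `Hcal` for \(\mathcal{H}\).

```python
import numpy as np

def theorem_3_2_feasible(alpha, delta, tau,
                         A, B, C, D, M, W,
                         A_star, B_star, C_star, D_star, M_star, W_star,
                         F, G, H, Hcal):
    """Check the two matrix-inequality conditions of Theorem 3.2."""
    lhs1 = (A_star + (np.abs(C) + C_star) @ F + tau * (np.abs(M) + M_star) @ F
            + H / delta + A @ H - alpha * A)
    lhs2 = (B_star + (np.abs(D) + D_star) @ G + tau * (np.abs(W) + W_star) @ G
            + Hcal / delta + B @ Hcal - alpha * B)
    # "< 0" tested as negative definiteness of the symmetric part
    neg_def = lambda X: np.max(np.linalg.eigvalsh((X + X.T) / 2)) < 0
    return neg_def(lhs1) and neg_def(lhs2)
```

A feasible α can then be located by a simple scan over (0,1); the paper instead solves the conditions with the MATLAB LMI toolbox (see Example 1).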

Remark 2

This is the first time that contraction mapping theory has been employed to derive an LMI-based exponential robust stability criterion for BAM neural networks with distributed delays and parameter uncertainties. The obtained criterion is therefore novel in comparison with existing results (see Remarks 3-5 and Tables 1 and 2 below).
Table 1

Numerical comparison of Theorem 3.2 with [36], Theorem 2, in Example 1

                                Impulse   Distributed delays   Upper bound of time delays (τ)
  [36], Theorem 2, Example 1    yes       yes                  1.9
  Theorem 3.2, Example 1        yes       yes                  2.1

Table 2

Comparing our Theorem 3.2 with other existing results on BAM neural network models

                              Our Th. 3.2            [37], Th. 4.1        [31], Th. 3.1   [32], Thms. 2-4   [38], Th. 3    [34], Thm. 1
  Impulse                     yes                    yes                  no              no                no             no
  Distributed delays          yes                    no                   no              no                no             yes
  Parameter uncertainty       yes                    yes                  no              no                no             no
  Using fixed point theory    yes                    no                   yes             yes               no             no
  BAM neural networks model   yes                    yes                  yes             yes               yes            yes
  Equations type              integro-differential   differential         differential    differential      differential   integro-differential
  LMI-based criterion         yes                    yes                  no              no                no             yes
  Robustness of stability     yes                    yes                  no              no                no             no
  Stability type              robust exponential     robust asymptotic    exponential     exponential       exponential    exponential

If impulsive effects are ignored, we can derive the following corollary from Theorem 3.2.

Corollary 3.3

The following system with initial condition (2.2) is globally exponentially robustly stable for all admissible uncertainties:
$$\left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt} = -(A+\Delta A(t))x(t)+(C+\Delta C(t))f(y(t)) +(M+\Delta M(t))\int_{t-\tau(t)}^{t}f(y(s))\,ds,\\ \quad t\geqslant0,\\ \frac{dy(t)}{dt} = -(B+\Delta B(t))y(t)+(D+\Delta D(t))g(x(t))+ (W+\Delta W(t))\int_{t-\rho(t)}^{t}g(x(s))\,ds, \\ \quad t\geqslant0, \end{array}\displaystyle \right . $$
if there exists a positive number \(\alpha<1\) satisfying the following two LMI conditions:
$$\begin{aligned}& A^{*}+\bigl(|C|+C^{*}\bigr)F+\tau\bigl(|M|+M^{*}\bigr)F -\alpha A < 0, \\& B^{*}+\bigl(|D|+D^{*}\bigr)G +\tau\bigl(|W|+W^{*}\bigr)G -\alpha B< 0, \end{aligned}$$
where \(A^{*}, B^{*}, C^{*}, D^{*}, M^{*}, W^{*}\) are the nonnegative matrices defined in (2.3).

4 Numerical example

Example 1

Consider the impulsive system (2.1)-(2.2) with the following parameters:
$$\begin{aligned}& f\bigl(y(t)\bigr)= \begin{pmatrix} \sin(0.1y_{1}(t))\\ 0.2\sin(y_{2}(t)) \end{pmatrix},\qquad g\bigl(x(t)\bigr)= \begin{pmatrix} \sin(0.2x_{1}(t))\\ 0.1\sin(x_{2}(t)) \end{pmatrix}, \end{aligned}$$
(4.1)
$$\begin{aligned}& \phi\bigl(x(t_{k})\bigr)= \begin{pmatrix} 0.3x_{1}(t_{k})\cos(100.01t_{k})\\ \sin(0.2x_{2}(t_{k})) \end{pmatrix},\qquad \varphi \bigl(y(t_{k})\bigr)= \begin{pmatrix} \cos(0.3y_{1}(t_{k}))\\ 0.2y_{2}(t_{k})\sin(100.01t_{k}) \end{pmatrix}, \\& A= \begin{pmatrix} 1.9& 0\\ 0& 2 \end{pmatrix}, \qquad B= \begin{pmatrix} 2& 0\\ 0& 1.8 \end{pmatrix},\qquad C= \begin{pmatrix} -0.1& 0.01\\ 0& 0.2 \end{pmatrix}, \\& D= \begin{pmatrix} 0.2& 0.02\\ 0& -0.1 \end{pmatrix}, \qquad M= \begin{pmatrix} 0.002&-0.001\\ 0.001& 0.001 \end{pmatrix}, \\& W= \begin{pmatrix} 0.008&-0.001\\ 0.001& 0.001 \end{pmatrix},\qquad F= \begin{pmatrix} 0.1& 0\\ 0& 0.2 \end{pmatrix}, \\& G= \begin{pmatrix} 0.2& 0\\ 0& 0.1 \end{pmatrix},\qquad H= \begin{pmatrix} 0.3& 0\\ 0& 0.2 \end{pmatrix}= \mathcal{H}, \\& A^{*}= \begin{pmatrix} 0.01& 0.01\\ 0& 0.03 \end{pmatrix},\qquad B^{*}= \begin{pmatrix} 0.03& 0\\ 0.01& 0.02 \end{pmatrix},\qquad C^{*}= \begin{pmatrix} 0.03&0\\ 0.01& 0.02 \end{pmatrix}, \\& D^{*}= \begin{pmatrix} 0.01&0.01\\ 0& 0.02 \end{pmatrix},\qquad M^{*}= \begin{pmatrix} 0.015&0\\ 0.01&\ 0.012 \end{pmatrix},\qquad W^{*}= \begin{pmatrix} 0.011&0.01\\ 0& 0.018 \end{pmatrix}. \end{aligned}$$
(4.2)
Take \(t_{1}=0.3\), \(t_{k}=t_{k-1}+0.3k\), \(\delta=0.5\), and \(\tau(t)=\rho(t)=\tau=2.1\), and let the initial functions be \(x_{1}(s)=\tanh s\), \(x_{2}(s)=e^{s+0.5}\), \(y_{1}(s)=2\sin s\), \(y_{2}(s)=2\cos(s+0.5)\). Then the two LMI conditions of Theorem 3.2 can be solved with the MATLAB LMI toolbox, which yields the feasible value
$$\alpha=0.9885, $$
which satisfies \(0<\alpha<1\). Thereby, we conclude from Theorem 3.2 that the impulsive system (2.1)-(2.2) is globally exponentially robustly stable for all admissible uncertainties (see Figure 1).
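
As a cross-check of this feasibility claim, the following self-contained sketch evaluates the two left-hand sides of Theorem 3.2 with the data of Example 1 and α = 0.9885, again reading "<0" as negative definiteness of the symmetric part (our interpretation); both largest eigenvalues come out negative.

```python
import numpy as np

# Data of Example 1 (nominal and starred matrices), alpha = 0.9885.
A, B = np.diag([1.9, 2.0]), np.diag([2.0, 1.8])
C = np.array([[-0.1, 0.01], [0.0, 0.2]]); D = np.array([[0.2, 0.02], [0.0, -0.1]])
M = np.array([[0.002, -0.001], [0.001, 0.001]])
W = np.array([[0.008, -0.001], [0.001, 0.001]])
F, G = np.diag([0.1, 0.2]), np.diag([0.2, 0.1])
H = Hcal = np.diag([0.3, 0.2])
A_star = np.array([[0.01, 0.01], [0.0, 0.03]]); B_star = np.array([[0.03, 0.0], [0.01, 0.02]])
C_star = np.array([[0.03, 0.0], [0.01, 0.02]]); D_star = np.array([[0.01, 0.01], [0.0, 0.02]])
M_star = np.array([[0.015, 0.0], [0.01, 0.012]]); W_star = np.array([[0.011, 0.01], [0.0, 0.018]])
alpha, delta, tau = 0.9885, 0.5, 2.1

lhs1 = (A_star + (np.abs(C) + C_star) @ F + tau * (np.abs(M) + M_star) @ F
        + H / delta + A @ H - alpha * A)
lhs2 = (B_star + (np.abs(D) + D_star) @ G + tau * (np.abs(W) + W_star) @ G
        + Hcal / delta + B @ Hcal - alpha * B)
for lhs in (lhs1, lhs2):
    # a negative largest eigenvalue of the symmetric part certifies lhs < 0
    print(np.max(np.linalg.eigvalsh((lhs + lhs.T) / 2)))
```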
Figure 1

State trajectory of the system in Example 1.
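
A trajectory of the kind shown in Figure 1 could be reproduced with an explicit Euler scheme such as the following sketch. It simulates the nominal system (the uncertain Δ-terms are set to zero, one admissible choice in (2.3)); the step size h and horizon T are our assumptions, and the distributed-delay integrals are approximated by Riemann sums over the stored history.

```python
import numpy as np

A, B = np.diag([1.9, 2.0]), np.diag([2.0, 1.8])
C = np.array([[-0.1, 0.01], [0.0, 0.2]]); D = np.array([[0.2, 0.02], [0.0, -0.1]])
M = np.array([[0.002, -0.001], [0.001, 0.001]])
W = np.array([[0.008, -0.001], [0.001, 0.001]])
tau, h, T = 2.1, 1e-3, 10.0               # h, T: our assumptions

f = lambda y: np.array([np.sin(0.1 * y[0]), 0.2 * np.sin(y[1])])
g = lambda x: np.array([np.sin(0.2 * x[0]), 0.1 * np.sin(x[1])])
phi = lambda x, tk: np.array([0.3 * x[0] * np.cos(100.01 * tk), np.sin(0.2 * x[1])])
vphi = lambda y, tk: np.array([np.cos(0.3 * y[0]), 0.2 * y[1] * np.sin(100.01 * tk)])

m, N = round(tau / h), round(T / h)       # history points, integration steps
t = np.arange(-m, N + 1) * h
x, y = np.zeros((m + N + 1, 2)), np.zeros((m + N + 1, 2))
x[:m + 1] = np.column_stack((np.tanh(t[:m + 1]), np.exp(t[:m + 1] + 0.5)))
y[:m + 1] = np.column_stack((2 * np.sin(t[:m + 1]), 2 * np.cos(t[:m + 1] + 0.5)))

t_imp, tk, k = [], 0.3, 2                 # t_1 = 0.3, t_k = t_{k-1} + 0.3 k
while tk <= T:
    t_imp.append(tk); tk += 0.3 * k; k += 1
imp = {round(s / h): s for s in t_imp}    # map grid index -> impulse time

fy = np.array([f(v) for v in y]); gx = np.array([g(v) for v in x])
for i in range(m, m + N):
    int_f = h * fy[i - m:i + 1].sum(axis=0)   # ~ int_{t-tau}^{t} f(y(s)) ds
    int_g = h * gx[i - m:i + 1].sum(axis=0)
    x[i + 1] = x[i] + h * (-A @ x[i] + C @ fy[i] + M @ int_f)
    y[i + 1] = y[i] + h * (-B @ y[i] + D @ gx[i] + W @ int_g)
    if i + 1 - m in imp:                      # impulsive jump at t_k
        s = imp[i + 1 - m]
        x[i + 1] += phi(x[i + 1], s); y[i + 1] += vphi(y[i + 1], s)
    fy[i + 1], gx[i + 1] = f(y[i + 1]), g(x[i + 1])
# x[m:], y[m:] now hold the trajectory on [0, T]; plot against t[m:] if desired.
```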

Remark 3

Example 1 can also be treated by [36], Theorem 2 (cf. Example 1 therein), and Table 1 compares the two results. Example 1 illustrates the effectiveness of the LMI-based criterion (Theorem 3.2). Although both [36], Theorem 2, and our Theorem 3.2 deal with BAM neural networks with distributed delays, our Theorem 3.2 additionally includes impulses.

Remark 4

In Example 1, our upper bound on the time delays is \(\tau=2.1\), while the upper bound of [36], Example 1, is 1.9, which indicates that our result is competitive with good existing results. To illustrate this further, another table is given below.

Remark 5

Table 2 provides a synthetic comparison of the criteria involved with respect to the underlying mathematical models, the main methods, and the effectiveness of the conclusions. In summary, the differences in methods and models show that our Theorem 3.2 is genuinely novel in comparison with existing results.

5 Conclusions

Modeling impulsive uncertain BAM neural networks brings mathematical difficulties to the application of the contraction mapping theorem; indeed, before our Theorem 3.2 the contraction mapping theorem had never been employed to derive robust stability of impulsive uncertain BAM neural networks. Moreover, our new criterion can easily be verified with the MATLAB LMI toolbox, and Example 1 illustrates its effectiveness and feasibility. In addition, Tables 1 and 2 are presented to show the novelty of our Theorem 3.2 (see Remarks 2-5).

Declarations

Acknowledgements

All the authors thank the anonymous reviewers for their constructive suggestions and pertinent comments, which have improved this paper. This work was supported by the National Basic Research Program of China (2010CB732501), the Scientific Research Fund of the Science and Technology Department of Sichuan Province (2010JY0057, 2012JY010), the Sichuan Educational Committee Science Foundation (08ZB002, 12ZB349, 14ZA0274), and the Initial Funding of Scientific Research for the Introduction of Talents of Chengdu Normal University.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Chengdu Normal University, Chengdu, China
(2)
Research Institute of Mathematics, Chengdu Normal University, Chengdu, China
(3)
Research Institute of Mathematical Education, Chengdu Normal University, Chengdu, China

References

  1. Kosko, B: Adaptive bi-directional associative memories. Appl. Opt. 26(23), 4947-4960 (1987)
  2. Kosko, B: Neural Networks and Fuzzy Systems. Prentice Hall, New Delhi (1992)
  3. Haykin, S: Neural Networks. Prentice Hall, New Jersey (1999)
  4. Mathai, G, Upadhyaya, BR: Performance analysis and application of the bidirectional associative memory to industrial spectral signatures. In: Proceedings of the IJCNN, 1989, pp. 33-37 (1989)
  5. Liu, B, Huang, L: Global exponential stability of BAM neural networks with recent-history distributed delays and impulses. Neurocomputing 69(16-18), 2090-2096 (2006)
  6. Song, Q, Zhao, Z, Liu, Y: Impulsive effects on stability of discrete-time complex-valued neural networks with both discrete and distributed time-varying delays. Neurocomputing 168(30), 1044-1050 (2015)
  7. Zhao, H: Exponential stability and periodic oscillatory of bi-directional associative memory neural network involving delays. Neurocomputing 69(4-6), 424-448 (2006)
  8. Wang, L, Zhang, Z, Wang, Y: Stochastic exponential stability of the delayed reaction-diffusion recurrent neural networks with Markovian jumping parameters. Phys. Lett. A 372(18), 3201-3209 (2008)
  9. Li, B, Xu, D: Exponential p-stability of stochastic recurrent neural networks with mixed delays and Markovian switching. Neurocomputing 103, 239-246 (2013)
  10. Rao, R, Zhong, S, Pu, Z: On the role of diffusion factors in stability analysis for p-Laplace dynamical equations involved to BAM Cohen-Grossberg neural networks. Neurocomputing 223, 54-62 (2017)
  11. Xu, D, Yang, Z: Impulsive delay differential inequality and stability of neural networks. J. Math. Anal. Appl. 305(1), 107-120 (2005)
  12. Du, Y, Zhong, S, Zhou, N, Nie, L, Wang, W: Exponential passivity of BAM neural networks with time-varying delays. Appl. Math. Comput. 221, 727-740 (2013)
  13. Xu, D, Zhao, H, Zhu, H: Global dynamics of Hopfield neural networks involving variable delays. Comput. Math. Appl. 42(1-2), 39-45 (2001)
  14. Chen, H, Zhong, S, Shao, J: Exponential stability criterion for interval neural networks with discrete and distributed delays. Appl. Math. Comput. 250, 121-130 (2015)
  15. Song, Q, Cao, J: Global exponential stability and existence of periodic solutions in BAM networks with delays and reaction-diffusion terms. Chaos Solitons Fractals 23(2), 421-430 (2005)
  16. Rao, R, Zhong, S, Wang, X: Stochastic stability criteria with LMI conditions for Markovian jumping impulsive BAM neural networks with mode-dependent time-varying delays and nonlinear reaction-diffusion. Commun. Nonlinear Sci. Numer. Simul. 19(1), 258-273 (2014)
  17. Zhao, H: Global stability of bidirectional associative memory neural networks with distributed delays. Phys. Lett. A 297(3-4), 182-190 (2002)
  18. Wang, L, Zhang, R, Wang, Y: Global exponential stability of reaction-diffusion cellular neural networks with S-type distributed time delays. Nonlinear Anal., Real World Appl. 10(2), 1101-1113 (2009)
  19. Li, B, Song, Q: Some new results on periodic solution of Cohen-Grossberg neural network with impulses. Neurocomputing 177, 401-408 (2016)
  20. Song, Q, Zhao, Z, Li, Y: Global exponential stability of BAM neural networks with distributed delays and reaction-diffusion terms. Phys. Lett. A 335(2-3), 213-225 (2005)
  21. Li, X, Fu, X: Global asymptotic stability of stochastic Cohen-Grossberg-type BAM neural networks with mixed delays: an LMI approach. J. Comput. Appl. Math. 235(12), 3385-3394 (2011)
  22. Chen, X, Song, Q, Zhao, Z, Liu, Y: Global μ-stability analysis of discrete-time complex-valued neural networks with leakage delay and mixed delays. Neurocomputing 175A, 723-735 (2016)
  23. Li, X, Rakkiyappan, R, Pradeep, C: Robust μ-stability analysis of Markovian switching uncertain stochastic genetic regulatory networks with unbounded time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 17(10), 3894-3905 (2012)
  24. Li, D, Li, X: Robust exponential stability of uncertain impulsive delays differential systems. Neurocomputing 191, 12-18 (2016)
  25. Xu, D, Li, S, Pu, Z, Guo, Q: Domain of attraction of nonlinear discrete systems with delays. Comput. Math. Appl. 38(5-6), 155-162 (1999)
  26. Song, Q, Zhao, Z, Liu, Y: Stability analysis of complex-valued neural networks with probabilistic time-varying delays. Neurocomputing 159, 96-104 (2015)
  27. Xu, D, Li, B, Long, S, Teng, L: Moment estimate and existence for solutions of stochastic functional differential equations. Nonlinear Anal. TMA 108, 128-143 (2014)
  28. Li, X, Bohner, M, Wang, C: Impulsive differential equations: periodic solutions and applications. Automatica 52, 173-178 (2015)
  29. Xu, D, Li, B, Long, S, Teng, L: Corrigendum to ‘Moment estimate and existence for solutions of stochastic functional differential equations’ [Nonlinear Anal.: TMA 108 (2014) 128-143]. Nonlinear Anal. TMA 114, 40-41 (2015)
  30. Zhou, W, Teng, L, Xu, D: Mean-square exponentially input-to-state stability of stochastic Cohen-Grossberg neural networks with time-varying delays. Neurocomputing 153, 54-61 (2015)
  31. Liu, B: Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal., Real World Appl. 14(1), 559-566 (2013)
  32. Zhou, L: Novel global exponential stability criteria for hybrid BAM neural networks with proportional delays. Neurocomputing 161, 99-106 (2015)
  33. Yang, X, Wang, X, Zhong, S, Rao, R: Robust stability analysis for discrete and distributed time-delays Markovian jumping reaction-diffusion integro-differential equations with uncertain parameters. Adv. Differ. Equ. 2015, 186 (2015)
  34. Bao, H, Cao, J: Exponential stability for stochastic BAM networks with discrete and distributed delays. Appl. Math. Comput. 218, 6188-6199 (2012)
  35. Smart, DR: Fixed Point Theorems. Cambridge University Press, Cambridge (1980)
  36. Zhu, Q, Huang, C, Yang, X: Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays. Nonlinear Anal. Hybrid Syst. 5, 52-77 (2011)
  37. Sayli, M, Yilmaz, E: Global robust asymptotic stability of variable-time impulsive BAM neural networks. Neural Netw. 60, 67-73 (2014)
  38. Li, H, Jiang, H, Hu, C: Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays. Neural Netw. 75, 97-109 (2016)

Copyright

© The Author(s) 2017