
Theory and Modern Applications

Synchronization for complex dynamical networks with mixed mode-dependent time delays

Abstract

In this paper, the problem of synchronization control is investigated for complex dynamical networks with discrete interval and distributed time-varying delays. The main task is to design a proper pinning controller under which the error system of the complex dynamical networks is asymptotically stable. Based on Lyapunov stability theory and linear matrix inequalities, a suitable Lyapunov-Krasovskii functional is constructed in terms of the Kronecker product, and a novel synchronization criterion is then obtained. Finally, a numerical example is given to illustrate the effectiveness of the proposed methods.

1 Introduction

Over the past two decades, extensive efforts have been devoted to research on complex dynamical networks due to their theoretical importance and practical applications in various fields, such as the Internet and World Wide Web, food webs, genetic networks, and neural networks [1–6]. Many of these networks exhibit complexity both in their overall topological properties and in the dynamical properties of the network nodes and coupled units. It is therefore not surprising that one witnesses a surge of research interest in the analysis of the dynamical behavior of complex dynamical networks.

An important concern in the research of complex dynamical networks is the synchronization problem, whose purpose is to design a proper controller to achieve synchronization of the considered complex dynamical networks. Many significant advances on this issue have been reported in the literature; see [7–22] and the references therein. Among them, non-fragile approaches were studied in [7–10]: the authors of [7] investigated controlled synchronization for complex dynamical networks with randomly delayed information exchanges, where stochastic variables are utilized to model randomly occurring phenomena, and [9] investigated non-fragile synchronization for bidirectional associative memory neural networks with time-varying delays, deriving sufficient conditions that guarantee synchronization based on a master-slave approach. Discrete-time complex dynamical networks with interval time-varying delays were studied in [11], where randomly occurring perturbations, assumed to follow a binomial sequence, were considered. In [12], complex dynamical networks were synchronized by using the sampled-data control method. Pinning sampled-data control for synchronization of complex networks with probabilistic time-varying delays was studied in [14] by applying a quadratic convex approach. Impulsive control [15–17] for synchronization of complex networks has also been investigated; in [15], impulsive control is applied to the synchronization of complex dynamical networks with coupling delays, where two types of time-varying coupling delays are considered: delays without any restriction on the delay derivatives, and delays whose derivatives are strictly less than one. Pinning control [18–22] is one of the methods most commonly used to achieve synchronization between the nodes.
In [18], pinning synchronization of nonlinearly coupled complex networks with time-varying delays was investigated by using M-matrix strategies. All of these works achieved synchronization through pinning control, by different methods. For instance, [19] investigated pinning adaptive hybrid synchronization of two general complex dynamical networks with mixed coupling, and the authors of [22] studied intermittent pinning control for cluster synchronization of delayed heterogeneous dynamical networks, where the complex network is a system with non-identical delayed dynamical nodes. Compared with the control methods mentioned above, pinning control can effectively decrease the number of controlled nodes needed for the system to achieve synchronization. Since pinning control reduces the number of controlled nodes, it is very valuable to study the synchronization of complex dynamical networks via pinning control.

Time delays are common in various systems; they arise whenever signal transmission or processing takes time. It is known that time delays can destabilize the behavior of networks, so a system with time delays may be complicated and interesting. Since time delays can degrade the synchronization performance or even destroy synchronization altogether, they should be taken into account in the synchronization of complex networks. Furthermore, we consider mixed delays, including discrete delays and distributed delays, since mixed time delays reflect practical applications more closely. Many papers have addressed delays in complex networks using different methods; see, for instance, [23–33]. In [25], the authors considered the synchronization and exponential estimates of complex networks with mixed time-varying coupling delays, presenting a generalized complex network model involving both neutral and retarded delays; by utilizing the free-weighting-matrix technique, a less conservative delay-dependent synchronization criterion is derived. In [27], delay-distribution-dependent synchronization of T-S fuzzy stochastic complex networks with mixed time delays was studied, where the mixed time delays are composed of discrete and distributed delays, and the discrete time delays are assumed to be random with an a priori known probability distribution. A stochastic nonlinear time-delay system was studied in [30], with the purpose of investigating dynamic output feedback tracking control for this time-delay system with prescribed performance. The problem of synchronization of T-S fuzzy complex dynamical networks with time delays, impulsive delays, and stochastic effects was studied in [33], based on T-S fuzzy methods and the LMI approach. On the basis of the above discussion, we can see that time delays are highly significant for any system.
Therefore, it is meaningful to study the synchronization of complex dynamical networks with mixed mode-dependent time delays.

Inspired by the above discussion, in this paper we deal with the synchronization of complex dynamical networks with mixed mode-dependent time delays. The main contributions of this paper are summarized as follows: (1) a more general system is introduced to address practical problems; (2) sufficient conditions are obtained to ensure synchronization of the complex dynamical networks; (3) a pinning controller is designed to ensure this synchronization; (4) the resulting control performance can be seen from Figures 1-6; (5) the simulation results demonstrate the feasibility of the method.

The rest of the paper is organized as follows. In Section 2, the complex dynamical network model is introduced, and several assumptions, lemmas, and definitions are presented. The main results on the synchronization of complex dynamical networks are given in Section 3. In Section 4, we provide a numerical example to illustrate the effectiveness of the obtained results. Conclusions are finally drawn in Section 5.

Notation: The notation in this paper is standard. \({\mathbb{R}^{n}}\) denotes the n-dimensional Euclidean space, and \({\mathbb{R}^{m \times n}}\) represents the set of all \(m \times n\) real matrices. For real symmetric matrices X and Y, the notation \(X > Y\) (respectively, \(X \geqslant Y\)) means that \(X - Y\) is positive definite (respectively, positive semidefinite). The superscript ‘T’ denotes matrix transposition. Moreover, in symmetric block matrices, ‘∗’ is used as an ellipsis for the terms that are induced by symmetry, and \(\operatorname{diag} \{ \cdots \}\) denotes a block-diagonal matrix. Let \(\tau > 0\) and let \(C ( { [ { - \tau,0} ],{\mathbb{R}^{n}}} )\) denote the family of continuous functions ϕ from \([ { - \tau,0} ]\) to \({\mathbb{R}^{n}}\). \(( {\Omega,F, \{ {{F_{t}}} \},P} )\) is a complete probability space with a filtration \({ \{ {{F_{t}}} \}_{t \geqslant0}}\) satisfying the usual conditions. \(\mathcal{E} \{ \cdot \}\) represents the mathematical expectation, and \({\lambda_{\max}}( \cdot)\) denotes the largest eigenvalue of a matrix. If not explicitly stated, all matrices are assumed to have compatible dimensions.

2 System description and preliminary lemma

Let \(\{ r(t),t \geqslant0\} \) be a right-continuous Markovian chain on the probability space \((\Omega,F, {\{ {F_{t}}\} _{t \geqslant0}},P)\) taking values in the finite state space \(S = \{ 1,2, \ldots, m\} \) with generator \(\Pi = {\{ {\pi_{ij}}\} _{m \times m}}\) (\(i,j \in S\)) given by

$$ \Pr({r_{t + \vartriangle t}} = j|{r_{t}} = i) = \left \{ \textstyle\begin{array}{l@{\quad}l} {\pi_{ij}}\Delta t + o(\Delta t), & i \ne j, \\ 1 + {\pi_{ij}}\Delta t + o(\Delta t),& i = j, \end{array}\displaystyle \right . $$
(1)

where \(\Delta t > 0\), \(\lim_{\Delta t \to 0} (o(\Delta t)/\Delta t) = 0\), and \({\pi_{ij}}\) is the transition rate from mode i to mode j, satisfying \({\pi_{ij}} \geqslant0\) for \(i \ne j\) and \({\pi_{ii}} = - \sum_{j = 1,j \ne i}^{m} {{\pi _{ij}}}\) (\(i,j \in S\)).
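The generator in (1) can be simulated directly. The following sketch, using an illustrative two-mode generator (the rates 2 and 3 are hypothetical, not from the paper), draws exponential holding times and checks the long-run mode occupancy against the stationary distribution:

```python
import numpy as np

def simulate_ctmc(Pi, T, r0=0, seed=0):
    """Simulate a right-continuous Markov chain r(t) with generator Pi on [0, T].

    Pi[i, j] (i != j) is the transition rate from mode i to mode j, and
    Pi[i, i] = -sum_{j != i} Pi[i, j], matching (1).
    Returns arrays of jump times and the modes entered at those times.
    """
    rng = np.random.default_rng(seed)
    t, r = 0.0, r0
    times, modes = [0.0], [r0]
    while t < T:
        rate = -Pi[r, r]                       # total exit rate of mode r
        t += rng.exponential(1.0 / rate)       # holding time ~ Exp(rate)
        probs = np.clip(Pi[r], 0.0, None)      # off-diagonal rates only
        probs[r] = 0.0
        r = int(rng.choice(len(probs), p=probs / probs.sum()))
        times.append(t)
        modes.append(r)
    return np.array(times), np.array(modes)

# Illustrative two-mode generator: rate 2 from mode 0 to 1, rate 3 back.
Pi = np.array([[-2.0, 2.0],
               [3.0, -3.0]])
times, modes = simulate_ctmc(Pi, T=2000.0)
dwell = np.diff(times)
frac0 = dwell[modes[:-1] == 0].sum() / times[-1]
# Long-run fraction of time in mode 0 approaches 3 / (2 + 3) = 0.6.
```

The empirical occupancy of mode 0 converges to the stationary value \(3/(2+3)=0.6\) as the horizon grows.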

Consider the following Markovian jumping complex network with mixed mode-dependent time-varying delays, consisting of N nodes in which each node is an n-dimensional dynamical subsystem:

$$\begin{aligned} {{\dot{x}}_{k}}(t) =& - D\bigl(r(t) \bigr){x_{k}}(t) + A\bigl(r(t)\bigr)f\bigl({x_{k}}(t)\bigr) + B\bigl(r(t)\bigr)g\bigl({x_{k}}\bigl(t - {\tau_{1}}(t) \bigr)\bigr) \\ &{}+ C\bigl(r(t)\bigr) \int_{t - {\tau_{2}}(t)}^{t} {h\bigl({x_{k}}(s)\bigr)} \,ds + c\sum_{j = 1}^{N} {{G_{kj}} \Gamma\bigl(r(t)\bigr){x_{j}}\bigl(t - {\tau_{1}}(t)\bigr)}, \end{aligned}$$
(2)

where \({x_{k}}(t) = {({x_{k1}}(t),{x_{k2}}(t), \ldots ,{x_{kn}}(t))^{T}} \in{\mathbb{R}^{n}}\) denotes the state vector of the kth node; \(D(r(t)) = \operatorname{diag}\{ {d_{1,r(t)}},{d_{2,r(t)}}, \ldots,{d_{n,r(t)}}\} > 0\); \(A(r(t))\), \(B(r(t))\), and \(C(r(t)) \in {\mathbb{R}^{n \times n}}\) are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix; \(\Gamma (r(t))\) is an inner-coupling matrix; \(c > 0\) represents the coupling strength; the bounded functions \({\tau_{1}}(t)\) and \({\tau_{2}}(t)\) represent the unknown discrete time delay and the distributed delay of the system, with \(0 \leqslant{\tau_{1}}(t) \leqslant{\tau _{1}}\), \(0 \leqslant{\tau_{2}}(t) \leqslant{\tau_{2}}\), \(0 \leqslant {\dot{\tau}_{1}}(t) \leqslant{d_{1}}\), and \(0 \leqslant{\dot{\tau}_{2}}(t) \leqslant{d_{2}}\), where \({\tau_{1}}\), \({\tau_{2}}\), \({d_{1}}\), and \({d_{2}}\) are positive constants. \(G = {({G_{kj}})_{N \times N}}\) denotes the coupling configuration matrix: if there is a connection between node k and node j, then \({G_{kj}} = {G_{jk}} = 1\) (\(k \ne j\)); otherwise \({G_{kj}} = {G_{jk}} = 0\) (\(k \ne j\)). The row sums of G are zero, i.e. \(\sum_{j \ne k}^{N} {{G_{kj}}} = - {G_{kk}}\), \(k = 1,2, \ldots,N\).

Correspondingly, the response complex network with control inputs \({u_{k}}(t) \in{\mathbb{R}^{n}}\) (\(k = 1,2, \ldots,N\)) can be written as

$$\begin{aligned} {{\dot{y}}_{k}}(t) =& - D\bigl(r(t) \bigr){y_{k}}(t) + A\bigl(r(t)\bigr)f\bigl({y_{k}}(t)\bigr) + B\bigl(r(t)\bigr)g\bigl({y_{k}}\bigl(t - {\tau_{1}}(t) \bigr)\bigr) \\ &{}+ C\bigl(r(t)\bigr) \int_{t - {\tau_{2}}(t)}^{t} {h\bigl({y_{k}}(s)\bigr) \,ds} + c\sum_{j = 1}^{N} {{G_{kj}} \Gamma\bigl(r(t)\bigr){y_{j}}\bigl(t - {\tau_{1}}(t)\bigr)} + {u_{k}}(t) , \end{aligned}$$
(3)

where the pinning controller \({u_{k}}(t)\) is given by

$$ {u_{k}} = \left \{ \textstyle\begin{array}{l@{\quad}l} - {c_{2}}{\sigma_{k}}{\Gamma_{3}}({y_{k}} - {x_{k}}), & k = 1,2, \ldots,l, \\ 0, & k = l + 1,l + 2, \ldots,N. \end{array}\displaystyle \right . $$
(4)
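A minimal sketch of the pinning law (4): only the first l of the N nodes are actuated, each with gain \(-c_2\sigma_k\Gamma_3\) acting on its synchronization error. The numerical values of \(c_2\), \(\sigma_k\), and \(\Gamma_3\) below are illustrative, not taken from the paper.

```python
import numpy as np

def pinning_control(y, x, l, c2, sigma, Gamma3):
    """Pinning control law (4): only the first l of N nodes are actuated.

    y, x   : (N, n) arrays of response and drive node states.
    sigma  : length-N array of pinning gains sigma_k (illustrative values).
    Returns u of shape (N, n) with u_k = -c2 * sigma_k * Gamma3 @ (y_k - x_k)
    for k < l and u_k = 0 for k >= l.
    """
    N, n = y.shape
    u = np.zeros((N, n))
    e = y - x                                  # node-wise synchronization error
    for k in range(l):                         # pinned nodes only
        u[k] = -c2 * sigma[k] * (Gamma3 @ e[k])
    return u

# Illustrative example: N = 4 nodes, n = 2 states, pin the first l = 2 nodes.
rng = np.random.default_rng(1)
x, y = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))
u = pinning_control(y, x, l=2, c2=1.5, sigma=np.ones(4), Gamma3=np.eye(2))
# Unpinned nodes receive no input: u[2] and u[3] are zero.
```

The point of the law is visible in the output: the control effort is concentrated on the l pinned nodes while the remaining nodes evolve freely.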

For simplicity, we denote \(D(r(t)) = {D_{i}}\), \(A(r(t)) = {A_{i}}\), \(B(r(t)) = {B_{i}}\), \(C(r(t)) = {C_{i}}\), \(\Gamma(r(t)) = {\Gamma _{i}}\). Letting the error be \({e_{k}}(t) = {y_{k}}(t) - {x_{k}}(t)\), we arrive at the error dynamical network:

$$\begin{aligned} {{\dot{e}}_{k}}(t) =& - {D_{i}} {e_{k}}(t) + {A_{i}}f\bigl({e_{k}}(t)\bigr) + {B_{i}}g\bigl({e_{k}}\bigl(t - {\tau_{1}}(t) \bigr)\bigr) + {C_{i}} \int_{t - {\tau_{2}}(t)}^{t} {h\bigl({e_{k}}(s)\bigr)} \,ds \\ &{}+ c\sum_{j = 1}^{N} {{G_{kj}} { \Gamma _{i}} {e_{j}}\bigl(t - {\tau_{1}}(t)\bigr)} - {c_{2}} {\sigma_{k}} {\Gamma_{3}} {e_{k}}(t), \end{aligned}$$
(5)

where

$$\begin{aligned}& f\bigl({e_{k}}(t)\bigr) = f\bigl({y_{k}}(t)\bigr) - f \bigl({x_{k}}(t)\bigr), \\& g\bigl({e_{k}}\bigl(t - { \tau_{1}}(t)\bigr)\bigr) = g\bigl({y_{k}}\bigl(t - { \tau_{1}}(t)\bigr)\bigr) - g\bigl({x_{k}}\bigl(t - { \tau_{1}}(t)\bigr)\bigr), \\& h\bigl({e_{k}}(t)\bigr) = h\bigl({y_{k}}(t)\bigr) - h \bigl({x_{k}}(t)\bigr). \end{aligned}$$

With the matrix Kronecker product, we can rewrite the error network system (5) in the following compact form:

$$\begin{aligned} \dot{e}(t) =& - \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma_{k}} {\Gamma _{3}})\bigr)e(t) + ({I_{N}} \otimes{A_{i}})f\bigl(e(t)\bigr) + ({I_{N}} \otimes{B_{i}})g\bigl(e\bigl(t - { \tau_{1}}(t)\bigr)\bigr) \\ &{}+ ({I_{N}} \otimes{C_{i}}) \int_{t - {\tau_{2}}(t)}^{t} {h\bigl(e(s)\bigr)} \,ds + c(G \otimes{ \Gamma_{i}})e\bigl(t - {\tau_{1}}(t)\bigr), \end{aligned}$$
(6)

where

$$\begin{aligned}& e(t) = { \bigl[ {e_{1}^{T}(t),e_{2}^{T}(t), \ldots,e_{N}^{T}(t)} \bigr]^{T}} , \\& e\bigl(t - {\tau_{1}}(t)\bigr) = { \bigl[ {e_{1}^{T}\bigl(t - {\tau_{1}}(t)\bigr),e_{2}^{T}\bigl(t - {\tau_{1}}(t) \bigr), \ldots,e_{N}^{T}\bigl(t - {\tau_{1}}(t)\bigr)} \bigr]^{T}} , \\& f\bigl(e(t)\bigr) = { \bigl[ {f^{T}\bigl({e_{1}}(t)\bigr),f^{T} \bigl({e_{2}}(t)\bigr), \ldots,f^{T}\bigl({e_{N}}(t)\bigr)} \bigr]^{T}} , \\& h\bigl(e(t)\bigr) = { \bigl[ {h^{T}\bigl({e_{1}}(t)\bigr),h^{T} \bigl({e_{2}}(t)\bigr), \ldots,h^{T}\bigl({e_{N}}(t)\bigr)} \bigr]^{T}} , \\& g\bigl(e\bigl(t - {\tau_{1}}(t)\bigr)\bigr) = { \bigl[ {g^{T} \bigl({e_{1}}\bigl(t - {\tau _{1}}(t)\bigr)\bigr),g^{T} \bigl({e_{2}}\bigl(t - {\tau_{1}}(t)\bigr)\bigr), \ldots,g^{T} \bigl({e_{N}}\bigl(t - {\tau _{1}}(t)\bigr)\bigr)} \bigr]^{T}} . \end{aligned}$$
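As a numerical sanity check of the Kronecker rewriting, the node-wise coupling term in (5) coincides with the compact form \(c(G \otimes \Gamma_i)e\) in (6). The small values of N, n, c, G, and Γ below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, c = 3, 2, 0.8

# Illustrative coupling matrix G with zero row sums, and inner coupling Gamma.
G = np.array([[-2.0, 1.0, 1.0],
              [1.0, -1.0, 0.0],
              [1.0, 0.0, -1.0]])
Gamma = rng.normal(size=(n, n))
e_nodes = rng.normal(size=(N, n))        # e_k(t - tau_1(t)) for each node k
e_stack = e_nodes.reshape(N * n)         # e = [e_1^T, ..., e_N^T]^T

# Node-wise coupling from (5): c * sum_j G[k, j] * Gamma @ e_j, stacked over k.
nodewise = np.concatenate(
    [c * sum(G[k, j] * (Gamma @ e_nodes[j]) for j in range(N))
     for k in range(N)])

# Compact Kronecker form used in (6): c * (G kron Gamma) @ e.
compact = c * np.kron(G, Gamma) @ e_stack
assert np.allclose(nodewise, compact)
```

The stacking convention matters: `np.kron(G, Gamma)` matches the ordering in which node vectors are concatenated, which is exactly the ordering of \(e(t)\) defined above.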

The purpose of this paper is to design a series of pinning controllers (4) to ensure the asymptotic synchronization of the complex dynamical network (2). Before proceeding to the main results, we present the following definitions, lemmas, and assumptions.

Definition 2.1

[34]

Complex dynamical networks (2) are said to be asymptotically synchronized by pinning control if \(\lim_{t \to\infty} \Vert {{x_{k}}(t) - {y_{k}}(t)} \Vert = 0\), \(k = 1,2, \ldots, N\).

Lemma 2.1

(Jensen’s inequality)

For a positive definite matrix M and scalars \({h_{U}} > {h_{L}} > 0\) such that the following integrals are well defined, we have:

  1. (i)

\(- ({h_{U}} - {h_{L}})\int_{t - {h_{U}}}^{t - {h_{L}}} {{x^{T}}(s)} Mx(s)\,ds \le - (\int_{t - {h_{U}}}^{t - {h_{L}}} {{x^{T}}(s)\,ds} )M(\int_{t - {h_{U}}}^{t - {h_{L}}} {x(s)\,ds} )\),

  2. (ii)

    \(- (\frac{{h_{U}^{2} - h_{L}^{2}}}{2})\int_{t - {h_{U}}}^{t - {h_{L}}} {\int_{s}^{t} {{x^{T}}(u)} Mx(u)} \,du\,ds \le - (\int_{t - {h_{U}}}^{t - {h_{L}}} {\int_{s}^{t} {{x^{T}}(u)} } \,du\,ds)M(\int_{t - {h_{U}}}^{t - {h_{L}}} {\int_{s}^{t} {x(u)} } \,du\,ds)\).
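A minimal scalar sanity check of item (i), with the illustrative choices \(M = 1\) and \(x(s) = \sin s\) (a simple Riemann sum stands in for the integrals):

```python
import numpy as np

# Scalar check of Jensen's inequality (i) with M = 1 and x(s) = sin(s):
# -(hU - hL) * \int x(s)^2 ds  <=  -(\int x(s) ds)^2 over [t - hU, t - hL].
t, hU, hL, M = 5.0, 2.0, 0.5, 1.0
s = np.linspace(t - hU, t - hL, 100001)
x = np.sin(s)
ds = s[1] - s[0]

int_xMx = np.sum(x * M * x) * ds       # \int x^T M x ds (Riemann sum)
int_x = np.sum(x) * ds                 # \int x ds
lhs = -(hU - hL) * int_xMx
rhs = -int_x * M * int_x
assert lhs <= rhs + 1e-6               # Jensen bound holds with slack
```

The inequality is just the integral Cauchy-Schwarz bound in disguise, which is why the check passes with a comfortable margin for any integrable signal.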

Lemma 2.2

[35]

For any constant matrix \(R \in {\mathbb{R}^{n \times n}}\) with \(R = {R^{T}} > 0\), scalar \(\gamma > 0\), and vector function \(\phi:[0,\gamma] \to{\mathbb{R}^{m}}\) such that the integrals concerned are well defined, the following inequality holds:

$$- \gamma \int_{t - \gamma}^{t} {{{\dot{\phi}}^{T}}(s)} R\dot{\phi}(s)\,ds \leqslant{ \left ( { \textstyle\begin{array}{@{}c@{}} {\phi(t)} \\ {\phi(t - \gamma)} \end{array}\displaystyle } \right )^{T}}\left ( { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - R} & R \\ * & { - R} \end{array}\displaystyle } \right ) \left ( { \textstyle\begin{array}{@{}c@{}} {\phi(t)} \\ {\phi(t - \gamma)} \end{array}\displaystyle } \right ). $$
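Lemma 2.2 can also be checked numerically in the scalar case. Below, \(R = 1\) and \(\phi(s) = \sin s\) are illustrative choices; the right-hand quadratic form then reduces to \(-(\phi(t)-\phi(t-\gamma))^2\):

```python
import numpy as np

# Scalar check of Lemma 2.2 with R = 1 and phi(s) = sin(s):
# -gamma * \int phi'(s)^2 ds <= -(phi(t) - phi(t - gamma))^2.
t, gamma = 3.0, 1.2
s = np.linspace(t - gamma, t, 100001)
dphi = np.cos(s)                       # derivative of sin
ds = s[1] - s[0]

lhs = -gamma * np.sum(dphi**2) * ds    # Riemann sum for the integral
v = np.array([np.sin(t), np.sin(t - gamma)])
Rm = np.array([[-1.0, 1.0],
               [1.0, -1.0]])           # the [-R R; * -R] block with R = 1
rhs = v @ Rm @ v                       # equals -(phi(t) - phi(t - gamma))^2
assert lhs <= rhs + 1e-6
```
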

Lemma 2.3

[36]

Let \(f( \cdot)\) be a non-negative function defined on \([ {0, + \infty} )\). If \(f( \cdot )\) is Lebesgue integrable and uniformly continuous on \([ {0, + \infty} )\), then \(\lim_{t \to\infty} f(t) = 0\).

Lemma 2.4

[37]

For a given symmetric positive definite matrix \(R > 0\) and any differentiable function \(\omega: [ {a,b} ] \to{\mathbb{R}^{n}}\), the following inequality holds:

$$\int_{a}^{b} {{{\dot{\omega}}^{T}}(u)} R\dot{\omega}(u)\,du \geqslant \frac{{{\eta^{T}}(a,b)R\eta(a,b)}}{{b - a}} + \frac{{\nu _{0}^{T}(a,b)R{\nu_{0}}(a,b)}}{{b - a}}, $$

where \(\eta(a,b) = \omega(b) - \omega(a)\), \({\nu_{0}}(a,b) = \frac{{\omega(b) + \omega(a)}}{2} - \frac{1}{{b - a}}\int_{a}^{b} {\omega(u)} \,du\).

Assumption 2.1

The nonlinear function \(f:{\mathbb{R}^{n}} \to{\mathbb{R}^{n}}\) satisfies

$${ \bigl[ {f(x) - f(y) - U(x - y)} \bigr]^{T}} \bigl[ {f(x) - f(y) - V(x - y)} \bigr] \leqslant0, \quad \forall x,y \in {\mathbb{R}^{n}}, $$

where U and \(V \in{\mathbb{R}^{n \times n}}\) are constant matrices with \(V - U > 0\). For presentation simplicity and without loss of generality, it is assumed that \(f(0) = 0\).
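A scalar illustration of the sector condition: \(f = \tanh\) lies in the sector \([U, V]\) with \(U = 0\) and \(V = 1\) (an illustrative choice; any slope-restricted nonlinearity works), so the quadratic form in Assumption 2.1 is non-positive for all pairs \((x, y)\):

```python
import numpy as np

# Scalar check of Assumption 2.1 for f = tanh with sector bounds U = 0, V = 1.
rng = np.random.default_rng(2)
U, V = 0.0, 1.0
vals = []
for _ in range(1000):
    x, y = rng.normal(size=2) * 3.0
    df = np.tanh(x) - np.tanh(y)
    # [f(x) - f(y) - U(x - y)] * [f(x) - f(y) - V(x - y)] <= 0
    vals.append((df - U * (x - y)) * (df - V * (x - y)))
worst = max(vals)
assert worst <= 1e-12      # sector-bound inequality holds for every sample
```

The check passes because the slope of tanh always lies in \((0, 1]\); by the mean value theorem the two bracketed factors then have opposite signs.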

Remark 1

In Assumption 2.1, the sector-bounded description of the nonlinear term is quite general and includes the common Lipschitz and norm-bounded conditions as special cases. It is possible to reduce the conservatism of the main results caused by quantifying nonlinear functions via the well-known convex optimization technique. Moreover, Lemma 2.4 will play a key role in deriving a delay-dependent condition that is less conservative than the one obtained with the general Jensen inequality in [38].

3 Main results

Theorem 3.1

The error dynamical network (6) is asymptotically stable for mixed time-varying delays \({\tau _{1}}(t)\) and \({\tau_{2}}(t)\) if there exist constant matrices \({P_{i}} > 0\), \({R_{kl}} > 0\), \(N{_{kl}} > 0\) (\(l = 1,2,3,4\)), \({W_{kb}} > 0\), \({M_{kb}} > 0\), and \({K_{kb}}\) (\(b = 1,2\)) with appropriate dimensions such that the following LMIs hold for all \(k \in S\):

$$ \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} {{\Theta_{1}}} & {{\Theta_{2}}} \\ * & {{\Theta_{3}}} \end{array}\displaystyle } \right ] < 0, $$
(7)

where

$$\begin{aligned}& {\Theta_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {\Theta_{11}^{1}} & {\Theta_{12}^{1}} & {\Theta_{13}^{1}} & {\Theta _{14}^{1}} & 0 & {\Theta_{16}^{1}} & 0 & 0 & 0 & 0 \\ * & {\Theta_{22}^{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & {\Theta_{33}^{1}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\Theta_{44}^{1}} & {\Theta_{45}^{1}} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & { - (1 - {d_{2}}){R_{k4}}} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & {\Theta_{66}^{1}} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & {{R_{k2}} - {R_{k1}}} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {{R_{k4}} - {R_{k3}}} & 0 & 0 \\ * & * & * & * & * & * & * & * & {\Theta_{99}^{1}} & 0 \\ * & * & * & * & * & * & * & * & * & { - (1 - {d_{2}}){R_{k4}}} \end{array}\displaystyle } \right ], \\& \Theta_{11}^{1} = {P_{i}}\bigl( - \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma _{k}} {\Gamma_{3}})\bigr)\bigr) + {\bigl( - \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma _{k}} {\Gamma_{3}})\bigr)\bigr)^{T}} {P_{i}} + {R_{k1}} + {R_{k3}} \\& \hphantom{\Theta_{11}^{1} ={}}{}+ \sum _{j \in S} {{\pi_{ij}} {P_{j}}}+ \tau_{1}^{2}{M_{k1}} + \tau_{2}^{2}{M_{k2}} - {M_{k1}} - {M_{k2}} - \frac{{{N_{k1}}}}{{{\tau_{1}}}} - \frac{{{N_{k2}}}}{{{\tau _{2}}}} \\& \hphantom{\Theta_{11}^{1} ={}}{}- \frac{3}{4}{N_{k3}} - \frac{3}{4}{N_{k4}} - 2{K_{k1}} - 2{K_{k2}} - {\lambda_{1}} {{\bar{U}}_{1}} - {\lambda_{2}} {{\bar{U}}_{2}} \\& \hphantom{\Theta_{11}^{1} ={}}{}- \frac{{\tau_{1}^{2}}}{2}{\bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma_{k}} { \Gamma_{3}})\bigr)^{T}} {K_{k1}} \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} { \sigma_{k}} {\Gamma_{3}})\bigr), \\& \Theta_{12}^{1} = {M_{k1}} + \frac{{{N_{k1}}}}{{{\tau_{1}}}} + \frac{3}{4}{N_{k3}}, \\& \Theta_{13}^{1} = {M_{k2}} + \frac{{{N_{k2}}}}{{{\tau_{2}}}} + \frac{3}{4}{N_{k4}}, \\& \Theta_{14}^{1} = c{P_{i}}(G \otimes{ \Gamma_{i}}) + \frac{{c\tau _{1}^{2}}}{2}{\bigl({I_{N}} 
\otimes({D_{i}} + {c_{2}} {\sigma_{k}} {\Gamma _{2}})\bigr)^{T}} {K_{k1}}(G \otimes{ \Gamma_{i}}), \\& \Theta_{16}^{1} = {P_{i}}({I_{N}} \otimes{A_{i}}) - {\lambda_{1}} {\bar{V}_{1}} + \frac{{\tau_{1}^{2}}}{2}{\bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma _{k}} {\Gamma_{3}}) \bigr)^{T}} {K_{k1}}({I_{N}} \otimes{A_{i}}), \\& \Theta_{22}^{1} = {R_{k2}} - {R_{k1}} - {M_{k1}} - \frac{{{N_{k1}}}}{{{\tau_{1}}}} - \frac{3}{4}{N_{k3}}, \\& \Theta_{33}^{1} = {R_{k4}} - {R_{k3}} - {M_{k2}} - \frac{{{N_{k2}}}}{{{\tau_{2}}}} - \frac{3}{4}{N_{k4}}, \\& \Theta_{44}^{1} = - (1 - {d_{1}}){R_{k2}} - \frac{{{c^{2}}\tau _{1}^{2}}}{2}\bigl({G^{T}} \otimes\Gamma_{i}^{T} \bigr){K_{k1}}(G \otimes{\Gamma _{i}}), \\& \Theta_{45}^{1} = - \frac{{c\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes A_{i}^{T}\bigr){K_{k1}}(G \otimes{ \Gamma_{i}}), \\& \Theta_{66}^{1} = {R_{k1}} + {R_{k3}} - {\lambda_{1}}I - \frac{{\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes A_{i}^{T}\bigr){K_{k1}}({I_{N}} \otimes {A_{i}}), \\& \Theta_{99}^{1} = - (1 - {d_{1}}){R_{k2}}, \\& {\Theta_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} { - {\lambda_{2}}{{\bar{V}}_{2}}} & 0 & 0 & {\Theta_{14}^{2}} & 0 & {\Theta _{16}^{2}} & { - \frac{{{N_{k3}}}}{{2{\tau_{1}}}} + \frac{2}{{{\tau _{1}}}}{K_{k1}}} & { - \frac{{{N_{k4}}}}{{2{\tau_{2}}}} + \frac{2}{{{\tau _{2}}}}{K_{k2}}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & {\frac{{{N_{k3}}}}{{2{\tau_{1}}}}} & {\frac {{{N_{k4}}}}{{2{\tau_{2}}}}} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & {\Theta_{44}^{2}} & 0 & {\Theta_{46}^{2}} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & {\Theta_{64}^{2}} & 0 & {\Theta_{66}^{2}} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array}\displaystyle } \right ], \\& 
\Theta_{14}^{2} = {P_{i}}({I_{N}} \otimes{B_{i}}) + \frac{{\tau _{1}^{2}}}{2}{\bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma_{k}} {\Gamma _{3}})\bigr)^{T}} {K_{k1}}({I_{N}} \otimes{B_{i}}), \\& \Theta_{16}^{2} = {P_{i}}({I_{N}} \otimes{C_{i}}) + \frac{{\tau _{1}^{2}}}{2}{\bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma_{k}} {\Gamma _{3}})\bigr)^{T}} {K_{k1}}({I_{N}} \otimes{C_{i}}), \\& \Theta_{44}^{2} = - \frac{{c\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes B_{i}^{T}\bigr){K_{k1}}(G \otimes{ \Gamma_{i}}), \\& \Theta_{46}^{2} = - \frac{{c\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes C_{i}^{T}\bigr){K_{k1}}(G \otimes{ \Gamma_{i}}), \\& \Theta_{64}^{2} = - \frac{{\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes A_{i}^{T}\bigr){K_{k1}}({I_{N}} \otimes{B_{i}}), \\& \Theta_{66}^{2} = - \frac{{\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes A_{i}^{T}\bigr){K_{k1}}({I_{N}} \otimes{C_{i}}), \\& {\Theta_{3}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {\Theta_{11}^{3}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & {{R_{k2}} - {R_{k1}}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & {{R_{k4}} - {R_{k3}}} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & {\Theta_{44}^{3}} & 0 & {\Theta_{46}^{3}} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & {\Theta_{55}^{3}} & 0 & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & {\Theta_{66}^{3}} & 0 & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & {\Theta_{77}^{3}} & 0 & 0 & 0 & 0 \\ * & * & * & * & * & * & * & {\Theta_{88}^{3}} & 0 & 0 & 0 \\ * & * & * & * & * & * & * & * & {\Theta_{99}^{3}} & 0 & 0 \\ * & * & * & * & * & * & * & * & * & { - {W_{k1}}} & 0 \\ * & * & * & * & * & * & * & * & * & * & { - {W_{k2}}} \end{array}\displaystyle } \right ], \\& \Theta_{11}^{3} = {R_{k1}} + {R_{k3}} - {\lambda_{2}}I, \\& \Theta_{44}^{3} = - (1 - {d_{1}}){R_{k2}} - \frac{{\tau _{1}^{2}}}{2}\bigl({I_{N}} \otimes B_{i}^{T} \bigr){K_{k1}}({I_{N}} \otimes{B_{i}}), \\& \Theta_{46}^{3} = - 
\frac{{\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes B_{i}^{T}\bigr){K_{k1}}({I_{N}} \otimes{C_{i}}), \\& \Theta_{55}^{3} = - (1 - {d_{2}}){R_{k4}}, \\& \Theta_{66}^{3} = - \frac{{\tau_{1}^{2}}}{2}\bigl({I_{N}} \otimes C_{i}^{T}\bigr){K_{k1}}({I_{N}} \otimes{C_{i}}), \\& \Theta_{77}^{3} = - {M_{k1}} + \frac{1}{{\tau_{1}^{2}}}{N_{k3}} - \frac{2}{{\tau_{1}^{2}}}{K_{k1}}, \\& \Theta_{88}^{3} = - {M_{k2}} + \frac{1}{{\tau_{2}^{2}}}{N_{k4}} - \frac{2}{{\tau_{2}^{2}}}{K_{k2}}, \\& \Theta_{99}^{3} = {W_{k1}} + {W_{k2}} + \tau_{1}^{2}{M_{k1}} + \tau _{2}^{2}{M_{k2}} + {\tau_{1}} {N_{k1}} + {\tau_{2}} {N_{k2}} + \tau _{1}^{2}{N_{k3}} + \tau_{2}^{2}{N_{k4}} - \frac{{\tau_{2}^{2}}}{2}{K_{k2}}. \end{aligned}$$
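Criterion (7) is a feasibility test over the unknown matrices. As a toy analogue (not the actual Θ blocks above, which would be assembled and passed to an SDP solver), the following sketch verifies the simplest Lyapunov LMI \(A^{T}P + PA < 0\), \(P > 0\) for an illustrative Hurwitz matrix by solving the corresponding Lyapunov equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy analogue of checking a criterion like (7): for a Hurwitz matrix A, the
# Lyapunov LMI A^T P + P A < 0 with P > 0 is feasible.  We solve the
# equality A^T P + P A = -I and verify definiteness numerically; checking
# (7) itself would require assembling Theta and calling an SDP solver.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])                     # illustrative Hurwitz matrix
P = solve_continuous_lyapunov(A.T, -np.eye(2))  # solves A^T P + P A = -I

assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P > 0
S = A.T @ P + P @ A
assert np.all(np.linalg.eigvalsh((S + S.T) / 2) < 0)   # A^T P + P A < 0
```

In practice, LMIs of the size of (7) are handled by dedicated semidefinite programming toolboxes rather than by closed-form Lyapunov solves; this sketch only illustrates what "the LMI holds" means numerically.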

Proof

Define the Lyapunov functional candidate as follows:

$$\begin{aligned}& {V_{1}}\bigl(e(t),i,t\bigr) = {e^{T}}(t){P_{i}}e(t), \\& {V_{2}}\bigl(e(t),i,t\bigr) = \int_{t - {\tau_{1}}}^{t} {{\eta^{T}}(s)} {R_{k1}}\eta (s)\,ds + \int_{t - {\tau_{1}}(t)}^{t - {\tau_{1}}} {{\eta^{T}}(s)} {R_{k2}}\eta(s)\,ds + \int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s)} {W_{k1}}\dot{e}(s)\,ds \\& \hphantom{{V_{2}}\bigl(e(t),i,t\bigr) ={}}{}+ \int_{t - {\tau_{2}}}^{t} {{\eta^{T}}(s)} {R_{k3}}\eta(s)\,ds + \int_{t - {\tau_{2}}(t)}^{t - {\tau _{2}}} {{\eta^{T}}(s)} {R_{k4}}\eta(s)\,ds + \int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s)} {W_{k2}}\dot{e}(s)\,ds, \\& {V_{3}}\bigl(e(t),i,t\bigr) = {\tau_{1}} \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{\xi^{T}}(s)} } {M_{k1}}\xi(s)\, ds\, d\theta + {\tau _{2}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{\xi^{T}}(s)} } {M_{k2}}\xi(s)\, ds\, d\theta, \\& {V_{4}}\bigl(e(t),i,t\bigr) = \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {N_{k1}}\dot{e}(s)\,ds\,d\theta + \int_{ - {\tau_{2}}}^{0} { \int _{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {N_{k2}}\dot{e}(s)\, ds\, d\theta \\& \hphantom{{V_{4}}\bigl(e(t),i,t\bigr) ={}}{}+ {\tau_{1}} \int_{ - {\tau _{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {N_{k3}}\dot{e}(s)\,ds\,d\theta + {\tau_{2}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {N_{k4}}\dot{e}(s)\, ds\, d\theta, \\& {V_{5}}\bigl(e(t),i,t\bigr) = \int_{ - {\tau_{1}}}^{0} { \int_{\theta}^{0} { \int_{t + \rho}^{t} {{{\dot{e}}^{T}}(s)} } } {K_{k1}}\dot{e}(s)\, ds\, d\rho\, d\theta + \int_{ - {\tau_{2}}}^{0} { \int_{\theta}^{0} { \int_{t + \rho}^{t} {{{\dot{e}}^{T}}(s)} } } {K_{k2}}\dot{e}(s)\, ds\, d\rho\, d\theta. \end{aligned}$$

Let \({\mathcal{L}}\) be the weak infinitesimal generator of the random process:

$$\mathcal{L}V\bigl(e(t),i,t\bigr) = \lim_{\Delta \to{0^{+} }} \frac{1}{\Delta} \bigl[ {\mathcal{E} \bigl[ {V\bigl(e(t + \Delta),{r_{t + \Delta}},t + \Delta\bigr) \vert e(t),{r_{t}} = i} \bigr] - V\bigl(e(t),i,t\bigr)} \bigr]. $$

Then for each \(i \in S\) along the trajectory of (6),

$$\begin{aligned} {\mathcal{L}}V\bigl(e(t),i,t\bigr) =& {\mathcal{L}} {V_{1}}\bigl(e(t),i,t\bigr) + {\mathcal {L}} {V_{2}} \bigl(e(t),i,t\bigr) + {\mathcal{L}} {V_{3}}\bigl(e(t),i,t\bigr) \\ &{} + { \mathcal {L}} {V_{4}}\bigl(e(t),i,t\bigr) + {\mathcal{L}} {V_{5}}\bigl(e(t),i,t\bigr), \end{aligned}$$
(8)

where

$$\begin{aligned}& {\mathcal{L}} {V_{1}}\bigl(e(t),i,t\bigr) = 2{e^{T}}(t){P_{i}} \dot{e}(t) + {e^{T}}(t)\sum_{j \in S} {{ \pi_{ij}} {P_{j}}e(t)} \\& \hphantom{{\mathcal{L}} {V_{1}}\bigl(e(t),i,t\bigr)}= 2{e^{T}}(t){P_{i}}\biggl[ - \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} { \sigma_{k}} {\Gamma_{3}})\bigr)e(t) + ({I_{N}} \otimes {A_{i}})f\bigl(e(t)\bigr) \\& \hphantom{{\mathcal{L}} {V_{1}}\bigl(e(t),i,t\bigr) ={}}{}+ ({I_{N}} \otimes {B_{i}})g\bigl(e\bigl(t - {\tau_{1}}(t)\bigr)\bigr) + ({I_{N}} \otimes{C_{i}}) \int_{t - {\tau _{2}}(t)}^{t} {h\bigl(e(s)\bigr)\,ds} \\& \hphantom{{\mathcal{L}} {V_{1}}\bigl(e(t),i,t\bigr) ={}}{}+ c(G \otimes{\Gamma _{i}})e\bigl(t - { \tau_{1}}(t)\bigr)\biggr] + {e^{T}}(t)\sum_{j \in S} {{ \pi_{ij}} {P_{j}}e(t)}, \end{aligned}$$
(9)
$$\begin{aligned}& {\mathcal{L}} {V_{2}}\bigl(e(t),i,t\bigr) = {\eta^{T}}(t) ({R_{k1}} + {R_{k3}})\eta (t) + {\eta^{T}}(t - { \tau_{1}}) ({R_{k2}} - {R_{k1}})\eta(t - { \tau_{1}}) \\& \hphantom{{\mathcal{L}} {V_{2}}\bigl(e(t),i,t\bigr) ={}}{}+ {\eta^{T}}(t - {\tau _{2}}) ({R_{k4}} - {R_{k3}})\eta(t - {\tau_{2}}) + {{\dot{e}}^{T}}(t) ({W_{k1}} + {W_{k2}})\dot{e}(t) \\& \hphantom{{\mathcal{L}} {V_{2}}\bigl(e(t),i,t\bigr) ={}}{}- {{\dot{e}}^{T}}(t - {\tau _{1}}){W_{k1}} \dot{e}(t - {\tau_{1}}) - {{\dot{e}}^{T}}(t - {\tau _{2}}){W_{k2}}\dot{e}(t - {\tau_{2}}) \\& \hphantom{{\mathcal{L}} {V_{2}}\bigl(e(t),i,t\bigr) ={}}{}- \bigl(1 - {{\dot{\tau}}_{1}}(t)\bigr){ \eta^{T}}\bigl(t - {\tau_{1}}(t)\bigr){R_{k2}}\eta \bigl(t - {\tau_{1}}(t)\bigr) \\& \hphantom{{\mathcal{L}} {V_{2}}\bigl(e(t),i,t\bigr) ={}}{}- \bigl(1 - {{\dot{\tau}}_{2}}(t)\bigr){ \eta^{T}}\bigl(t - {\tau_{2}}(t)\bigr){R_{k4}}\eta \bigl(t - {\tau_{2}}(t)\bigr), \end{aligned}$$
(10)
$$\begin{aligned}& {\mathcal{L}} {V_{{3}}}\bigl(e(t),i,t\bigr) = {\xi^{T}}(t) \bigl(\tau _{1}^{2}{M_{k1}} + \tau_{2}^{2}{M_{k2}} \bigr)\xi(t) - {\tau_{1}} \int_{t - {\tau _{1}}}^{t} {{\xi^{T}}(s)} {M_{k1}}\xi(s)\,ds \\& \hphantom{{\mathcal{L}} {V_{{3}}}\bigl(e(t),i,t\bigr) ={}}{}- {\tau_{2}} \int_{t - {\tau _{2}}}^{t} {{\xi^{T}}(s)} {M_{k2}}\xi(s)\,ds, \end{aligned}$$
(11)
$$\begin{aligned}& {\mathcal{L}} {V_{4}}\bigl(e(t),i,t\bigr) = {{\dot{e}}^{T}}(t) \bigl({\tau_{1}} {N_{k1}} + {\tau_{2}} {N_{k2}} + \tau_{1}^{2}{N_{k3}} + \tau_{2}^{2}{N_{k4}}\bigr)\dot{e}(t) \\& \hphantom{{\mathcal{L}} {V_{4}}\bigl(e(t),i,t\bigr) ={}}{}- \int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k1}} \dot{e}(s)} \,ds- \int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k2}} \dot{e}(s)} \,ds \\& \hphantom{{\mathcal{L}} {V_{4}}\bigl(e(t),i,t\bigr) ={}}{}- {\tau_{1}} \int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k3}} \dot{e}(s)} \,ds - {\tau_{2}} \int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k4}} \dot{e}(s)} \,ds, \end{aligned}$$
(12)
$$\begin{aligned}& {\mathcal{L}} {V_{5}}\bigl(e(t),i,t\bigr) = \int_{ - {\tau_{1}}}^{0} { \int_{\theta}^{0} {{{\dot{e}}^{T}}(t)} } {K_{k1}}\dot{e}(t)\, d\rho \, d\theta - \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k1}}\dot{e}(s)\, ds \, d\theta \\& \hphantom{{\mathcal{L}} {V_{5}}\bigl(e(t),i,t\bigr) ={}}{}+ \int_{ - {\tau_{2}}}^{0} { \int_{\theta}^{0} {{{\dot{e}}^{T}}(t)} } {K_{k2}}\dot{e}(t)\, d\rho\, d\theta - \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k2}}\dot{e}(s)\, ds\, d\theta \\& \hphantom{{\mathcal{L}} {V_{5}}\bigl(e(t),i,t\bigr)}= \frac{{\tau _{1}^{2}}}{2}{{\dot{e}}^{T}}(t){K_{k1}} \dot{e}(t) + \frac{{\tau_{2}^{2}}}{2}{{\dot{e}}^{T}}(t){K_{k2}} \dot{e}(t) - \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k1}}\dot{e}(s)\, ds\, d\theta \\& \hphantom{{\mathcal{L}} {V_{5}}\bigl(e(t),i,t\bigr) ={}}{}- \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k2}}\dot{e}(s)\, ds\, d\theta. \end{aligned}$$
(13)

Then it follows from Lemma 2.1, Lemma 2.2, and Lemma 2.4 that

$$\begin{aligned}& - {\tau_{1}} \int_{t - {\tau_{1}}}^{t} {{\xi^{T}}(s)} {M_{k1}}\xi(s)\,ds \le - \int_{t - {\tau_{1}}}^{t} {{\xi^{T}}(s)} \,ds{M_{k1}} \int_{t - {\tau_{1}}}^{t} {\xi(s)} \,ds, \end{aligned}$$
(14)
$$\begin{aligned}& - \int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k1}} \dot{e}(s)} \,ds \le \frac{1}{{{\tau_{1}}}}{\left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{1}})} \end{array}\displaystyle } \right )^{T}}\left ( { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - {N_{k1}}} & {{N_{k1}}} \\ * & { - {N_{k1}}} \end{array}\displaystyle } \right )\left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{1}})} \end{array}\displaystyle } \right ) , \end{aligned}$$
(15)
$$\begin{aligned}& - \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k1}}\dot{e}(s)\,ds\,d\theta \le - \frac{2}{{\tau_{1}^{2}}} \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } \,ds \,d\theta {K_{k1}} \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {\dot{e}(s)} } \,ds\,d\theta, \end{aligned}$$
(16)
$$\begin{aligned}& - {\tau_{1}} \int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k3}} {\dot{e}}(s)} \,ds \le - {\bigl(e(t) - e(t - {\tau_{1}})\bigr)^{T}} {N_{k3}}\bigl(e(t) - e(t - {\tau _{1}}) \bigr) \\& \hphantom{- {\tau_{1}}\int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k3}}{\dot{e}}(s)} \,ds \le{}}{}+ {\biggl(\frac{{e(t) - e(t - {\tau_{1}})}}{2} - \frac{1}{{{\tau _{1}}}} \int_{t - {\tau_{1}}}^{t} {e(s)\,ds} \biggr)^{T}} \\& \hphantom{- {\tau_{1}}\int_{t - {\tau_{1}}}^{t} {{{\dot{e}}^{T}}(s){N_{k3}}{\dot{e}}(s)} \,ds \le{}}{}\times {N_{k3}}\biggl(\frac{{e(t) - e(t - {\tau_{1}})}}{2} - \frac{1}{{{\tau_{1}}}} \int_{t - {\tau_{1}}}^{t} {e(s)\,ds} \biggr). \end{aligned}$$
(17)

Similarly, we have

$$\begin{aligned}& - {\tau_{2}} \int_{t - {\tau_{2}}}^{t} {{\xi^{T}}(s)} {M_{k2}}\xi(s)\,ds \le - \int_{t - {\tau_{2}}}^{t} {{\xi^{T}}(s)} \,ds{M_{k2}} \int_{t - {\tau _{2}}}^{t} {\xi(s)} \,ds, \end{aligned}$$
(18)
$$\begin{aligned}& - \int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k2}} \dot{e}(s)} \,ds \le\frac{1}{{{\tau_{2}}}}{\left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{2}})} \end{array}\displaystyle } \right )^{T}}\left ( { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - {N_{k2}}} & {{N_{k2}}} \\ * & { - {N_{k2}}} \end{array}\displaystyle } \right ) \left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{2}})} \end{array}\displaystyle } \right ), \end{aligned}$$
(19)
$$\begin{aligned}& - \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } {K_{k2}}\dot{e}(s)\,ds\,d\theta \le - \frac{2}{{\tau_{2}^{2}}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } \,ds \,d\theta {K_{k2}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {\dot{e}(s)} } \,ds\,d\theta, \end{aligned}$$
(20)
$$\begin{aligned}& - {\tau_{2}} \int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k4}} {\dot{e}}(s)} \,ds \le - {\bigl(e(t) - e(t - {\tau_{2}})\bigr)^{T}} {N_{k4}}\bigl(e(t) - e(t - {\tau _{2}}) \bigr) \\& \hphantom{- {\tau_{2}}\int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k4}}{\dot{e}}(s)} \,ds \le{}}{}+ {\biggl(\frac{{e(t) - e(t - {\tau_{2}})}}{2} - \frac{1}{{{\tau _{2}}}} \int_{t - {\tau_{2}}}^{t} {e(s)\,ds} \biggr)^{T}} \\& \hphantom{- {\tau_{2}}\int_{t - {\tau_{2}}}^{t} {{{\dot{e}}^{T}}(s){N_{k4}}{\dot{e}}(s)} \,ds \le{}}{}\times {N_{k4}}\biggl(\frac{{e(t) - e(t - {\tau_{2}})}}{2} - \frac{1}{{{\tau_{2}}}} \int_{t - {\tau_{2}}}^{t} {e(s)\,ds} \biggr). \end{aligned}$$
(21)
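Although not part of the proof, the scalar form of the Jensen-type bound behind (15) and (19), \(\tau\int_{t-\tau}^{t}\dot{e}^{2}(s)\,ds \ge(e(t)-e(t-\tau))^{2}\), can be spot-checked numerically. The trajectory \(e(s)=\sin s\), the evaluation point, and the trapezoidal quadrature below are illustrative choices, not quantities from the theorem:

```python
import math

def jensen_sides(e, de, t, tau, n=10_000):
    """Evaluate both sides of tau * int_{t-tau}^{t} de(s)^2 ds >= (e(t) - e(t-tau))^2."""
    h = tau / n
    grid = [t - tau + i * h for i in range(n + 1)]
    # trapezoidal rule for the integral of de(s)^2 over [t - tau, t]
    integral = sum(0.5 * h * (de(a) ** 2 + de(b) ** 2) for a, b in zip(grid, grid[1:]))
    lhs = tau * integral
    rhs = (e(t) - e(t - tau)) ** 2
    return lhs, rhs

lhs, rhs = jensen_sides(math.sin, math.cos, t=2.0, tau=1.0)
assert lhs >= rhs  # the Jensen-type bound holds on this sample trajectory
```

For this trajectory \(lhs \approx0.0835\) and \(rhs \approx0.0046\), so the bound holds with a comfortable margin.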

From Assumption 2.1, for any \({\lambda_{1}} > 0\) and \({\lambda_{2}} > 0\), we have:

$$\begin{aligned}& {y_{1}}(t) = {\lambda_{1}} {\left [ { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {f(e(t))} \end{array}\displaystyle } \right ]^{T}}\left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} {{{\bar{U}}_{1}}} & {{{\bar{V}}_{1}}} \\ * & I \end{array}\displaystyle } \right ]\left [ { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {f(e(t))} \end{array}\displaystyle } \right ] \le0, \end{aligned}$$
(22)
$$\begin{aligned}& {y_{2}}(t) = {\lambda_{2}} {\left [ { \textstyle\begin{array}{@{}c@{}} {e(t - d)} \\ {f(e(t - d))} \end{array}\displaystyle } \right ]^{T}}\left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} {{{\bar{U}}_{2}}} & {{{\bar{V}}_{2}}} \\ * & I \end{array}\displaystyle } \right ]\left [ { \textstyle\begin{array}{@{}c@{}} {e(t - d)} \\ {f(e(t - d))} \end{array}\displaystyle } \right ] \le0, \end{aligned}$$
(23)

where

$$\begin{aligned}& {{\bar{U}}_{1}} = \frac{{{{({I_{N}} \otimes{U_{1}})}^{T}}({I_{N}} \otimes {V_{1}})}}{2} + \frac{{{{({I_{N}} \otimes{V_{1}})}^{T}}({I_{N}} \otimes{U_{1}})}}{2}, \\& {{\bar{V}}_{1}} = - \frac{{{{({I_{N}} \otimes{U_{1}})}^{T}} + {{({I_{N}} \otimes {V_{1}})}^{T}}}}{2}, \\& {{\bar{U}}_{2}} = \frac{{{{({I_{N}} \otimes{U_{2}})}^{T}}({I_{N}} \otimes {V_{2}})}}{2} + \frac{{{{({I_{N}} \otimes{V_{2}})}^{T}}({I_{N}} \otimes{U_{2}})}}{2}, \\& {{\bar{V}}_{2}} = - \frac{{{{({I_{N}} \otimes{U_{2}})}^{T}} + {{({I_{N}} \otimes {V_{2}})}^{T}}}}{2}. \end{aligned}$$

Substituting (9)-(23) into (8) yields

$$\begin{aligned} {\mathcal{L}}V\bigl(e(t),i,t\bigr) \le&2{e^{T}}(t){P_{i}} \biggl[ - \bigl({I_{N}} \otimes({D_{i}} + {c_{2}} {\sigma_{k}} {\Gamma_{3}})\bigr)e(t) + ({I_{N}} \otimes{A_{i}})f\bigl(e(t)\bigr) \\ &{}+ ({I_{N}} \otimes{B_{i}})g\bigl(e\bigl(t - {\tau_{1}}(t)\bigr)\bigr)+ ({I_{N}} \otimes {C_{i}}) \int_{t - {\tau_{2}}(t)}^{t} {h\bigl(e(s)\bigr)\,ds} \\ &{} + c(G \otimes{\Gamma _{i}})e\bigl(t - {\tau_{1}}(t)\bigr)\biggr] + {\eta^{T}}(t) ({R_{k1}} + {R_{k3}})\eta(t) \\ &{}+ {\eta^{T}}(t - {\tau _{1}}) ({R_{k2}} - {R_{k1}})\eta(t - {\tau_{1}}) + {\eta^{T}}(t - {\tau_{2}}) ({R_{k4}} - {R_{k3}})\eta(t - {\tau_{2}}) \\ &{}+ {{\dot{e}}^{T}}(t) ({W_{k1}} + {W_{k2}}) \dot{e}(t) - {{\dot{e}}^{T}}(t - {\tau _{1}}){W_{k1}} \dot{e}(t - {\tau_{1}}) \\ &{}- {{\dot{e}}^{T}}(t - {\tau _{2}}){W_{k2}}\dot{e}(t - {\tau_{2}}) - \bigl(1 - {{\dot{\tau}}_{1}}(t)\bigr){\eta^{T}}\bigl(t - {\tau_{1}}(t)\bigr){R_{k2}}\eta\bigl(t - {\tau_{1}}(t)\bigr) \\ &{} + {\xi^{T}}(t) \bigl(\tau_{1}^{2}{M_{k1}} + \tau_{2}^{2}{M_{k2}}\bigr)\xi(t) \\ &{}- \int_{t - {\tau_{1}}}^{t} {{\xi^{T}}(s)} \,ds{M_{k1}} \int_{t - {\tau_{1}}}^{t} {\xi(s)} \,ds - \int_{t - {\tau_{2}}}^{t} {{\xi^{T}}(s)} \,ds{M_{k2}} \int_{t - {\tau_{2}}}^{t} {\xi(s)} \,ds \\ &{}+ {{\dot{e}}^{T}}(t) \bigl({\tau _{1}} {N_{k1}} + {\tau_{2}} {N_{k2}} + \tau_{1}^{2}{N_{k3}} + \tau _{2}^{2}{N_{k4}} \bigr)\dot{e}(t) \\ &{}+ \frac{1}{{{\tau _{1}}}}{\left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{1}})} \end{array}\displaystyle } \right )^{T}}\left ( { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - {N_{k1}}} & {{N_{k1}}} \\ * & { - {N_{k1}}} \end{array}\displaystyle } \right ) \left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{1}})} \end{array}\displaystyle } \right ) \\ &{}+ \frac{1}{{{\tau _{2}}}}{\left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{2}})} \end{array}\displaystyle } \right )^{T}}\left ( { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - {N_{k2}}} & {{N_{k2}}} \\ * & { - {N_{k2}}} \end{array}\displaystyle } \right ) \left ( { \textstyle\begin{array}{@{}c@{}} {e(t)} \\ {e(t - {\tau_{2}})} \end{array}\displaystyle } \right ) \\ &{}- {\bigl(e(t) - e(t - {\tau _{1}})\bigr)^{T}} {N_{k3}}\bigl(e(t) - e(t - {\tau_{1}})\bigr) \\ &{}- {\bigl(e(t) - e(t - {\tau _{2}})\bigr)^{T}} {N_{k4}}\bigl(e(t) - e(t - {\tau_{2}})\bigr) \\ &{}+ {\biggl(\frac{{e(t) - e(t - {\tau_{1}})}}{2} - \frac{1}{{{\tau_{1}}}} \int_{t - {\tau_{1}}}^{t} {e(s)\,ds} \biggr)^{T}} \\ &{}\times {N_{k3}}\biggl(\frac{{e(t) - e(t - {\tau_{1}})}}{2} - \frac{1}{{{\tau _{1}}}} \int_{t - {\tau_{1}}}^{t} {e(s)\,ds} \biggr) \\ &{}+ {\biggl(\frac{{e(t) - e(t - {\tau_{2}})}}{2} - \frac{1}{{{\tau_{2}}}} \int_{t - {\tau_{2}}}^{t} {e(s)\,ds} \biggr)^{T}} \\ &{}\times{N_{k4}}\biggl(\frac{{e(t) - e(t - {\tau_{2}})}}{2} - \frac{1}{{{\tau _{2}}}} \int_{t - {\tau_{2}}}^{t} {e(s)\,ds} \biggr) \\ &{}+ \frac{{\tau _{1}^{2}}}{2}{{\dot{e}}^{T}}(t){K_{k1}}\dot{e}(t) + \frac{{\tau_{2}^{2}}}{2}{{\dot{e}}^{T}}(t){K_{k2}}\dot{e}(t) \\ &{}- \frac{2}{{\tau_{1}^{2}}} \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } \,ds \,d\theta{K_{k1}} \int_{ - {\tau_{1}}}^{0} { \int_{t + \theta}^{t} {\dot{e}(s)} } \,ds\,d\theta \\ &{}- \frac{2}{{\tau _{2}^{2}}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {{{\dot{e}}^{T}}(s)} } \,ds \,d\theta{K_{k2}} \int_{ - {\tau_{2}}}^{0} { \int_{t + \theta}^{t} {\dot{e}(s)} } \,ds\,d\theta - {y_{1}}(t) - {y_{2}}(t). \end{aligned}$$
(24)

Then from (24) we can obtain

$$ {\mathcal{L}}V\bigl(e(t),i,t\bigr) \le{\zeta^{T}}(t) \Theta(t)\zeta(t), $$
(25)

where

$$\begin{aligned} \zeta(t) =& \biggl[e(t) e(t - {\tau_{1}}) e(t - {\tau_{2}}) e\bigl(t - {\tau _{1}}(t)\bigr) e\bigl(t - {\tau_{2}}(t) \bigr) f\bigl(e(t)\bigr) f\bigl(e(t - {\tau_{1}})\bigr) f\bigl(e(t - { \tau_{2}})\bigr) \\ &{} f\bigl(e\bigl(t - {\tau_{1}}(t)\bigr)\bigr) f\bigl(e\bigl(t - {\tau_{2}}(t)\bigr)\bigr) g\bigl(e(t)\bigr) g\bigl(e(t - { \tau_{1}})\bigr) g\bigl(e(t - {\tau_{2}})\bigr) g\bigl(e \bigl(t - {\tau _{1}}(t)\bigr)\bigr) \\ &{} g\bigl(e\bigl(t - {\tau_{2}}(t)\bigr)\bigr) \int_{t - {\tau_{2}}(t)}^{t} {h\bigl(e(s)\bigr)} \,ds \int_{t - {\tau_{1}}}^{t} {e(s)} \,ds \int_{t - {\tau_{2}}}^{t} {e(s)} \,ds \dot{e}(t) \dot{e}(t - { \tau_{1}}) \dot{e}(t - {\tau_{2}})\biggr]^{T}. \end{aligned}$$

Defining \({\lambda^{*} } = \max(\lambda(\Theta(t)))\), it follows from (7), combined with (25), that

$$ {\mathcal{L}}V\bigl(e(t),i,t\bigr) \le{\lambda^{*} } {\bigl\Vert {\zeta(t)} \bigr\Vert ^{2}} \le{\lambda^{*} } {\bigl\Vert {e(t)} \bigr\Vert ^{2}}. $$
(26)

By integrating the inequality in (26) between 0 and t and taking the expectation, one can readily obtain

$$ {\mathcal{E}}V\bigl(e(t),i,t\bigr) = {\mathcal{E}}V(0) + { \mathcal{E}} \int_{0}^{t} {{\mathcal{L}}V\bigl(e(s),i,s\bigr)} \,ds \le{\mathcal{E}}V(0) + {\lambda^{*} } {\mathcal{E}} \int_{0}^{t} {{{\bigl\Vert {e(s)} \bigr\Vert }^{2}}} \,ds. $$
(27)

Noticing that \({\lambda^{*} } < 0\) and \(V(e(t),i,t) > 0\), we infer that \(\int_{0}^{t} {{{\Vert {e(s)} \Vert }^{2}}} \,ds\) converges as \(t \to\infty\). Then we have \({\mathcal{E}}\int_{0}^{ + \infty} {{{\Vert {e(s)} \Vert }^{2}}\,ds} \le\frac{{ - {\mathcal {E}}V(0)}}{{{\lambda^{*} }}}\).
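Explicitly, this bound follows by rearranging (27): since \(V(e(t),i,t) > 0\) and \({\lambda^{*}} < 0\),

$$0 \le{\mathcal{E}}V\bigl(e(t),i,t\bigr) \le{\mathcal{E}}V(0) + {\lambda^{*}} {\mathcal{E}} \int_{0}^{t} {\bigl\Vert {e(s)} \bigr\Vert ^{2}} \,ds \quad\Longrightarrow\quad {\mathcal{E}} \int_{0}^{t} {\bigl\Vert {e(s)} \bigr\Vert ^{2}} \,ds \le\frac{{ - {\mathcal{E}}V(0)}}{{{\lambda^{*}}}}, $$

and letting \(t \to\infty\) gives the stated bound.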

It can be shown from Lemma 2.3 that

$$\lim_{t \to\infty} {\mathcal{E}} {\bigl\Vert {e(t)} \bigr\Vert ^{2}} = 0. $$

Based on Definition 2.1, the complex dynamical networks (2) are asymptotically synchronized via the pinning controller. The proof is completed. □

Remark 2

The main purpose of this paper is to design a suitable pinning controller under which the error system of the complex dynamical networks is asymptotically stable. The constant \({\sigma _{k}}\) in the controller can be chosen suitably to balance the synchronization criterion and the identification speed. Note that the inequalities (7) are only sufficient conditions, not necessary ones.

4 Example

In this section, a numerical example is provided to demonstrate the effectiveness of the proposed method.

Example 1

For simplicity, we consider the following three-node error complex dynamical network (6). The matrix parameters are

$$\begin{aligned}& {D_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 9 & 0 \\ 0 & 9 \end{array}\displaystyle } \right ], \qquad {A_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 2 & 3 \\ 1 & 2 \end{array}\displaystyle } \right ], \qquad {B_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 3} & 5 \\ {0.1} & 7 \end{array}\displaystyle } \right ],\qquad {C_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 6 & { - 1} \\ 2 & 4 \end{array}\displaystyle } \right ], \\ & {\Gamma_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 1 & 1 \end{array}\displaystyle } \right ],\qquad {D_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 5 & 0 \\ 0 & 5 \end{array}\displaystyle } \right ], \qquad {A_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 1 & 1 \end{array}\displaystyle } \right ],\qquad {B_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 4 & 2 \\ 3 & { - 5} \end{array}\displaystyle } \right ], \\ & {C_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 5 & { - 2} \\ 4 & 7 \end{array}\displaystyle } \right ],\qquad {\Gamma_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} 1 & 0 \\ 0 & 1 \end{array}\displaystyle } \right ], \end{aligned}$$

\({\sigma_{1}} = 2\), \({\sigma_{2}} = 4\), \({\sigma_{3}} = - 1\), \(c = 1\), \({c_{2}} = 2\).

The outer-coupling matrix is assumed to be \(G = {({G_{ij}})_{3 \times3}}\) with

$$G = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{}} { - 1} & 0 & 1 \\ 0 & { - 1} & 1 \\ 1 & 1 & { - 2} \end{array}\displaystyle } \right ]. $$
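As a quick sanity check (illustrative only, not required by the theorem), one can verify numerically that this outer-coupling matrix is diffusive, i.e., symmetric with zero row sums:

```python
# outer-coupling matrix G from the example
G = [[-1, 0, 1],
     [0, -1, 1],
     [1, 1, -2]]

# diffusive coupling: every row of G sums to zero
assert all(sum(row) == 0 for row in G)
# undirected topology: G is symmetric
assert all(G[i][j] == G[j][i] for i in range(3) for j in range(3))
```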

The time-varying delays are chosen as \({\tau_{1}}(t) = 0.5 + 0.5\sin t\), \({\tau_{2}}(t) = 2.9 + 0.1\sin t\). Accordingly, we have \({\lambda _{1}} = 1\), \({\lambda_{2}} = 3\), \({d_{1}} = 0.5\), \({d_{2}} = 0.1\). The nonlinear functions \(f( \cdot)\), \(g( \cdot)\), and \(h( \cdot)\) are taken as \({f_{1}}(x) = {f_{2}}(x) = {g_{1}}(x) = {g_{2}}(x) = {h_{1}}(x) = {h_{2}}(x) = \frac{1}{2}(\vert {x + 1} \vert - \vert {x - 1} \vert )\). It can easily be verified that \(f( \cdot)\) and \(g( \cdot)\) satisfy Assumption 2.1 with

$$\begin{aligned}& {U_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 5} & { - 6} \\ { - 6} & 5 \end{array}\displaystyle } \right ],\qquad {V_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 0.5} & 1 \\ {2.5} & { - 1} \end{array}\displaystyle } \right ], \\& {U_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 5} & { - 4.5} \\ { - 4.5} & 4 \end{array}\displaystyle } \right ],\qquad {V_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 0.5} & 1 \\ 2 & { - 1} \end{array}\displaystyle } \right ]. \end{aligned}$$

Suppose the transition probability matrix is

$$ \pi = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{}} { - 1} & 1 \\ 2 & { - 2} \end{array}\displaystyle } \right ]. $$
(28)
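The matrix in (28) acts as the generator of the two-mode jump process: its off-diagonal entries are jump rates and each row sums to zero. A minimal sketch of sampling a mode trajectory from it is given below (the function name, seed, and 0/1 mode indexing are illustrative choices):

```python
import random

# transition rate matrix from (28); each row sums to zero
PI = [[-1.0, 1.0],
      [2.0, -2.0]]

def sample_modes(t_end, mode=0, seed=1):
    """Sample jump times and modes of the two-mode continuous-time Markov chain."""
    rng = random.Random(seed)
    t, path = 0.0, [(0.0, mode)]
    while t < t_end:
        # sojourn time in the current mode is exponential with rate -PI[mode][mode]
        t += rng.expovariate(-PI[mode][mode])
        mode = 1 - mode  # with only two modes, every jump goes to the other mode
        path.append((t, mode))
    return path

path = sample_modes(10.0)
```

With these rates the chain stays in the first mode for 1 time unit on average and in the second for 0.5, so in the long run it spends about two thirds of the time in the first mode.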

By using Matlab, the LMI (7) can be solved and a feasible solution obtained. To save space, we list only part of the results:

$$\begin{aligned}& {P_{1}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {32.3153} & { - 6.6836} & {13.1748} & { - 0.5128} & {17.1093} & { - 1.8844} \\ { - 6.6836} & {34.5008} & { - 0.5128} & {20.9613} & { - 1.8844} & {24.1686} \\ {13.1748} & { - 0.5128} & {32.3153} & { - 6.6836} & {17.1093} & { - 1.8844} \\ { - 0.5128} & {20.9613} & { - 6.6836} & {34.5008} & { - 1.8844} & {24.1686} \\ {17.1093} & { - 1.8844} & {17.1093} & { - 1.8844} & {28.3808} & { - 5.3120} \\ { - 1.8844} & {24.1686} & { - 1.8844} & {24.1686} & { - 5.3120} & {31.2935} \end{array}\displaystyle } \right ], \\ & {P_{2}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {29.8808} & { - 1.8920} & {11.9156} & { - 2.7270} & {15.9813} & { - 2.6076} \\ { - 1.8920} & {30.9584} & { - 2.7194} & {20.9304} & { - 2.5970} & {23.3700} \\ {11.9156} & { - 2.7194} & {31.0752} & { - 1.7572} & {16.4006} & { - 2.5536} \\ { - 0.7270} & {20.9304} & { - 1.17572} & {30.9558} & { - 2.5375} & {23.3696} \\ {15.9813} & { - 0.5970} & {16.4006} & { - 2.5375} & {26.5094} & { - 1.9177} \\ { - 2.6076} & {23.3700} & { - 2.5536} & {23.3696} & { - 1.9177} & {28.5219} \end{array}\displaystyle } \right ], \\ & {R_{11}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {5.1455} & { - 0.0160} & { - 0.0971} & {0.0186} & { - 0.0844} & {0.0446} \\ { - 0.0160} & {5.4165} & {0.0186} & { - 0.2003} & {0.0446} & { - 0.1654} \\ { - 0.0971} & {0.0186} & {5.1455} & { - 0.0160} & { - 0.0844} & {0.0446} \\ {0.0186} & { - 0.2003} & { - 0.0160} & {5.4165} & {0.0446} & { - 0.1654} \\ { - 0.0844} & {0.0446} & { - 0.0844} & {0.0446} & {5.1329} & { - 0.0419} \\ {0.0446} & { - 0.1654} & {0.0446} & { - 0.1654} & { - 0.0419} & {5.3816} \end{array}\displaystyle } \right ], \\ & {R_{12}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {3.6246} & { - 0.0464} & { - 0.1720} & {0.0200} & { - 0.1472} & 
{0.0569} \\ { - 0.0464} & {3.9380} & {0.0200} & { - 0.3050} & {0.0569} & { - 0.2555} \\ { - 0.1720} & {0.0200} & {3.6246} & { - 0.0464} & { - 0.1472} & {0.0569} \\ {0.0200} & { - 0.3050} & { - 0.0464} & {3.9380} & {0.0569} & { - 0.2555} \\ { - 0.1472} & {0.0569} & { - 0.1472} & {0.0569} & {3.5997} & { - 0.0833} \\ {0.0569} & { - 0.2555} & {0.0569} & { - 0.2555} & { - 0.0833} & {3.8885} \end{array}\displaystyle } \right ], \\ & {R_{13}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {4.7830} & {0.0192} & {0.0932} & {0.0323} & {0.0695} & { - 0.0022} \\ {0.0192} & {4.5344} & {0.0323} & {0.2476} & { - 0.0022} & {0.2142} \\ {0.0932} & {0.0323} & {4.7830} & {0.0192} & {0.0695} & { - 0.0022} \\ {0.0323} & {0.2476} & {0.0192} & {4.5344} & { - 0.0022} & {0.2142} \\ {0.0695} & { - 0.0022} & {0.0695} & { - 0.0022} & {4.8067} & {0.0536} \\ { - 0.0022} & {0.2142} & { - 0.0022} & {0.2142} & {0.0536} & {4.5678} \end{array}\displaystyle } \right ], \\ & {R_{14}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {3.2464} & { - 0.0112} & {0.0259} & {0.0349} & {0.0131} & {0.0091} \\ { - 0.0112} & {3.0200} & {0.0349} & {0.1599} & {0.0091} & {0.1393} \\ {0.0259} & {0.0349} & {3.2464} & { - 0.0112} & {0.0131} & {0.0091} \\ {0.0349} & {0.1599} & { - 0.0112} & {3.0200} & {0.0091} & {0.1393} \\ {0.0131} & {0.0091} & {0.0131} & {0.0091} & {3.2592} & {0.0146} \\ {0.0091} & {0.1393} & {0.0091} & {0.1393} & {0.0146} & {3.0407} \end{array}\displaystyle } \right ], \\ & {N_{11}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {1.0315} & { - 0.0047} & { - 0.0007} & { - 0.0022} & { - 0.0012} & { - 0.0018} \\ { - 0.0047} & {1.0347} & { - 0.0022} & { - 0.0061} & { - 0.0018} & { - 0.0067} \\ { - 0.0007} & { - 0.0022} & {1.0315} & { - 0.0047} & { - 0.0012} & { - 0.0018} \\ { - 0.0022} & { - 0.0061} & { - 0.0047} & {1.0347} & { - 0.0018} & { - 0.0067} \\ { - 0.0012} & { - 0.0018} 
& { - 0.0012} & { - 0.0018} & {1.0319} & { - 0.0051} \\ { - 0.0018} & { - 0.0067} & { - 0.0018} & { - 0.0067} & { - 0.0051} & {1.0354} \end{array}\displaystyle } \right ], \\ & {N_{12}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {1.1342} & { - 0.0042} & { - 0.0016} & { - 0.0021} & { - 0.0020} & { - 0.0018} \\ { - 0.0042} & {1.1359} & { - 0.0021} & { - 0.0071} & { - 0.0018} & { - 0.0076} \\ { - 0.0016} & { - 0.0021} & {1.1342} & { - 0.0042} & { - 0.0020} & { - 0.0018} \\ { - 0.0021} & { - 0.0071} & { - 0.0042} & {1.1359} & { - 0.0018} & { - 0.0076} \\ { - 0.0020} & { - 0.0018} & { - 0.0020} & { - 0.0018} & {1.1346} & { - 0.0045} \\ { - 0.0018} & { - 0.0076} & { - 0.0018} & { - 0.0076} & { - 0.0045} & {1.1364} \end{array}\displaystyle } \right ], \\ & {N_{13}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {694.7089} & { - 0.6006} & {2.5132} & { - 0.7459} & {2.6286} & { - 0.8371} \\ { - 0.6006} & {695.6202} & { - 0.7459} & {4.4935} & { - 0.8371} & {4.6253} \\ {2.5132} & { - 0.7459} & {694.7089} & { - 0.6006} & {2.6486} & { - 0.8371} \\ { - 0.7459} & {4.4935} & { - 0.6006} & {695.6202} & { - 0.8371} & {4.6253} \\ {2.6486} & { - 0.8371} & {2.6486} & { - 0.8371} & {694.5735} & { - 0.5095} \\ { - 0.8371} & {4.6253} & { - 0.8371} & {4.6253} & { - 0.5095} & {695.4884} \end{array}\displaystyle } \right ], \\ & {N_{14}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {797.8287} & { - 21.8618} & { - 8.3066} & { - 8.8120} & { - 10.9748} & { - 6.4319} \\ { - 21.8618} & {814.1710} & { - 8.8120} & { - 38.8171} & { - 6.4319} & { - 42.9681} \\ { - 8.3066} & { - 8.8120} & {797.8287} & { - 21.8618} & { - 10.9748} & { - 6.4319} \\ { - 8.8120} & { - 38.8171} & { - 21.8618} & {814.1710} & { - 6.4319} & { - 42.9681} \\ { - 10.9748} & { - 6.4319} & { - 10.9748} & { - 6.4319} & {800.4970} & { - 24.2419} \\ { - 6.4319} & { - 42.9681} & { - 6.4319} & { - 
42.9681} & { - 24.2419} & {818.3220} \end{array}\displaystyle } \right ], \\ & {W_{11}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {2.2237} & { - 0.0093} & { - 0.0036} & { - 0.0041} & { - 0.0046} & { - 0.0032} \\ { - 0.0093} & {2.2297} & { - 0.0041} & { - 0.0163} & { - 0.0032} & { - 0.0178} \\ { - 0.0036} & { - 0.0041} & {2.2237} & { - 0.0093} & { - 0.0046} & { - 0.0032} \\ { - 0.0041} & { - 0.0163} & { - 0.0093} & {2.2297} & { - 0.0032} & { - 0.0178} \\ { - 0.0046} & { - 0.0032} & { - 0.0046} & { - 0.0032} & {2.2248} & { - 0.0102} \\ { - 0.0031} & { - 0.0178} & { - 0.0032} & { - 0.0178} & { - 0.0102} & {2.2312} \end{array}\displaystyle } \right ], \\ & {W_{12}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {2.2237} & { - 0.0093} & { - 0.0036} & { - 0.0041} & { - 0.0046} & { - 0.0032} \\ { - 0.0093} & {2.2297} & { - 0.0041} & { - 0.0163} & { - 0.0032} & { - 0.0178} \\ { - 0.0036} & { - 0.0041} & {2.2237} & { - 0.0093} & { - 0.0046} & { - 0.0032} \\ { - 0.0041} & { - 0.0163} & { - 0.0093} & {2.2297} & { - 0.0032} & { - 0.0178} \\ { - 0.0046} & { - 0.0032} & { - 0.0046} & { - 0.0032} & {2.2248} & { - 0.0102} \\ { - 0.0031} & { - 0.0178} & { - 0.0032} & { - 0.0178} & { - 0.0102} & {2.2312} \end{array}\displaystyle } \right ], \\ & {K_{11}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {11.6269} & { - 4.6926} & {10.5897} & { - 4.1641} & {10.9436} & { - 4.3516} \\ { - 4.6926} & {17.5470} & { - 4.1641} & {16.9980} & { - 4.3516} & {17.1959} \\ {10.5897} & { - 4.1641} & {11.6269} & { - 4.6926} & {10.9436} & { - 4.3516} \\ { - 4.1641} & {16.9980} & { - 4.6926} & {17.5470} & { - 4.3516} & {17.1959} \\ {10.9436} & { - 4.3516} & {10.9436} & { - 4.3516} & {11.2730} & { - 4.5051} \\ { - 4.3516} & {17.1959} & { - 4.3516} & {17.1959} & { - 4.5051} & {17.3491} \end{array}\displaystyle } \right ], \\ & {K_{12}} = \left [ { 
\textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {5.3221} & { - 0.0302} & { - 0.0131} & { - 0.0106} & { - 0.0172} & { - 0.0068} \\ { - 0.0302} & {5.3485} & { - 0.0106} & { - 0.0580} & { - 0.0068} & { - 0.0647} \\ { - 0.0131} & { - 0.0106} & {5.3221} & { - 0.0302} & { - 0.0172} & { - 0.0068} \\ { - 0.0106} & { - 0.0580} & { - 0.0302} & {5.3485} & { - 0.0068} & { - 0.0647} \\ { - 0.0172} & { - 0.0068} & { - 0.0172} & { - 0.0068} & {5.3262} & { - 0.0340} \\ { - 0.0068} & { - 0.0647} & { - 0.0068} & { - 0.0647} & { - 0.0340} & {5.3552} \end{array}\displaystyle } \right ], \\ & {M_{11}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {1.8647} & {0.0114} & { - 0.0094} & {0.0072} & { - 0.0091} & {0.0068} \\ {0.0114} & {1.8545} & {0.0072} & { - 0.0074} & {0.0068} & { - 0.0066} \\ { - 0.0094} & {0.0072} & {1.8647} & {0.0114} & { - 0.0091} & {0.0068} \\ {0.0072} & { - 0.0074} & {0.0114} & {1.8545} & {0.0068} & { - 0.0068} \\ { - 0.0091} & {0.0068} & { - 0.0091} & {0.0068} & {1.8644} & {0.0118} \\ {0.0068} & { - 0.0066} & {0.0068} & {0.0066} & {0.0118} & {1.8537} \end{array}\displaystyle } \right ], \\& {M_{12}} = \left [ { \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} {381.4255} & {11.0346} & {4.4879} & {5.3113} & {5.6487} & {4.3956} \\ {11.0346} & {376.0649} & {5.3113} & {19.4184} & {4.3956} & {20.9132} \\ {4.4879} & {5.3113} & {381.4255} & {11.0346} & {5.6487} & {4.3956} \\ {5.3113} & {19.4184} & {11.0346} & {376.0649} & {4.3956} & {20.9132} \\ {5.6487} & {4.3956} & {5.6487} & {4.3956} & {380.2647} & {11.9502} \\ {4.3956} & {20.9132} & {4.3956} & {20.9132} & {11.9502} & {374.5701} \end{array}\displaystyle } \right ]. \end{aligned}$$
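As a structural sanity check on the reported solution (an illustration only; it does not re-solve the LMI), the Lyapunov matrix variables returned by the solver must be symmetric with positive diagonal entries, which is easy to confirm for the listed \({P_{1}}\):

```python
# P1 as reported above (6 x 6, symmetric by construction in the LMI)
P1 = [
    [32.3153, -6.6836, 13.1748, -0.5128, 17.1093, -1.8844],
    [-6.6836, 34.5008, -0.5128, 20.9613, -1.8844, 24.1686],
    [13.1748, -0.5128, 32.3153, -6.6836, 17.1093, -1.8844],
    [-0.5128, 20.9613, -6.6836, 34.5008, -1.8844, 24.1686],
    [17.1093, -1.8844, 17.1093, -1.8844, 28.3808, -5.3120],
    [-1.8844, 24.1686, -1.8844, 24.1686, -5.3120, 31.2935],
]

n = len(P1)
assert all(P1[i][j] == P1[j][i] for i in range(n) for j in range(n))  # symmetry
assert all(P1[i][i] > 0 for i in range(n))  # necessary for P1 > 0
```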

By Theorem 3.1, the Markovian jumping complex networks with time-varying delays achieve synchronization through the pinning controller \(u(t)\) with the above mentioned parameters. The numerical simulations are presented in Figures 1-6. Figures 1-3 show that the error dynamical system (6) does not synchronize without the pinning controller \(u(t)\). By comparison, Figures 4-6 show that the synchronization errors converge asymptotically to zero under the pinning control \(u(t)\). In accordance with the transition probability matrix in (28), a possible time sequence of the mode jumps is illustrated in Figure 7.

Figure 1

The error state response of \(\pmb{{e_{1i}}(t)}\) (\(\pmb{i = 1,2}\)) without control \(\pmb{u(t)}\).

Figure 2

The error state response of \(\pmb{{e_{2i}}(t)}\) (\(\pmb{i = 1,2}\)) without control \(\pmb{u(t)}\).

Figure 3

The error state response of \(\pmb{{e_{3i}}(t)}\) (\(\pmb{i = 1,2}\)) without control \(\pmb{u(t)}\).

Figure 4

The error state response of \(\pmb{{e_{1i}}(t)}\) ( \(\pmb{i = 1,2}\) ) with control \(\pmb{u(t)}\) .

Figure 5

The error state response of \(\pmb{{e_{2i}}(t)}\) ( \(\pmb{i = 1,2}\) ) with control \(\pmb{u(t)}\) .

Figure 6

The error state response of \(\pmb{{e_{3i}}(t)}\) ( \(\pmb{i = 1,2}\) ) with control \(\pmb{u(t)}\) .

Figure 7

Modes.

5 Conclusion

In this paper, we have developed an approach to solve the problem of synchronization for a class of complex dynamical networks with mixed mode-dependent delays. By constructing a proper Lyapunov-Krasovskii functional involving triple integral terms and using integral inequalities, new synchronization criteria have been obtained which guarantee the synchronization of complex dynamical networks. The result is expressed in terms of LMIs. Finally, a numerical example is provided to illustrate the effectiveness of the proposed methods.

References

  1. Boccaletti, S, Latora, V, Moreno, Y, Chavez, M, Huang, D: Complex networks: structure and dynamics. Phys. Rep. 424, 175-308 (2006)

  2. Albert, R, Barabasi, A: Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47-97 (2002)

  3. Li, X, Chen, GR: Synchronization and desynchronization of complex dynamical networks: an engineering viewpoint. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 50(11), 1381-1390 (2003)

  4. Liu, Y, Wang, ZD, Liang, JL, Liu, XH: Synchronization and state estimation for discrete-time complex networks with distributed delays. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 38(5), 1314-1325 (2008)

  5. Hoppensteadt, FC, Izhikevich, EM: Pattern recognition via synchronization in phase-locked loop neural networks. IEEE Trans. Neural Netw. 11(3), 734-738 (2000)

  6. Wei, GW, Jia, YQ: Synchronization-based image edge detection. Europhys. Lett. 59(6), 814-819 (2002)

  7. Li, D, Wang, Z, Ma, G: Controlled synchronization for complex dynamical networks with random delayed information exchanges: a non-fragile approach. Neurocomputing 171, 1047-1052 (2016)

  8. Mathiyalagan, K, Anbuvithya, R, Sakthivel, R, Park, JH, Prakash, P: Non-fragile \({H_{\infty}}\) synchronization of memristor-based neural networks using passivity theory. Neural Netw. 74, 85-100 (2016)

  9. Anbuvithya, R, Mathiyalagan, K, Sakthivel, R, Prakash, P: Non-fragile synchronization of memrisitive BAM networks with random feedback gain fluctuations. Commun. Nonlinear Sci. Numer. Simul. 29, 427-440 (2015)

  10. Mathiyalagan, K, Park, JH, Sakthivel, R: Exponential synchronization for fractional-order chaotic systems with mixed uncertainties. Complexity 21, 114-125 (2015)

  11. Park, MJ, Kwon, OM, Ju, HP, Lee, SM, Cha, EJ: Synchronization of discrete-time complex dynamical networks with interval time-varying delays via non-fragile controller with randomly occurring perturbation. J. Franklin Inst. 351, 4850-4871 (2014)

  12. Su, L, Shen, H: Mixed/passive synchronization for complex dynamical networks with sampled-data control. Appl. Math. Comput. 259(15), 931-942 (2015)

  13. Hua, CC, Ge, C, Guan, XP: Synchronization of chaotic Lur’e systems with time delays using sampled-data control. IEEE Trans. Neural Netw. Learn. Syst. 26, 1214-1221 (2015)

  14. Rakkiyappan, R, Sakthivel, N: Pinning sampled-data control for synchronization of complex networks with probabilistic time-varying delays using quadratic convex approach. Neurocomputing 162(25), 26-40 (2015)

  15. Chen, WH, Jiang, ZY, Lu, XM, Luo, SX: \({H_{\infty}}\) synchronization for complex dynamical networks with coupling delays using distributed impulsive control. Nonlinear Anal. Hybrid Syst. 17, 111-127 (2015)

  16. Li, HL, Jiang, YL, Wang, Z, Zhang, L, Teng, Z: Parameter identification and adaptive-impulsive synchronization of uncertain complex networks with nonidentical topological structures. Optik 126(24), 5771-5776 (2015)

  17. Li, Z, Fang, JA, Miao, Q, He, G: Exponential synchronization of impulsive discrete-time complex networks with time-varying delay. Neurocomputing 157, 335-343 (2015)

  18. Wang, JY, Feng, JW, Xu, C, Zhao, Y, Feng, JQ: Pinning synchronization of nonlinearly coupled complex networks with time-varying delays using M-matrix strategies. Neurocomputing 177, 89-97 (2016)

  19. Li, B: Pinning adaptive hybrid synchronization of two general complex dynamical networks with mixed coupling. Appl. Math. Model. 40(4), 2983-2998 (2016)

  20. Liu, XW, Xu, Y: Cluster synchronization in complex networks of nonidentical dynamical systems via pinning control. Neurocomputing 168, 260-268 (2015)

  21. Guo, L, Pan, H, Nian, XH: Adaptive pinning control of cluster synchronization in complex networks with Lurie-type nonlinear dynamics. Neurocomputing 182, 294-303 (2016)

  22. Cai, SM, Zhou, PP, Liu, ZR: Intermittent pinning control for cluster synchronization of delayed heterogeneous dynamical networks. Nonlinear Anal. Hybrid Syst. 18, 134-155 (2015)

  23. Jing, TY, Chen, FQ, Li, QH: Finite-time mixed outer synchronization of complex networks with time-varying delay and unknown parameters. Appl. Math. Model. 39(23-24), 7734-7743 (2015)

  24. Ma, YC, Zheng, YQ: Synchronization of continuous-time Markovian jumping singular complex networks with mixed mode-dependent time delays. Neurocomputing 156, 52-59 (2015)

  25. Shi, KB, Zhong, SM, Zhu, H, Liu, XZ, Zeng, Y: New delay-dependent stability criteria for neutral-type neural networks with mixed random time-varying delays. Neurocomputing 168, 896-907 (2015)

  26. Dai, Y, Cai, YZ, Xu, XM: Synchronization and exponential estimates of complex networks with mixed time-varying coupling delays. Int. J. Autom. Comput. 6(3), 301-307 (2009)

  27. Li, HJ: Delay-distribution-dependent synchronization of T-S fuzzy stochastic complex networks with mixed time delays. In: Chinese Control and Decision Conference, pp. 23-25 (2011)

  28. Song, Q, Cao, JD, Liu, F: Pinning-controlled synchronization of hybrid-coupled complex dynamical networks with mixed time-delays. Int. J. Robust Nonlinear Control 22(6), 690-706 (2012)

  29. Sakthivel, R, Anbuvithya, R, Mathiyalagan, K, Ma, Y, Prakash, P: Reliable anti-synchronization conditions for BAM memristive neural networks with different memductance functions. Appl. Math. Comput. 275, 213-228 (2016)

  30. Hua, CC, Zhang, LL, Guan, XP: Decentralized output feedback adaptive NN tracking control for time-delay stochastic nonlinear systems with prescribed performance. IEEE Trans. Neural Netw. Learn. Syst. 26, 2749-2759 (2015)

  31. Ali, MS, Saravanakumar, R, Zhu, QX: Less conservation delay-dependent control of uncertain neural networks with discrete interval and distributed time-varying delays. Neurocomputing 166, 84-95 (2015)

  32. Hua, CC, Guan, XP: Smooth dynamic output feedback control for multiple time-delay systems with nonlinear uncertainties. Automatica 68, 1-8 (2016)

  33. Yang, XS, Yang, ZC: Synchronization of T-S fuzzy complex dynamical networks with time-varying impulsive delays and stochastic effects. Fuzzy Sets Syst. 235(16), 25-43 (2014)

  34. Saaban, AB, Ibrahim, AB, Shehzad, M, Ahmad, I: Global chaos synchronization of identical and nonidentical chaotic systems using only two nonlinear controllers. Int. J. Math. Comput. Phys. Electr. Comput. Eng. 7(12), 338-344 (2013)

  35. Duan, WY, Cai, CX, Zou, Y, You, J: Synchronization criteria for singular complex dynamical networks with delayed coupling and non-delayed coupling. Control Theory Appl. 30(8), 947-955 (2013)

  36. Wang, Y, Wang, ZD, Liang, JL: Global synchronization stability for delayed complex networks with randomly occurring nonlinearities and multiple stochastic disturbances. J. Phys. A, Math. Theor. 42(13), 1243-1247 (2009)

  37. Seuret, A, Gouaisbaut, F: Jensen’s and Wirtinger’s inequalities for time-delay systems. IFAC Proc. Ser. 46(3), 343-348 (2013)

  38. Li, B, Shen, H, Song, XN, Zhao, JJ: Robust exponential \({H_{\infty}}\) control for uncertain time-varying delay systems with input saturation: a Markov jump model approach. Appl. Math. Comput. 237, 190-202 (2014)

Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61273004) and the Natural Science Foundation of Hebei Province (No. F2014203085). The authors would like to thank the editor and the anonymous reviewers for their many helpful comments and suggestions, which improved the quality of this paper.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Nannan Ma.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

NM carried out the main part of this paper; YM participated in the discussion and revision of the paper. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Cite this article

Ma, Y., Ma, N. Synchronization for complex dynamical networks with mixed mode-dependent time delays. Adv Differ Equ 2016, 242 (2016). https://doi.org/10.1186/s13662-016-0942-z


Keywords