Outer-synchronization of fractional-order neural networks with deviating argument via centralized and decentralized data-sampling approaches

Abstract

This paper investigates outer-synchronization of fractional-order neural networks with deviating argument via centralized and decentralized data-sampling approaches. Motivated by the low cost and high reliability of data-sampling control, we adopt two categories of control strategies, based on centralized and decentralized data-sampling principles, to synchronize fractional-order neural networks with deviating argument. Several sufficient criteria are proposed to realize outer-synchronization of two coupled complex networks under data-sampling control design. Notably, by combining the centralized and decentralized data-sampling methods with the synchronization theory of fractional-order systems and of differential equations with deviating argument, the sampling time points of the control systems are well chosen. An example is given to illustrate the effectiveness of the presented theoretical analysis and results.

Introduction

Recently, fractional calculus has attracted considerable interest, mainly on account of its numerous applications in physics, engineering, and science [1,2,3]. As a branch of mathematical analysis, fractional calculus has a history of more than 300 years. Since fractional-order models describe the behavior of real-world processes more precisely than integer-order models, many practical processes are now identified and depicted in terms of fractional-order models; indeed, the expressiveness of fractional-order dynamics goes beyond the capabilities of traditional integer-order models. Compared with conventional systems, the distinguishing advantage of fractional-order systems is that they offer more degrees of freedom and greater general computational ability. Besides, a fractional-order model possesses infinite memory and captures the hereditary characteristics of diverse processes. It is precisely because of this practicability that the analysis and synthesis of fractional-order dynamics have gained so much attention. For instance, for a fractional-order quaternion-valued neural network with delay, novel results on stability and bifurcation were obtained by using the time delay as the bifurcation parameter in [4]. For fractional-order non-autonomous neural networks, global asymptotical periodicity was studied via properties of fractional-order calculus in [5]. For fractional-order bidirectional associative memory neural networks, the global Mittag-Leffler stabilization problem was investigated by some new analysis approaches of fractional calculus in [6]; more interesting results were reported in [7,8,9] and the references therein.

As hybrid dynamical equations, differential equations with deviating argument are neither purely continuous-time nor purely discrete-time, but a combination of continuous and discrete equations. These novel equations, combining the characteristics of differential and difference equations, were initiated in [10, 11]; at that time, the stability of such systems could not be fully studied owing to the lack of effective tools. As the theory of differential equations developed, a generalized concept of differential equations with deviating argument was presented in [12, 13]. The main method is to convert differential equations with deviating argument into their equivalent integral equations, so that these equations can be used to deal with issues that cannot be handled by discrete equations. As is known, differential equations with deviating argument have a particular feature: during the motion process, the argument can change its type of deviation. Namely, differential equations with deviating argument unify delayed and advanced equations. Therefore, many open questions about systems with deviating argument have been raised, and some heuristic results have appeared in the relevant literature [14, 15]. From another perspective, the notion of retardation corresponds to past states, while the notion of advance corresponds to future states. It is well known that the influence of past and future events on current behavior is very significant in decision-making, so it is worthwhile to investigate mixed-type differential equations, which can give rise to complex behavior.

Synchronization is a common and typical phenomenon in the real world. Over the last few decades, the synchronization problem of complex control systems has played a prominent role in secure communication and control processing [16, 17]. A complex network consists of a mass of nodes and the connections between them; depending on the purposes of the network nodes in diverse environments, the dynamic and topological characteristics of the nodes can lead to complex network behaviors. All sorts of synchronization phenomena then appear, two of the more common ones being inner-synchronization and outer-synchronization. Synchronization occurring within a single network is called inner-synchronization: with an appropriate control scheme, all nodes inside the network tend to identical behaviors. Unlike inner-synchronization, outer-synchronization occurs between two or more complex networks: through a proper control design, all individuals across the networks achieve identical behaviors. In real life, the spread of many diseases between two communities can be explained as outer-synchronization between two networks, and some attractive results on outer-synchronization have been reported [18, 19]. However, outer-synchronization of systems with deviating argument is seldom reported, so it is important and essential to study it.

Data-sampling control theory has been a hot topic because data-sampling control offers high precision and high reliability in many practical applications [20, 21]. As the demand on system performance grows, single-rate data-sampling control can no longer meet the requirements and may even increase system cost; a better design scheme, multi-rate data-sampling control, has therefore come into being [22, 23]. Multi-rate data-sampling control, being less conservative, can achieve control objectives that single-rate data-sampling control cannot, such as gain-margin improvement, simultaneous stability, strong stabilization, and centralized and decentralized control. Data-sampling control, based on sampled signals, is a significant concept related to discretization. Unlike continuous-time control, which occupies the communication channels all the time, multi-rate data-sampling control acts only at selected instants; it is reasonable, easier to implement, and has the advantages of good versatility and effective interference suppression. According to the characteristics of fractional-order systems, centralized and decentralized data-sampling control may be more appropriate for them than other control designs [24, 25]. On the one hand, centralized and decentralized data-sampling control approaches are relatively cheap and simple to operate. On the other hand, in fractional-order systems with complex structures, they only need to select sampling time points rather than the entire time horizon. In other words, once the resource-distribution problems of centralized and decentralized data-sampling control are solved effectively, data transmission and power consumption will be greatly reduced. Nevertheless, how to achieve the ultimate purpose of optimizing data acquisition remains challenging. Consequently, constructing high-efficiency, information-based data-sampling control is worth looking into.

With the above discussion in mind, we use centralized and decentralized data-sampling as two kinds of control schemes to explore outer-synchronization for fractional-order systems with deviating argument. First, owing to the particularity of the deviating argument, we need to find the connection between the current state and the argument state in fractional-order systems. Moreover, according to the classifiable character of data-sampling itself, we divide data-sampling control into structure-dependent and state-dependent sampling control so as to study outer-synchronization in fractional-order systems with deviating argument more thoroughly. To the best of our knowledge, this is the first attempt to address the outer-synchronization problem in deviating argument systems via centralized and decentralized data-sampling methods, and it shows that the selection of the sampling time points is effective and reasonable. Roughly stated, the main contributions of the paper are twofold: (1) centralized and decentralized data-sampling methods are applied to fractional-order neural networks with deviating argument for the first time; (2) several outer-synchronization criteria are first put forward for fractional-order neural networks with deviating argument by utilizing centralized and decentralized data-sampling control schemes.

The remainder of the paper is organized as follows. Preliminaries are stated in Sect. 2 from five aspects: fractional calculus, model description, definition and problem formulation, assumptions and mathematical notations, and lemmas and properties. In Sect. 3, several sufficient conditions for outer-synchronization are put forward under the centralized and decentralized data-sampling approaches, respectively. In Sect. 4, an example demonstrates the efficiency of the derived results. Finally, Sect. 5 summarizes the relevant conclusions.

Preliminaries

Fractional calculus

To facilitate the description of our model, some basic notions for fractional calculus are recalled.

The fractional integral of order \(q>0\) of the function \(\mathscr{Q}(t)\) is defined by

$$\begin{aligned} ^{RL}_{t_{0}}D^{-q}_{t}\mathscr{Q}(t)= \frac{1}{\varGamma (q)} \int _{t_{0}} ^{t} (t-s)^{q -1} \mathscr{Q}(s)\, \mathrm{d}s, \end{aligned}$$

where \(t \ge t_{0}\), \(\varGamma (\cdot )\) is the Gamma function, that is,

$$\begin{aligned} \varGamma (q)= \int _{0}^{+\infty } s^{q -1} e^{-s}\, \mathrm{d}s. \end{aligned}$$

The Caputo derivative of order \(q>0\) of the function \(\mathscr{Q}(t) \in C^{n+1}([t_{0},+\infty ),\mathcal{R} )\) is defined as

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}\mathscr{Q}(t)= \frac{1}{\varGamma (n-q)} \int ^{t} _{t_{0}}\frac{\mathscr{Q}^{(n)}(s)}{(t-s)^{q-n+1}}\,\mathrm{d}s, \end{aligned}$$

where \(t \ge t_{0}\) and n is a positive integer satisfying \(n-1 < q < n\).
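For readers who wish to experiment numerically, the Caputo derivative above can be approximated by the standard L1 finite-difference scheme for \(0<q<1\). The following plain-Python sketch (the function name is hypothetical and not part of the paper) is one minimal way to do so.

```python
import math

def caputo_l1(f, t0, t, q, n_steps=1000):
    """Approximate the Caputo derivative of order 0 < q < 1 of f at time t
    using the classical L1 finite-difference scheme on a uniform grid of
    [t0, t]."""
    assert 0 < q < 1 and t > t0
    h = (t - t0) / n_steps
    coef = h ** (-q) / math.gamma(2 - q)
    total = 0.0
    for j in range(n_steps):
        # L1 weight b_j = (j+1)^{1-q} - j^{1-q}
        b = (j + 1) ** (1 - q) - j ** (1 - q)
        # backward difference of f, most recent interval first (j = 0)
        total += b * (f(t0 + (n_steps - j) * h) - f(t0 + (n_steps - j - 1) * h))
    return coef * total
```

For \(f(t)=t\) the scheme is exact, recovering the known value \(t^{1-q}/\varGamma(2-q)\); for a constant function it returns zero, consistent with the Caputo derivative of constants vanishing.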

The one-parameter Mittag-Leffler function \(E_{q}(\cdot )\) is given as

$$\begin{aligned} E_{q}(s)=\sum^{+\infty }_{k=0} \frac{s^{k}}{\varGamma (kq+1)}, \end{aligned}$$

where \(q>0\), s is a complex number.
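The series definition of \(E_{q}(\cdot )\) translates directly into code; the sketch below simply truncates the power series (adequate only for moderate \(\vert s\vert \); the function name is hypothetical).

```python
import math

def mittag_leffler(q, s, terms=100):
    """One-parameter Mittag-Leffler function E_q(s), computed by truncating
    the series sum_{k>=0} s^k / Gamma(k*q + 1) after `terms` terms."""
    return sum(s ** k / math.gamma(k * q + 1) for k in range(terms))
```

For \(q=1\) the series reduces to the exponential series, so \(E_{1}(s)=e^{s}\), which gives a convenient sanity check.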

Model description

Throughout the paper, \(\mathcal{N}\) and \(\mathcal{R}^{+}\) represent the sets of natural and nonnegative real numbers, respectively, i.e., \(\mathcal{N}=\{0,1,2,\ldots \}\), \(\mathcal{R}^{+}=[0,+\infty )\). \(\mathcal{R}^{n}\) stands for the n-dimensional real space. For a given real vector \(\mathscr{A}={({\mathscr{A}_{1}},{\mathscr{A}_{2}},\ldots, {\mathscr{A}_{n}})}^{T}\in \mathcal{R}^{n}\), the norm is defined by \(\Vert \mathscr{A}\Vert =\sum^{n}_{i=1}\alpha _{i}\vert \mathscr{A}_{i}\vert \), where \(\alpha _{i}>0\). Choose two real-valued sequences \(\{t_{k}\}\), \(\{\varrho _{k}\}\), \(k\in \mathcal{N}\), satisfying \(t_{k}< t_{k+1}\), \(t_{k} \le \varrho _{k}\le t_{k+1}\) for all \(k\in \mathcal{N}\) with \(t_{k}\to +\infty \) as \(k\to +\infty \). Similarly, choose two further real-valued sequences \(\{t_{k}^{i}\}\), \(\{\rho _{k}\}\), \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\), satisfying \(t_{k}^{i}< t_{k+1}^{i}\), \(t_{k}^{i} \le \rho _{k}\le t_{k+1}^{i}\) for all \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\) with \(t_{k}^{i}\to +\infty \) as \(k\to +\infty \).
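The weighted 1-norm \(\Vert \mathscr{A}\Vert =\sum^{n}_{i=1}\alpha _{i}\vert \mathscr{A}_{i}\vert \) used throughout the paper is straightforward to evaluate; a minimal sketch (the helper name is hypothetical) is:

```python
def weighted_norm(a, alpha):
    """Weighted 1-norm ||A|| = sum_i alpha_i * |A_i| with positive
    weights alpha_i, as defined in the model description."""
    assert len(a) == len(alpha) and all(w > 0 for w in alpha)
    return sum(w * abs(x) for x, w in zip(a, alpha))
```

With all weights equal to 1 this is the ordinary 1-norm; unequal weights let the synchronization criteria later in the paper trade off the components of the error vector.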

We focus on the fractional-order neural networks with deviating argument governed by

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}z_{i}(t)&= -a_{i}(t)z_{i}(t)+\sum_{j=1}^{n}b_{ij}(t)f _{j}\bigl(z_{j}(t)\bigr) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t)g_{j} \bigl(z_{j}\bigl(\gamma (t)\bigr)\bigr)+\nu _{i}(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(1)

where \(0< q<1\); \(z_{i}(t)\) denotes the state variable; \(\gamma (t)= \varrho _{k}\) if \(t\in [t_{k},t_{k+1})\), \(k\in \mathcal{N}\), \(t\in \mathcal{R}^{+}\), and, similarly, \(\gamma (t)=\rho _{k}\) if \(t\in [t_{k} ^{i},t_{k+1}^{i})\), \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\), \(t\in \mathcal{R}^{+}\); the self-inhibition \(a_{i}(t)>0\), the synaptic strengths \(b_{ij}(t)\), \(c_{ij}(t)\) and the bias \(\nu _{i}(t)\) are piecewise continuous and bounded; and the output functions \(f_{j}(\cdot )\) and \(g_{j}(\cdot )\) satisfy

$$\begin{aligned}& 0\le \frac{f_{j}(\mathscr{Z})-f_{j}(\overline{\mathscr{Z}})}{ \mathscr{Z}-\overline{\mathscr{Z}}}\le L_{j}, \quad \forall \mathscr{Z},\overline{\mathscr{Z}}\in \mathcal{R}, \mathscr{Z} \ne \overline{ \mathscr{Z}}, \end{aligned}$$
(2)
$$\begin{aligned}& 0\le \frac{g_{j}(\mathscr{Z})-g_{j}(\overline{\mathscr{Z}})}{ \mathscr{Z}-\overline{\mathscr{Z}}}\le H_{j}, \quad \forall \mathscr{Z},\overline{\mathscr{Z}}\in \mathcal{R}, \mathscr{Z} \ne \overline{ \mathscr{Z}}, \end{aligned}$$
(3)

in which \(L_{j}>0\) and \(H_{j}>0\), \(j=1,2,\ldots,n\).

Actually, consider system (1) on \([t_{k},t_{k+1})\), \(k\in \mathcal{N}\), with the identification function \(\gamma (t)=\varrho _{k}\), \(k \in \mathcal{N}\), \(t\in \mathcal{R}^{+}\): if \(t_{k}\le t<\varrho _{k}\), then system (1) is advanced since \(\gamma (t)>t\); if \(\varrho _{k}< t< t _{k+1}\), then system (1) is retarded since \(\gamma (t)< t\). Similarly, consider model (1) on \([t_{k}^{i},t_{k+1}^{i})\), \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\), with the identification function \(\gamma (t)=\rho _{k}\), \(k\in \mathcal{N}\), \(t\in \mathcal{R}^{+}\): if \(t_{k}^{i}\le t< \rho _{k}\), then system (1) is advanced since \(\gamma (t)>t\); if \(\rho _{k}< t< t_{k+1}^{i}\), then system (1) is retarded since \(\gamma (t)< t\). Taken together, system (1) is a mixed-type equation in accordance with the property of the identification function \(\gamma (t)\).
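The piecewise-constant identification function \(\gamma (t)\) and the advanced/retarded classification just described can be illustrated concretely; the sketch below (hypothetical names, plain Python) builds \(\gamma \) from given sequences \(\{t_{k}\}\) and \(\{\varrho _{k}\}\) and classifies a time instant.

```python
import bisect

def make_gamma(t_seq, rho_seq):
    """Return the deviating argument gamma(t) = rho_k for t in
    [t_k, t_{k+1}), where t_seq = {t_k} is increasing and
    rho_seq = {rho_k} satisfies t_k <= rho_k <= t_{k+1}."""
    def gamma(t):
        # index of the interval [t_k, t_{k+1}) containing t
        k = bisect.bisect_right(t_seq, t) - 1
        return rho_seq[k]
    return gamma

def deviation_type(t, gamma):
    """Classify the system at time t: 'advanced' if gamma(t) > t,
    'retarded' if gamma(t) < t, 'neutral' if they coincide."""
    g = gamma(t)
    if g > t:
        return "advanced"
    if g < t:
        return "retarded"
    return "neutral"
```

For example, with \(t_{k}=k\) and \(\varrho _{k}=k+0.5\), the system is advanced on the first half of each interval and retarded on the second half, exactly the mixed-type behavior described above.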

Under the centralized data-sampling approach, (1) can be transformed into

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}z_{i}(t)&= -a_{i}(t)z_{i}(t_{k})+\sum _{j=1}^{n}b _{ij}(t)f_{j} \bigl(z_{j}(t_{k})\bigr) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t)g_{j} \bigl(z_{j}\bigl(\gamma (t_{k})\bigr)\bigr)+\nu _{i}(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(4)

where \(t_{k}\) is the short form of \(t_{k(t)}\) with \(k(t) = \max \{ \mathfrak{K}:t_{\mathfrak{K}}\le t\}\). In addition, \(0=t_{0}< t_{1}< \cdots <t_{k}<\cdots \) is consistent with all the neurons, which implies that all the neurons are sampled at time \(t_{k}\).

Under the decentralized data-sampling approach, (1) can be transformed into

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}z_{i}(t)&= -a_{i}(t)z_{i}\bigl(t_{k}^{i}\bigr)+ \sum_{j=1} ^{n}b_{ij}(t)f_{j} \bigl(z_{j}\bigl(t_{k}^{j}\bigr)\bigr) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t)g_{j} \bigl(z_{j}\bigl(\gamma \bigl(t_{k}^{j}\bigr) \bigr)\bigr)+\nu _{i}(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(5)

where \(t_{k}^{i}\) is the short form of \(t^{i}_{k(t)}\) with \(k(t) = \max \{\mathfrak{K}:t^{i}_{\mathfrak{K}}\le t\}\). In addition, \(0=t_{0}^{i}< t_{1}^{i}<\cdots <t_{k}^{i}<\cdots \) is dispersed by \(i\in \{1,2,\ldots,n\}\), which implies that the different kinds of neurons are sampled at time \(t^{i}_{k}\).
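The index map \(k(t)=\max \{\mathfrak{K}:t_{\mathfrak{K}}\le t\}\) underlying both (4) and (5) is easy to realize in code; a minimal sketch (hypothetical names) is:

```python
import bisect

def sample_index(t, sample_times):
    """k(t) = max{K : t_K <= t} for a nondecreasing list of sampling
    instants beginning at t_0 = 0."""
    return bisect.bisect_right(sample_times, t) - 1

def latest_sample(t, sample_times):
    """The most recent sampling instant t_{k(t)}, i.e., the time whose
    state value appears on the right-hand side of (4) or (5)."""
    return sample_times[sample_index(t, sample_times)]
```

In the centralized scheme (4) every neuron shares one common list of sampling instants \(\{t_{k}\}\), whereas in the decentralized scheme (5) each neuron i carries its own list \(\{t_{k}^{i}\}\) and the two helpers above are applied per neuron.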

Remark 2.1

To better understand the features of centralized and decentralized data-sampling control, the general system (1) is given first. For the state variables \(z_{1}(t),z_{2}(t),\ldots, z_{n}(t)\): if all of them are sampled at time \(t_{k}\), then system (1) can be rewritten as system (4), which represents the centralized data-sampling model; if the state variable \(z_{1}(t)\) is sampled at time \(t^{1}_{k}\), the state variable \(z_{2}(t)\) at time \(t^{2}_{k}\), the state variable \(z_{3}(t)\) at time \(t^{3}_{k}\), and so on, then system (1) can be rewritten as system (5), which represents the decentralized data-sampling model.

Remark 2.2

From a physical standpoint, if at the same time point \(t_{k}\) each neuron transmits its state to the corresponding out-neighbors and meanwhile receives the corresponding in-neighbor state information, then this describes the phenomenon of centralized data-sampling control. If, whenever neuron i renews its state, it transmits the state to the corresponding out-neighbors at time \(t_{k}^{i}\), and, whenever its neighbor neuron j renews its state, the corresponding in-neighbor state information is received at time \(t_{k}^{j}\), then this describes the phenomenon of decentralized data-sampling control.

Remark 2.3

In system (1), the state variables \(z_{i}(t)\), \(z_{j}(t)\) and \(z_{j}(\gamma (t))\) show that all states are related to the time t, and the state \(z_{j}(\gamma (t))\) is also related to the deviating argument \(\gamma (t)\). According to the properties of the deviating argument, system (1) is the coalition of a continuous equation and a discrete equation; namely, system (1) is of mixed type. In system (4), the state variables \(z_{i}(t_{k})\), \(z_{j}(t_{k})\) and \(z_{j}(\gamma (t_{k}))\) show that all states are related to the time \(t_{k}\), and the state \(z_{j}(\gamma (t_{k}))\) is also related to the deviating argument \(\gamma (t_{k})\); this shows that the centralized data-sampling occurs at the same time and that system (4) is a discrete equation. Similarly, in system (5), the state variables \(z_{i}(t^{i}_{k})\), \(z_{j}(t^{j}_{k})\) and \(z_{j}(\gamma (t^{j}_{k}))\) show that all states are related to the time \(t^{i}_{k}\) or \(t^{j}_{k}\), and the state \(z_{j}(\gamma (t^{j}_{k}))\) is also related to the deviating argument \(\gamma (t^{j}_{k})\); this shows that the decentralized data-sampling occurs at different times and that system (5) is a discrete equation.

Remark 2.4

For systems (1), (4) and (5), it can be seen that systems (4) and (5) are based on system (1). By examining the structures of systems (1), (4) and (5), it is clear that there exist both differences and connections among the three systems. In particular, the bias \(\nu _{i}(t)\) is unchanged in systems (4) and (5) because the sampling time is always dependent on the state. Hence, when the centralized and decentralized data-sampling approaches are applied to a given system, the corresponding centralized and decentralized data-sampling models can be obtained by analyzing the relationship between the state and the sampling time.

Remark 2.5

In the model description, two different types of sequences are mentioned: one is the real-valued sequence of the deviating argument system, and the other is the sampling time sequence of the data-sampling control. From the point of view of the deviating argument system, advance and retardation are its characteristic features, and these features are mainly reflected in the two real-valued sequences: the ordering between the sequences \(\{t_{k}\}\) and \(\{\varrho _{k}\}\) is given, as is the ordering between the sequences \(\{t_{k}^{i}\}\) and \(\{\rho _{k}\}\). Moreover, \(\{t_{k}\}\) denotes the centralized sampling time sequence of the data-sampling system (4), and \(\{t_{k}^{i}\}\) denotes the decentralized sampling time sequence of the data-sampling system (5). That is to say, the sequences \(\{t_{k}\}\) and \(\{t_{k}^{i}\}\) serve not only as the real-valued sequences of the deviating argument system, but also as the sampling time sequences of the data-sampling control. Thus the real-valued sequences and the sampling time sequences are skillfully combined in the deviating argument system on the basis of the properties of the sampling time sequences. Indeed, the centralized and decentralized data-sampling strategies are used in fractional-order systems with deviating argument for the first time.

Definition and problem formulation

This subsection gives the definition of outer-synchronization and provides problem formulation about centralized and decentralized data-sampling schemes.

Definition 2.1

Consider any two trajectories \(x(t)\) and \(\bar{x}(t)\) of (1) with initial values \(x(0)\) and \(\bar{x}(0)\), respectively. System (1) is said to realize outer-synchronization if there exists some control design such that

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert x(t)-\bar{x}(t) \bigr\Vert =0. \end{aligned}$$

Remark 2.6

Recall classical synchronization and take drive-response synchronization as an example: the drive system \({}^{C}_{t_{0}}D^{q}_{t}x(t)=F(t,x)\) is an uncontrolled system without a controller, and \({}^{C}_{t_{0}}D^{q}_{t}y(t)=G(t,y,w(t))\) represents the response system, which possesses the controller \(w(t)\). In a nutshell, if \(\lim_{t\to +\infty }\Vert y(t)-x(t)\Vert =0\), then the drive-response systems realize synchronization. For outer-synchronization, however, it is not difficult to see from Definition 2.1 that its realization is closely related to the behaviors of any two trajectories of system (1). To be specific, based on some control design, if \(\lim_{t\to +\infty }\Vert x_{i}(t)-x_{j}(t)\Vert =0\), then system (1) realizes outer-synchronization. The difference between them lies in the underlying concepts: \(\lim_{t\to +\infty }\Vert y(t)-x(t)\Vert =0\) means \(y(t)\to x(t)\), whereas \(\lim_{t\to +\infty }\Vert x_{i}(t)-x_{j}(t)\Vert =0\) means \(x_{1}(t)\to x _{2}(t)\to x_{3}(t)\to \cdots \to x_{n}(t)\).

Let \(x(t)\) and \(\bar{x}(t)\) be two trajectories of (1) with initial values \(x(0)\) and \(\bar{x}(0)\), respectively. Denoting \(y(t)=(y_{1}(t),y _{2}(t),\ldots,y_{n}(t))^{T}\) with \(y_{i}(t)=x_{i}(t)-\bar{x}_{i}(t)\), it follows that

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}y_{i}(t)&= -a_{i}(t)y_{i}(t)+\sum_{j=1}^{n}b_{ij}(t) \ell _{j}(t) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t) \hbar _{j}\bigl(\gamma (t)\bigr), \quad i=1,2,\ldots,n, \end{aligned}$$
(6)

where \(\ell _{j}(t)=f_{j}(x_{j}(t))-f_{j}(\bar{x}_{j}(t))\), \(\hbar _{j}( \gamma (t))=g_{j}(x_{j}(\gamma (t)))-g_{j}(\bar{x}_{j}(\gamma (t)))\), \(j=1,2,\ldots,n\), for all \(t\in \mathcal{R}^{+}\).

In the centralized data-sampling strategy, let \(z(t)\) and \(\bar{z}(t)\) be two trajectories of (4) with initial values \(z(0)\) and \(\bar{z}(0)\), respectively. Denoting \(u(t)=(u_{1}(t),u_{2}(t),\ldots, u_{n}(t))^{T}\) with \(u_{i}(t)=z_{i}(t)-\bar{z}_{i}(t)\), we have

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}u_{i}(t)&= -a_{i}(t)u_{i}(t_{k})+\sum _{j=1}^{n}b _{ij}(t)p_{j}(t_{k}) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t)h_{j} \bigl(\gamma (t_{k})\bigr), \quad i=1,2,\ldots,n, \end{aligned}$$
(7)

where \(p_{j}(t)=f_{j}(z_{j}(t))-f_{j}(\bar{z}_{j}(t))\), \(h_{j}(\gamma (t))=g_{j}(z_{j}(\gamma (t)))-g_{j}(\bar{z}_{j}(\gamma (t)))\), \(j=1,2,\ldots,n\), for all \(t\in [t_{k},t_{k+1})\), \(k\in \mathcal{N}\).

Under the structure-dependent centralized data-sampling method, if system (4) is to reach outer-synchronization, then the control scheme needs to be designed on the basis of the system structure (7) so that

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert =0. \end{aligned}$$

Under the state-dependent centralized data-sampling method, consider the state measurement error of system (4):

$$\begin{aligned} r_{i}(t)=u_{i}(t_{k})-u_{i}(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(8)

where \(t\in [t_{k},t_{k+1})\), \(k\in \mathcal{N}\). If system (4) is to reach outer-synchronization, then the control scheme needs to be designed on the basis of the state measurement error (8) so that

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert =0. \end{aligned}$$

In the decentralized data-sampling strategy, let \(z(t)\) and \(\bar{z}(t)\) be two trajectories of (5) with initial values \(z(0)\) and \(\bar{z}(0)\), respectively. Denoting \(u(t)=(u_{1}(t),u_{2}(t),\ldots, u_{n}(t))^{T}\) with \(u_{i}(t)=z_{i}(t)-\bar{z}_{i}(t)\), we have

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}u_{i}(t)&= -a_{i}(t)u_{i}\bigl(t_{k}^{i}\bigr)+ \sum_{j=1} ^{n}b_{ij}(t)p_{j} \bigl(t_{k}^{j}\bigr) \\ &\quad{} +\sum_{j=1}^{n}c_{ij}(t)h_{j} \bigl(\gamma \bigl(t_{k}^{j}\bigr)\bigr), \quad i=1,2, \ldots,n, \end{aligned}$$
(9)

where \(p_{j}(t)=f_{j}(z_{j}(t))-f_{j}(\bar{z}_{j}(t))\), \(h_{j}(\gamma (t))=g_{j}(z_{j}(\gamma (t)))-g_{j}(\bar{z}_{j}(\gamma (t)))\), \(j=1,2,\ldots,n\), for all \(t\in [t_{k}^{j},t_{k+1}^{j})\), \(k\in \mathcal{N}\).

Under the state-dependent decentralized data-sampling method, consider the state measurement error of system (5):

$$\begin{aligned} r_{i}(t)=u_{i}\bigl(t_{k}^{i} \bigr)-u_{i}(t), \quad i=1,2,\ldots,n, \end{aligned}$$
(10)

where \(t\in [t_{k}^{i},t_{k+1}^{i})\), \(k\in \mathcal{N}\). If system (5) is to reach outer-synchronization, then the control scheme needs to be designed on the basis of the state measurement error (10) so that

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert =0. \end{aligned}$$

Remark 2.7

It follows from Definition 2.1 that, to decide whether the system can realize outer-synchronization, the first thing to consider is any two solutions of the system. Clearly, system (6) is derived from the two solutions of system (1). For the outer-synchronization problem of a deviating argument system, it is convenient to find the relation between the current state and the deviating argument state through system (6). Without doubt, the realization of outer-synchronization in a deviating argument system is closely related to the analysis of system (6).

Remark 2.8

Focusing on the centralized data-sampling control, system (7) is derived from the two solutions of system (4); meanwhile, the structure-dependent and state-dependent centralized data-sampling methods are described separately. As the names imply, structure-dependent data-sampling is based on the structure of the system itself, and state-dependent data-sampling is based on the observation measurement error of the system. Clearly, this observation measurement error is the difference between the state at the centralized data-sampling time and the state at the current moment. Similarly, focusing on the decentralized data-sampling control, system (9) is derived from the two solutions of system (5), and the state-dependent decentralized data-sampling method is described; in this case the observation measurement error is the difference between the state at the decentralized data-sampling time and the state at the current moment.

Assumptions and mathematical notations

Without loss of generality, we suppose that the existence and uniqueness of solutions are guaranteed for systems (1), (4) and (5); a detailed discussion is given in [14, Theorem 1]. For simplicity, we introduce some assumptions and mathematical notations which will be used in what follows.

Two assumptions are made.

Assumption 2.1

For the sequence \(\{t_{k}\}\), there exists a constant \(\upsilon >0\) satisfying \(t_{k+1}-t_{k}\le \upsilon \), \(k \in \mathcal{N}\).

Assumption 2.2

For the sequence \(\{t_{k}^{i}\}\), there exists a constant \(\varphi >0\) satisfying \(t_{k+1}^{i}-t_{k}^{i}\le \varphi \), \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\).

Throughout this paper, let \(\alpha _{i}>0\), \(i=1,2,\ldots,n\), be positive constants, and define the following mathematical notations:

$$\begin{aligned}& \epsilon =\frac{\phi ^{q}}{\varGamma (q+1)}, \end{aligned}$$
(11)
$$\begin{aligned}& \mu _{1} =\max_{1\le j\le n}\sup _{t\in \mathcal{R}^{+}} \Biggl\{ a_{j}(t)+b _{jj}^{+}(t)L_{j} +\sum^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b _{ij}(t) \bigr\vert L_{j} \Biggr\} , \end{aligned}$$
(12)
$$\begin{aligned}& \mu _{2} =\max_{1\le j\le n}\sup _{t\in \mathcal{R}^{+}} \Biggl\{ c_{jj} ^{+}(t)H_{j}+ \sum^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H _{j} \Biggr\} , \end{aligned}$$
(13)
$$\begin{aligned}& \delta =(1+\epsilon \mu _{2})E_{q}\bigl(\mu _{1}\phi ^{q}\bigr), \end{aligned}$$
(14)
$$\begin{aligned}& \sigma =\frac{1}{1-\epsilon (\mu _{1}\delta +\mu _{2})}>0, \end{aligned}$$
(15)
$$\begin{aligned}& P =\max_{1\le i\le n}\sup_{t\in \mathcal{R}^{+}}\bigl\{ a_{i}(t)-b_{ii} ^{-}(t)L_{i}\bigr\} , \end{aligned}$$
(16)
$$\begin{aligned}& \begin{aligned}[b] Q &=\max_{1\le j\le n}\sup _{t\in \mathcal{R}^{+}} \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} +\sigma c_{jj} ^{+}(t)H_{j} \\ &\quad{} +\sigma \sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c _{ij}(t) \bigr\vert H_{j} \Biggr\} , \end{aligned} \end{aligned}$$
(17)
$$\begin{aligned}& \begin{aligned}[b] \eta _{j}(\alpha ,t) &=a_{j}(t)-b_{jj}^{+}(t)L_{j}-\sum ^{n}_{i=1,i \ne j}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} -\sigma c_{jj} ^{+}(t)H_{j} \\ & \quad {}-\sigma \sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c _{ij}(t) \bigr\vert H_{j}, \end{aligned} \end{aligned}$$
(18)

where \(\phi =\max \{\upsilon ,\varphi \}\), \(b_{ii}^{-}(t)=\min \{b_{ii}(t),0 \}\), \(b_{ii}^{+}(t)=\max \{b_{ii}(t),0\}\), \(c_{ii}^{-}(t)=\min \{c_{ii}(t),0\}\), \(c_{ii}^{+}(t)=\max \{c_{ii}(t),0\}\). Additionally, by the boundedness of parameters σ, \(a_{i}(t)\), \(b_{ij}(t)\) and \(c_{ij}(t)\), there must exist a positive constant U satisfying

$$\begin{aligned} \max_{1\le j\le n}\sup_{t\in \mathcal{R}^{+}}\eta _{j}( \alpha ,t) \le U. \end{aligned}$$
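As a numerical illustration of how the notations (11)-(15) interlock, the sketch below evaluates ϵ, \(\mu _{1}\), \(\mu _{2}\), δ and σ for constant (time-invariant) parameters, so that the suprema over t reduce to plain maxima. All numerical values and the helper names are hypothetical; this is a sketch of the definitions only, not part of the paper's analysis.

```python
import math

def ml(q, s, terms=80):
    # truncated one-parameter Mittag-Leffler series E_q(s)
    return sum(s ** k / math.gamma(k * q + 1) for k in range(terms))

def notation_constants(q, phi, a, b, c, L, H, alpha):
    """Evaluate (11)-(15) for constant parameters: a, L, H, alpha are
    length-n lists, b and c are n x n matrices indexed [i][j]."""
    n = len(a)
    eps = phi ** q / math.gamma(q + 1)                       # (11)
    mu1 = max(                                               # (12)
        a[j] + max(b[j][j], 0.0) * L[j]
        + sum(alpha[i] / alpha[j] * abs(b[i][j]) * L[j]
              for i in range(n) if i != j)
        for j in range(n))
    mu2 = max(                                               # (13)
        max(c[j][j], 0.0) * H[j]
        + sum(alpha[i] / alpha[j] * abs(c[i][j]) * H[j]
              for i in range(n) if i != j)
        for j in range(n))
    delta = (1 + eps * mu2) * ml(q, mu1 * phi ** q)          # (14)
    denom = 1 - eps * (mu1 * delta + mu2)
    assert denom > 0, "(15) requires 1 - eps*(mu1*delta + mu2) > 0"
    sigma = 1.0 / denom                                      # (15)
    return eps, mu1, mu2, delta, sigma
```

The assertion makes explicit the positivity requirement on σ discussed in Remark 2.10: the sampling gap φ must be small enough that \(\epsilon (\mu _{1}\delta +\mu _{2})<1\).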

Remark 2.9

Taking into account the realization of the dynamic behavior of the ideal system, Assumptions 2.1 and 2.2 both restrict the variation range of the deviating argument function.

Remark 2.10

In order to ensure that the parameters Q and \(\eta _{j}(\alpha ,t)\) are reasonable in our system, we demand that the parameter σ be bigger than 0. It needs to be pointed out that the upper bound of the parameter σ can be derived from the boundedness of the parameters ϵ, \(\mu _{1}\), \(\mu _{2}\) and δ. In addition, unless otherwise specified, each mathematical notation that appears in this subsection has only one meaning.

Lemmas and properties

In this subsection, we put forward some relevant lemmas which are utilized for our main results later. In particular, the connection between the current state and the state with deviating argument is given for all \(t\in \mathcal{R}^{+}\); it is crucial for dealing with the outer-synchronization problem in our system.

Lemma 2.1

([9])

Assume that \(q>0\), that \(\mathscr{B}(t)\) is a nonnegative function locally integrable on \([a,b)\), that \(\mathscr{H}(t)\) is a nonnegative, nondecreasing, continuous and bounded function on \([a,b)\), and that \(\mathscr{G}(t)\) is nonnegative and locally integrable on \([a,b)\) with

$$\begin{aligned} \mathscr{G}(t)\le \mathscr{B}(t)+\mathscr{H}(t) \int ^{t}_{0}(t-s)^{q-1} \mathscr{G}(s)\, \mathrm{d}s, \quad t\in [a,b), \end{aligned}$$

then

$$\begin{aligned} \mathscr{G}(t)\le \mathscr{B}(t)+ \int ^{t}_{0}\sum^{+\infty }_{k=1} \biggl[\frac{[\mathscr{H}(t)\varGamma (q)]^{k}}{\varGamma (kq)}(t-s)^{kq-1} \mathscr{B}(s) \biggr]\, \mathrm{d}s, \quad t\in [a,b). \end{aligned}$$

If, additionally, \(\mathscr{B}(t)\) is a nondecreasing function on \([a,b)\), then

$$\begin{aligned} \mathscr{G}(t)\le \mathscr{B}(t)E_{q}\bigl(\mathscr{H}(t)\varGamma (q) (t-a)^{q}\bigr), \quad t\in [a,b). \end{aligned}$$
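For intuition only (not part of the original analysis), the one-parameter Mittag-Leffler function \(E_{q}(z)=\sum_{k\ge 0}z^{k}/\varGamma (kq+1)\) appearing in this bound can be evaluated by truncating its power series. The truncation depth below is an illustrative choice, and the sketch is minimal rather than a production implementation:

```python
from math import gamma

def mittag_leffler(q: float, z: float, n_terms: int = 200) -> float:
    """Truncated power series E_q(z) = sum_{k>=0} z**k / Gamma(q*k + 1)."""
    total = 0.0
    for k in range(n_terms):
        x = q * k + 1.0
        if x > 170.0:  # math.gamma overflows beyond ~171
            break
        total += z**k / gamma(x)
    return total

# Sanity checks: E_1(z) reduces to exp(z), and E_q(0) = 1 for any q > 0.
```

For \(q=1\) the series reduces to the exponential, which gives a convenient sanity check; for large \(|z|\) the truncated series loses accuracy and a dedicated algorithm would be needed.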

Lemma 2.2

Let Assumption 2.1 hold and consider the state \(y(t)\) of system (6). Then, for all \(t\in \mathcal{R}^{+}\),

$$\begin{aligned} \bigl\Vert y\bigl(\gamma (t)\bigr) \bigr\Vert \le \sigma \bigl\Vert y(t) \bigr\Vert . \end{aligned}$$
(19)

Proof

Fix \(k\in \mathcal{N}\), for \(t\in [t_{k},t_{k+1})\), it follows that

$$\begin{aligned} y_{i}(t) &=y_{i}(\varrho _{k}) + \frac{1}{\varGamma (q)} \int ^{t}_{\varrho _{k}} \Biggl[-a_{i}(s)y_{i}(s)+ \sum^{n}_{j=1}b_{ij}(s)\ell _{j}(s) \\ &\quad {} +\sum^{n}_{j=1}c_{ij}(s) \hbar _{j}(\varrho _{k}) \Biggr](t-s)^{q-1} \, \mathrm{d}s \\ &=y_{i}(\varrho _{k})+\frac{1}{\varGamma (q)} \int ^{t}_{\varrho _{k}} \Biggl[-a _{i}(s)y_{i}(s) +b_{ii}(s)d_{i}(s)y_{i}(s) \\ &\quad {} +\sum^{n}_{j=1,j\ne i}b_{ij}(s)d_{j}(s)y_{j}(s) +c_{ii}(s)e _{i}(\varrho _{k})y_{i}( \varrho _{k}) \\ &\quad {} +\sum^{n}_{j=1,j\ne i}c_{ij}(s)e_{j}( \varrho _{k})y_{j}(\varrho _{k}) \Biggr](t-s)^{q-1}\,\mathrm{d}s, \quad i=1,2,\ldots,n, \end{aligned}$$
(20)

with

$$\begin{aligned}& d_{i}(t)= \textstyle\begin{cases} \frac{ \ell _{i}(t)}{ y_{i}(t)}, & y_{i}(t)\ne 0, \\ 0, & y_{i}(t)=0, \end{cases}\displaystyle \qquad e_{i}( \varrho _{k})= \textstyle\begin{cases} \frac{ \hbar _{i}(\varrho _{k})}{ y_{i}(\varrho _{k})}, & y_{i}(\varrho _{k})\ne 0, \\ 0, & y_{i}(\varrho _{k})=0. \end{cases}\displaystyle \end{aligned}$$

From (20), on the basis of the definition of the norm in this paper,

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert &\le \sum^{n}_{i=1} \alpha _{i} \bigl\vert y_{i}(\varrho _{k}) \bigr\vert +\frac{1}{ \varGamma (q)}\sum^{n}_{i=1} \alpha _{i} \Biggl[ \int ^{t}_{\varrho _{k}} \Biggl(a_{i}(s) \bigl\vert y_{i}(s) \bigr\vert + \bigl\vert b_{ii}(s)d_{i}(s)y_{i}(s) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert b_{ij}(s)d_{j}(s)y_{j}(s) \bigr\vert + \bigl\vert c_{ii}(s)e _{i}(\varrho _{k})y_{i}( \varrho _{k}) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert c_{ij}(s)e_{j}(\varrho _{k})y_{j}( \varrho _{k}) \bigr\vert \Biggr) (t-s)^{q-1}\,\mathrm{d}s \Biggr]. \end{aligned}$$
(21)

By (2) and (3), clearly, \(0\le d_{i}(t)\le L_{i}\) and \(0\le e_{i}( \varrho _{k})\le H_{i}\) for all \(i=1,2,\ldots,n\), \(t\in \mathcal{R} ^{+}\), which means

$$\begin{aligned}& b_{ii}^{-}(s)L_{i}\le b_{ii}(s)d_{i}(s)\le b_{ii}^{+}(s)L_{i}, \end{aligned}$$
(22)
$$\begin{aligned}& c_{ii}^{-}(s)H_{i}\le c_{ii}(s)e_{i}( \varrho _{k})\le c_{ii}^{+}(s)H _{i}. \end{aligned}$$
(23)

Substituting (22) and (23) into (21), it follows that

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert &\le \sum^{n}_{i=1} \alpha _{i} \bigl\vert y_{i}(\varrho _{k}) \bigr\vert +\frac{1}{ \varGamma (q)}\sum^{n}_{i=1} \alpha _{i} \Biggl[ \int ^{t}_{\varrho _{k}} \Biggl(a_{i}(s) \bigl\vert y_{i}(s) \bigr\vert +b_{ii}^{+}(s)L_{i} \bigl\vert y_{i}(s) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert b_{ij}(s) \bigr\vert L_{j} \bigl\vert y_{j}(s) \bigr\vert +c_{ii}^{+}(s)H _{i} \bigl\vert y_{i}(\varrho _{k}) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \bigl\vert y_{j}( \varrho _{k}) \bigr\vert \Biggr) (t-s)^{q-1}\,\mathrm{d}s \Biggr] \\ &\le \bigl\Vert y(\varrho _{k}) \bigr\Vert +\frac{1}{\varGamma (q)}\sum ^{n}_{j=1} \Biggl(c_{jj} ^{+}(s)H_{j}+\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H _{j} \Biggr)\alpha _{j} \bigl\vert y_{j}(\varrho _{k}) \bigr\vert \\ &\quad {} \cdot \int ^{t}_{\varrho _{k}}(t-s)^{q-1}\,\mathrm{d}s + \frac{1}{ \varGamma (q)}\sum^{n}_{j=1} \Biggl[ \int ^{t}_{\varrho _{k}} \Biggl(a_{j}(s)+b _{jj}^{+}(s)L_{j} \\ &\quad {} +\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(s) \bigr\vert L _{j} \Biggr)\alpha _{j} \bigl\vert y_{j}(s) \bigr\vert (t-s)^{q-1}\,\mathrm{d}s \Biggr] \\ &\le \bigl\Vert y(\varrho _{k}) \bigr\Vert +\frac{\upsilon ^{q}}{\varGamma (q+1)}\mu _{2} \bigl\Vert y( \varrho _{k}) \bigr\Vert + \frac{1}{\varGamma (q)} \int ^{t}_{\varrho _{k}}\mu _{1} \bigl\Vert y(s) \bigr\Vert (t-s)^{q-1} \,\mathrm{d}s \\ &\le \bigl\Vert y(\varrho _{k}) \bigr\Vert +\epsilon \mu _{2} \bigl\Vert y(\varrho _{k}) \bigr\Vert + \frac{\mu _{1}}{\varGamma (q)} \int ^{t}_{\varrho _{k}}(t-s)^{q-1} \bigl\Vert y(s) \bigr\Vert \,\mathrm{d}s. \end{aligned}$$

Using Lemma 2.1, we obtain

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert \le (1+\epsilon \mu _{2}) \bigl\Vert y(\varrho _{k}) \bigr\Vert E_{q}\bigl(\mu _{1} \upsilon ^{q}\bigr)\le \delta \bigl\Vert y(\varrho _{k}) \bigr\Vert . \end{aligned}$$

Similarly, for \(t\in [t_{k},t_{k+1})\), we obtain

$$\begin{aligned} \bigl\Vert y(\varrho _{k}) \bigr\Vert &\le \bigl\Vert y(t) \bigr\Vert +\epsilon \mu _{2} \bigl\Vert y(\varrho _{k}) \bigr\Vert +\frac{ \mu _{1}}{\varGamma (q)} \int ^{t}_{\varrho _{k}}(t-s)^{q-1} \delta \bigl\Vert y( \varrho _{k}) \bigr\Vert \,\mathrm{d}s \\ &\le \bigl\Vert y(t) \bigr\Vert +\epsilon (\mu _{2}+\mu _{1}\delta ) \bigl\Vert y(\varrho _{k}) \bigr\Vert , \end{aligned}$$

and thus

$$\begin{aligned} \bigl\Vert y(\varrho _{k}) \bigr\Vert \le \frac{1}{1-\epsilon (\mu _{1}\delta +\mu _{2})} \bigl\Vert y(t) \bigr\Vert =\sigma \bigl\Vert y(t) \bigr\Vert , \end{aligned}$$

where the parameters ϵ, \(\mu _{1}\), \(\mu _{2}\), δ, σ are defined in (11)–(15). Hence, Lemma 2.2 is true for \(t\in \mathcal{R}^{+}\). □
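To illustrate (outside the proof) how the constants above might be computed, the sketch below evaluates a candidate \(\delta =(1+\epsilon \mu _{2})E_{q}(\mu _{1}\upsilon ^{q})\), consistent with the estimate in the proof, and the corresponding \(\sigma =1/(1-\epsilon (\mu _{1}\delta +\mu _{2}))\). The definitions (11)–(15) in the paper are authoritative, and the parameter values used here are illustrative only:

```python
from math import gamma

def mittag_leffler(q, z, n_terms=200):
    # E_q(z) = sum_{k>=0} z**k / Gamma(q*k + 1), truncated;
    # math.gamma overflows beyond ~171, hence the guard.
    total = 0.0
    for k in range(n_terms):
        x = q * k + 1.0
        if x > 170.0:
            break
        total += z**k / gamma(x)
    return total

def sigma_bound(q, eps, mu1, mu2, upsilon):
    """delta = (1 + eps*mu2) * E_q(mu1 * upsilon**q);
    sigma = 1 / (1 - eps*(mu1*delta + mu2)),
    valid only when the denominator is positive."""
    delta = (1.0 + eps * mu2) * mittag_leffler(q, mu1 * upsilon**q)
    denom = 1.0 - eps * (mu1 * delta + mu2)
    if denom <= 0.0:
        raise ValueError("condition 1 - eps*(mu1*delta + mu2) > 0 violated")
    return delta, 1.0 / denom

# Illustrative parameter values (not taken from the paper's example):
delta, sigma = sigma_bound(q=0.9, eps=0.1, mu1=0.5, mu2=0.4, upsilon=0.2)
```

Note that \(\sigma >1\) whenever the denominator lies in \((0,1)\), which matches its role as an amplification factor between \(\Vert y(\varrho _{k})\Vert \) and \(\Vert y(t)\Vert \).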

Lemma 2.3

([5])

Given \(0< q<1\). If \(\mathscr{Q}(t)\in {C} ^{1}[t_{0},+\infty )\), then

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t} \bigl\vert \mathscr{Q}(t) \bigr\vert \le \operatorname{sgn}\bigl(\mathscr{Q}(t)\bigr) {}^{C}_{t_{0}}D^{q}_{t}\mathscr{Q}(t), \quad t\ge t_{0}, \end{aligned}$$

where

$$\begin{aligned} {}^{C}_{t_{0}}D^{q}_{t} \bigl\vert \mathscr{Q}(t) \bigr\vert =\frac{1}{\varGamma (1-q)} \int _{t _{0}}^{t} \frac{\frac{\mathrm{d}}{\mathrm{d}s} \vert \mathscr{Q}(s) \vert }{(t-s)^{q}} \,\mathrm{d}s. \end{aligned}$$
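For intuition (not used in the proofs), a Caputo derivative of this kind can be approximated on a uniform grid by the standard L1 scheme; the step size and test function below are illustrative. For the linear function \(f(t)=t\) the scheme is exact and recovers \({}^{C}D^{q}t=t^{1-q}/\varGamma (2-q)\):

```python
from math import gamma

def caputo_l1(f_vals, h, q):
    """L1 approximation of the Caputo derivative {}^C D^q f at the last
    grid point, for 0 < q < 1 and uniform step h. f_vals holds f on the
    grid t_0, t_0 + h, ..., t_0 + n*h."""
    n = len(f_vals) - 1
    coeff = h**(-q) / gamma(2.0 - q)
    total = 0.0
    for j in range(n):
        # Weights b_j = (j+1)^(1-q) - j^(1-q) of the L1 quadrature.
        b_j = (j + 1)**(1.0 - q) - j**(1.0 - q)
        total += b_j * (f_vals[n - j] - f_vals[n - j - 1])
    return coeff * total
```

The weights telescope for a linear function, which makes \(f(t)=t\) a convenient correctness check for the implementation.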

Lemma 2.4

Let Assumption 2.2 hold and consider the state \(y(t)\) of system (6). Then, for all \(t\in \mathcal{R}^{+}\),

$$\begin{aligned} \bigl\Vert y\bigl(\gamma (t)\bigr) \bigr\Vert \le \sigma \bigl\Vert y(t) \bigr\Vert . \end{aligned}$$
(24)

Proof

Fix \(k\in \mathcal{N}\), for \(t\in [t_{k}^{i},t_{k+1}^{i})\), \(i=1,2,\ldots,n\), we have

$$\begin{aligned} y_{i}(t) &=y_{i}(\rho _{k})+\frac{1}{\varGamma (q)} \int ^{t}_{\rho _{k}} \Biggl[-a_{i}(s)y_{i}(s) +\sum^{n}_{j=1}b_{ij}(s)\ell _{j}(s) \\ &\quad {} +\sum^{n}_{j=1}c_{ij}(s) \hbar _{j}(\rho _{k}) \Biggr](t-s)^{q-1} \, \mathrm{d}s \\ &=y_{i}(\rho _{k})+\frac{1}{\varGamma (q)} \int ^{t}_{\rho _{k}} \Biggl[-a_{i}(s)y _{i}(s) +b_{ii}(s)d_{i}(s)y_{i}(s) \\ &\quad {} +\sum^{n}_{j=1,j\ne i}b_{ij}(s)d_{j}(s)y_{j}(s)+c_{ii}(s)e_{i}( \rho _{k})y_{i}(\rho _{k}) \\ &\quad {} +\sum^{n}_{j=1,j\ne i}c_{ij}(s)e_{j}( \rho _{k})y_{j}(\rho _{k}) \Biggr](t-s)^{q-1} \,\mathrm{d}s, \quad i=1,2,\ldots,n, \end{aligned}$$
(25)

with

$$\begin{aligned}& d_{i}(t)= \textstyle\begin{cases} \frac{ \ell _{i}(t)}{ y_{i}(t)}, & y_{i}(t)\ne 0, \\ 0, & y_{i}(t)=0, \end{cases}\displaystyle \qquad e_{i}(\rho _{k})= \textstyle\begin{cases} \frac{ \hbar _{i}(\rho _{k})}{ y_{i}(\rho _{k})}, & y_{i}(\rho _{k}) \ne 0, \\ 0, & y_{i}(\rho _{k})=0. \end{cases}\displaystyle \end{aligned}$$

From (25), on the basis of the definition of the norm in this paper,

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert &\le \sum^{n}_{i=1} \alpha _{i} \bigl\vert y_{i}(\rho _{k}) \bigr\vert +\frac{1}{ \varGamma (q)}\sum^{n}_{i=1} \alpha _{i} \Biggl[ \int ^{t}_{\rho _{k}} \Biggl(a _{i}(s) \bigl\vert y_{i}(s) \bigr\vert + \bigl\vert b_{ii}(s)d_{i}(s)y_{i}(s) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert b_{ij}(s)d_{j}(s)y_{j}(s) \bigr\vert + \bigl\vert c_{ii}(s)e _{i}(\rho _{k})y_{i}( \rho _{k}) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert c_{ij}(s)e_{j}(\rho _{k})y_{j}(\rho _{k}) \bigr\vert \Biggr) (t-s)^{q-1}\,\mathrm{d}s \Biggr]. \end{aligned}$$
(26)

By (2) and (3), clearly, \(0\le d_{i}(t)\le L_{i}\) and \(0\le e_{i}(\rho _{k})\le H_{i}\) for all \(i=1,2,\ldots,n\), \(t\in \mathcal{R}^{+}\), which implies

$$\begin{aligned} &b_{ii}^{-}(s)L_{i}\le b_{ii}(s)d_{i}(s)\le b_{ii}^{+}(s)L_{i}, \end{aligned}$$
(27)
$$\begin{aligned} &c_{ii}^{-}(s)H_{i}\le c_{ii}(s)e_{i}( \rho _{k})\le c_{ii}^{+}(s)H _{i}. \end{aligned}$$
(28)

Substituting (27) and (28) into (26), it follows that

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert &\le \sum^{n}_{i=1} \alpha _{i} \bigl\vert y_{i}(\rho _{k}) \bigr\vert +\frac{1}{ \varGamma (q)}\sum^{n}_{i=1} \alpha _{i} \Biggl[ \int ^{t}_{\rho _{k}} \Biggl(a _{i}(s) \bigl\vert y_{i}(s) \bigr\vert +b_{ii}^{+}(s)L_{i} \bigl\vert y_{i}(s) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert b_{ij}(s) \bigr\vert L_{j} \bigl\vert y_{j}(s) \bigr\vert +c_{ii}^{+}(s)H _{i} \bigl\vert y_{i}(\rho _{k}) \bigr\vert \\ &\quad {} +\sum^{n}_{j=1,j\ne i} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \bigl\vert y_{j}( \rho _{k}) \bigr\vert \Biggr) (t-s)^{q-1} \,\mathrm{d}s \Biggr] \\ &\le \bigl\Vert y(\rho _{k}) \bigr\Vert +\frac{\varphi ^{q}}{\varGamma (q+1)}\mu _{2} \bigl\Vert y(\rho _{k}) \bigr\Vert + \frac{1}{\varGamma (q)} \int ^{t}_{\rho _{k}}\mu _{1} \bigl\Vert y(s) \bigr\Vert (t-s)^{q-1} \,\mathrm{d}s \\ &\le \bigl\Vert y(\rho _{k}) \bigr\Vert +\epsilon \mu _{2} \bigl\Vert y(\rho _{k}) \bigr\Vert + \frac{\mu _{1}}{ \varGamma (q)} \int ^{t}_{\rho _{k}}(t-s)^{q-1} \bigl\Vert y(s) \bigr\Vert \,\mathrm{d}s. \end{aligned}$$

Using Lemma 2.1, we obtain

$$\begin{aligned} \bigl\Vert y(t) \bigr\Vert \le (1+\epsilon \mu _{2}) \bigl\Vert y(\rho _{k}) \bigr\Vert E_{q}\bigl(\mu _{1} \varphi ^{q}\bigr)\le \delta \bigl\Vert y(\rho _{k}) \bigr\Vert . \end{aligned}$$

Similarly, for \(t\in [t_{k}^{i},t_{k+1}^{i})\), \(i=1,2,\ldots,n\), it follows that

$$\begin{aligned} \bigl\Vert y(\rho _{k}) \bigr\Vert &\le \bigl\Vert y(t) \bigr\Vert +\epsilon \mu _{2} \bigl\Vert y(\rho _{k}) \bigr\Vert +\frac{\mu _{1}}{\varGamma (q)} \int ^{t}_{\rho _{k}}(t-s)^{q-1} \delta \bigl\Vert y(\rho _{k}) \bigr\Vert \,\mathrm{d}s \\ &\le \bigl\Vert y(t) \bigr\Vert +\epsilon (\mu _{2}+\mu _{1}\delta ) \bigl\Vert y(\rho _{k}) \bigr\Vert , \end{aligned}$$

and thus

$$\begin{aligned} \bigl\Vert y(\rho _{k}) \bigr\Vert \le \frac{1}{1-\epsilon (\mu _{1}\delta +\mu _{2})} \bigl\Vert y(t) \bigr\Vert =\sigma \bigl\Vert y(t) \bigr\Vert , \end{aligned}$$

where the parameters ϵ, \(\mu _{1}\), \(\mu _{2}\), δ, σ are defined in (11)–(15). Hence, (24) holds. □

Remark 2.11

Lemma 2.1 states a generalized integral form of the Gronwall inequality, which handles the cross-term on both sides of the inequality very well, so it is often utilized for fractional-order systems. In addition, the idea of the proofs of Lemma 2.2 and Lemma 2.4 is similar to that of [14, Theorem 3]; the difference is that the self-inhibition \(a_{i}(t)\) and the synaptic strengths \(b_{ij}(t)\) and \(c_{ij}(t)\) are piecewise continuous and bounded in our system.

Main results

Building on the detailed introduction and explanation in the preceding preliminaries, in this section the design methods of centralized and decentralized data-sampling are developed so as to predict the sampling time points. These two kinds of control design ensure that outer-synchronization is realized in the corresponding systems.

For the sake of the narrative, the control schemes are first applied to the outer-synchronization problem, and then the theoretical results are reviewed and analyzed.

Structure-dependent and state-dependent centralized data-sampling approach

Theorem 3.1

Suppose that \(0<\xi < 1\) and \(\zeta >0\) are constants with \(P\xi \le \zeta \) and \(U\xi \le \zeta (2-\xi )\). Let \(\alpha _{i}>0\) be positive constants such that \(\eta _{j}(\alpha ,t) \ge \zeta \) for all \(i = 1,2,\ldots,n\), \(j = 1,2,\ldots,n\) and \(t \in \mathcal{R}^{+}\). Set \(t_{k+1}\) as a time point satisfying

$$\begin{aligned} t_{k+1}=\sup_{\theta \ge t_{k}} \biggl\{ \theta : \min_{1\le j\le n} \biggl(\frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\eta _{j}(\alpha ,s)\,\mathrm{d}s \biggr)\le \xi , \forall t\in (t_{k},\theta ] \biggr\} \end{aligned}$$
(29)

for \(k\in \mathcal{N}\). Then system (4) is outer-synchronized.

Proof

From \(\eta _{j}(\alpha ,t)\ge \zeta \) and the positive upper bound U, we obtain, for all \(j=1,2,\ldots,n\) and \(t\in \mathcal{R}^{+}\),

$$\begin{aligned} \begin{aligned} \frac{\zeta }{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\,\mathrm{d}s &\le \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\eta _{j}(\alpha ,s) \,\mathrm{d}s \\ &\le \frac{U}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\,\mathrm{d}s. \end{aligned} \end{aligned}$$

By direct computation, for all \(j=1,2,\ldots,n\) and \(t\in [t_{k},t_{k+1})\), this yields

$$\begin{aligned} \frac{\zeta (t-t_{k})^{q}}{\varGamma (q+1)}\le \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\eta _{j}(\alpha ,s)\,\mathrm{d}s \le \frac{U(t-t _{k})^{q}}{\varGamma (q+1)}. \end{aligned}$$

On the basis of system (7), the state \(u(t)\) will not update until

$$\begin{aligned} \min_{1\le j\le n} \biggl(\frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1} \eta _{j}(\alpha ,s)\,\mathrm{d}s \biggr)=\xi \end{aligned}$$
(30)

at time point \(t=t_{k+1}\). Then it yields \(\zeta (t_{k+1}-t_{k})^{q}/ \varGamma (q+1)\le \xi \le U(t_{k+1}-t_{k})^{q}/\varGamma (q+1)\), which means

$$\begin{aligned} \frac{\xi }{U}\le \frac{(t_{k+1}-t_{k})^{q}}{\varGamma (q+1)}\le \frac{ \xi }{\zeta }, \end{aligned}$$
(31)

for \(k\in \mathcal{N}\). Furthermore, we notice that

$$\begin{aligned} \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \eta _{j}( \alpha ,s)\,\mathrm{d}s\le \frac{U\xi }{\zeta }\le 2-\xi . \end{aligned}$$
(32)

Together with (30), we can derive

$$\begin{aligned} \xi \le \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \eta _{j}(\alpha ,s)\,\mathrm{d}s\le 2-\xi , \end{aligned}$$

for \(k\in \mathcal{N}\). According to the definition of the norm in this paper, we now focus on \(u_{i}(t)\) (\(i=1,2,\ldots,n\)) of system (7) at time \(t=t_{k+1}\),

$$\begin{aligned} \bigl\Vert u(t_{k+1}) \bigr\Vert &=\sum ^{n}_{i=1}\alpha _{i} \bigl\vert u_{i}(t_{k+1}) \bigr\vert \\ &=\sum^{n}_{i=1}\alpha _{i} \Biggl\vert u_{i}(t_{k})+\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \Biggl[-a_{i}(s)u_{i}(t_{k}) \\ &\quad {} +\sum_{j=1}^{n}b_{ij}(s)p_{j}(t_{k}) +\sum_{j=1}^{n}c_{ij}(s)h _{j}\bigl(\gamma (t_{k})\bigr) \Biggr]\,\mathrm{d}s \Biggr\vert \\ &=\sum^{n}_{i=1}\alpha _{i} \Biggl\vert u_{i}(t_{k})+\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \Biggl[-a_{i}(s)u_{i}(t_{k}) \\ &\quad {} +b_{ii}(s)m_{i}(t_{k})u_{i}(t_{k})+ \sum_{j=1,j\ne i}^{n}b_{ij}(s)m _{j}(t_{k})u_{j}(t_{k}) \\ &\quad {} +c_{ii}(s)n_{i}\bigl(\gamma (t_{k}) \bigr)u_{i}\bigl(\gamma (t_{k})\bigr)+ \sum _{j=1,j\ne i}^{n}c_{ij}(s)n_{j}\bigl( \gamma (t_{k})\bigr)u_{j}\bigl(\gamma (t _{k}) \bigr) \Biggr]\,\mathrm{d}s \Biggr\vert \\ &=\sum^{n}_{i=1}\alpha _{i} \Biggl\vert u_{i}(t_{k}) \biggl\{ 1-\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \bigl[a_{i}(s) \\ &\quad {} -b_{ii}(s)m_{i}(t_{k})\bigr]\, \mathrm{d}s \biggr\} + \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \\ &\quad {} \cdot \Biggl[\sum_{j=1,j\ne i}^{n}b_{ij}(s)m_{j}(t_{k})u_{j}(t _{k}) +c_{ii}(s)n_{i}\bigl(\gamma (t_{k})\bigr)u_{i}\bigl(\gamma (t_{k})\bigr) \\ &\quad {} +\sum_{j=1,j\ne i}^{n}c_{ij}(s)n_{j} \bigl(\gamma (t_{k})\bigr)u_{j}\bigl( \gamma (t_{k})\bigr) \Biggr]\,\mathrm{d}s \Biggr\vert , \end{aligned}$$
(33)

with

$$\begin{aligned}& m_{i}(t_{k})= \textstyle\begin{cases} \frac{ p_{i}(t_{k})}{ u_{i}(t_{k})}, & u_{i}(t_{k})\ne 0, \\ 0, & u_{i}(t_{k})=0, \end{cases}\displaystyle \qquad n_{i}\bigl(\gamma (t_{k})\bigr)= \textstyle\begin{cases} \frac{ h_{i}(\gamma (t_{k}))}{ u_{i}(\gamma (t_{k}))}, & u_{i}(\gamma (t_{k}))\ne 0, \\ 0, & u_{i}(\gamma (t_{k}))=0. \end{cases}\displaystyle \end{aligned}$$

By (2) and (3), clearly, \(0\le m_{i}(t_{k})\le L_{i}\), \(0\le n_{i}( \gamma (t_{k}))\le H_{i}\) for all \(i=1,2,\ldots,n\), \(t\in \mathcal{R} ^{+}\), and

$$\begin{aligned} &b_{ii}^{-}(s)L_{i}\le b_{ii}(s)m_{i}(t_{k})\le b_{ii}^{+}(s)L_{i}, \end{aligned}$$
(34)
$$\begin{aligned} &c_{ii}^{-}(s)H_{i}\le c_{ii}(s)n_{i}\bigl(\gamma (t_{k})\bigr)\le c_{ii}^{+}(s)H _{i}. \end{aligned}$$
(35)

Observing that \(P\xi \le \zeta \), for any \(t\in [t_{k},t_{k+1}]\),

$$\begin{aligned} &\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \bigl[a_{i}(s)-b _{ii}(s)m_{i}(t_{k}) \bigr]\,\mathrm{d}s \\ &\quad \le \frac{P}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \, \mathrm{d}s \le \frac{P\xi }{\zeta }\le 1, \end{aligned}$$

which leads to

$$\begin{aligned} 1-\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \bigl[a_{i}(s)-b _{ii}(s)m_{i}(t_{k}) \bigr]\,\mathrm{d}s\ge 0. \end{aligned}$$
(36)

From (33) and (36),

$$\begin{aligned} \bigl\Vert u(t_{k+1}) \bigr\Vert &\le \sum ^{n}_{i=1}\alpha _{i} \bigl\vert u_{i}(t_{k}) \bigr\vert \biggl\{ 1-\frac{1}{ \varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \bigl[a_{i}(s) \\ &\quad {} -b_{ii}(s)m_{i}(t_{k})\bigr]\, \mathrm{d}s \biggr\} + \frac{1}{\varGamma (q)}\sum^{n}_{i=1} \alpha _{i} \Biggl\{ \int _{t_{k}}^{t _{k+1}}(t_{k+1}-s)^{q-1} \\ &\quad {} \cdot \Biggl[\sum_{j=1,j\ne i}^{n} \bigl\vert b_{ij}(s)m_{j}(t_{k})u_{j}(t _{k}) \bigr\vert + \bigl\vert c_{ii}(s)n_{i} \bigl(\gamma (t_{k})\bigr)u_{i}\bigl(\gamma (t_{k})\bigr) \bigr\vert \\ &\quad {} +\sum_{j=1,j\ne i}^{n} \bigl\vert c_{ij}(s)n_{j}\bigl(\gamma (t_{k}) \bigr)u_{j}\bigl( \gamma (t_{k})\bigr) \bigr\vert \Biggr] \,\mathrm{d}s \Biggr\} \\ &\le \sum^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}(t_{k}) \bigr\vert \biggl\{ 1- \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \bigl[a_{j}(s) \\ &\quad {} -b_{jj}^{+}(s)L_{j}\bigr]\,\mathrm{d}s \biggr\} +\frac{1}{\varGamma (q)} \sum^{n}_{j=1} \alpha _{j} \Biggl\{ \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \\ &\quad {} \cdot \Biggl[\sum_{i=1,i\ne j}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b _{ij}(s) \bigr\vert L_{j} \bigl\vert u_{j}(t_{k}) \bigr\vert +c_{jj}^{+}(s)H_{j} \bigl\vert u_{j}\bigl(\gamma (t_{k})\bigr) \bigr\vert \\ &\quad {} +\sum_{i=1,i\ne j}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H _{j} \bigl\vert u_{j}\bigl(\gamma (t_{k})\bigr) \bigr\vert \Biggr]\,\mathrm{d}s \Biggr\} \\ &\le \sum^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}(t_{k}) \bigr\vert \Biggl\{ 1- \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \Biggl[a_{j}(s)-b_{jj}^{+}(s)L _{j} \\ &\quad {} -\sum_{i=1,i\ne j}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(s) \bigr\vert L _{j} \Biggr]\,\mathrm{d}s \Biggr\} +\sum^{n}_{j=1} \alpha _{j} \bigl\vert u_{j}\bigl(\gamma (t_{k})\bigr) \bigr\vert \frac{1}{\varGamma (q)} \\ &\quad {} \cdot \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \Biggl[c_{jj}^{+}(s)H 
_{j}+\sum _{i=1,i\ne j}^{n}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H _{j} \Biggr]\,\mathrm{d}s. \end{aligned}$$

By Lemma 2.2, we then have

$$\begin{aligned} \bigl\Vert u(t_{k+1}) \bigr\Vert &\le \bigl\Vert u(t_{k}) \bigr\Vert \Biggl\{ 1-\frac{1}{\varGamma (q)} \int _{t _{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \Biggl[a_{j}(s)-b_{jj}^{+}(s)L_{j} \\ &\quad {} -\sum_{i=1,i\ne j}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(s) \bigr\vert L _{j} \Biggr]\,\mathrm{d}s \Biggr\} +\sigma \bigl\Vert u(t_{k}) \bigr\Vert \frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t_{k+1}-s)^{q-1} \\ &\quad {} \cdot \Biggl[c_{jj}^{+}(s)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \Biggr]\,\mathrm{d}s \\ &\le \bigl\Vert u(t_{k}) \bigr\Vert \biggl\{ 1-\frac{1}{\varGamma (q)} \int _{t_{k}}^{t_{k+1}}(t _{k+1}-s)^{q-1} \eta _{j}(\alpha ,s)\,\mathrm{d}s \biggr\} \\ &\le (1-\xi ) \bigl\Vert u(t_{k}) \bigr\Vert , \end{aligned}$$

which implies

$$\begin{aligned} \lim_{t_{k}\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert =0. \end{aligned}$$

Recalling system (7), we have

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert &=\lim _{t\to +\infty }\sum^{n}_{i=1}\alpha _{i} \bigl\vert u_{i}(t)-u_{i}(t_{k})+u_{i}(t_{k}) \bigr\vert \\ &=\lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert + \lim_{t\to +\infty }\sum^{n}_{i=1} \alpha _{i} \Biggl\vert \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1} \\ &\quad {} \cdot \Biggl[-a_{i}(s)u_{i}(t_{k}) + \sum_{j=1}^{n}b_{ij}(s)p _{j}(t_{k}) +\sum_{j=1}^{n}c_{ij}(s)h_{j} \bigl(\gamma (t_{k})\bigr) \Biggr] \,\mathrm{d}s \Biggr\vert \\ &\le \lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert +\lim_{t\to +\infty } \frac{1}{ \varGamma (q)}\sum ^{n}_{i=1}\alpha _{i} \int _{t_{k}}^{t}(t-s)^{q-1} \Biggl[a _{i}(s) \bigl\vert u_{i}(t_{k}) \bigr\vert \\ &\quad {} +\sum_{j=1}^{n} \bigl\vert b_{ij}(s) \bigr\vert L_{j} \bigl\vert u_{j}(t_{k}) \bigr\vert +c_{ii}^{+}(s)H _{i} \bigl\vert u_{i}\bigl(\gamma (t_{k}) \bigr) \bigr\vert \\ &\quad {} +\sum_{j=1,j\ne i}^{n} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \bigl\vert u_{j} \bigl(\gamma (t_{k})\bigr) \bigr\vert \Biggr]\,\mathrm{d}s \\ &\le \lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert +\lim_{t\to +\infty }\sum^{n} _{j=1}\alpha _{j} \bigl\vert u_{j}(t_{k}) \bigr\vert \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1} \Biggl[a_{j}(s) \\ &\quad {} +\sum_{j=1}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(s) \bigr\vert L_{j} \Biggr]\,\mathrm{d}s +\lim_{t\to +\infty }\sum ^{n}_{j=1}\alpha _{j} \bigl\vert u _{j}\bigl(\gamma (t_{k})\bigr) \bigr\vert \frac{1}{\varGamma (q)} \\ &\quad {} \cdot \int _{t_{k}}^{t}(t-s)^{q-1} \Biggl[c_{jj}^{+}(s)H_{j} + \sum _{i=1,i\ne j}^{n}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \Biggr]\,\mathrm{d}s \\ &\le \lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert +\lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1} \Biggl[a_{j}(s) \\ &\quad {} +\sum_{j=1}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(s) \bigr\vert L_{j} 
\Biggr]\,\mathrm{d}s +\lim_{t\to +\infty }\sigma \bigl\Vert u(t_{k}) \bigr\Vert \frac{1}{ \varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1} \\ &\quad {} \cdot \Biggl[c_{jj}^{+}(s)H_{j} +\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(s) \bigr\vert H_{j} \Biggr]\,\mathrm{d}s \\ &\le \lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert +\lim_{t\to +\infty }Q \bigl\Vert u(t_{k}) \bigr\Vert \frac{1}{\varGamma (q)} \int _{t_{k}}^{t}(t-s)^{q-1}\,\mathrm{d}s \\ &\le \lim_{t\to +\infty } \bigl\Vert u(t_{k}) \bigr\Vert +Q\lim_{t\to +\infty }\frac{(t-t _{k})^{q}}{\varGamma (q+1)} \bigl\Vert u(t_{k}) \bigr\Vert =0, \end{aligned}$$

where Q is defined in (17). Hence, system (4) realizes outer-synchronization. □

Remark 3.1

From inequality (31), we can get

$$\begin{aligned} \biggl[\frac{\varGamma (q+1)\xi }{U} \biggr]^{\frac{1}{q}}\le t_{k+1}-t_{k} \le \biggl[\frac{\varGamma (q+1)\xi }{\zeta } \biggr]^{\frac{1}{q}} \end{aligned}$$

for all \(k\in \mathcal{N}\), so Zeno behavior is excluded under rule (29).
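As a numerical sanity check (not part of the original analysis), these inter-sample bounds can be evaluated directly; the parameter values below are illustrative only and must satisfy \(\zeta \le U\) so that the lower bound does not exceed the upper bound:

```python
from math import gamma

def intersample_bounds(q, xi, U, zeta):
    """Bounds on t_{k+1} - t_k from Remark 3.1:
    [Gamma(q+1)*xi/U]**(1/q) <= t_{k+1} - t_k <= [Gamma(q+1)*xi/zeta]**(1/q)."""
    lower = (gamma(q + 1.0) * xi / U) ** (1.0 / q)
    upper = (gamma(q + 1.0) * xi / zeta) ** (1.0 / q)
    return lower, upper

# Illustrative values (zeta <= U so that lower <= upper):
lo, hi = intersample_bounds(q=0.8, xi=0.5, U=4.0, zeta=1.0)
```

A strictly positive lower bound is precisely what rules out Zeno behavior: no two consecutive sampling instants can accumulate.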

Theorem 3.2

Given that \(\psi (t)=(\psi _{1}(t),\psi _{2}(t),\ldots,\psi _{n}(t))^{T}\) is a positive and continuous function on \([t_{0},+\infty )\), set \(t_{k+1}\) as a time point satisfying

$$\begin{aligned} t_{k+1}=\sup_{\theta \ge t_{k}}\bigl\{ \theta : \bigl\Vert r(t) \bigr\Vert \le \psi (t), \forall t\in (t_{k},\theta ]\bigr\} \end{aligned}$$
(37)

for all \(k\in \mathcal{N}\), where \(r(t)=(r_{1}(t),r_{2}(t),\ldots,r _{n}(t))^{T}\) is defined in (8). Let \(\alpha _{i}>0\) be positive constants satisfying \(\min_{1\le j\le n}\eta _{j}(\alpha ,t)\ge V\) for \(i=1,2,\ldots,n\), some \(V>0\) and all \(t\ge t_{0}\). If, in addition, \(\sup_{t\ge t_{0}}(1/ \varGamma (q))\int ^{t}_{t_{0}}(t-s)^{q-1}\psi (s)\,\mathrm{d}s<+\infty \), then system (4) is outer-synchronized.
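Rule (37) can be approximated numerically by scanning forward in time on a fine grid until the norm condition is about to fail. The function below is a minimal sketch of this idea, and the error signal and threshold in the example are hypothetical stand-ins rather than quantities generated by system (7):

```python
from math import exp

def next_sample_time(t_k, r_norm, psi, t_max, dt=1e-3):
    """Approximate rule (37): starting from t_k, return the last grid
    point at which ||r(t)|| <= psi(t) still holds (grid step dt),
    i.e. a discretized supremum of the admissible interval."""
    t = t_k + dt
    last_ok = t_k
    while t <= t_max:
        if r_norm(t) > psi(t):
            return last_ok
        last_ok = t
        t += dt
    return t_max

# Hypothetical growing error against a decaying threshold:
t1 = next_sample_time(0.0, r_norm=lambda t: 0.5 * t,
                      psi=lambda t: exp(-t), t_max=5.0)
```

In practice the trade-off is standard for event-triggered schemes: a slowly decaying threshold \(\psi \) gives longer inter-sample intervals at the price of slower convergence.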

Proof

By Lemma 2.3, from (7) we can derive

$$\begin{aligned} {}^{C}_{t_{k}}D^{q}_{t} \bigl\Vert u(t) \bigr\Vert &=\sum^{n}_{i=1}\alpha _{i} {{}^{C}_{t _{k}}}D^{q}_{t} \bigl\vert u_{i}(t) \bigr\vert \le \sum ^{n}_{i=1}\operatorname{sgn}\bigl(u_{i}(t) \bigr) \alpha _{i} {{}^{C}_{t_{k}}}D^{q}_{t} u_{i}(t) \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t)u _{i}(t_{k})+\sum_{j=1}^{n}b_{ij}(t)p_{j}(t_{k}) \\ &\quad {} +\sum_{j=1}^{n}c_{ij}(t)h_{j} \bigl(\gamma (t_{k})\bigr) \Biggr\} \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t) \bigl[u _{i}(t_{k})-u_{i}(t)+u_{i}(t) \bigr] \\ &\quad {} +\sum_{j=1}^{n}b_{ij}(t)m_{j}(t_{k}) \bigl[u_{j}(t_{k})-u_{j}(t)+u _{j}(t) \bigr] \\ &\quad {} +\sum_{j=1}^{n}c_{ij}(t)n_{j} \bigl(\gamma (t_{k})\bigr)u_{j}\bigl(\gamma (t _{k})\bigr) \Biggr\} \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t)r _{i}(t)-a_{i}(t)u_{i}(t) \\ &\quad {} +\sum_{j=1}^{n}b_{ij}(t)m_{j}(t_{k})r_{j}(t) +b_{ii}(t)m_{i}(t _{k})u_{i}(t) \\ &\quad {} +\sum_{j=1,j\ne i}^{n}b_{ij}(t)m_{j}(t_{k})u_{j}(t) +c_{ii}(t)n _{i}\bigl(\gamma (t_{k}) \bigr)u_{i}\bigl(\gamma (t_{k})\bigr) \\ &\quad {} +\sum_{j=1,j\ne i}^{n}c_{ij}(t)n_{j} \bigl(\gamma (t_{k})\bigr)u_{j}\bigl( \gamma (t_{k})\bigr) \Biggr\} \\ &\le \sum^{n}_{j=1}\alpha _{j} \bigl\vert r_{j}(t) \bigr\vert \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} -\sum _{j=1}^{n}\alpha _{j} \bigl\vert u_{j}(t) \bigr\vert \Biggl\{ a_{j}(t) \\ &\quad {} -b_{jj}^{+}(t)L_{j} -\sum ^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{ \alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} +\sum ^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}\bigl( \gamma (t_{k})\bigr) \bigr\vert \\ &\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} , \end{aligned}$$
(38)

with

$$\begin{aligned}& m_{i}(t_{k})= \textstyle\begin{cases} \frac{ p_{i}(t_{k})}{ u_{i}(t_{k})}, & u_{i}(t_{k})\ne 0, \\ 0, & u_{i}(t_{k})=0, \end{cases}\displaystyle \qquad n_{i}\bigl(\gamma (t_{k})\bigr)= \textstyle\begin{cases} \frac{ h_{i}(\gamma (t_{k}))}{ u_{i}(\gamma (t_{k}))}, & u_{i}(\gamma (t_{k}))\ne 0, \\ 0, & u_{i}(\gamma (t_{k}))=0. \end{cases}\displaystyle \end{aligned}$$

Through Lemma 2.2, it follows from (37) and (38) that

$$\begin{aligned} {}^{C}_{t_{k}}D^{q}_{t} \bigl\Vert u(t) \bigr\Vert &\le \bigl\Vert r(t) \bigr\Vert \Biggl\{ a_{j}(t)+\sum ^{n} _{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} - \bigl\Vert u(t) \bigr\Vert \Biggl\{ a_{j}(t)-b_{jj}^{+}(t)L_{j} \\ &\quad {} -\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L _{j} \Biggr\} +\sigma \sum^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}(t_{k})-u_{j}(t)+u _{j}(t) \bigr\vert \\ &\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} \\ &\le \bigl\Vert r(t) \bigr\Vert \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} - \bigl\Vert u(t) \bigr\Vert \Biggl\{ a_{j}(t)-b_{jj}^{+}(t)L _{j} \\ &\quad {} -\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L _{j} \Biggr\} +\sigma \bigl( \bigl\Vert r(t) \bigr\Vert + \bigl\Vert u(t) \bigr\Vert \bigr) \\ &\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} \\ &\le \bigl\Vert r(t) \bigr\Vert \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} +\sigma c_{jj}^{+}(t)H_{j} \\ &\quad {} +\sigma \sum_{i=1,i\ne j}^{n} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c _{ij}(t) \bigr\vert H_{j} \Biggr\} - \bigl\Vert u(t) \bigr\Vert \Biggl\{ a_{j}(t)-b_{jj}^{+}(t)L_{j} \\ &\quad {} -\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L _{j}-\sigma c_{jj}^{+}(t)H_{j}-\sigma \sum_{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} \\ &\le -V \bigl\Vert u(t) \bigr\Vert +Q\psi (t), \end{aligned}$$
(39)

where Q is defined in (17). On the other hand, by (39),

$$\begin{aligned} \bigl\Vert u(t) \bigr\Vert &\le \bigl\Vert u(t_{0}) \bigr\Vert +\frac{1}{\varGamma (q)} \int _{t_{0}}^{t}(t-s)^{q-1}\bigl[-V \bigl\Vert u(s) \bigr\Vert +Q\psi (s)\bigr]\,\mathrm{d}s \\ &\le \bigl\Vert u(t_{0}) \bigr\Vert -\frac{1}{\varGamma (q)} \int _{t_{0}}^{t}(t-s)^{q-1}V \bigl\Vert u(s) \bigr\Vert \,\mathrm{d}s+Q\varepsilon , \end{aligned}$$
(40)

for \(s\in [t_{k},t)\), \(t\in [t_{k},t_{k+1})\), in which \((1/\varGamma (q)) \int ^{t}_{t_{0}}(t-s)^{q-1}\psi (s)\,\mathrm{d}s\le \varepsilon <+ \infty \).

Utilizing Lemma 2.1, from (40), we obtain

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert \le \lim _{t\to +\infty }\bigl[Q\varepsilon + \bigl\Vert u(t _{0}) \bigr\Vert \bigr]E_{q}\bigl(-V(t-t_{0})^{q} \bigr)=0, \end{aligned}$$

where \(t\ge t_{0}\). Namely, based on the centralized data-sampling time sequence \(\{{t_{k}}\}\) for \(k\in \mathcal{N}\), it is concluded that \(\Vert u(t)\Vert \) converges to 0. Hence, outer-synchronization can be realized for system (4). □

State-dependent decentralized data-sampling approach

Theorem 3.3

Given that \(\lambda (t)=(\lambda _{1}(t),\lambda _{2}(t),\ldots,\lambda _{n}(t))^{T}\) is a positive and continuous function on \([t_{0},+\infty )\), set \(t_{k+1}^{i}\) as a time point satisfying

$$\begin{aligned} t_{k+1}^{i}=\sup_{\theta \ge t_{k}^{i}}\bigl\{ \theta : \bigl\vert r_{i}(t) \bigr\vert \le \lambda _{i}(t), \forall t\in (t_{k}^{i},\theta ]\bigr\} \end{aligned}$$
(41)

for \(i=1,2,\ldots,n\) and all \(k\in \mathcal{N}\), where \(r(t)=(r_{1}(t),r _{2}(t),\ldots,r_{n}(t))^{T}\) is defined in (10). Let \(\alpha _{i}>0\) be positive constants satisfying \(\min_{1\le j\le n}\eta _{j}(\alpha ,t) \ge W\) for \(i=1,2,\ldots,n\), some \(W>0\) and all \(t\ge t_{0}\). If, in addition, \(\sup_{t\ge t_{0}}(1/\varGamma (q))\int ^{t}_{t_{0}}(t-s)^{q-1}\Vert \lambda (s)\Vert \,\mathrm{d}s<+\infty \), then system (5) is outer-synchronized.
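In contrast to the centralized rule, under rule (41) each node monitors only its own error component against its own threshold and keeps its own clock. The sketch below approximates this decentralized triggering on a grid; the per-node error signals and thresholds are hypothetical stand-ins, not quantities generated by system (9):

```python
from math import exp

def next_sample_times(t_k, r_components, lambdas, t_max, dt=1e-3):
    """Approximate rule (41): node i advances its own clock and triggers
    independently at the last grid point where |r_i(t)| <= lambda_i(t)
    still holds. Returns one trigger time per node (grid step dt)."""
    times = []
    for r_i, lam_i in zip(r_components, lambdas):
        t, last_ok = t_k + dt, t_k
        while t <= t_max:
            if abs(r_i(t)) > lam_i(t):
                break
            last_ok = t
            t += dt
        else:
            last_ok = t_max  # condition never violated on [t_k, t_max]
        times.append(last_ok)
    return times

# Two hypothetical nodes with different error growth rates and a common
# decaying threshold: the faster-growing error triggers earlier.
times = next_sample_times(
    0.0,
    r_components=[lambda t: 0.3 * t, lambda t: 0.6 * t],
    lambdas=[lambda t: exp(-t)] * 2,
    t_max=5.0,
)
```

The example illustrates why decentralized sampling can be economical: nodes with slowly varying errors sample less often instead of following the worst-case node.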

Proof

From Lemma 2.3 and (9) we can derive

$$\begin{aligned} {}^{C}_{t_{k}}D^{q}_{t} \bigl\Vert u(t) \bigr\Vert &=\sum^{n}_{i=1}\alpha _{i} {{}^{C}_{t _{k}}}D^{q}_{t} \bigl\vert u_{i}(t) \bigr\vert \le \sum ^{n}_{i=1}\operatorname{sgn}\bigl(u_{i}(t) \bigr) \alpha _{i} {{}^{C}_{t_{k}}}D^{q}_{t} u_{i}(t) \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t)u _{i}\bigl(t_{k}^{i}\bigr)+\sum _{j=1}^{n}b_{ij}(t)p_{j} \bigl(t_{k}^{j}\bigr) \\ &\quad {} +\sum_{j=1}^{n}c_{ij}(t)h_{j} \bigl(\gamma \bigl(t_{k}^{j}\bigr)\bigr) \Biggr\} \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t) \bigl[u _{i}\bigl(t_{k}^{i}\bigr)-u_{i}(t)+u_{i}(t) \bigr] \\ &\quad {} +\sum_{j=1}^{n}b_{ij}(t)m_{j} \bigl(t_{k}^{j}\bigr) \bigl[u_{j} \bigl(t_{k}^{j}\bigr)-u _{j}(t)+u_{j}(t) \bigr] \\ &\quad {} +\sum_{j=1}^{n}c_{ij}(t)n_{j} \bigl(\gamma \bigl(t_{k}^{j}\bigr)\bigr)u_{j} \bigl(\gamma \bigl(t_{k}^{j}\bigr)\bigr) \Biggr\} \\ &\le \sum^{n}_{i=1}\operatorname{sgn} \bigl(u_{i}(t)\bigr)\alpha _{i} \Biggl\{ -a_{i}(t)r _{i}(t)-a_{i}(t)u_{i}(t) \\ &\quad {} +\sum_{j=1}^{n}b_{ij}(t)m_{j} \bigl(t_{k}^{j}\bigr)r_{j}(t) +b_{ii}(t)m _{i}\bigl(t_{k}^{i} \bigr)u_{i}(t) \\ &\quad {} +\sum_{j=1,j\ne i}^{n}b_{ij}(t)m_{j} \bigl(t_{k}^{j}\bigr)u_{j}(t) +c_{ii}(t)n _{i}\bigl(\gamma \bigl(t_{k}^{i} \bigr)\bigr)u_{i}\bigl(\gamma \bigl(t_{k}^{i} \bigr)\bigr) \\ &\quad {} +\sum_{j=1,j\ne i}^{n}c_{ij}(t)n_{j} \bigl(\gamma \bigl(t_{k}^{j}\bigr)\bigr)u_{j} \bigl( \gamma \bigl(t_{k}^{j}\bigr)\bigr) \Biggr\} \\ &\le \sum^{n}_{j=1}\alpha _{j} \bigl\vert r_{j}(t) \bigr\vert \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} -\sum _{j=1}^{n}\alpha _{j} \bigl\vert u_{j}(t) \bigr\vert \Biggl\{ a_{j}(t) \\ &\quad {} -b_{jj}^{+}(t)L_{j} -\sum ^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{ \alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} +\sum ^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}\bigl( \gamma \bigl(t_{k}^{j}\bigr)\bigr) \bigr\vert \\ 
&\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} , \end{aligned}$$
(42)

with

$$\begin{aligned}& m_{j}\bigl(t_{k}^{j}\bigr)= \textstyle\begin{cases} \frac{ p_{j}(t_{k}^{j})}{ u_{j}(t_{k}^{j})}, & u_{j}(t_{k}^{j})\ne 0, \\ 0,& u_{j}(t_{k}^{j})=0, \end{cases}\displaystyle \qquad n_{j}\bigl(\gamma \bigl(t_{k}^{j} \bigr)\bigr)= \textstyle\begin{cases} \frac{ h_{j}(\gamma (t_{k}^{j}))}{ u_{j}(\gamma (t_{k}^{j}))}, & u _{j}(\gamma (t_{k}^{j}))\ne 0, \\ 0, & u_{j}(\gamma (t_{k}^{j}))=0. \end{cases}\displaystyle \end{aligned}$$

According to Lemma 2.4, it follows from (41) and (42) that

$$\begin{aligned} {}^{C}_{t_{k}}D^{q}_{t} \bigl\Vert u(t) \bigr\Vert &\le \bigl\Vert r(t) \bigr\Vert \Biggl\{ a_{j}(t)+\sum ^{n} _{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} - \bigl\Vert u(t) \bigr\Vert \Biggl\{ a_{j}(t)-b_{jj}^{+}(t)L_{j} \\ &\quad {} -\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L _{j} \Biggr\} +\sigma \sum^{n}_{j=1}\alpha _{j} \bigl\vert u_{j}\bigl(t_{k}^{j} \bigr)-u_{j}(t)+u _{j}(t) \bigr\vert \\ &\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} \\ &\le \bigl\Vert r(t) \bigr\Vert \Biggl\{ a_{j}(t)+\sum ^{n}_{i=1}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} \Biggr\} - \bigl\Vert u(t) \bigr\Vert \Biggl\{ a_{j}(t)-b_{jj}^{+}(t)L _{j} \\ &\quad {} -\sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L _{j} \Biggr\} +\sigma \bigl( \bigl\Vert r(t) \bigr\Vert + \bigl\Vert u(t) \bigr\Vert \bigr) \\ &\quad {} \cdot \Biggl\{ c_{jj}^{+}(t)H_{j}+\sum _{i=1,i\ne j}^{n}\frac{ \alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H_{j} \Biggr\} \\ &\le -W \bigl\Vert u(t) \bigr\Vert +Q \bigl\Vert \lambda (t) \bigr\Vert , \end{aligned}$$
(43)

where Q is defined in (17). On the other hand, by (43),

$$\begin{aligned} \bigl\Vert u(t) \bigr\Vert &\le \bigl\Vert u(t_{0}) \bigr\Vert +\frac{1}{\varGamma (q)} \int _{t_{0}}^{t}(t-s)^{q-1}\bigl[-W \bigl\Vert u(s) \bigr\Vert +Q \bigl\Vert \lambda (s) \bigr\Vert \bigr]\, \mathrm{d}s \\ &\le \bigl\Vert u(t_{0}) \bigr\Vert -\frac{1}{\varGamma (q)} \int _{t_{0}}^{t}(t-s)^{q-1}W \bigl\Vert u(s) \bigr\Vert \,\mathrm{d}s+Q\varsigma , \end{aligned}$$
(44)

for \(s\in [t^{i}_{k},t)\), \(t\in [t^{i}_{k},t^{i}_{k+1})\), \(i=1,2,\ldots,n\), in which \((1/\varGamma (q))\int ^{t}_{t_{0}}(t-s)^{q-1}\Vert \lambda (s) \Vert \,\mathrm{d}s\le \varsigma <+\infty \).

Utilizing Lemma 2.1, from (44), we obtain

$$\begin{aligned} \lim_{t\to +\infty } \bigl\Vert u(t) \bigr\Vert \le \lim _{t\to +\infty }\bigl[Q\varsigma + \bigl\Vert u(t _{0}) \bigr\Vert \bigr]E_{q}\bigl(-W(t-t_{0})^{q}\bigr)=0, \end{aligned}$$

where \(t\ge t_{0}\). Namely, based on the decentralized data-sampling time sequence \(\{{t_{k}^{i}}\}\) for \(i=1,2,\ldots,n\) and \(k\in \mathcal{N}\), it is concluded that \(\Vert u(t)\Vert \) converges to 0. Hence, outer-synchronization can be realized for system (5). □
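The final limit rests on the decay of the Mittag-Leffler function: for \(W>0\) and \(0< q<1\), \(E_{q}(-W(t-t_{0})^{q})\to 0\) as \(t\to +\infty \). As an illustrative sanity check (not part of the proof), for \(q=\frac{1}{2}\) one can use the identity \(E_{1/2}(z)=e^{z^{2}}\operatorname{erfc}(-z)\) with standard-library functions only; the choice \(W=1\) below is an arbitrary illustrative value, not a constant from the paper:

```python
import math

def ml_half(z: float) -> float:
    """Mittag-Leffler function E_{1/2}(z) via the identity
    E_{1/2}(z) = exp(z^2) * erfc(-z), valid for real z."""
    return math.exp(z * z) * math.erfc(-z)

# Decay of E_{1/2}(-W t^{1/2}) for a sample rate W > 0.
W = 1.0
values = [ml_half(-W * math.sqrt(t)) for t in (0.0, 1.0, 10.0, 100.0)]

print(values)  # strictly decreasing towards 0
```

The printed sequence decreases monotonically towards zero, consistent with the asymptotic rate \(E_{1/2}(-z)\sim 1/(z\sqrt{\pi })\) as \(z\to +\infty \).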

Remark 3.2

The weighting coefficient \(\alpha _{i}\) in the definition of the norm used in this paper is, in effect, an adjustment coefficient for the centralized and decentralized data-sampling control, and its selection is based on the outer-synchronization criteria derived for the data-sampling control systems (4) and (5). In Theorem 3.1, the parameter \(\eta _{j}(\alpha ,t)\) must satisfy \(\eta _{j}(\alpha ,t)\ge \zeta \) owing to the structure-dependent centralized data-sampling scheme, so the adjustment coefficient \(\alpha _{i}\) can be determined from the condition \(\eta _{j}(\alpha ,t)\ge \zeta \). In Theorem 3.2, the parameter \(\eta _{j}(\alpha ,t)\) must satisfy \(\min_{1\le j\le n} \eta _{j}(\alpha ,t)\ge V\) owing to the state-dependent centralized data-sampling scheme, so \(\alpha _{i}\) can be determined from the condition \(\min_{1\le j\le n} \eta _{j}(\alpha ,t)\ge V\). Similarly, in Theorem 3.3, the parameter \(\eta _{j}(\alpha ,t)\) must satisfy \(\min_{1\le j\le n}\eta _{j}(\alpha ,t)\ge W\) owing to the state-dependent decentralized data-sampling scheme, so \(\alpha _{i}\) can be determined from the condition \(\min_{1\le j\le n} \eta _{j}(\alpha ,t)\ge W\).

Remark 3.3

Three data-sampling control approaches are designed according to the type of sampling time point in Theorems 3.1–3.3. Centralized data-sampling control schemes, divided into structure-dependent and state-dependent ones, are investigated in Theorem 3.1 and Theorem 3.2, respectively, while a state-dependent decentralized data-sampling control scheme is studied in Theorem 3.3. In fact, the structure-dependent data-sampling control design takes full advantage of the structural properties of the system itself, whereas the state-dependent data-sampling control design exploits the characteristics of the state measurement error.

Remark 3.4

The centralized data-sampling control method admits one more structure-dependent control rule than the decentralized one, owing to the nature of the fractional-order system under centralized data-sampling control. Additionally, under state-dependent control, the analysis of the centralized and decentralized data-sampling approaches is similar. In view of the advantages of these two kinds of control design, the sampling time points can be selected effectively. Hence, in practical applications, the choice between centralized and decentralized data-sampling control schemes depends on the requirements of the designer.

Remark 3.5

Under the state-dependent centralized data-sampling control of Theorem 3.2 and the state-dependent decentralized data-sampling control of Theorem 3.3, the inter-event intervals of the corresponding system states are strictly positive and admit a common lower bound. Consequently, the exclusion of Zeno behavior can be confirmed. The main idea of the proof of these facts can be found in [24, Theorem 5].

Remark 3.6

According to Lemma 2.2 and Lemma 2.4, the connection between the argument state and the current state is readily obtained for all \(t\in \mathcal{R}^{+}\); in particular, when \(t=t_{k}\) or \(t=t^{j}_{k}\), this connection still holds for the deviating argument systems. Hence this relation from Lemma 2.2 and Lemma 2.4 can be utilized in Theorems 3.1–3.3. Moreover, in the proofs, \(\sum^{n}_{j=1}\alpha _{j} \vert u_{j}(\gamma (t_{k}))\vert =\sigma \sum^{n}_{j=1}\alpha _{j}\vert u_{j}(t_{k})\vert \) is used in Theorem 3.2 and \(\sum^{n}_{j=1}\alpha _{j} \vert u_{j}(\gamma (t_{k}^{j}))\vert = \sigma \sum^{n}_{j=1}\alpha _{j}\vert u_{j}(t_{k}^{j})\vert \) is used in Theorem 3.3; these are equivalent forms of the results of Lemma 2.2 and Lemma 2.4, respectively, under the definition of the norm adopted in this paper.

Remark 3.7

It is worth noting that the data-sampling control in Theorems 3.1–3.3 takes effect only when a sampling time point is reached. That is, the neighbors' state information of the corresponding systems is used only at \(t_{k}\) or \(t_{k}^{i}\). Therefore, unlike continuous-time control, the three data-sampling control strategies of Theorems 3.1–3.3 are more energy-efficient.

Remark 3.8

As shown in Theorems 3.1–3.3, the analysis approaches for outer-synchronization differ from those for traditional lag synchronization, anticipated synchronization, cluster synchronization, pinning synchronization, distributed synchronization and phase synchronization. In addition, Theorems 3.1–3.3 share the characteristic that the realization of outer-synchronization for the corresponding systems is strongly linked to the sampling time points. That is to say, once the sampling time points are triggered, the corresponding systems can achieve outer-synchronization.

Remark 3.9

This paper studies the case where the fractional order q satisfies \(0< q<1\). In Theorem 3.1, the realization of outer-synchronization under structure-dependent centralized data-sampling control is not directly related to the fractional order \(0< q<1\), and Lemma 2.2, which is applied in Theorem 3.1, places no restriction on the range of the fractional order q. That is to say, if the fractional order q lies outside \(0< q<1\), the main result under structure-dependent centralized data-sampling control is not affected. In Theorems 3.2 and 3.3, although the realization of outer-synchronization under state-dependent centralized and decentralized data-sampling control is likewise not directly related to the fractional order, Lemma 2.3, on which Theorems 3.2 and 3.3 rely, is valid only for \(0< q<1\). Lemma 2.3 states that, for \(0< q<1\), the fractional-order derivative of the absolute value is no greater than the product of the sign function and the fractional-order derivative itself. To explore whether this relation remains valid for other fractional orders, take \(1< q<2\) as an example. Following the proof ideas of the original literature [5, Lemma 4.3] and [26, Theorem 2], the calculation of the second-order derivative of the absolute value inevitably arises in the proving process when the fractional order q satisfies \(1< q<2\).
Indeed, \(\frac{\mathrm{d}^{2}}{ \mathrm{d}t^{2}}\vert \mathscr{Q}(t)\vert =\frac{\mathrm{d}}{\mathrm{d}t} (\frac{\mathrm{d}}{\mathrm{d}t}\vert \mathscr{Q}(t)\vert ) =\frac{ \mathrm{d}}{\mathrm{d}t} (\operatorname{sgn}(\mathscr{Q}(t)) \frac{ \mathrm{d}}{\mathrm{d}t}\mathscr{Q}(t) ) =\operatorname{sgn}(\mathscr{Q}(t))\frac{\mathrm{d} ^{2}}{\mathrm{d}t^{2}}\mathscr{Q}(t) +\frac{\mathrm{d}}{\mathrm{d}t} \mathscr{Q}(t)\frac{\mathrm{d}}{\mathrm{d}t}\operatorname{sgn}(\mathscr{Q}(t))\). Since the sign function can be represented by the unit step function, namely \(\operatorname{sgn}(\mathscr{Q}(t))=2H(\mathscr{Q}(t))-1\), where \(H(\cdot )\) denotes the unit step function, and the first-order derivative of the unit step function is the Dirac delta function, it follows that \(\frac{ \mathrm{d}}{\mathrm{d}t}\mathscr{Q}(t)\frac{\mathrm{d}}{\mathrm{d}t} \operatorname{sgn}(\mathscr{Q}(t)) =2\delta (\mathscr{Q}(t))\frac{ \mathrm{d}}{\mathrm{d}t}\mathscr{Q}(t)\). The term \(2\delta (\mathscr{Q}(t))\frac{\mathrm{d}}{\mathrm{d}t} \mathscr{Q}(t)\) would need to be handled suitably to obtain the desired result, but \(\delta (\mathscr{Q}(t))\) cannot be eliminated owing to the properties of the Dirac delta function. That is, the statement that the fractional-order derivative of the absolute value is no greater than the product of the sign function and the fractional-order derivative itself does not hold for \(1< q<2\), nor does it hold for other fractional orders outside \(0< q<1\). Hence, if the range of the fractional order q is not \(0< q<1\), the main results under state-dependent centralized and decentralized data-sampling control cannot be derived. To sum up, within the analysis framework of this paper, for other ranges of the fractional order q the proposed criterion of Theorem 3.1 can be generalized, while the proposed criteria of Theorems 3.2 and 3.3 cannot.

An illustrative example

In this section, to substantiate the effectiveness of the derived theoretical results, a numerical example is presented through computer simulation.

Example 4.1

We focus on the fractional-order neural networks with deviating argument given by

$$\begin{aligned}& \begin{gathered} \begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}z_{1}(t) &=-a_{1}(t)z_{1}(t)+b_{11}(t)f_{1} \bigl(z_{1}(t)\bigr)+b _{12}(t)f_{2} \bigl(z_{2}(t)\bigr) \\ &\quad{} +c_{11}(t)g_{1}\biggl(\frac{z_{1}(\gamma (t))}{2} \biggr)+c_{12}(t)g_{2}\biggl(\frac{z _{2}(\gamma (t))}{3}\biggr)+\nu _{1}(t), \end{aligned} \\ \begin{aligned} {}^{C}_{t_{0}}D^{q}_{t}z_{2}(t) &=-a_{2}(t)z_{2}(t)+b_{21}(t)f_{1} \bigl(z_{1}(t)\bigr)+b _{22}(t)f_{2} \bigl(z_{2}(t)\bigr) \\ &\quad{} +c_{21}(t)g_{1}\biggl(\frac{z_{1}(\gamma (t))}{2} \biggr)+c_{22}(t)g_{2}\biggl(\frac{z _{2}(\gamma (t))}{3}\biggr)+\nu _{2}(t), \end{aligned} \end{gathered} \end{aligned}$$
(45)

where \(q=\frac{1}{2}\), \(f_{1}(\mathscr{Z})=f_{2}(\mathscr{Z})=g_{1}( \mathscr{Z})=g_{2}(\mathscr{Z})=\tanh (\mathscr{Z})\), \(t_{0}=0\),

$$\begin{aligned}& A=\begin{pmatrix} a_{1}(t) & 0 \\ 0 & a_{2}(t) \end{pmatrix}=\begin{pmatrix} 0.9 & 0 \\ 0 & 0.9 \end{pmatrix},\qquad B=\begin{pmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}(t) & b_{22}(t) \end{pmatrix}=\begin{pmatrix} 0.03 & 0.04 \\ 0.05 & 0.02 \end{pmatrix}, \\& C=\begin{pmatrix} c_{11}(t) & c_{12}(t) \\ c_{21}(t) & c_{22}(t) \end{pmatrix}=\begin{pmatrix} 0.02 & 0.03 \\ 0.04 & 0.01 \end{pmatrix},\qquad \nu =\begin{pmatrix} \nu _{1}(t) \\ \nu _{2}(t) \end{pmatrix}=\begin{pmatrix} 0.1 \\ 0.1 \end{pmatrix}, \end{aligned}$$

two real-valued sequences \(\{t_{k}\}=\frac{k}{9}\), \(\{\varrho _{k}\}=\frac{2k+1}{18}\), \(k\in \mathcal{N}\), with the identification function \(\gamma (t)=\varrho _{k}\) if \(t\in [t_{k},t_{k+1})\), \(k\in \mathcal{N}\), \(t\in \mathcal{R}^{+}\), and the other two real-valued sequences \(\{t_{k}^{i}\}=\frac{k}{10}\), \(\{\rho _{k}\}=\frac{2k+1}{20}\), \(i=1,2,\ldots,n\), \(k\in \mathcal{N}\), with the identification function \(\gamma (t)= \rho _{k}\) if \(t\in [t_{k}^{i},t_{k+1}^{i})\), \(i=1,2,\ldots,n\), \(k \in \mathcal{N}\), \(t\in \mathcal{R}^{+}\).
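The deviating argument in (45) is piecewise constant: on each interval \([t_{k},t_{k+1})\) it is frozen at the interior point \(\varrho _{k}\), so the system is alternately of advanced type (early in the interval) and retarded type (late in the interval). A minimal sketch of the centralized identification function for \(t_{k}=\frac{k}{9}\), \(\varrho _{k}=\frac{2k+1}{18}\) (the function name `gamma_c` is ours, not from the paper):

```python
import math

def gamma_c(t: float) -> float:
    """Identification function gamma(t) = rho_k for t in [t_k, t_{k+1}),
    with t_k = k/9 and rho_k = (2k+1)/18, as in Example 4.1."""
    if t < 0:
        raise ValueError("gamma(t) is defined on R^+")
    k = math.floor(9 * t)       # index of the interval containing t
    return (2 * k + 1) / 18     # midpoint of [k/9, (k+1)/9]

# Constant on each sampling interval; advanced near t_k, retarded near t_{k+1}.
print(gamma_c(0.05), gamma_c(0.10), gamma_c(0.2))
```

For instance, \(\gamma (0.05)=\frac{1}{18}>0.05\) (advanced), while \(\gamma (0.2)=\frac{1}{6}<0.2\) (retarded).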

Clearly, \(L_{1}=L_{2}=1\), \(H_{1}=\frac{1}{2}\), \(H_{2}=\frac{1}{3}\), \(\upsilon =\frac{1}{9}\), \(\varphi =\frac{1}{10}\), and let \(\alpha _{1}= \alpha _{2}=1\). By computing, we can obtain

$$\begin{aligned}& \phi =\max \{\upsilon ,\varphi \}=\frac{1}{9}, \\& \epsilon =\frac{\phi ^{q}}{\varGamma (q+1)}=0.376, \\& \mu _{1} =\max_{1\le j\le n}\sup_{t\in \mathcal{R}^{+}} \Biggl\{ a_{j}(t)+b _{jj}^{+}(t)L_{j} + \sum^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b _{ij}(t) \bigr\vert L_{j} \Biggr\} =0.98, \\& \mu _{2} =\max_{1\le j\le n}\sup_{t\in \mathcal{R}^{+}} \Biggl\{ c_{jj} ^{+}(t)H_{j}+\sum ^{n}_{i=1,i\ne j}\frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c_{ij}(t) \bigr\vert H _{j} \Biggr\} =0.03, \\& \delta =(1+\epsilon \mu _{2})E_{q}\bigl(\mu _{1}\phi ^{q}\bigr)=1.5256, \\& \sigma =\frac{1}{1-\epsilon (\mu _{1}\delta +\mu _{2})}=2.3443, \\& P =\max_{1\le i\le n}\sup_{t\in \mathcal{R}^{+}}\bigl\{ a_{i}(t)-b_{ii} ^{-}(t)L_{i}\bigr\} =0.92, \\& \begin{aligned} Q &=\max_{1\le j\le n}\sup_{t\in \mathcal{R}^{+}} \Biggl\{ a_{j}(t)+\sum^{n}_{i=1} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert b_{ij}(t) \bigr\vert L_{j} +\sigma c_{jj} ^{+}(t)H_{j} \\ &\quad {} +\sigma \sum^{n}_{i=1,i\ne j} \frac{\alpha _{i}}{\alpha _{j}} \bigl\vert c _{ij}(t) \bigr\vert H_{j} \Biggr\} =1.0503, \end{aligned} \\& \begin{aligned} \eta _{1}(\alpha ,t) &=a_{1}(t)-b_{11}^{+}(t)L_{1}- \frac{\alpha _{2}}{ \alpha _{1}} \bigl\vert b_{21}(t) \bigr\vert L_{1} -\sigma c_{11}^{+}(t)H_{1} \\ &\quad {} -\sigma \frac{\alpha _{2}}{\alpha _{1}} \bigl\vert c_{21}(t) \bigr\vert H_{1}=0.7497, \end{aligned} \\& \begin{aligned} \eta _{2}(\alpha ,t) &=a_{2}(t)-b_{22}^{+}(t)L_{2}- \frac{\alpha _{1}}{ \alpha _{2}} \bigl\vert b_{12}(t) \bigr\vert L_{2} -\sigma c_{22}^{+}(t)H_{2} \\ &\quad {} -\sigma \frac{\alpha _{1}}{\alpha _{2}} \bigl\vert c_{12}(t) \bigr\vert H_{2}=0.8366. \end{aligned} \end{aligned}$$
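These constants can be reproduced mechanically from the example data. The sketch below (the helper `ml_half` and variable names `B`, `C`, `H` are ours) recomputes \(\epsilon \), \(\mu _{1}\), \(\mu _{2}\), \(\delta \), \(\sigma \) and Q using the identity \(E_{1/2}(z)=e^{z^{2}}\operatorname{erfc}(-z)\); since the paper appears to round intermediate quantities such as ϵ to three digits, the last digits of σ and Q may differ slightly from the values quoted above:

```python
import math

# Example 4.1 data: q = 1/2, alpha_1 = alpha_2 = 1
q = 0.5
a = [0.9, 0.9]
B = [[0.03, 0.04], [0.05, 0.02]]
C = [[0.02, 0.03], [0.04, 0.01]]
L = [1.0, 1.0]
H = [0.5, 1.0 / 3.0]

phi = max(1 / 9, 1 / 10)                 # max sampling gap
eps = phi ** q / math.gamma(q + 1)       # epsilon = phi^q / Gamma(q+1)

# mu_1 = max_j { a_j + b_jj^+ L_j + sum_{i != j} |b_ij| L_j }
mu1 = max(a[j] + sum(abs(B[i][j]) for i in range(2)) * L[j] for j in range(2))
# mu_2 = max_j { c_jj^+ H_j + sum_{i != j} |c_ij| H_j }
mu2 = max(sum(abs(C[i][j]) for i in range(2)) * H[j] for j in range(2))

def ml_half(z):
    # E_{1/2}(z) = exp(z^2) * erfc(-z) for real z
    return math.exp(z * z) * math.erfc(-z)

delta = (1 + eps * mu2) * ml_half(mu1 * phi ** q)
sigma = 1 / (1 - eps * (mu1 * delta + mu2))

# Q = max_j { a_j + sum_i |b_ij| L_j + sigma * sum_i |c_ij| H_j }
Q = max(a[j] + sum(abs(B[i][j]) for i in range(2)) * L[j]
        + sigma * sum(abs(C[i][j]) for i in range(2)) * H[j]
        for j in range(2))

print(round(eps, 3), round(delta, 4), round(sigma, 4), round(Q, 4))
```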

Choosing \(\xi =0.7\), \(\zeta =0.75\), \(U=0.84\) to satisfy the following inequalities:

$$\begin{aligned}& P\xi - \zeta \le 0, \\& U\xi -\zeta (2-\xi ) \le 0, \end{aligned}$$

according to Theorem 3.1, system (45) achieves outer-synchronization. The evolutive behaviors of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\), and of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\), at the sampling time points under Theorem 3.1 are described in Figs. 1 and 2, respectively. The release instants and release intervals at the sampling time points are depicted in Fig. 3.
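With the quoted constants, the two conditions of Theorem 3.1 reduce to a pair of scalar checks (values copied verbatim from the example):

```python
# Constants from Example 4.1 and the choice made for Theorem 3.1
P, U = 0.92, 0.84
xi, zeta = 0.7, 0.75

cond1 = P * xi - zeta              # ~ 0.644 - 0.75  <= 0
cond2 = U * xi - zeta * (2 - xi)   # ~ 0.588 - 0.975 <= 0

print(cond1 <= 0 and cond2 <= 0)
```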

Figure 1

Evolutive behavior of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\) at the sampling time points in Theorem 3.1

Figure 2

Evolutive behavior of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\) at the sampling time points in Theorem 3.1

Figure 3

The release instants and release intervals at the sampling time points in Theorem 3.1

Selecting \(\psi (t)=\frac{1}{(t+2)^{\frac{1}{2}}}\), together with

$$\begin{aligned} \sup_{t\ge 0}\frac{1}{\varGamma (\frac{1}{2})} \int _{0}^{t}(t-s)^{q-1}\frac{1}{(s+2)^{ \frac{1}{2}}} \,\mathrm{d}s< +\infty , \end{aligned}$$

according to Theorem 3.2, system (45) achieves outer-synchronization. The evolutive behaviors of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\), and of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\), at the sampling time points under Theorem 3.2 are described in Figs. 4 and 5, respectively. The release instants and release intervals at the sampling time points are depicted in Fig. 6.
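For this particular ψ, the boundedness condition can be checked in closed form: differentiating \(2\arcsin \sqrt{(s+2)/(t+2)}\) with respect to s shows that \(\int _{0}^{t}(t-s)^{-1/2}(s+2)^{-1/2}\,\mathrm{d}s=\pi -2\arcsin \sqrt{2/(t+2)}\), so after division by \(\varGamma (\frac{1}{2})=\sqrt{\pi }\) the supremum over \(t\ge 0\) equals \(\sqrt{\pi }\approx 1.7725<+\infty \). A sketch confirming this numerically (function names ours):

```python
import math

SQRT_PI = math.sqrt(math.pi)  # Gamma(1/2)

def frac_integral_closed(t: float) -> float:
    """Closed form of (1/Gamma(1/2)) * int_0^t (t-s)^{-1/2} (s+2)^{-1/2} ds,
    namely (pi - 2*arcsin(sqrt(2/(t+2)))) / sqrt(pi), bounded by sqrt(pi)."""
    return (math.pi - 2 * math.asin(math.sqrt(2 / (t + 2)))) / SQRT_PI

def frac_integral_midpoint(t: float, n: int = 100_000) -> float:
    """Midpoint-rule check of the same integral (slow to converge near the
    s = t singularity, but adequate to confirm the closed form)."""
    h = t / n
    acc = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        acc += h / math.sqrt((t - s) * (s + 2))
    return acc / SQRT_PI

print(frac_integral_closed(10.0), frac_integral_midpoint(10.0))
print(frac_integral_closed(1e6), SQRT_PI)  # approaches, never exceeds, sqrt(pi)
```

The analogous integrals with \((s+2)^{-1/2}\) replaced by \((s+3)^{-1/2}\), as used below for Theorem 3.3, are bounded in exactly the same way.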

Figure 4

Evolutive behavior of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\) at the sampling time points in Theorem 3.2

Figure 5

Evolutive behavior of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\) at the sampling time points in Theorem 3.2

Figure 6

The release instants and release intervals at the sampling time points in Theorem 3.2

Selecting \(\lambda _{1}(t)=\frac{1}{(t+2)^{\frac{1}{2}}}\), \(\lambda _{2}(t)=\frac{1}{(t+3)^{ \frac{1}{2}}}\), together with

$$\begin{aligned} \sup_{t\ge 0}\frac{1}{\varGamma (\frac{1}{2})} \int _{0}^{t}(t-s)^{q-1}\frac{1}{(s+2)^{ \frac{1}{2}}} \,\mathrm{d}s &< +\infty , \\ \sup_{t\ge 0}\frac{1}{\varGamma (\frac{1}{2})} \int _{0}^{t}(t-s)^{q-1}\frac{1}{(s+3)^{ \frac{1}{2}}} \,\mathrm{d}s &< +\infty , \end{aligned}$$

according to Theorem 3.3, system (45) achieves outer-synchronization. The evolutive behaviors of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\), and of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\), at the sampling time points under Theorem 3.3 are described in Figs. 7 and 8, respectively. The release instants and release intervals at the sampling time points are depicted in Fig. 9.

Figure 7

Evolutive behavior of \(z_{1}(t)\) and \(\bar{z}_{1}(t)\) at the sampling time points in Theorem 3.3

Figure 8

Evolutive behavior of \(z_{2}(t)\) and \(\bar{z}_{2}(t)\) at the sampling time points in Theorem 3.3

Figure 9

The release instants and release intervals at the sampling time points in Theorem 3.3

Remark 4.1

The simulation results in the nine figures show that there is no fundamental difference in outer-synchronization performance among the three data-sampling control designs of Theorems 3.1–3.3. Moreover, the release intervals at the sampling time points under Theorem 3.1 are comparatively small, as Fig. 3 shows, and those under Theorem 3.2 are comparatively narrow, as Fig. 6 shows.

Remark 4.2

Taking into account the conditions that the parameters must satisfy, the range of admissible parameter choices is limited. For example, the parameter σ needs to be greater than 0, and the parameters \(\eta _{j}(\alpha ,t)\) cannot be too small. Notably, the values of the parameters \(\eta _{j}(\alpha ,t)\) are affected by the value of the parameter σ: even when σ is greater than 0, the \(\eta _{j}(\alpha ,t)\) may still become too small under its influence. Sometimes all the parameters satisfy the conditions, yet the figures cannot fully display the characteristics of data-sampling control. In order to better illustrate the sampling time points in the figures, we selected a representative numerical example by repeatedly adjusting the parameters. It is clear that Figs. 1–3 show the main results of Theorem 3.1, Figs. 4–6 show the main results of Theorem 3.2 and Figs. 7–9 show the main results of Theorem 3.3. Consequently, the correctness of the derived results is verified by Example 4.1.

Conclusion

The study of the dynamic behavior of fractional-order systems under data-sampling control has attracted extensive attention; nevertheless, the control design of fractional-order systems with deviating argument via data-sampling approaches has seldom been studied. In this paper, we investigate the outer-synchronization problem for fractional-order systems with deviating argument via centralized and decentralized data-sampling approaches. The main theoretical contribution of the paper is that centralized and decentralized data-sampling approaches, with structure-dependent and state-dependent sampling rules, are constructed to realize outer-synchronization in deviating argument systems. Moreover, based on data-sampling control, the concept and properties of the deviating argument and the synchronization theory of fractional-order systems, several sufficient criteria guaranteeing outer-synchronization for fractional-order neural networks with deviating argument are presented, and Zeno behavior is excluded in the course of the proof.

Much room for improvement remains in the outer-synchronization criteria for fractional-order systems with deviating argument. Future work on fractional-order systems with deviating argument can be extended in two directions: (1) designing outer-synchronization subject to stochastic disturbance, and (2) analyzing outer-synchronization in uncertain environments. Indeed, these issues may be significant research topics in the future.

References

1. Ding, X.S., Cao, J.D., Zhao, X., Alsaadi, F.E.: Finite-time stability of fractional-order complex-valued neural networks with time delays. Neural Process. Lett. 46(2), 561–580 (2017)
2. Huang, C.D., Zhao, X., Wang, X.H., Wang, Z.X., Xiao, M., Cao, J.D.: Disparate delays-induced bifurcations in a fractional-order neural network. J. Franklin Inst. 356(5), 2825–2846 (2019)
3. Wu, A.L., Zeng, Z.G.: Global Mittag-Leffler stabilization of fractional-order memristive neural networks. IEEE Trans. Neural Netw. Learn. Syst. 28(1), 206–217 (2017)
4. Huang, C.D., Nie, X.B., Zhao, X., Song, Q.K., Tu, Z.W., Xiao, M., Cao, J.D.: Novel bifurcation results for a delayed fractional-order quaternion-valued neural network. Neural Netw. 117, 67–93 (2019). https://doi.org/10.1016/j.neunet.2019.05.002
5. Chen, B.S., Chen, J.J.: Global asymptotical ω-periodicity of a fractional-order non-autonomous neural networks. Neural Netw. 68, 78–88 (2015)
6. Wu, A.L., Zeng, Z.G., Song, X.G.: Global Mittag-Leffler stabilization of fractional-order bidirectional associative memory neural networks. Neurocomputing 177, 489–496 (2016)
7. Huang, C.D., Li, H., Cao, J.D.: A novel strategy of bifurcation control for a delayed fractional predator–prey model. Appl. Math. Comput. 347, 808–838 (2019)
8. Li, H., Huang, C.D., Li, T.X.: Dynamic complexity of a fractional-order predator–prey system with double delays. Physica A 526, 120852 (2019)
9. Ye, H.P., Gao, J.M., Ding, Y.S.: A generalized Gronwall inequality and its application to a fractional differential equation. J. Math. Anal. Appl. 328(2), 1075–1081 (2007)
10. Cooke, K.L., Wiener, J.: Retarded differential equations with piecewise constant delays. J. Math. Anal. Appl. 99(1), 265–297 (1984)
11. Shah, S.M., Wiener, J.: Advanced differential equations with piecewise constant argument deviations. Int. J. Math. Sci. 6(4), 671–703 (1983)
12. Akhmet, M.U.: On the reduction principle for differential equations with piecewise constant argument of generalized type. J. Math. Anal. Appl. 336(1), 646–663 (2007)
13. Akhmet, M.U.: Almost periodic solutions of differential equations with piecewise constant argument of generalized type. Nonlinear Anal. Hybrid Syst. 2(2), 456–467 (2008)
14. Wu, A.L., Liu, L., Huang, T.W., Zeng, Z.G.: Mittag-Leffler stability of fractional-order neural networks in the presence of generalized piecewise constant arguments. Neural Netw. 85, 118–127 (2017)
15. Zhang, J.E.: Robustness analysis of global exponential stability of nonlinear systems with deviating argument and stochastic disturbance. IEEE Access 5, 13446–13454 (2017)
16. Bao, H.B., Park, J.H., Cao, J.D.: Synchronization of fractional-order complex-valued neural networks with time delay. Neural Netw. 81, 16–28 (2016)
17. Zhang, J.E.: Linear-type discontinuous control of fixed-deviation stabilization and synchronization for fractional-order neurodynamic systems with communication delays. IEEE Access 6, 52570–52581 (2018)
18. Wu, Z.Y., Chen, G.R., Fu, X.C.: Outer synchronization of drive-response dynamical networks via adaptive impulsive pinning control. J. Franklin Inst. 352(10), 4297–4308 (2015)
19. Yang, Y., Wang, Y., Li, T.Z.: Outer synchronization of fractional-order complex dynamical networks. Optik 127(19), 7395–7407 (2016)
20. Li, H.Q., Liao, X.F., Huang, T.W., Zhu, W.: Event-triggering sampling based leader-following consensus in second-order multi-agent systems. IEEE Trans. Autom. Control 60(7), 1998–2003 (2015)
21. Wang, Z., Liu, D.R.: A data-based state feedback control method for a class of nonlinear systems. IEEE Trans. Ind. Inform. 9(4), 2284–2292 (2013)
22. Abdelrahim, M., Postoyan, R., Daafouz, J., Nesic, D.: Robust event-triggered output feedback controllers for nonlinear systems. Automatica 75, 96–108 (2017)
23. Sharma, N.K., Sreenivas, T.V.: Event-triggered sampling using signal extrema for instantaneous amplitude and instantaneous frequency estimation. Signal Process. 116, 43–54 (2015)
24. Lu, W.L., Zheng, R., Chen, T.P.: Centralized and decentralized global outer-synchronization of asymmetric recurrent time-varying neural network by data-sampling. Neural Netw. 75, 22–31 (2016)
25. Zhang, J.E.: Centralized and decentralized data-sampling principles for outer-synchronization of fractional-order neural networks. Complexity 2017, Article ID 6290646 (2017)
26. Zhang, S., Yu, Y.G., Wang, H.: Mittag-Leffler stability of fractional-order Hopfield neural networks. Nonlinear Anal. Hybrid Syst. 16, 104–121 (2015)


Author information

All authors contributed equally to this work. All authors read and approved the final manuscript.

Correspondence to Ailong Wu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Keywords

  • Fractional-order systems
  • Deviating argument
  • Outer-synchronization
  • Centralized and decentralized data-sampling principles