

Anti-synchronization control for delayed memristor-based distributed parameter NNs with mixed boundary conditions

Abstract

This paper addresses the anti-synchronization control problem of memristor-based distributed parameter neural networks (MDPNNs) with time-varying delays. An intermittent sampled-data controller is proposed such that the response system can be anti-synchronized with the drive system by using Lyapunov stability theory. The design approach for the desired intermittent sampled-data controller is presented first. Based on a differential inequality and the linear matrix inequality (LMI) technique, novel sufficient conditions for exponential anti-synchronization are obtained in terms of linear matrix inequalities. Numerical simulations are given to demonstrate the effectiveness of the theoretical results.

1 Introduction

During the last decades, neural networks (NNs) have been successfully employed in many different fields such as pattern recognition, information science, associative memories, and so on [1-6]. In the implementation of NNs, time delays frequently arise in practical applications because of the finite switching speed of amplifiers and the implicit communication among neurons. The existence of time delays often causes poor performance or instability of the designed networks [5-8]. As is well known, conventional NNs can be implemented by circuits: the Hopfield NN model [1] can be implemented in a very large-scale electronic circuit where the self-feedback connection weights are realized by resistors, which emulate neural synapses. The synapses among neurons in biological NNs act as long-term memories. In 1971, Chua [9] predicted the existence of a fourth fundamental circuit element, called the memristor, and its physical realization was demonstrated by the Hewlett-Packard (HP) research team [10, 11]. Since the memristor has been successfully utilized as a conceptual tool to analyze signals and nonlinear semiconductor devices, it has become an interesting problem to investigate memristor-based neural networks (MNNs) because of the practical applications of NNs [9-12]. In [12], MNNs arising from very large-scale integration based on the memristor, or built from bio-inspired neuron oscillatory circuits with neural memristors, were considered. In fact, MNNs can be used to carry out analog multiplication and contain several identical memristors in a bridge circuit. Because of the memristor's individual ability to memorize the passed quantity of electric charge, MNNs can keep their past dynamical history in memory. For the sake of emulating the human brain in real time, the memristor could play an important role in novel analog circuits of NNs. In addition, memristor-based circuit networks exhibit complex switching phenomena, with a switching rule that depends on the state of the network.

Synchronization means that the dynamical behaviors of coupled chaotic systems reach the same temporal-spatial state. In fact, synchronization is a common phenomenon in nature, which has been investigated for a rather long time. Ever since the famous physicist Christiaan Huygens observed the synchronization of two pendulum clocks in 1665, the applications of chaos synchronization have been concerned with secure communication, information science, optimization problems, and other related nonlinear areas. In 1990, Pecora and Carroll [13] studied the synchronization of two identical chaotic systems starting from different initial conditions, which was the first theoretical work on synchronization, and the approach was later implemented in electronic circuits. During recent years, synchronization of chaotic NN systems has developed rapidly by means of different control methods including feedback control, intermittent control, adaptive control, sampled-data control, etc. [13-17]. At the same time, the synchronization control problem of MNNs has been studied extensively [14]. The anti-synchronization analysis of MNNs plays an important role in many possible applications, including non-volatile memories, neuromorphic devices that imitate learning, and adaptive spontaneous behavior, and it has been successfully applied to many different areas such as information science, image processing, secure communication, and so on [15]. Additionally, by applying anti-synchronization to communication systems, one may transmit digital signals by switching between synchronization and anti-synchronization continuously, which strengthens security and secrecy. Furthermore, the anti-synchronization control study of MNNs provides the designer with various properties, opportunities, and flexibility. Thus far, there are only a few works dealing with the anti-synchronization analysis of MNNs [15-17].

As far as we know, most existing works on complex dynamical networks suppose that the node state depends only on time. In practice, however, the node state depends not only on time but also, in some circumstances, substantially on space. In nature and in many disciplines there exist reaction-diffusion phenomena, particularly in chemistry and biology; for example, phenomena typically met in chemical reactions show the interaction between different chemicals that react with each other and diffuse spatially over the chemical medium until a steady-state spatial concentration pattern has fully developed [18]. Pattern formation and wave propagation effects can also be represented by reaction-diffusion equations, and such wave propagation phenomena appear in systems belonging to different scientific disciplines. Additionally, complex networks and food webs have attracted increasing attention from researchers in different fields in recent years; a food web can be characterized by a complex network model in which a node represents a species [19]. Moreover, there exists a reaction-diffusion influence in NN systems when electrons move in an asymmetric electromagnetic field. In such cases, the activations and states vary in space as well as in time, and reaction-diffusion phenomena need to be taken into account. MNNs can be seen as a special kind of NNs; consequently, it is natural to investigate the state of the neurons varying in space as well as in time [20, 21]. Thus, the authors of [20] considered the anti-synchronization of a kind of MDPNNs with mixed time delays by using an adaptive controller and Lyapunov stability theory. The authors of [21] investigated the stability problem for a class of delayed uncertain MDPNNs with a leakage term. In addition, an adaptive learning control strategy was applied to the synchronization problem for delayed NNs with reaction-diffusion terms and unknown time-varying coupling strengths in [22]. In [23], by utilizing the Lyapunov functional method combined with inequality techniques, a sufficient condition was given to ensure that the proposed coupled chaotic reaction-diffusion NNs with hybrid coupling are synchronized.

On the other hand, since modern society extensively uses digital technology in implementing controllers, the sampled-data control scheme has gained significant attention. Because of the rapid development of digital hardware technologies, the sampled-data control strategy, whose control signals are allowed to change only at the sampling instants and are kept constant during the sampling period, has become more advantageous than other control approaches, and sampled-data control theory has developed rapidly. In many real-world applications it is difficult to ensure that the state variables transmitted to controllers are continuous. During the past decades, sampled-data control of finite-dimensional systems has been widely studied. In order to make full use of modern computer techniques, the sampled-data feedback control method has been used to synchronize delayed networks [24]. Ever since the authors of [25] investigated the observability of parabolic distributed parameter systems with sampled-data measurements, the sampled-data control scheme has been applied to some partial differential systems; see, e.g., [26-30]. Recently, many authors have used the sampled-data control scheme to resolve synchronization control problems in various ordinary differential systems such as complex dynamical networks [24]. In [29], sampled-data synchronization schemes for a class of delayed DPNNs were investigated. In fact, there are few theoretical results for MDPNNs obtained by applying the sampled-data control method. Intermittent control, which was first proposed to control nonlinear models in [31], has been applied for various aims, including communication, manufacturing, transportation, and so on. Intermittent control is a special form of switching control, which can be divided into two classes: state-dependent switching regulation and time-switching regulation. The former activates the control only in certain particular regions of the state space, while the latter activates the control only on certain finite time intervals. In comparison with continuous synchronization control, discontinuous control approaches, including intermittent control, have attracted much interest recently because of their easy implementation in engineering control fields and real-life applications [32-34]. For instance, in [34], the authors investigated the exponential synchronization of delayed memristor-based NNs with a periodically intermittent control strategy. From the point of view of complicated mathematical models, discontinuously controlled chaotic delayed systems are difficult and offer more challenges.

Enlightened by the above ideas, in this paper we attempt to introduce an intermittent sampled-data control strategy to achieve anti-synchronization of delayed MDPNNs. How to design an intermittent sampled-data controller that solves the anti-synchronization control problem of delayed MDPNNs is a challenging problem. As far as we know, this extension has not been reported in the literature at the present stage. Compared with the existing results, the main contributions of this paper can be summarized as follows: first, we make the first attempt to study intermittent sampled-data anti-synchronization control for a kind of MDPNNs with time-varying delays; second, an intermittent sampled-data (in time and in space) controller is designed to achieve anti-synchronization of the considered networks, where the sampling intervals in time and in space are assumed to be bounded; third, simple and less conservative anti-synchronization criteria are obtained by using the drive-response concept, an extended Wirtinger inequality, and technical estimations. A direct Lyapunov approach to the stability analysis of the resulting closed-loop system is developed, based on the application of a differential inequality technique. Novel sufficient conditions for exponential anti-synchronization are derived in terms of linear matrix inequalities. Finally, a numerical example is shown to demonstrate the effectiveness of the proposed results.

The remainder of this paper is organized as follows. In the next section, an anti-synchronization scheme by means of intermittent sampled-data control is given. In addition, we present some necessary preliminaries there. Section 3 proposes the main results of this paper. Section 4 illustrates the effectiveness of the proposed results through numerical simulations. Finally, the conclusion is drawn in Section 5.

Notation: Throughout this paper, solutions of all the systems considered below are intended in the sense of Filippov. \(A = ( a_{ij} )_{n \times n}\) denotes an \(n \times n\) real matrix, and \(A^{T}\) represents the transpose of the matrix A. For symmetric matrices \(P_{1}\), \(P_{2}\), the notation \(P_{1} > P_{2}\) ( \(P_{1} \ge P_{2}\), \(P_{1} < P_{2}\), \(P_{1} \le P_{2} \)) means that \(P_{1} - P_{2}\) is a symmetric positive definite (positive semidefinite, negative definite, negative semidefinite) matrix. The symmetric terms in a symmetric matrix are denoted by ∗. Let \(a_{ij}^{ *} = \max \{ \stackrel{\frown}{a}_{ij},\stackrel{\smile}{a}_{ij} \}\), \(a_{ij}^{ * *} = \min \{ \stackrel{\frown}{a}_{ij},\stackrel{\smile}{a}_{ij} \}\). Let \(C[ ( - \tau,0 ] \times R;R^{n} ]\) denote the Banach space of continuous functions mapping \(( - \tau,0 ] \times R\) into \(R^{n}\) with the topology of uniform convergence. \(\Omega = \{ x \mid 0 \le x \le d \}\) is a compact set with smooth boundary ∂Ω and \(\operatorname{mes}\Omega > 0\) in R, where \(d > 0\) is a constant; \(L^{2} ( \Omega )\) is the space of real functions on Ω that are \(L^{2}\) for the Lebesgue measure. It is a Banach space with the norm

$$\bigl\Vert u ( t,x ) \bigr\Vert _{2} = \sqrt{\sum _{i = 1}^{n} \bigl\Vert u_{i} ( t,x ) \bigr\Vert _{2}^{2}}, $$

where \(u ( t,x ) = ( u_{1} ( t,x ),\ldots,u_{n} ( t,x ) )^{T} \in R^{n}\) and \(\Vert u_{i} ( t,x ) \Vert _{2} = ( \int_{\Omega} \vert u_{i} ( t,x ) \vert ^{2}\,dx )^{1 / 2}\). For any \(\varphi ( t,x ) \in C [ ( - \tau,0 ] \times R;R^{n} ]\), we define

$$\Vert \varphi \Vert _{2} = \sup_{ - \tau \le s \le 0}\sqrt {\sum_{i = 1}^{n} \bigl\Vert \varphi_{i} ( s,x ) \bigr\Vert _{2}^{2}}, $$

where \(\Vert \varphi_{i} ( s,x ) \Vert _{2} = ( \int_{\Omega} \vert \varphi_{i} ( s,x ) \vert ^{2} \,dx )^{1 / 2}\).

2 Model description and preliminaries

Consider the following delayed MDPNNs:

$$ \begin{aligned}[b] \frac{\partial u_{i} ( t,x )}{\partial t} &= \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial u_{i} ( t,x )}{\partial x} \biggr) - d_{i} ( u_{i} )u_{i} ( t,x ) + \sum_{j = 1}^{n} a_{ij} ( u_{j} )g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &\quad{} + \sum_{j = 1}^{n} b_{ij} ( u_{j} )g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr),\quad t \ge 0,x \in \Omega, \end{aligned} $$
(1)

with mixed boundary conditions and initial conditions

$$\begin{aligned}& \frac{\partial u_{i} ( t,0 )}{\partial x} = 0,\quad\quad u_{i} ( t,d ) = 0,\quad t \in ( - \tau, + \infty ), \end{aligned}$$
(2)
$$\begin{aligned}& u_{i} ( s,x ) = \varphi_{i} ( s,x ),\quad ( s,x ) \in ( - \tau,0 ] \times \Omega, \end{aligned}$$
(3)

where \(x \in \Omega \), \(u_{i} ( t,x )\) denotes the state of the ith neural unit at time t and in space x; \(d_{i} ( u_{i} ) > 0\) corresponds to the rate with which the ith neuron resets its potential to the resting state; \(a_{ij} ( u_{j} )\) and \(b_{ij} ( u_{j} )\) correspond to the connection weight and the time-varying delay connection weight of the jth neuron on the ith neuron, respectively; \(g_{j} ( \cdot )\) corresponds to the activation function; \(\tau_{j}\) is the time delay and satisfies \(0 \le \tau_{j} \le \tau \), where τ is a constant; \(D_{i} > 0\) corresponds to the transmission diffusion coefficient along the neurons; \(\varphi_{i} ( s,x )\) is a bounded and continuously differentiable function on \(( - \tau,0 ] \times \Omega \), \(i,j = 1,2,\ldots,n\).

According to the characteristic of the memristor and its current-voltage feature [4, 8, 13], we have:

$$ \begin{aligned} &d_{i} ( u_{i} ) = d_{i} \bigl( u_{i} ( t,x ) \bigr) = \left \{ \textstyle\begin{array}{l@{\quad}l} \stackrel{\frown}{d}_{i},&u_{i} ( t,x ) \le T_{i}, \\ \stackrel{\smile}{d}_{i},&u_{i} ( t,x ) > T_{i}, \end{array}\displaystyle \right . \\ &a_{ij} ( u_{j} ) = a_{ij} \bigl( u_{j} ( t,x ) \bigr) = \left \{ \textstyle\begin{array}{l@{\quad}l} \stackrel{\frown}{a}_{ij},&u_{j} ( t,x ) \le T_{j}, \\ \stackrel{\smile}{a}_{ij},&u_{j} ( t,x ) > T_{j}, \end{array}\displaystyle \right . \\ &b_{ij} ( u_{j} ) = b_{ij} \bigl( u_{j} ( t,x ) \bigr) = \left \{ \textstyle\begin{array}{l@{\quad}l} \stackrel{\frown}{b}_{ij},&u_{j} ( t,x ) \le T_{j}, \\ \stackrel{\smile}{b}_{ij},&u_{j} ( t,x ) > T_{j}, \end{array}\displaystyle \right . \end{aligned} $$
(4)

where switching jumps \(T_{i} > 0\), \(\stackrel{\frown}{d}_{i}\), \(\stackrel{\smile}{d}_{i}\), \(\stackrel{\frown}{a}_{ij}\), \(\stackrel{\smile}{a}_{ij}\), \(\stackrel{\frown}{b}_{ij}\), \(\stackrel{\smile}{b}_{ij}\), \(i,j = 1,2,\ldots,n\), are constants.
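To make the switching rule (4) and the mixed boundary conditions (2) concrete, the following minimal sketch simulates a scalar (n = 1) instance of model (1) by the method of lines with an explicit Euler step and a history buffer for the delay term. All numerical values (diffusion coefficient, weights, threshold, delay, grid sizes) are illustrative assumptions and are not taken from this paper.

```python
import numpy as np

# Minimal method-of-lines sketch of a scalar (n = 1) instance of model (1)
# with the switching rule (4) and the mixed boundary conditions (2):
# Neumann (u_x = 0) at x = 0 and Dirichlet (u = 0) at x = d.
# All numerical values below are illustrative assumptions, not taken from the paper.

D, d_len, tau = 1.0, 1.0, 0.2        # diffusion coefficient, domain length, delay
Nx = 50
dx = d_len / Nx                      # cell width on a cell-centred grid
dt, steps = 1e-4, 20000              # explicit Euler step and number of steps
n_hist = int(round(tau / dt))        # delay measured in time steps

g = np.tanh                          # odd, bounded activation with Lipschitz constant L = 1
T = 0.5                              # switching threshold T_1 in (4)

def memristive_weights(u):
    """State-dependent coefficients d_1(u), a_11(u), b_11(u) as in (4)."""
    d1 = np.where(u <= T, 1.2, 1.0)
    a11 = np.where(u <= T, 2.0, 1.8)
    b11 = np.where(u <= T, -1.5, -1.2)
    return d1, a11, b11

x = (np.arange(Nx) + 0.5) * dx       # cell centres in (0, d)
u = 0.1 * np.cos(np.pi * x)          # initial condition phi(0, x)
history = [u.copy()] * (n_hist + 1)  # constant history on (-tau, 0]

for _ in range(steps):
    u_delay = history[0]             # approximates u(t - tau, x)
    # Ghost cells: mirrored value at the Neumann end, antisymmetric value at the Dirichlet end.
    u_ext = np.concatenate(([u[0]], u, [-u[-1]]))
    lap = (u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2]) / dx**2
    d1, a11, b11 = memristive_weights(u)
    u = u + dt * (D * lap - d1 * u + a11 * g(u) + b11 * g(u_delay))
    history.append(u.copy())
    history.pop(0)
```

The drive system (5) and the response system (6) can be simulated in the same way by duplicating this loop and adding the control input \(v_{i} ( t,x )\) to the response dynamics.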

For model (1), we can obtain (5) by using the theories of set-valued maps and differential inclusions as follows:

$$ \begin{aligned}[b] \frac{\partial u_{i} ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial u_{i} ( t,x )}{\partial x} \biggr) - \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \sum _{j = 1}^{n} \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &\quad{} + \sum_{j = 1}^{n} \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr),\quad t \ge 0,x \in \Omega. \end{aligned} $$
(5)

Considering model (5) as the drive system, we introduce the response system for model (5) as

$$ \begin{aligned}[b] \frac{\partial \tilde{u}_{i} ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial \tilde{u}_{i} ( t,x )}{\partial x} \biggr) - \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) + \sum _{j = 1}^{n} \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \\ &\quad{} + \sum_{j = 1}^{n} \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) + v_{i} ( t,x ),\quad t \ge 0,x \in \Omega.\end{aligned} $$
(6)

The mixed boundary conditions and initial conditions of model (6) are of the form

$$\begin{aligned}& \tilde{u}_{ix} ( t,0 ) = 0,\quad\quad \tilde{u}_{i} ( t,d ) = 0,\quad t \in ( - \tau, + \infty ), \end{aligned}$$
(7)
$$\begin{aligned}& \tilde{u}_{i} ( s,x ) = \tilde{\varphi}_{i} ( s,x ),\quad ( s,x ) \in ( - \tau,0 ] \times \Omega, \end{aligned}$$
(8)

where \(\tilde{u} ( t,x ) = ( \tilde{u}_{1} ( t,x ),\ldots,\tilde{u}_{n} ( t,x ) )^{T}\), and \(v ( t,x ) = ( v_{1} ( t,x ),\ldots,v_{n} ( t,x ) )^{T}\) is the control input to be designed in order to achieve the control objective. In practical situations, the response system (6) is able to receive the output signals of the drive system (5). \(\tilde{\varphi}_{i} ( s,x )\) is a bounded and continuously differentiable function on \(( - \tau,0 ] \times \Omega\). Here the subscript x denotes partial differentiation with respect to x, e.g., \(\tilde{u}_{ix} ( t,0 ) = \frac{\partial \tilde{u}_{i} ( t,0 )}{\partial x}\).

Throughout this paper, the assumptions to be used are given as follows:

  1. (A1)

    The functions \(g_{j} ( \cdot )\), \(j = 1,2,\ldots,n\), are bounded odd functions and satisfy the Lipschitz condition, that is, for all \(\varpi_{1},\varpi_{2} \in R\), there exists a Lipschitz constant \(L_{j} > 0\) such that

    $$\bigl\vert g_{j} ( \varpi_{1} ) - g_{j} ( \varpi_{2} ) \bigr\vert \le L_{j}\vert \varpi_{1} - \varpi_{2} \vert . $$

The anti-synchronization error between the drive system (5) and the response system (6) is defined as

$$y ( t,x ) = \bigl( y_{1} ( t,x ),\ldots,y_{n} ( t,x ) \bigr)^{T} = \tilde{u} ( t,x ) + u ( t,x ). $$

By using the theories of set-valued maps and differential inclusions, we have the following error system:

$$ \begin{aligned}[b] \frac{\partial y_{i} ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial y_{i} ( t,x )}{\partial x} \biggr) - \bigl\{ \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \bigr\} \\ &\quad{} + \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\} \\ &\quad{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\} \\ &\quad{}+ v_{i} ( t,x ),\quad t \ge 0,x \in \Omega. \end{aligned} $$
(9)

Next, model (6) with the mixed boundary conditions (7) is taken into account. Let the points \(0 = x_{0} < x_{1} < \cdots < x_{N} = l\) divide \([ 0,l ]\) into N sampling intervals. We assume that N sensors are placed at the midpoints \(\bar{x}_{j} = \frac{x_{j} + x_{j + 1}}{2}\) (\(j = 0,\ldots,N - 1 \)) of these intervals. The sampling intervals in space may be variable but are assumed to be bounded,

$$x_{j + 1} - x_{j} \le \Delta. $$

We design the intermittent sampled-data controller as follows:

$$ v_{i} ( t,x ) = K_{i} ( t )y_{i} ( t,\bar{x}_{j} ),\quad K_{i} ( t ) = \left \{ \textstyle\begin{array}{l@{\quad}l} - k_{i},&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . $$
(10)

in which \(\bar{x}_{j} = \frac{x_{j} + x_{j + 1}}{2}\), \(x \in [ x_{j},x_{j + 1} )\), \(j = 0,\ldots,N - 1\), \(i = 1,\ldots,n\), \(\omega > 0\) is the control period, \(\delta > 0\) (\(\delta < \omega\)) is the control width (control duration), and \(K = \operatorname {diag}( k_{1},k_{2},\ldots,k_{n} ) > 0\) is the control gain matrix.
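As a minimal illustration of how the controller (10) acts in time and in space, the sketch below evaluates the control input on a spatial grid: the error state is sampled at the interval midpoints \(\bar{x}_{j}\), and the feedback gain is switched on only during the first δ units of every period ω. The grid, gain, period, and width values in the usage example are assumptions chosen for illustration only.

```python
import numpy as np

def intermittent_sampled_control(t, y, x, x_nodes, k, omega, delta):
    """Evaluate v_i(t, x) = K_i(t) * y_i(t, xbar_j) of (10) on a spatial grid.

    t       : current time
    y       : values of the error y_i(t, .) on the grid points x
    x       : spatial grid points in [0, l]
    x_nodes : sampling nodes 0 = x_0 < x_1 < ... < x_N = l
    k       : feedback gain k_i > 0
    omega   : control period
    delta   : control width, 0 < delta < omega
    """
    # Time-intermittent gain: K_i(t) = -k on [m*omega, m*omega + delta] and 0 otherwise.
    if (t % omega) > delta:
        return np.zeros_like(y)
    # Spatial sampling: each x in [x_j, x_{j+1}) uses the midpoint value y_i(t, xbar_j).
    j = np.clip(np.searchsorted(x_nodes, x, side="right") - 1, 0, len(x_nodes) - 2)
    xbar = 0.5 * (x_nodes[j] + x_nodes[j + 1])
    y_sampled = np.interp(xbar, x, y)      # y_i(t, xbar_j) taken from the grid values
    return -k * y_sampled

# Illustrative usage with assumed values: N = 10 sampling intervals on [0, 1], Delta = 0.1.
x = np.linspace(0.0, 1.0, 101)
x_nodes = np.linspace(0.0, 1.0, 11)
y = np.sin(np.pi * x / 2.0)
v = intermittent_sampled_control(t=0.3, y=y, x=x, x_nodes=x_nodes,
                                 k=5.0, omega=1.0, delta=0.6)
```

Coupling this control law to the error dynamics (9) yields the closed-loop system (12)-(13) below.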

By applying \(y_{i} ( t,\bar{x}_{j} ) = y_{i} ( t,x ) - \int_{\bar{x}_{j}}^{x} y_{is} ( t,s ) \,ds \), \(v_{i} ( t,x )\) in model (6) can be represented as

$$ v_{i} ( t,x ) = \left \{ \textstyle\begin{array}{l@{\quad}l} - k_{i}y_{i} ( t,x ) + k_{i}\int_{\bar{x}_{j}}^{x} y_{is} ( t,s ) \,ds,&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega. \end{array}\displaystyle \right . $$
(11)

By applying the theories of differential inclusions and set-valued maps, we can get the anti-synchronization error system as follows:

$$\begin{aligned}& \begin{aligned}[b] \frac{\partial y_{i} ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial y_{i} ( t,x )}{\partial x} \biggr) - \bigl\{ \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \bigr\} \\ &\quad{} - K_{i}y_{i} ( t,x ) + \sum _{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\} \\ &\quad{} + \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\} \\ &\quad{}+ K_{i} \int_{\bar{x}_{j}}^{x} y_{is} ( t,s ) \,ds,\quad ( t,x ) \in [m\omega,m\omega + \delta ] \times \Omega, \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned}& \begin{aligned}[b] \frac{\partial y_{i} ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial y_{i} ( t,x )}{\partial x} \biggr) - \bigl\{ \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \bigr\} \\ &\quad{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\} \\ &\quad{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) \\ &\quad{} + \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\} ,\quad ( t,x ) \in \bigl( m\omega + \delta,(m + 1)\omega \bigr] \times \Omega. \end{aligned} \end{aligned}$$
(13)

Remark 1

On the basis of assumption (A1), we know that the activation functions \(g_{j} ( \cdot )\), \(j = 1,2,\ldots,n\), are odd functions. Then we can derive that \(\tilde{g}_{j} ( y_{j} ( \cdot,x ) )\) possesses the following properties:

$$ \bigl\vert \tilde{g}_{j} \bigl( y_{j} ( \cdot,x ) \bigr) \bigr\vert \le L_{j}\bigl\vert y_{j} ( \cdot,x ) \bigr\vert ,\quad\quad \tilde{g}_{j} ( 0 ) = g_{j} \bigl( y_{j} ( \cdot,x ) \bigr) + g_{j} \bigl( - y_{j} ( \cdot,x ) \bigr),\quad j = 1,2,\ldots,n. $$
(14)

For the sake of obtaining our main results, the following lemmas are necessary.

Lemma 1

Wirtinger’s inequality [35]

Let \(z \in H^{1} ( \Omega )\), \(\Omega = ( 0,l )\) be a scalar function with \(z ( 0 ) = 0\) or \(z ( l ) = 0\). Then

$$\int_{\Omega} z^{2} ( x ) \,dx \le \frac{4l^{2}}{\pi^{2}} \int_{\Omega} \biggl[ \frac{dz}{dx} \biggr]^{2} \,dx. $$

Moreover, if \(z ( 0 ) = z ( l ) = 0\), then

$$ \int_{\Omega} z^{2} ( x ) \,dx \le \frac{l^{2}}{\pi^{2}} \int_{\Omega} \biggl[ \frac{dz}{dx} \biggr]^{2} \,dx. $$
(15)
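As a quick worked check (not part of the cited lemma [35]) that the constant \(4l^{2}/\pi^{2}\) cannot be improved, take \(z ( x ) = \sin ( \frac{\pi x}{2l} )\), which satisfies \(z ( 0 ) = 0\). Then

$$\int_{0}^{l} \sin^{2} \biggl( \frac{\pi x}{2l} \biggr) \,dx = \frac{l}{2}, \quad\quad \int_{0}^{l} \biggl[ \frac{\pi}{2l}\cos \biggl( \frac{\pi x}{2l} \biggr) \biggr]^{2} \,dx = \frac{\pi^{2}}{8l}, $$

so that \(\int_{\Omega} z^{2} ( x ) \,dx = \frac{4l^{2}}{\pi^{2}}\int_{\Omega} [ \frac{dz}{dx} ]^{2} \,dx\) exactly, i.e., equality is attained in the first bound.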

Lemma 2

Halanay inequality [36]

Let \(\tau \ge 0\) be a constant and let \(V ( t )\) be a non-negative function on \([ - \tau, + \infty )\) which satisfies

$$ \dot{V} ( t ) \le - aV ( t ) + b\sup_{t - \tau \le s \le t}V ( s ),\quad t \ge 0, $$
(16)

where \(a > b > 0\), then

$$ V ( t ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha t},\quad t \ge 0, $$
(17)

where \(\Vert V ( 0 ) \Vert _{\tau} = \sup_{ - \tau \le s \le 0}V ( s )\) and α is the unique positive root of the equation \(\alpha = a - be^{\alpha \tau}\).
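The exponent α above is defined only implicitly by \(\alpha = a - be^{\alpha \tau}\). Since \(f ( \alpha ) = a - be^{\alpha \tau} - \alpha\) is strictly decreasing with \(f ( 0 ) = a - b > 0\) and \(f ( a ) = - be^{a\tau} < 0\), the root is unique and lies in \(( 0,a )\), so it can be located by bisection. The sketch below is a minimal numerical illustration with assumed values; it is not tied to any data in this paper.

```python
import math

def halanay_exponent(a, b, tau, tol=1e-12):
    """Unique positive root of alpha = a - b*exp(alpha*tau), assuming a > b > 0 and tau >= 0."""
    f = lambda alpha: a - b * math.exp(alpha * tau) - alpha
    lo, hi = 0.0, a                  # f(0) = a - b > 0 and f(a) = -b*exp(a*tau) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid                 # root lies to the right of mid
        else:
            hi = mid                 # root lies to the left of (or at) mid
    return 0.5 * (lo + hi)

# Illustrative (assumed) values: a = 3, b = 1, tau = 0.5.
alpha = halanay_exponent(3.0, 1.0, 0.5)   # approximately 1.19
```

The same routine gives the exponent α needed in Theorem 1 (with \(a = a_{1}\), \(b = b_{1}\)) and ᾱ in Corollary 1 (with \(a = a_{3}\), \(b = b_{3}\)).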

Lemma 3

[34]

Let \(\tau \ge 0\) be a constant and let \(V ( t )\) be a non-negative function on \([ a - \tau,b ]\) which satisfies

$$ \dot{V} ( t ) \le \nu_{1}V ( t ) + \nu_{2}V ( t - \tau ),\quad a \le t \le b, $$
(18)

then

$$ V ( t ) \le \bigl\vert V ( a ) \bigr\vert _{\tau} e^{ ( \nu_{1} + \nu_{2} ) ( t - a )},\quad a \le t \le b, $$
(19)

where \(\vert V ( a ) \vert _{\tau} = \sup_{a - \tau \le t \le a}V ( t )\).

Lemma 4

[37]

For any vectors \(X,Y \in R^{n}\), any scalar \(\varepsilon > 0\), and any positive definite matrix \(Q \in R^{n \times n}\), the following matrix inequality holds:

$$2X^{T}Y \le \varepsilon^{ - 1}X^{T}Q^{ - 1}X + \varepsilon Y^{T}QY. $$

Lemma 5

Under the assumption (A1), we get

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ a_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert \\& \quad \le A_{ij}\bigl\vert g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert , \\& \bigl\vert \mathit {co}\bigl[ b_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + \mathit {co}\bigl[ b_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\vert \\& \quad \le B_{ij}\bigl\vert g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\vert , \\& - \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) - \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \le - \bar{D}_{i}\bigl\vert u_{i} ( t,x ) + \tilde{u}_{i} ( t,x ) \bigr\vert , \end{aligned}$$

in which \(A_{ij} = \max \{ \vert a_{ij}^{ *} \vert ,\vert a_{ij}^{ * *} \vert \}\), \(B_{ij} = \max \{ \vert b_{ij}^{ *} \vert , \vert b_{ij}^{ * *} \vert \}\), \(\bar{D}_{i} = \min \{ \vert d_{i}^{ *} \vert ,\vert d_{i}^{ * *} \vert \}\), \(a_{ij}^{ *} = \min \{ \stackrel{\frown}{a}_{ij},\stackrel{\smile}{a}_{ij} \}\), \(d_{i}^{ *} = \min \{ \stackrel{\frown}{d}_{i},\stackrel{\smile}{d}_{i} \}\), \(d_{i}^{ * *} = \max \{ \stackrel{\frown}{d}_{i},\stackrel{\smile}{d}_{i} \}\), \(b_{ij}^{ *} = \min \{ \stackrel{\frown}{b}_{ij},\stackrel{\smile}{b}_{ij} \}\), \(b_{ij}^{ * *} = \max \{ \stackrel{\frown}{b}_{ij},\stackrel{\smile}{b}_{ij} \}\), \(i,j = 1,2,\ldots,n\).

Proof

(i) For \(u_{j} ( t,x ) < T_{j}\), \(\tilde{u}_{j} ( t,x ) < T_{j}\), we obtain

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ a_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert \\& \quad = \bigl\vert \stackrel{\frown}{a}_{ij} \bigl( g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr) \bigr\vert \le A_{ij} \bigl\vert g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert . \end{aligned}$$

(ii) For \(u_{j} ( t,x ) > T_{j}\), \(\tilde{u}_{j} ( t,x ) > T_{j}\), we can have

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ a_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert \\& \quad = \bigl\vert \stackrel{\smile}{a}_{ij} \bigl( g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr) \bigr\vert \le A_{ij} \bigl\vert g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert . \end{aligned}$$

(iii) For \(u_{j} ( t,x ) \le T_{j}\), \(\tilde{u}_{j} ( t,x ) \ge T_{j}\) or \(u_{j} ( t,x ) \ge T_{j}\), \(\tilde{u}_{j} ( t,x ) \le T_{j}\), we suppose that \(u_{j} ( t,x ) \le T_{j}\), \(\tilde{u}_{j} ( t,x ) \ge T_{j}\) since the other is similar,

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ a_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert \\& \quad = \bigl\vert \stackrel{\frown}{a}_{ij} \bigl( g_{j} \bigl( u_{j} ( t,x ) \bigr) - g_{j} \bigl( u_{j} ( 0 ) \bigr) \bigr) \bigr\vert + \bigl\vert \stackrel{ \smile}{a}_{ij} \bigl( g_{j} \bigl( u_{j} ( 0 ) \bigr) - g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr) \bigr\vert \\& \quad \le A_{ij} \bigl( \bigl\vert \bigl( g_{j} \bigl( u_{j} ( t,x ) \bigr) - g_{j} \bigl( u_{j} ( 0 ) \bigr) \bigr) \bigr\vert + \bigl\vert \bigl( g_{j} \bigl( u_{j} ( 0 ) \bigr) - g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr) \bigr\vert \bigr) \\& \quad \le A_{ij}\bigl\vert g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert . \end{aligned}$$

Thus,

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ a_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert \\& \quad \le A_{ij}\bigl\vert g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\vert . \end{aligned}$$

Similarly, we get

$$\begin{aligned}& \bigl\vert \mathit {co}\bigl[ b_{ij} \bigl( u_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + \mathit {co}\bigl[ b_{ij} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\vert \\& \quad \le B_{ij}\bigl\vert g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\vert , \\& - \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) - \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr] \tilde{u}_{i} ( t,x ) \le - \bar{D}_{i}\bigl\vert u_{i} ( t,x ) + \tilde{u}_{i} ( t,x ) \bigr\vert . \end{aligned}$$

The proof is completed. □

3 Main results

For convenience in our presentation, we let \(u ( t,x ) = ( u_{1} ( t,x ),\ldots,u_{n} ( t,x ) )^{T}\), \(g ( u ( \cdot,x ) ) = ( g_{1} ( u_{1} ( \cdot,x ) ),\ldots,g_{n} ( u_{n} ( \cdot,x ) ) )^{T}\), \(g ( \tilde{u} ( \cdot,x ) ) = ( g_{1} ( \tilde{u}_{1} ( \cdot,x ) ),\ldots,g_{n} ( \tilde{u}_{n} ( \cdot,x ) ) )^{T}\), \(\tilde{g} ( y ( \cdot,x ) ) = ( \tilde{g}_{1} ( y_{1} ( \cdot, x ) ),\ldots,\tilde{g}_{n} ( y_{n} ( \cdot,x ) ) )^{T}\), \(\tilde{g}_{j} ( y_{j} ( \cdot, x ) ) = g_{j} ( \tilde{u}_{j} ( \cdot,x ) ) + g_{j} ( u_{j} ( \cdot,x ) )\), \(\mathcal{D} = \operatorname {diag}( D_{1},\ldots,D_{n} )\), \(\tilde{A} = \bar{A} = ( A_{ij} )_{n \times n}\), \(\tilde{D} = \operatorname {diag}\{ \bar{D}_{1},\ldots,\bar{D}_{n} \}\), \(\tilde{B} = \bar{B} = ( B_{ij} )_{n \times n}\), \(A_{ij} = \max \{ \vert a_{ij}^{ *} \vert ,\vert a_{ij}^{ * *} \vert \}\), \(B_{ij} = \max \{ \vert b_{ij}^{ *} \vert ,\vert b_{ij}^{ * *} \vert \}\), \(\bar{D}_{i} = \min \{ \vert d_{i}^{ *} \vert , \vert d_{i}^{ * *} \vert \}\), \(a_{ij}^{ *} = \min \{ \stackrel{\frown}{a}_{ij},\stackrel{\smile}{a}_{ij} \}\), \(d_{i}^{ *} = \min \{ \stackrel{\frown}{d}_{i},\stackrel{\smile}{d}_{i} \}\), \(d_{i}^{ * *} = \max \{ \stackrel{\frown}{d}_{i},\stackrel{\smile}{d}_{i} \}\), \(b_{ij}^{ *} = \min \{ \stackrel{\frown}{b}_{ij},\stackrel{\smile}{b}_{ij} \}\), \(b_{ij}^{ * *} = \max \{ \stackrel{\frown}{b}_{ij},\stackrel{\smile}{b}_{ij} \}\), \(i,j = 1,2,\ldots,n\).

Theorem 1

Under the assumption (A1), assume that there exist a positive definite matrix \(P > 0\) and positive scalars ε, \(\sigma_{1}\), \(\sigma_{2}\), \(\bar{\sigma}_{1}\), \(\bar{\sigma}_{2}\), \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\) such that \(a_{1} > b_{1}\), and the following conditions hold:

$$\begin{aligned}& \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c} - 2P\tilde{D} - 2PK + \varepsilon \frac{\Delta}{\pi} PK + \sigma_{1}^{ - 1}\bar{L}^{2}I + a_{1}P & * & * \\ \bar{A}^{T}P & - \sigma_{1}^{ - 1}I & * \\ \bar{B}^{T}P & 0 & - \sigma_{2}^{ - 1}I \end{array}\displaystyle \right) \le 0, \end{aligned}$$
(20)
$$\begin{aligned}& - 2P\mathcal{D} + \varepsilon^{ - 1}\frac{\Delta}{\pi} PK < 0, \end{aligned}$$
(21)
$$\begin{aligned}& \sigma_{2}^{ - 1}\bar{L}^{2}I - b_{1}P \le 0, \end{aligned}$$
(22)
$$\begin{aligned}& \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c} - 2P\tilde{D} + \bar{\sigma}_{1}^{ - 1}\bar{L}^{2}I - a_{2}P & * & * \\ \bar{A}^{T}P & - \bar{\sigma}_{1}^{ - 1}I & * \\ \bar{B}^{T}P & 0 & - \bar{\sigma}_{2}^{ - 1}I \end{array}\displaystyle \right) \le 0, \end{aligned}$$
(23)
$$\begin{aligned}& \bar{\sigma}_{2}^{ - 1}\bar{L}^{2}I - b_{2}P \le 0, \end{aligned}$$
(24)
$$\begin{aligned}& \sigma = \alpha ( \delta - \tau ) - ( a_{2} + b_{2} ) ( \omega - \delta ) > 0, \end{aligned}$$
(25)

in which \(\bar{L} = \max_{1 \le i \le n} ( L_{i} )\) and \(\alpha > 0\) is the unique solution of \(a_{1} - \alpha - b_{1}e^{\alpha \tau} = 0\). Then the response system (6) and the drive system (5) can be exponentially anti-synchronized under the intermittent sampled-data controller (10).
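Once the gain K and the scalars ε, \(\sigma_{1}\), \(\sigma_{2}\), \(\bar{\sigma}_{1}\), \(\bar{\sigma}_{2}\), \(a_{1}\), \(b_{1}\), \(a_{2}\), \(b_{2}\) are fixed, conditions (20)-(24) become ordinary matrix inequalities that can be checked numerically (or, treating P as a decision variable, fed to an LMI solver). The sketch below is a minimal numerical check for candidate values: all system data and scalars are illustrative assumptions, P is taken diagonal so that the products \(P\tilde{D}\), PK, \(P\mathcal{D}\) are symmetric, and Ā, B̄ are the matrices \(( A_{ij} )\), \(( B_{ij} )\) defined before Theorem 1.

```python
import numpy as np

def is_neg_semidef(M, tol=1e-9):
    """True if the largest eigenvalue of the symmetric part of M is at most tol."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2.0)) <= tol

# Illustrative (assumed) data for a two-neuron network; none of these values come from the paper.
n = 2
Dt   = np.diag([1.2, 1.0])                    # \tilde{D} = diag(Dbar_1, ..., Dbar_n)
Dcal = np.diag([1.0, 1.0])                    # \mathcal{D} = diag(D_1, ..., D_n)
Abar = np.array([[2.0, 0.2], [0.3, 1.8]])     # (A_ij)
Bbar = np.array([[1.5, 0.1], [0.2, 1.2]])     # (B_ij)
K    = np.diag([8.0, 8.0])                    # candidate gain matrix of (10)
P    = np.eye(n)                              # candidate diagonal P > 0
Lbar, Delta = 1.0, 0.1                        # Lipschitz bound and spatial sampling bound
eps, s1, s2, sb1, sb2 = 1.0, 1.0, 1.0, 1.0, 1.0
a1, b1, a2, b2 = 2.0, 1.0, 8.0, 1.0           # must satisfy a1 > b1

I, Z = np.eye(n), np.zeros((n, n))
c = Delta / np.pi

M11 = -2*P@Dt - 2*P@K + eps*c*P@K + (Lbar**2/s1)*I + a1*P
lmi20 = np.block([[M11,       P@Abar,      P@Bbar],
                  [Abar.T@P,  -(1/s1)*I,   Z],
                  [Bbar.T@P,  Z,           -(1/s2)*I]])

N11 = -2*P@Dt + (Lbar**2/sb1)*I - a2*P
lmi23 = np.block([[N11,       P@Abar,      P@Bbar],
                  [Abar.T@P,  -(1/sb1)*I,  Z],
                  [Bbar.T@P,  Z,           -(1/sb2)*I]])

checks = {
    "(20)": is_neg_semidef(lmi20),
    "(21)": np.max(np.linalg.eigvalsh(-2*P@Dcal + (c/eps)*P@K)) < 0.0,
    "(22)": is_neg_semidef((Lbar**2/s2)*I - b1*P),
    "(23)": is_neg_semidef(lmi23),
    "(24)": is_neg_semidef((Lbar**2/sb2)*I - b2*P),
}
print(checks)
```

The remaining scalar condition (25), σ > 0, is then verified separately by computing the Halanay exponent α for the chosen \(a_{1}\), \(b_{1}\), τ (see the bisection sketch after Lemma 2) and inserting the control period ω and control width δ.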

Proof

We introduce a Lyapunov functional candidate as

$$ V ( t ) = \int_{\Omega} y ( t,x )^{T}Py ( t,x )\,dx. $$
(26)

For \(t \in [ m\omega,m\omega + \delta ]\), calculating the derivative of \(V ( t )\) along the trajectory of (12) and combining it with Lemma 5 yields

$$ \begin{aligned}[b] \dot{V} ( t ) &\le \int_{\Omega} \biggl\{ 2y ( t,x )^{T}P \biggl[ \frac{\partial}{\partial x} \biggl( \mathcal{D} \frac{\partial y ( t,x )}{\partial x} \biggr) - \tilde{D}y ( t,x ) + \tilde{A}\tilde{g} \bigl( y ( t,x ) \bigr) \\ &\quad{} + \tilde{B}\tilde{g} \bigl( y ( t - \tau,x ) \bigr) - Ky ( t,x ) \biggr] \biggr\} \,dx \\ &\quad{} + 2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}PK \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx. \end{aligned} $$
(27)

In view of assumption (A1), we have

$$ \bigl\vert \tilde{g}_{j} \bigl( y_{j} ( t,x ) \bigr) \bigr\vert \le L_{j}\bigl\vert y_{j} ( t,x ) \bigr\vert ,\quad\quad \tilde{g}_{j} ( 0 ) = g_{j} \bigl( u_{j} ( t,x ) \bigr) + g_{j} \bigl( - u_{j} ( t,x ) \bigr),\quad j = 1,2,\ldots,n. $$
(28)

Moreover, we can know

$$ \bigl\Vert \tilde{g} \bigl( y ( t,x ) \bigr) \bigr\Vert _{2} \le \bar{L}\bigl\Vert y ( t,x ) \bigr\Vert _{2}, \quad\quad \bigl\Vert \tilde{g} \bigl( y ( t - \tau,x ) \bigr) \bigr\Vert _{2} \le \bar{L}\bigl\Vert y ( t - \tau,x ) \bigr\Vert _{2}. $$
(29)

Integrating by parts and by substitution of the mixed boundary conditions, we have

$$ \int_{\Omega} y ( t,x )^{T}P\frac{\partial}{\partial x} \biggl( \mathcal{D} \frac{\partial y ( t,x )}{\partial x} \biggr)\,dx = - \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}P\mathcal{D}\frac{\partial y ( t,x )}{\partial x}\,dx . $$
(30)

For any scalar \(\varepsilon_{1} > 0\), we have the following inequality by using Young’s inequality:

$$ \begin{aligned}[b] & 2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}PK \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx \\ &\quad = 2\sum_{i = 1}^{n} \sum _{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} p_{i}k_{i}y_{i} ( t,x ) \bigl[ y_{i} ( t,x ) - y_{i} ( t, \bar{x}_{j} ) \bigr]\,dx \\ &\quad \le \varepsilon_{1} \int_{\Omega} y ( t,x )^{T}PKy ( t,x ) \,dx \\ &\quad\quad{} + \varepsilon_{1}^{ - 1}\sum _{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T} PK \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]\,dx. \end{aligned} $$
(31)

By Lemma 1, we have

$$ \begin{aligned}[b] & \int_{x_{j}}^{x_{j + 1}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T}PK \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr] \,dx \\ &\quad = \int_{x_{j}}^{\bar{x}_{j}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T}PK \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr] \,dx \\ &\quad\quad{} + \int_{\bar{x}_{j}}^{x_{j + 1}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T}PK \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \frac{\Delta^{2}}{\pi^{2}} \int_{x_{j}}^{x_{j + 1}} \frac{\partial y ( t,x )^{T}}{\partial x}PK\frac{\partial y ( t,x )}{\partial x} \,dx. \end{aligned} $$
(32)

Choosing next \(\varepsilon_{1} = \frac{\Delta}{\pi} \varepsilon\), we see from (31) and (32) that

$$ \begin{aligned}[b] &2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}PK \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \varepsilon^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}PK\frac{\partial y ( t,x )}{\partial x} \,dx + \varepsilon \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}PKy ( t,x ) \,dx. \end{aligned} $$
(33)

Then we can know from (27), (33), and Lemma 4 that

$$ \begin{aligned}[b] \dot{V} ( t ) &\le \int_{\Omega} \biggl\{ 2y ( t,x )^{T} P \frac{\partial}{\partial x} \biggl( \mathcal{D} \frac{\partial y ( t,x )}{\partial x} \biggr) - 2y ( t,x )^{T}P\tilde{D}y ( t,x ) \\ &\quad{} + \sigma_{1}y ( t,x )^{T}P\tilde{A} \tilde{A}^{T}Py ( t,x ) + \sigma_{1}^{ - 1}\tilde{g} \bigl( y ( t,x ) \bigr)^{T}\tilde{g} \bigl( y ( t,x ) \bigr) \\ &\quad{}+ \sigma_{2}y ( t,x )^{T}P\tilde{B} \tilde{B}^{T}Py ( t,x ) + \sigma_{2}^{ - 1}\tilde{g} \bigl( y ( t - \tau,x ) \bigr)^{T}\tilde{g} \bigl( y ( t - \tau,x ) \bigr) \\ &\quad{} - 2y ( t,x )^{T}PKy ( t,x ) \biggr\} \,dx + \varepsilon^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}PK\frac{\partial y ( t,x )}{\partial x} \,dx \\ &\quad{}+ \varepsilon \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}PKy ( t,x ) \,dx . \end{aligned} $$
(34)

We have from (34)

$$ \begin{aligned}[b] \dot{V} ( t ) &\le \int_{\Omega} \biggl[ y ( t,x )^{T}\biggl( - 2P\tilde{D} - 2PK + \varepsilon \frac{\Delta}{\pi} PK + \sigma_{1}P\bar{A} \bar{A}^{T}P \\ &\quad{}+ \sigma_{2}P\bar{B}\bar{B}^{T}P + \sigma_{1}^{ - 1}\bar{L}^{2}I\biggr)y ( t,x ) + \sigma_{2}^{ - 1}y ( t - \tau,x )^{T} \bar{L}^{2}Iy ( t - \tau,x ) \biggr]\,dx \\ &\quad{}+ \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}\biggl( - 2P\mathcal{D} + \varepsilon^{ - 1}\frac{\Delta}{\pi} PK\biggr)\frac{\partial y ( t,x )}{\partial x} \,dx. \end{aligned} $$
(35)

We know from (21) that

$$ \begin{aligned}[b] \dot{V} ( t ) &\le \int_{\Omega} \biggl[ y ( t,x )^{T}\biggl( - 2P\tilde{D} - 2PK + \varepsilon \frac{\Delta}{\pi} PK + \sigma_{1}P\bar{A} \bar{A}^{T}P \\ &\quad{}+ \sigma_{2}P\bar{B}\bar{B}^{T}P + \sigma_{1}^{ - 1}\bar{L}^{2}I\biggr)y ( t,x ) + \sigma_{2}^{ - 1}y ( t - \tau,x )^{T} \bar{L}^{2}Iy ( t - \tau,x ) \biggr]\,dx. \end{aligned} $$
(36)

Applying the Schur complement to (36) and using (20), (22), and (26), we conclude that

$$ \dot{V} ( t ) \le - a_{1}V ( t ) + b_{1}\sup _{ - \tau \le s \le 0}V ( t + s ) $$
(37)

When \(t \in ( m\omega + \delta,(m + 1)\omega ]\), similar to (34), we obtain

$$ \begin{aligned}[b] \dot{V} ( t )& \le \int_{\Omega} \bigl[ y ( t,x )^{T}\bigl( - 2P\tilde{D} + \bar{\sigma}_{1}P\bar{A}\bar{A}^{T}P + \bar{ \sigma}_{2}P\bar{B}\bar{B}^{T}P + \bar{\sigma}_{1}^{ - 1} \bar{L}^{2}I\bigr)y ( t,x ) \\ &\quad{} + \bar{\sigma}_{2}^{ - 1}y ( t - \tau,x )^{T}\bar{L}^{2}Iy ( t - \tau,x ) \bigr]\,dx + \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}( - 2P\mathcal{D})\frac{\partial y ( t,x )}{\partial x} \,dx . \end{aligned} $$
(38)

Applying the Schur complement to (38) and using conditions (23)-(24) together with Lemma 3, we get

$$ \dot{V} ( t ) \le a_{2}V ( t ) + b_{2}\sup _{ - \tau \le s \le 0}V ( t + s ). $$
(39)

From (37) and Lemma 2, we get

$$ V ( t ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha t}, \quad 0 \le t \le \delta, $$
(40)

in which \(\alpha > 0\) is the unique solution to \(a_{1} - \alpha - b_{1}e^{\alpha \tau} = 0\).

When \(\delta \le t \le \omega \), we can conclude by (39) and Lemma 3

$$ V ( t ) \le \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} e^{ ( a_{2} + b_{2} ) ( t - \delta )},\quad \delta \le t \le \omega. $$
(41)

From (40), it follows that

$$ \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} = \sup _{\delta - \tau \le t \le \delta} \bigl\Vert V ( t ) \bigr\Vert \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau )}. $$
(42)

By the above inequality and (41), we obtain the following:

$$ V ( t ) \le \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} e^{ ( a_{2} + b_{2} ) ( t - \delta )} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau )}e^{ ( a_{2} + b_{2} ) ( t - \delta )}. $$
(43)

Thus, we have

$$ V ( \omega ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau ) + ( a_{2} + b_{2} ) ( \omega - \delta )} = \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \sigma}, $$
(44)

in which \(\sigma = \alpha ( \delta - \tau ) - ( a_{2} + b_{2} ) ( \omega - \delta )\).

Therefore, we have

$$ \begin{aligned}[b] \bigl\Vert V ( \omega ) \bigr\Vert _{\tau} &= \sup_{\omega - \tau \le t \le \omega} \bigl\Vert V ( t ) \bigr\Vert \le \sup_{\omega - \tau \le t \le \omega} \bigl\{ \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau )}e^{ ( a_{2} + b_{2} ) ( t - \delta )} \bigr\} \\ &\le \sup_{\omega - \tau \le t \le \omega} \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau )}e^{ ( a_{2} + b_{2} ) ( \omega - \delta )}, \end{aligned} $$
(45)

and

$$ \bigl\Vert V ( \omega ) \bigr\Vert _{\tau} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \alpha ( \delta - \tau ) + ( a_{2} + b_{2} ) ( \omega - \delta )} = \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \sigma}. $$
(46)

For any positive integer m, we now prove by mathematical induction that

$$ V ( m\omega ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - m\sigma}. $$
(47)

Suppose that (47) holds for \(m \le r\). Now we prove that (47) is true for \(m=r+1\). Similar to the process used to obtain (46), we know

$$ \bigl\Vert V ( r\omega ) \bigr\Vert _{\tau} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - r\sigma}. $$
(48)

For \(t \in [ r\omega,r\omega + \delta ]\), we obtain

$$ V ( t ) \le \bigl\Vert V ( r\omega ) \bigr\Vert _{\tau} e^{ - \alpha ( t - r\omega )} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - r\sigma} e^{ - \alpha ( t - r\omega )}. $$
(49)

Hence, we have

$$ \bigl\Vert V ( r\omega + \delta ) \bigr\Vert _{\tau} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - r\sigma} e^{ - \alpha \delta}. $$
(50)

When \(t \in [ r\omega + \delta, ( r + 1 )\omega ]\),

$$ V ( t ) \le \bigl\Vert V ( r\omega + \delta ) \bigr\Vert _{\tau} e^{ ( a_{2} + b_{2} ) ( t - \delta - r\omega )} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - r\sigma} e^{ - \alpha \delta} e^{ ( a_{2} + b_{2} ) ( t - \delta - r\omega )}. $$
(51)

Thus, taking \(t = ( r + 1 )\omega \) in (51), we derive

$$ \bigl\Vert V \bigl( ( r + 1 )\omega \bigr) \bigr\Vert \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - r\sigma} e^{ - \alpha \delta} e^{ ( a_{2} + b_{2} ) ( \omega - \delta )} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - (r + 1)\sigma}. $$
(52)

Therefore, (47) is true for all positive integers.

For any \(t > 0\), there exists an integer \(m_{0} \ge 0\) such that \(m_{0}\omega \le t \le ( m_{0} + 1 )\omega \). Then

$$ V ( t ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ ( a_{2} + b_{2} )\omega} e^{ - m_{0}\sigma} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ ( a_{2} + b_{2} )\omega} e^{\sigma} \exp \biggl( - \frac{\sigma t}{\omega} \biggr). $$
(53)

Let \(M = \Vert V ( 0 ) \Vert _{\tau} e^{ ( a_{2} + b_{2} )\omega} e^{\sigma}\), we know

$$ V ( t ) \le M\exp \biggl( - \frac{\sigma t}{\omega} \biggr),\quad t > 0. $$
(54)

According to (54) and the definition of \(V ( t )\) in (26), denoting by \(\lambda_{m} ( P )\) the minimum eigenvalue of P, we derive

$$ \lambda_{m} ( P )\bigl\Vert y ( t,x ) \bigr\Vert _{2}^{2} \le M\exp \biggl( - \frac{\sigma t}{\omega} \biggr). $$
(55)

Then we easily get

$$ \bigl\Vert y ( t,x ) \bigr\Vert _{2} \le \sqrt{ \frac{M}{\lambda_{m} ( P )}} \exp \biggl( - \frac{\sigma t}{2\omega} \biggr),\quad \mbox{for }t > 0. $$
(56)

Consequently, the equilibrium point of the error system (9) is exponentially stable, which means that the drive system (5) and the response system (6) are exponentially anti-synchronized under the intermittent sampled-data controller (10). Our proof is completed. □
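To illustrate how the control parameters enter the decay rate obtained above (the numbers are assumptions chosen only for illustration), take τ = 0.1, ω = 1, δ = 0.8, a Halanay exponent α = 1, and \(a_{2} + b_{2} = 3\). Then condition (25) gives

$$\sigma = \alpha ( \delta - \tau ) - ( a_{2} + b_{2} ) ( \omega - \delta ) = 1 \times 0.7 - 3 \times 0.2 = 0.1 > 0, $$

and (56) yields the decay estimate \(\Vert y ( t,x ) \Vert _{2} = O ( e^{ - \sigma t / ( 2\omega )} ) = O ( e^{ - 0.05t} )\); enlarging the control width δ or shortening the rest interval ω − δ increases σ and hence speeds up the anti-synchronization.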

If in Theorem 1 we choose the Lyapunov functional \(V ( t ) = \int_{\Omega} y ( t,x )^{T}y ( t,x )\,dx\), i.e., \({P=I}\), then the following Corollary 1 holds.

Corollary 1

Under the assumption (A1), if there exist positive scalars ε, \(\sigma_{3}\), \(\sigma_{4}\), \(\bar{\sigma}_{3}\), \(\bar{\sigma}_{4}\), \(a_{3}\), \(b_{3}\), \(a_{4}\), \(b_{4}\) such that \(a_{3} > b_{3}\) and the following conditions hold:

$$\begin{aligned}& \left\{ \textstyle\begin{array}{l} - 2\tilde{D} - 2K + \varepsilon \frac{\Delta}{\pi} K + \sigma_{3}\bar{A}\bar{A}^{T} + \sigma_{4}\bar{B}\bar{B}^{T} + \sigma_{3}^{ - 1}\bar{L}^{2}I + a_{3}I \le 0, \\ - 2\mathcal{D} + \varepsilon^{ - 1}\frac{\Delta}{\pi} K < 0, \\ \sigma_{4}^{ - 1}\bar{L}^{2}I - b_{3}I \le 0, \end{array}\displaystyle \right . \end{aligned}$$
(57)
$$\begin{aligned}& \left\{ \textstyle\begin{array}{l} - 2\tilde{D} + \bar{\sigma}_{3}\bar{A}\bar{A}^{T} + \bar{\sigma}_{4}\bar{B}\bar{B}^{T} + \bar{\sigma}_{3}^{ - 1}\bar{L}^{2}I - a_{4}I \le 0, \\ \bar{\sigma}_{4}^{ - 1}\bar{L}^{2}I - b_{4}I \le 0, \end{array}\displaystyle \right . \end{aligned}$$
(58)
$$\begin{aligned}& \sigma = \bar{\alpha} ( \delta - \tau ) - ( a_{4} + b_{4} ) ( \omega - \delta ) > 0, \end{aligned}$$
(59)

in which \(\bar{L} = \max_{1 \le i \le n} ( L_{i} )\), \(\bar{\alpha} > 0\) is the unique solution to \(a_{3} - \bar{\alpha} - b_{3}e^{\bar{\alpha} \tau} = 0\). Then the systems (5) and (6) realize exponential anti-synchronization.

In the following, the anti-synchronization problem of system (5) and system (6) under a generalized intermittent sampled-data controller is discussed. The controller is given by

$$ \begin{aligned}[b] &\bar{v}_{i} ( t,x ) = \xi_{i} ( t )y_{i} ( t,\bar{x}_{j} ) + \eta_{i} ( t )y_{i} ( t - \tau,\bar{x}_{j} ), \\ &\quad \xi_{i} ( t ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - \xi_{i},&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . \\ &\quad \eta_{i} ( t ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - \eta_{i},&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . \end{aligned} $$
(60)

in which \(\bar{x}_{j} = \frac{x_{j} + x_{j + 1}}{2}\), \(x \in [ x_{j},x_{j + 1} )\), \(j = 0,\ldots,N - 1\), \(i = 1,\ldots,n\), \(\omega > 0\) is the control period, \(\delta > 0\) (\(\delta < \omega\)) is the control width (control duration), and \(\xi = \operatorname {diag}( \xi_{1},\xi_{2},\ldots,\xi_{n} ) > 0\) and \(\eta = \operatorname {diag}( \eta_{1},\eta_{2},\ldots,\eta_{n} ) > 0\) are the control gain matrices.

By applying the relation \(y ( \cdot,\bar{x}_{j} ) = y ( \cdot,x ) - \int_{\bar{x}_{j}}^{x} y_{s} ( \cdot,s ) \,ds \), (60) can be represented as

$$\begin{aligned}& \bar{v} ( t,x ) = - \xi ( t )y ( t,x ) + \xi ( t ) \int_{\bar{x}_{j}}^{x} y_{s} ( t,s ) \,ds - \eta ( t )y ( t - \tau,x ) + \eta ( t ) \int_{\bar{x}_{j}}^{x} y_{s} ( t - \tau,s ) \,ds, \\& \quad \xi_{i} ( t ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - \xi_{i},&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . \\& \quad \eta_{i} ( t ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - \eta_{i},&m\omega \le t \le m\omega + \delta, \\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . \end{aligned}$$
(61)

where \(\xi ( t ) = \operatorname {diag}( \xi_{1} ( t ), \xi_{2} ( t ),\ldots,\xi_{n} ( t ) )\), \(\eta ( t ) = \operatorname {diag}( \eta_{1} ( t ), \eta_{2} ( t ),\ldots,\eta_{n} ( t ) )\).
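Continuing the sketch given after (10), the generalized law (60)-(61) only adds a delayed feedback channel. A minimal illustration (the gains and the way the delayed samples are obtained are assumptions for illustration) is:

```python
import numpy as np

def generalized_intermittent_control(t, y_now, y_delayed, x, x_nodes,
                                     xi, eta, omega, delta):
    """Evaluate (60): xi_i(t) * y_i(t, xbar_j) + eta_i(t) * y_i(t - tau, xbar_j) on a grid."""
    if (t % omega) > delta:                        # controller switched off on (m*omega + delta, (m+1)*omega]
        return np.zeros_like(y_now)
    j = np.clip(np.searchsorted(x_nodes, x, side="right") - 1, 0, len(x_nodes) - 2)
    xbar = 0.5 * (x_nodes[j] + x_nodes[j + 1])     # sampling midpoints xbar_j
    y_s = np.interp(xbar, x, y_now)                # y_i(t, xbar_j)
    y_tau_s = np.interp(xbar, x, y_delayed)        # y_i(t - tau, xbar_j)
    return -xi * y_s - eta * y_tau_s
```

Here y_delayed would be read from the same kind of history buffer as in the simulation sketch given after (4).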

Under the control (61), for \(t>0\), \(y(t,x)\) satisfies the following:

$$\begin{aligned}& \frac{\partial y_{i} ( t,x )}{\partial t} \in \frac{\partial}{\partial x} \biggl( D_{i} \frac{\partial y_{i} ( t,x )}{\partial x} \biggr) - \bigl\{ \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \bigr\} - \xi_{i}y_{i} ( t,x ) \\& \hphantom{\frac{\partial y_{i} ( t,x )}{\partial t} \in}{} - \eta_{i}y_{i} ( t - \tau,x ) + \sum _{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\} \\& \hphantom{\frac{\partial y_{i} ( t,x )}{\partial t} \in}{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr)\left . + \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\} \right . \\& \hphantom{\frac{\partial y_{i} ( t,x )}{\partial t} \in}{}+ \xi_{i} \int_{\bar{x}_{j}}^{x} y_{is} ( t,s ) \,ds + \eta_{i} \int_{\bar{x}_{j}}^{x} y_{is} ( t - \tau,s ) \,ds, \\& \hphantom{\frac{\partial y_{i} ( t,x )}{\partial t} \in} ( t,x ) \in [m\omega,m\omega + \delta ] \times \Omega, \end{aligned}$$
(62)
$$\begin{aligned}& \begin{aligned}[b] \frac{\partial y ( t,x )}{\partial t} &\in \frac{\partial}{\partial x} \biggl( D_{i}\frac{\partial y ( t,x )}{\partial x} \biggr) - \bigl\{ \mathit {co}\bigl[ d_{i} ( u_{i} ) \bigr]u_{i} ( t,x ) + \mathit {co}\bigl[ d_{i} ( \tilde{u}_{i} ) \bigr]\tilde{u}_{i} ( t,x ) \bigr\} \\ &\quad{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ a_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t,x ) \bigr) + \mathit {co}\bigl[ a_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \bigr\} \\ &\quad{}+ \sum_{j = 1}^{n} \bigl\{ \mathit {co}\bigl[ b_{ij} ( u_{j} ) \bigr]g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) \\ &\quad{} + \mathit {co}\bigl[ b_{ij} ( \tilde{u}_{j} ) \bigr]g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) \bigr\} ,\quad ( t,x ) \in \bigl( m\omega + \delta,(m + 1)\omega \bigr] \times \Omega. \end{aligned} \end{aligned}$$
(63)

The mixed boundary conditions of the error system (62)-(63) are of the form

$$ y_{ix} ( t,0 ) = 0,\quad\quad y_{i} ( t,d ) = 0, \quad t \in ( - \tau, + \infty ). $$
(64)

Theorem 2

Under the assumption (A1), assume that there exist positive definite matrices P, Q, η̄ and positive scalars ζ, γ, \(\sigma_{5}\), \(\sigma_{6}\), \(a_{5}\), \(b_{5}\) such that the following conditions hold:

$$\begin{aligned}& \Xi = \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} \Pi_{1} & * & * & * & * & * \\ - \eta & \Pi_{2} & * & * & * & * \\ 0 & 0 & \Pi_{3} & * & * & * \\ 0 & 0 & 0 & \Pi_{4} & * & * \\ \bar{A}^{T}P & 0 & 0 & 0 & - \sigma_{5}^{ - 1}I & * \\ \bar{B}^{T}P & 0 & 0 & 0 & 0 & - \sigma_{6}^{ - 1}I \end{array}\displaystyle \right) < 0, \end{aligned}$$
(65)
$$\begin{aligned}& \left( \textstyle\begin{array}{c@{\quad}c@{\quad}c} - 2P\tilde{D} + Q_{1} + \sigma_{5}^{ - 1}\bar{L}^{2}I + Q - a_{5}P & * & * \\ \bar{A}^{T}P & - \sigma_{5}^{ - 1}I & * \\ \bar{B}^{T}P & 0 & - \sigma_{6}^{ - 1}I \end{array}\displaystyle \right) \le 0, \end{aligned}$$
(66)
$$\begin{aligned}& \sigma_{6}^{ - 1}\bar{L}^{2}I - Q - b_{5}P \le 0, \end{aligned}$$
(67)
$$\begin{aligned}& \bar{\eta} - 2P\mathcal{D} < 0, \end{aligned}$$
(68)
$$\begin{aligned}& \bar{\sigma} = 2\theta ( \delta - \tau ) + ( a_{5} + b_{5} ) ( \omega - \delta ), \end{aligned}$$
(69)

where \(\Pi_{1} = - 2P\tilde{D} + \sigma_{5}^{ - 1}\bar{L}^{2}I - 2P\xi + \zeta \frac{\Delta}{\pi} P\xi + \gamma \frac{\Delta}{\pi} P\eta + Q + 2\theta P\), \(\Pi_{2} = \sigma_{6}^{ - 1}\bar{L}^{2}I - e^{ - 2\theta \tau} Q\), \(\Pi_{3} = \zeta^{ - 1}\frac{\Delta}{\pi} P\xi + \bar{\eta} - 2P\mathcal{D}\), \(\Pi_{4} = \gamma^{ - 1}\frac{\Delta}{\pi} P\eta - e^{ - 2\theta \tau} \bar{\eta}\), \(\theta > 0\). Then the response system (6) and drive system (5) can be exponentially anti-synchronized under the intermittent sampled feedback controller (61).

Proof

We consider another Lyapunov functional,

$$\begin{aligned} V ( t ) &= \int_{\Omega} y ( t,x )^{T}Py ( t,x )\,dx \\ &\quad{} + \int_{\Omega} \int_{t - \tau}^{t} e^{2\theta ( s - t )} \biggl[ y ( s,x )^{T}Qy ( s,x ) + \frac{\partial y ( s,x )^{T}}{\partial x}\bar{\eta} \frac{\partial y ( s,x )}{\partial x} \biggr] \,ds\,dx. \end{aligned} $$

For \(t \in [ m\omega,m\omega + \delta ]\), calculating the derivative of \(V ( t )\) along the trajectory of (62) and combining it with Lemma 5, one obtains

$$\begin{aligned}& \dot{V} ( t ) + 2\theta V ( t ) \\& \quad \le \int_{\Omega} \biggl\{ 2y ( t,x )^{T}P \biggl[ \frac{\partial}{\partial x} \biggl( \mathcal{D}\frac{\partial y ( t,x )}{\partial x} \biggr) - \tilde{D}y ( t,x ) + \tilde{A}\tilde{g} \bigl( y ( t,x ) \bigr) \\& \quad\quad {} + \tilde{B}\tilde{g} \bigl( y ( t - \tau,x ) \bigr) - \xi y ( t,x ) - \eta y ( t - \tau,x ) \biggr] \biggr\} \,dx \\& \quad\quad {}+ 2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}P\xi \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx \\& \quad\quad {}+ 2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}P\eta \bigl[ y ( t - \tau,x ) - y ( t - \tau,\bar{x}_{j} ) \bigr] \,dx \\& \quad\quad {}+ \int_{\Omega} \biggl[ y ( t,x )^{T}Qy ( t,x ) + \frac{\partial y ( t,x )^{T}}{\partial x}\bar{\eta} \frac{\partial y ( t,x )}{\partial x} \biggr]\,dx \\& \quad\quad {} - \int_{\Omega} e^{ - 2\theta \tau} \biggl[ y ( t - \tau,x )^{T}Qy ( t - \tau,x ) + \frac{\partial y ( t - \tau,x )^{T}}{\partial x}\bar{\eta} \frac{\partial y ( t - \tau,x )}{\partial x} \biggr]\,dx \\& \quad\quad {}+ 2\theta \int_{\Omega} y ( t,x )^{T}Py ( t,x )\,dx. \end{aligned}$$
(70)

For any scalar \(\zeta_{1} > 0\), we get the following inequality through Young’s inequality:

$$ \begin{aligned}[b] &2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}P\xi \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \zeta_{1} \int_{\Omega} y ( t,x )^{T}P\xi y ( t,x ) \,dx \\ &\quad\quad{} + \zeta_{1}^{ - 1}\sum _{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T} P\xi \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]\,dx. \end{aligned} $$
(71)

From Lemma 1, we see that

$$ \begin{aligned}[b] &\int_{x_{j}}^{x_{j + 1}} \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr]^{T}P\xi \bigl[ y ( t,x ) - y ( t, \bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \frac{\Delta^{2}}{\pi^{2}} \int_{x_{j}}^{x_{j + 1}} \frac{\partial y ( t,x )^{T}}{\partial x}P\xi \frac{\partial y ( t,x )}{\partial x} \,dx. \end{aligned} $$
(72)

Choosing next \(\zeta_{1} = \frac{\Delta}{\pi} \zeta\), we obtain from (71) and (72)

$$ \begin{aligned}[b] &2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}P\xi \bigl[ y ( t,x ) - y ( t,\bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \zeta^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}P\xi \frac{\partial y ( t,x )}{\partial x} \,dx + \zeta \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}P\xi y ( t,x ) \,dx. \end{aligned} $$
(73)

Similar to (71)-(73), we know

$$ \begin{aligned}[b] &2\sum_{j = 0}^{N - 1} \int_{x_{j}}^{x_{j + 1}} y ( t,x )^{T}P\eta \bigl[ y ( t - \tau,x ) - y ( t - \tau,\bar{x}_{j} ) \bigr] \,dx \\ &\quad \le \gamma^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t - \tau,x )^{T}}{\partial x}P\eta \frac{\partial y ( t - \tau,x )}{\partial x} \,dx + \gamma \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}P\eta y ( t,x ) \,dx. \end{aligned} $$
(74)

Thus, we can find from (30), (70), (73)-(74) and Lemma 4 that

$$\begin{aligned}& \dot{V} ( t ) + 2\theta V ( t ) \\& \quad \le \int_{\Omega} \biggl[ - 2\frac{\partial y ( t,x )^{T}}{\partial x}P\mathcal{D} \frac{\partial y ( t,x )}{\partial x} - 2y ( t,x )^{T}P\tilde{D}y ( t,x ) \\& \quad\quad{} + \sigma_{5}y ( t,x )^{T}P\tilde{A} \tilde{A}^{T}Py ( t,x ) + \sigma_{5}^{ - 1}\tilde{g} \bigl( y ( t,x ) \bigr)^{T}\tilde{g} \bigl( y ( t,x ) \bigr) \\& \quad\quad{}+ \sigma_{6}y ( t,x )^{T}P\tilde{B} \tilde{B}^{T}Py ( t,x ) \\& \quad\quad{}+ \sigma_{6}^{ - 1}\tilde{g} \bigl( y \bigl( t - \tau ( t ),x \bigr) \bigr)^{T}\tilde{g} \bigl( y \bigl( t - \tau ( t ),x \bigr) \bigr) - 2y ( t,x )^{T}P\xi y ( t,x ) \\& \quad\quad{} - 2y ( t,x )^{T}P\eta y ( t - \tau,x ) \biggr]\,dx + \zeta^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t,x )^{T}}{\partial x}P\xi \frac{\partial y ( t,x )}{\partial x} \,dx \\& \quad\quad{} + \zeta \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}P\xi y ( t,x ) \,dx + \gamma^{ - 1}\frac{\Delta}{\pi} \int_{\Omega} \frac{\partial y ( t - \tau,x )^{T}}{\partial x}P\eta \frac{\partial y ( t - \tau,x )}{\partial x} \,dx \\& \quad\quad{}+ \gamma \frac{\Delta}{\pi} \int_{\Omega} y ( t,x )^{T}P\eta y ( t,x ) \,dx \\& \quad\quad{}+ \int_{\Omega} \biggl[ y ( t,x )^{T}Qy ( t,x ) + \frac{\partial y ( t,x )^{T}}{\partial x}\bar{\eta} \frac{\partial y ( t,x )}{\partial x} \biggr]\,dx \\& \quad\quad{}- \int_{\Omega} e^{ - 2\theta \tau} \biggl[ y ( t - \tau,x )^{T}Qy ( t - \tau,x ) + \frac{\partial y ( t - \tau,x )^{T}}{\partial x}\bar{\eta} \frac{\partial y ( t - \tau,x )}{\partial x} \biggr]\,dx \\& \quad\quad{} + 2\theta \int_{\Omega} y ( t,x )^{T}Py ( t,x )\,dx. \end{aligned}$$
(75)

Using (29), (65)-(68), and (75), we obtain

$$\begin{aligned}& \dot{V} ( t ) + 2\theta V ( t ) \\& \quad \le \int_{\Omega} \biggl[ y ( t,x )^{T} \biggl( - 2P\tilde{D} + \sigma_{5}P\bar{A}\bar{A}^{T}P + \sigma_{5}^{ - 1} \bar{L}^{2}I + \sigma_{6}P\bar{B}\bar{B}^{T}P \\& \quad\quad{} - 2P\xi + \zeta \frac{\Delta}{\pi} P\xi + \gamma \frac{\Delta}{ \pi} P\eta + Q + 2\theta P \biggr)y ( t,x ) - 2y ( t,x )^{T}P\eta y ( t - \tau,x ) \\& \quad\quad{} + y ( t - \tau,x )^{T} \bigl( \sigma_{6}^{ - 1}\bar{L}^{2}I - e^{ - 2\theta \tau} Q \bigr)y ( t - \tau,x ) \biggr]\,dx \\& \quad\quad{} + \int_{\Omega} y_{x} ( t,x )^{T} \biggl( \zeta^{ - 1}\frac{\Delta}{ \pi} P\xi + \bar{\eta} - 2P\mathcal{D} \biggr)y_{x} ( t,x ) \,dx \\& \quad\quad{} + \int_{\Omega} y_{x} ( t - \tau,x )^{T} \biggl( \gamma^{ - 1}\frac{\Delta}{\pi} P\eta - e^{ - 2\theta \tau} \bar{\eta} \biggr)y_{x} ( t - \tau,x ) \,dx \\& \quad = \int_{\Omega} \varpi ( t,x )^{T}\Xi \varpi ( t,x ) \,dx \le 0, \end{aligned}$$
(76)

where \(\varpi ( t,x ) = ( y ( t,x )^{T}, y ( t - \tau,x )^{T}, y_{x} ( t,x )^{T}, y_{x} ( t - \tau,x )^{T} )^{T}\) is a column vector, which implies that

$$ V ( t ) \le V ( m\omega )e^{ - 2\theta ( t - m\omega )},\quad t \in [ m \omega,m\omega + \delta ]. $$
(77)

Thus, we have

$$ V ( t ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - 2\theta t},\quad t \in [ 0,\delta ]. $$
(78)

When \(t \in ( m\omega + \delta,(m + 1)\omega ]\), we see from the Schur complement, (23)-(25), and (28) that

$$\begin{aligned} \dot{V} ( t ) \le& \int_{\Omega} \bigl[ y ( t,x )^{T} \bigl( - 2P\tilde{D} + \sigma_{5}P\bar{A}\bar{A}^{T}P + \sigma_{5}^{ - 1} \bar{L}^{T}\bar{L} + \sigma_{6}P\bar{B}\bar{B}^{T}P + Q \bigr)y ( t,x ) \\ &{} + y ( t - \tau,x )^{T} \bigl( \sigma_{6}^{ - 1} \bar{L}^{T}\bar{L} - Q \bigr)y ( t - \tau,x ) \bigr]\,dx + \int_{\Omega} y_{x} ( t,x )^{T} \bigl( \gamma^{ *} \bar{\eta} - 2P\mathcal{D} \bigr)y_{x} \,dx \\ &{} + \int_{\Omega} y_{x} ( t - \tau,x )^{T} \bigl( - \gamma^{ *} \bar{\eta} \bigr)y_{x} ( t - \tau,x ) \,dx \\ \le& a_{5}V ( t ) + b_{5}V ( t - \tau ). \end{aligned}$$
(79)

Thus, applying Lemma 3, we get

$$ V ( t ) \le \bigl\Vert V ( m\omega + \delta ) \bigr\Vert _{\tau} e^{ ( a_{5} + b_{5} ) ( t - m\omega - \delta )},\quad m\omega + \delta \le t \le ( m + 1 )\omega. $$
(80)

In particular, taking \(m = 0\), for \(\delta \le t \le \omega \) we have

$$ V ( t ) \le \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} e^{ ( a_{5} + b_{5} ) ( t - \delta )},\quad \delta \le t \le \omega. $$
(81)

From (78), we obtain

$$ \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} = \sup _{\delta - \tau \le t \le \delta} \bigl\Vert V ( t ) \bigr\Vert \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - 2\theta ( \delta - \tau )}. $$
(82)

Combining this inequality with (81), for \(\delta \le t \le \omega \) we obtain

$$ V ( t ) \le \bigl\Vert V ( \delta ) \bigr\Vert _{\tau} e^{ ( a_{5} + b_{5} ) ( t - \delta )} \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - 2\theta ( \delta - \tau )}e^{ ( a_{5} + b_{5} ) ( t - \delta )}. $$
(83)

Hence, we have

$$ V ( \omega ) \le \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - 2\theta ( \delta - \tau ) + ( a_{5} + b_{5} ) ( \omega - \delta )} = \bigl\Vert V ( 0 ) \bigr\Vert _{\tau} e^{ - \bar{\sigma}}, $$
(84)

where \(\bar{\sigma} = 2\theta ( \delta - \tau ) - ( a_{5} + b_{5} ) ( \omega - \delta )\). The remainder of the proof proceeds as in Theorem 1. Consequently, the drive system (5) and the response system (6) achieve exponential anti-synchronization under the intermittent sampled-data feedback controller (61). □

Remark 2

In Theorems 1 and 2, sufficient anti-synchronization criteria are derived for controllers that are sampled in space and continuous in time. There are as yet few results on anti-synchronization schemes for distributed parameter MNNs using intermittent sampled-data control, which motivated this work; to the best of our knowledge, this is the first time that anti-synchronization criteria for delayed MDPNNs are established via intermittent sampled-data control, so the results effectively complement the existing literature. Moreover, the sampled-data control is generalized here to a more realistic MDPNN model, and the treatment of the reaction-diffusion terms differs from previous work because of the mixed boundary conditions. An interesting and important open problem is whether similar results can be obtained for MDPNNs with purely Neumann or purely Dirichlet boundary conditions.

Remark 3

As a versatile and important tool, the LMI framework has been widely used in many fields, such as control systems, NNs, and their applications. In this paper, the anti-synchronization criteria obtained for a class of delayed MDPNNs are expressed and solved in LMI form. LMI-based analysis has several advantages for the stability and synchronization of NNs. First, LMI-based criteria take into account the signs of the entries of the connection matrices, so the excitatory and inhibitory effects of the neurons are captured; this overcomes a shortcoming of results obtained via other methods, such as the M-matrix or algebraic inequality approaches. Second, stability analysis based on Lyapunov theory can be cast as an optimization problem with multiple matrix variables and constraints, for which LMI techniques are currently among the most effective tools. Finally, LMI-based anti-synchronization criteria can be solved directly with the LMI Toolbox of MATLAB.
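As an illustration of how such LMI feasibility problems can be checked numerically, the following minimal sketch uses Python with CVXPY instead of the MATLAB LMI Toolbox; the matrix A and the Lyapunov-type inequality below are schematic placeholders for illustration only, not the criteria of this paper.

```python
import numpy as np
import cvxpy as cp

n = 2
# Hypothetical Hurwitz system matrix used only to make the toy LMI feasible.
A = np.array([[-2.0, 0.5],
              [-0.7, -1.5]])

P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix variable
Q = cp.Variable((n, n), symmetric=True)   # additional matrix variable
eps = 1e-6

constraints = [
    P >> eps * np.eye(n),                 # P positive definite
    Q >> eps * np.eye(n),                 # Q positive definite
    A.T @ P + P @ A + Q << 0,             # schematic Lyapunov-type LMI
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print(prob.status)                        # 'optimal' means the LMIs are feasible
```

If the problem is feasible, `P.value` and `Q.value` return matrices satisfying the inequalities, in the same spirit as the feasibility solver of the MATLAB LMI Toolbox.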

4 An illustrative example

In this section, a numerical example is presented to demonstrate the effectiveness of the obtained results.

Example 1

Consider the following memristor-based distributed parameter NNs with time delays:

$$ \begin{aligned}[b] \frac{\partial u_{i} ( t,x )}{\partial t} & = D_{i} \frac{\partial^{2}u_{i} ( t,x )}{\partial x^{2}} - d_{i} ( u_{i} )u_{i} ( t,x ) + \sum_{j = 1}^{2} a_{ij} ( u_{i} )g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &\quad{} + \sum_{j = 1}^{2} b_{ij} ( u_{i} )g_{j} \bigl( u_{j} ( t - \tau_{j},x ) \bigr) + J_{i},\quad i = 1,2, \end{aligned} $$
(85)

where \(x \in \Omega = \{ x \mid 0 \le x \le 2 \}\), \(t \ge 0\), \(g_{j} ( \alpha ) = \tanh ( \alpha )\) \(( j = 1,2 )\), \(D_{1} = D_{2} = 1\), \(\tau_{j} = 0.1\), and

$$\begin{aligned}& d_{1} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} 1.5,&\vert z \vert \le 1, \\ 1.3,&\vert z \vert > 1, \end{array}\displaystyle \right . \quad\quad d_{2} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} 0.8,&\vert z \vert \le 1, \\ 0.5,&\vert z \vert > 1, \end{array}\displaystyle \right . \quad\quad a_{11} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} 0.3,&\vert z \vert \le 1, \\ 0.5,&\vert z \vert > 1, \end{array}\displaystyle \right . \\& a_{12} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - 0.3,&\vert z \vert \le 1, \\ - 0.6,&\vert z \vert > 1, \end{array}\displaystyle \right . \quad\quad a_{21} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - 0.7,&\vert z \vert \le 1, \\ - 1.6,&\vert z \vert > 1, \end{array}\displaystyle \right . \\& a_{22} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} 0.2,&\vert z \vert \le 1, \\ 0.6,&\vert z \vert > 1, \end{array}\displaystyle \right . \quad\quad b_{11} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - 1.3,&\vert z \vert \le 1, \\ - 0.5,&\vert z \vert > 1, \end{array}\displaystyle \right . \\& b_{12} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - 0.5,&\vert z \vert \le 1, \\ 0.2,&\vert z \vert > 1, \end{array}\displaystyle \right . \quad\quad b_{21} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} 0.3,&\vert z \vert \le 1, \\ 0.8,&\vert z \vert > 1, \end{array}\displaystyle \right . \\& b_{22} ( z ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - 3,&\vert z \vert \le 1, \\ - 0.9,&\vert z \vert > 1, \end{array}\displaystyle \right .\quad \mbox{here } z \in R. \end{aligned}$$

Obviously, \(g_{j} ( \cdot )\) satisfies (A1), and it can be verified that \(\bar{L} = 1\). The numerical simulations of model (85) with constant initial conditions \(u_{1} ( t,x ) = - 1\), \(u_{2} ( t,x ) = 1.2\) and the mixed boundary conditions are shown in Figures 1 and 2.

Figure 1: The state surface of \(u_{1} ( t,x )\) in system (85).

Figure 2: The state surface of \(u_{2} ( t,x )\) in system (85).
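Since the memristive coefficients of system (85) are simple piecewise-constant functions of the state, they translate directly into code. The following minimal Python transcription (the helper name `memristive_coeff` and the container layout are our own choices, not the paper's) can be reused in a simulation script:

```python
import numpy as np

def memristive_coeff(z, inside, outside):
    """Piecewise-constant memristive weight: `inside` for |z| <= 1, `outside` for |z| > 1."""
    return inside if abs(z) <= 1.0 else outside

D = [1.0, 1.0]      # diffusion coefficients D_1 = D_2 = 1
tau = 0.1           # delays tau_j = 0.1
g = np.tanh         # activation g_j(alpha) = tanh(alpha), so L_bar = 1

# State-dependent coefficients of system (85): (value for |z| <= 1, value for |z| > 1).
d = [lambda z: memristive_coeff(z, 1.5, 1.3),
     lambda z: memristive_coeff(z, 0.8, 0.5)]
a = [[lambda z: memristive_coeff(z, 0.3, 0.5),   lambda z: memristive_coeff(z, -0.3, -0.6)],
     [lambda z: memristive_coeff(z, -0.7, -1.6), lambda z: memristive_coeff(z, 0.2, 0.6)]]
b = [[lambda z: memristive_coeff(z, -1.3, -0.5), lambda z: memristive_coeff(z, -0.5, 0.2)],
     [lambda z: memristive_coeff(z, 0.3, 0.8),   lambda z: memristive_coeff(z, -3.0, -0.9)]]
```

These functions evaluate the connection weights pointwise from the current state, which is all that is needed for the finite-difference simulation described in Remark 4.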

Next, we consider the anti-synchronization of the drive system (85) and the response system (86) described by

$$ \begin{aligned}[b] \frac{\partial \tilde{u}_{i} ( t,x )}{\partial t} &= D_{i} \frac{\partial^{2}\tilde{u}_{i} ( t,x )}{\partial x^{2}} - d_{i} ( \tilde{u}_{i} ) \tilde{u}_{i} ( t,x ) + \sum_{j = 1}^{2} a_{ij} ( \tilde{u}_{i} )g_{j} \bigl( \tilde{u}_{j} ( t,x ) \bigr) \\ &\quad{} + \sum_{j = 1}^{2} b_{ij} ( \tilde{u}_{i} )g_{j} \bigl( \tilde{u}_{j} ( t - \tau_{j},x ) \bigr) + J_{i} + v_{i}, \end{aligned} $$
(86)

with initial conditions \(\tilde{u}_{1} ( t,x ) = - 2\), \(\tilde{u}_{2} ( t,x ) = 1\), and the mixed boundary conditions, where the intermittent sampled-data control is devised as follows:

$$v_{i} ( t,x ) = K_{i} ( t )y_{i} ( t, \bar{x}_{j} ), \quad K_{i} ( t ) = \left\{ \textstyle\begin{array}{l@{\quad}l} - k_{i},&m\omega \le t \le m\omega + \delta,\\ 0,&m\omega + \delta < t \le (m + 1)\omega, \end{array}\displaystyle \right . $$

\(m = 0,1,2, \ldots\). Here the parameters \(D_{i}\), \(d_{i} ( \tilde{u}_{i} )\), \(a_{ij} ( \tilde{u}_{i} )\), \(b_{ij} ( \tilde{u}_{i} )\), \(\tau_{j}\) and the activation functions \(g_{j} ( \cdot )\) are the same as those defined in system (85). Set \(k_{1} = 0.8\), \(k_{2} = 2.4\), \(\Delta = \frac{\pi}{4}\), the control period \(\omega = 1\), and the control width \(\delta = 0.7\). Select \(\varepsilon = \sigma_{3} = \sigma_{4} = \bar{\sigma}_{3} = \bar{\sigma}_{4} = 1\), \(a_{3} = 1.43\), \(b_{3} = 1.21\), \(a_{4} = 3.28\), \(b_{4} = 1.15\); then the conditions in Corollary 1 are satisfied.
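To illustrate how this intermittent sampled-data law can be evaluated on a spatial grid, the following sketch switches the gain on during \([m\omega, m\omega + \delta]\) and off for the remainder of each period, and samples the error at points \(\bar{x}_{j}\) taken here, as an assumption, to be the midpoints of the spatial sampling intervals of width \(\Delta\):

```python
import numpy as np

def intermittent_gain(t, k, omega=1.0, delta=0.7):
    """K_i(t): equals -k_i on [m*omega, m*omega + delta] and 0 on the rest of each period."""
    return -k if (t % omega) <= delta else 0.0

def control_input(t, y, x_grid, k, Delta=np.pi / 4, omega=1.0, delta=0.7):
    """v_i(t, x) = K_i(t) * y_i(t, xbar_j): every grid point x uses the error sampled at the
    point xbar_j of its spatial interval [j*Delta, (j+1)*Delta) (midpoints assumed here)."""
    j = np.floor(x_grid / Delta).astype(int)          # index of the sampling interval of each x
    xbar = (j + 0.5) * Delta                          # assumed spatial sampling points xbar_j
    idx = np.argmin(np.abs(x_grid[None, :] - xbar[:, None]), axis=1)  # nearest grid node to xbar_j
    return intermittent_gain(t, k, omega, delta) * y[idx]
```

In a full simulation, `control_input` is added to the right-hand side of the response system for each component \(i\) (with \(k = k_{i}\)) at every time step.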

Let \(e_{i} ( t,x ) = \tilde{u}_{i} ( t,x ) + u_{i} ( t,x )\), \(i = 1,2\). The simulation results of the state variables \(u_{i} ( t,x )\) and the errors \(e_{i} ( t,x )\) are given in Figures 1 to 6, where Figures 1 and 2 show the state surfaces of \(u_{1} ( t,x )\) and \(u_{2} ( t,x )\) in system (85), respectively. Figures 3 and 4 exhibit the anti-synchronization errors \(e_{1} ( t,x )\) and \(e_{2} ( t,x )\), respectively. Figures 5 and 6 depict the anti-synchronization errors \(e_{1} ( t,x )\) and \(e_{2} ( t,x )\) when \(x = 0.8\), respectively.

Figure 3: Dynamical behavior of the error \(e_{1} ( t,x )\).

Figure 4: Dynamical behavior of the error \(e_{2} ( t,x )\).

Figure 5: Dynamical behavior of the error \(e_{1} ( t,x )\) when \(x = 0.8\).

Figure 6: Dynamical behavior of the error \(e_{2} ( t,x )\) when \(x = 0.8\).

According to Corollary 1, the drive system (85) and the response system (86) are exponentially anti-synchronized, as shown in Figures 3 and 4. The numerical simulations clearly verify the effectiveness of the developed intermittent sampled-data feedback control approach for the exponential anti-synchronization of chaotic delayed MDPNNs with mixed boundary conditions.

Remark 4

The purpose of the numerical example is to show that the proposed intermittent sampled-data controller can be applied to the anti-synchronization of MDPNNs with time delays, and to illustrate the anti-synchronization results established above. The simulations are performed in MATLAB: the partial differential equations are solved by a classical implicit finite-difference scheme, and the time delays are handled by the method of steps for differential-difference equations, as sketched below.
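A minimal, self-contained sketch of such a scheme is given below: the diffusion term is treated implicitly (backward Euler), the reaction and delayed terms explicitly, and the delay is handled by a buffer of past states in the spirit of the method of steps. The grid sizes, the zero-flux boundary treatment, and the callback signature `f(t, u, u_delayed)` are our simplifying assumptions, not the paper's exact setup (which uses mixed boundary conditions).

```python
import numpy as np

def simulate_semi_implicit(f, u0, D, tau, L=2.0, Nx=81, T=10.0, dt=0.005):
    """Semi-implicit finite-difference scheme for a delayed reaction-diffusion system.

    f(t, u, u_del) returns the reaction and delayed terms with shape (n, Nx);
    u0(x) returns the initial profile with shape (n, Nx), used as constant history on [-tau, 0];
    D is the list of diffusion coefficients; tau is the delay."""
    x = np.linspace(0.0, L, Nx)
    dx = x[1] - x[0]
    n = len(D)
    lag = int(round(tau / dt))                 # number of stored past time steps
    # 1-D Laplacian with zero-flux (Neumann) ends, used as a simplifying assumption
    main = np.full(Nx, -2.0)
    main[0] = main[-1] = -1.0
    Lap = (np.diag(main) + np.diag(np.ones(Nx - 1), 1) + np.diag(np.ones(Nx - 1), -1)) / dx**2
    A_imp = [np.eye(Nx) - dt * D[i] * Lap for i in range(n)]   # implicit diffusion operators
    hist = [u0(x).copy() for _ in range(lag + 1)]              # constant history on [-tau, 0]
    u = hist[-1].copy()
    traj = [u.copy()]
    for step in range(int(round(T / dt))):
        u_del = hist[0]                        # approximation of u(t - tau, x)
        rhs = u + dt * f(step * dt, u, u_del)  # explicit reaction + delayed terms
        u = np.stack([np.linalg.solve(A_imp[i], rhs[i]) for i in range(n)])
        hist.pop(0)
        hist.append(u.copy())
        traj.append(u.copy())
    return x, np.array(traj)
```

For the drive system (85), the callback `f` would assemble \(-d_{i}(u_{i})u_{i} + \sum_{j} a_{ij}(u_{i})g_{j}(u_{j}) + \sum_{j} b_{ij}(u_{i})g_{j}(u_{j}(t - \tau_{j},x)) + J_{i}\) pointwise on the grid, for instance from the coefficient functions transcribed after Figure 2; the response system (86) additionally includes the control input \(v_{i}\) from the sketch above.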

5 Conclusions

In this paper, we have developed an anti-synchronization method for a class of drive and response MDPNNs with time delays. An appropriate Lyapunov-Krasovskii functional is constructed to derive sufficient conditions, in terms of LMIs and under mild restrictions, for designing intermittent sampled-data control as an anti-synchronization law. The intermittent sampled-data controllers are built from the available sampled information and guarantee that the controlled response system anti-synchronizes with the drive system regardless of their initial states. We assume only that the spatial sampling intervals are bounded. All the developed results are expressed in terms of LMIs and tested on an illustrative example to demonstrate the feasibility and applicability of the proposed method.

References

  1. Hopfield, JJ: Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. 79(8), 2554-2558 (1982)
  2. Amit, DJ: Modeling Brain Function: The World of Attractor Neural Networks. Cambridge University Press, Cambridge (1992)
  3. Ackley, DH, Hinton, GE, Sejnowski, TJ: A learning algorithm for Boltzmann machines. Cogn. Sci. 9(1), 147-169 (1985)
  4. Bengio, Y: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1-127 (2009)
  5. Zhang, Z, Cao, J, Zhou, D: Novel LMI-based condition on global asymptotic stability for a class of Cohen-Grossberg BAM networks. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1161-1172 (2014)
  6. Lakshmanan, S, Park, JH, Jung, HY, Kwon, OM, Rakkiyappan, R: A delay partitioning approach to delay-dependent stability analysis for neutral type neural networks with discrete and distributed delays. Neurocomputing 111, 81-89 (2013)
  7. Balasubramaniam, P, Vidhya, C: Exponential stability of stochastic reaction-diffusion uncertain fuzzy neural networks with mixed delays and Markovian jumping parameters. Expert Syst. Appl. 39, 3109-3115 (2012)
  8. Rakkiyappan, R, Chandrasekar, A, Cao, J: Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays. IEEE Trans. Neural Netw. Learn. Syst. 26(9), 2043-2057 (2014)
  9. Chua, LO: Memristor - the missing circuit element. IEEE Trans. Circuit Theory 18, 507-519 (1971)
  10. Strukov, DB, Snider, GS, Stewart, DR, Williams, RS: The missing memristor found. Nature 453, 80-83 (2008)
  11. Tour, JM, He, T: Electronics: the fourth element. Nature 453, 42-43 (2008)
  12. Guo, ZY, Wang, J, Yan, Z: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158-172 (2013)
  13. Pecora, LM, Carroll, TL: Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821-824 (1990)
  14. Guo, Z, Wang, J, Yan, Z: Global exponential synchronization of two memristor-based recurrent neural networks with time delays via static or dynamic coupling. IEEE Trans. Syst. Man Cybern. Syst. 45(2), 235-249 (2015)
  15. Ren, F, Cao, J: Anti-synchronization of stochastic perturbed delayed chaotic neural networks. Neural Comput. Appl. 18(5), 515-521 (2009)
  16. Wu, A, Zeng, Z: Anti-synchronization control of a class of memristive recurrent neural networks. Commun. Nonlinear Sci. Numer. Simul. 18(2), 373-385 (2013)
  17. Zhang, G, Shen, Y, Wang, L: Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays. Neural Netw. 36, 1-8 (2013)
  18. Murray, JD: Mathematical Biology. Springer, Berlin (1989)
  19. Wang, JL, Wu, HN, Guo, L: Pinning control of spatially and temporally complex dynamical networks with time-varying delays. Nonlinear Dyn. 70, 1657-1674 (2012)
  20. Wu, H, Zhang, X, Li, R, Yao, R: Adaptive anti-synchronization and H∞ anti-synchronization for memristive neural networks with mixed time delays and reaction-diffusion terms. Neurocomputing 168, 726-740 (2015)
  21. Li, R, Cao, J: Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl. Math. Comput. 278, 54-69 (2016)
  22. Li, J, Zhang, W, Chen, M: Synchronization of delayed reaction-diffusion neural networks via an adaptive learning control approach. Comput. Math. Appl. 65, 1775-1785 (2013)
  23. Wang, J, Wu, H: Synchronization and adaptive control of an array of linearly coupled reaction-diffusion neural networks with hybrid coupling. IEEE Trans. Cybern. 44(8), 1350-1361 (2014)
  24. Lee, TH, Wu, Z, Park, JH: Synchronization of a complex dynamical network with coupling time-varying delays via sampled-data control. Appl. Math. Comput. 219, 1354-1366 (2012)
  25. Khapalov, AY: Continuous observability for parabolic system under observations of discrete type. IEEE Trans. Autom. Control 38(9), 1388-1391 (1993)
  26. Logemann, H, Rebarber, R, Townley, S: Stability of infinite-dimensional sampled-data systems. Trans. Am. Math. Soc. 35, 301-328 (2003)
  27. Logemann, H, Rebarber, R, Townley, S: Generalized sampled-data stabilization of well-posed linear infinite-dimensional systems. SIAM J. Control Optim. 44(4), 1345-1369 (2005)
  28. Fridman, E, Blighovsky, A: Robust sampled-data control of a class of semilinear parabolic systems. Automatica 48, 826-836 (2012)
  29. Zhang, W, Li, J, Xing, K, Ding, C: Synchronization for distributed parameter NNs with mixed delays via sampled-data control. Neurocomputing 175, 265-277 (2016)
  30. Cheng, M, Radisavljevic, V, Chang, C, Lin, C, Su, W: A sampled data singularly perturbed boundary control for a diffusion conduction system with noncollocated observation. IEEE Trans. Autom. Control 54(6), 1305-1310 (2009)
  31. Deissenberg, C: Optimal control of linear econometric models with intermittent controls. Econ. Plan. 16, 49-56 (1980)
  32. Zhang, W, Li, C, Huang, TW, Huang, J: Stability and synchronization of memristor-based coupling neural networks with time-varying delays via intermittent control. Neurocomputing 173, 1066-1072 (2016)
  33. Zhang, GD, Shen, Y: Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control. Neural Netw. 55, 1-10 (2014)
  34. Huang, T, Li, C, Liu, X: Synchronization of chaotic systems with delay using intermittent linear state feedback. Chaos 18, 033122 (2008)
  35. Hardy, GH, Littlewood, JE, Polya, G: Inequalities. Cambridge University Press, Cambridge (1988)
  36. Halanay, A: Differential Equations: Stability, Oscillations, Time Lags. Academic Press, New York (1966)
  37. Pan, L, Cao, J: Stochastic quasi-synchronization for delayed dynamical networks via intermittent control. Commun. Nonlinear Sci. Numer. Simul. 17, 1332-1343 (2012)


Acknowledgements

This work is partially supported by the National Natural Science Foundation of China under Grant No. 61573013, the National Science Foundation for Post-doctoral Scientists of China under Grant No. 2013M540754, and the Natural Science Foundation of Shaanxi Province under Grant No. 2015JM1015.

Author information

Corresponding author

Correspondence to Weiyuan Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

WZ designed and performed all the steps of proof in this research and also wrote the paper. JL and CD participated in the design of the study and suggested many good ideas that made this paper possible and helped to draft the first manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Cite this article

Zhang, W., Li, J. & Ding, C. Anti-synchronization control for delayed memristor-based distributed parameter NNs with mixed boundary conditions. Adv Differ Equ 2016, 320 (2016). https://doi.org/10.1186/s13662-016-1017-x