

Passivity analysis of memristive neural networks with mixed time-varying delays and different state-dependent memductance functions

Abstract

This paper is concerned with the problem of passivity analysis for a class of memristive neural networks with mixed time-varying delays and different state-dependent memductance functions. By employing the theories of differential inclusions and set-valued maps, delay-dependent criteria in terms of linear matrix inequalities are obtained for the passivity of the memristive neural networks. Finally, numerical examples are given to illustrate the feasibility of the theoretical results.

1 Introduction

In 1971, the memristor was predicted by Chua [1] and is considered to be the fourth basic circuit element. The first practical memristor device was realized by Strukov et al. [2] in 2008. A memristor retains its most recent value when the voltage is turned off, and it recovers the retained value when the power is next turned on. This feature makes memristors useful as energy-saving devices that can compete with flash memory and other static memory devices. Some classes of memristors also have nonlinear response characteristics, which makes them doubly suitable as artificial neurons.

In [3], it was demonstrated that electronic synapses and neurons can reproduce important functionalities of their biological counterparts. Recently, the simulation of different kinds of memristors has developed rapidly, and studies of memristive neural networks have attracted increasing attention [3–15]. In [16], Wu and Zeng considered the following memristive neurodynamic system:

$$ \begin{aligned} &\dot{x}_{i}(t)=-x_{i}(t)+\sum _{j=1}^{n}w_{ij} \bigl(x_{i}(t) \bigr)f_{j} \bigl(x_{j}(t) \bigr)+u_{i}(t), \\ &y_{i}(t)=f_{i} \bigl(x_{i}(t) \bigr), \quad i=1,2,\ldots,n, \end{aligned} $$
(1.1)

where \(x_{i}(t)\) represents the voltage of the capacitor \(\mathbf {C}_{i}\) at time t, \(u_{i}(t)\) and \(y_{i}(t)\) denote the external input and output, respectively, \(f_{j}(\cdot)\) is the neuron activation function satisfying \(f_{j}(0)=0\), and \(w_{ij}(x_{i}(t))\) represents the memristor-based weights, called memductance functions. In [16], this class of memristive neural networks was formulated and investigated with two different types of memductance functions, and passivity criteria in terms of linear matrix inequalities were proposed. Based on the derived criteria, some stability criteria were also obtained for the memristive neural networks.

In many biological and artificial neural networks, time delays must be taken into consideration because of the inherent communication time between neurons and the finite speed of information processing. As pointed out in [17–20], time delays may change the dynamic characteristics of neural networks and cause dramatic phenomena such as oscillations, divergences, and so on. Consequently, it is valuable to study passivity analysis for neural networks with time delays, and such problems have become topics of both theoretical and practical importance. By using different approaches, many significant results have been reported, such as the global robust passivity analysis in [17] and the passivity analysis in [18–23]. In order to investigate the influence of time delay on the dynamics of memristive neural networks, Wu and Zeng [10] considered the following system:

$$ \begin{aligned} &\dot{x}_{i}(t)=-x_{i}(t)+\sum _{j=1}^{n}a_{ij} \bigl(x_{i}(t) \bigr)f_{j} \bigl(x_{j}(t) \bigr)+ \sum_{j=1}^{n}b_{ij} \bigl(x_{i}(t) \bigr)f_{j} \bigl(x_{j}(t- \tau_{j}) \bigr)+u_{i}(t), \\ &z_{i}(t)=f_{i} \bigl(x_{i}(t) \bigr)+f_{i} \bigl(x_{i}(t-\tau_{i}) \bigr)+u_{i}(t),\quad t\geq0, i=1,2,\ldots,n, \end{aligned} $$
(1.2)

where \(\tau_{j}\) is the time delay satisfying \(0\leq\tau_{j}\leq \tau\) (\(\tau\geq0\) is a constant), \(u_{i}(t)\) and \(z_{i}(t)\) denote the external input and output, respectively, \(f_{j}(\cdot)\) is the neuron activation function satisfying \(f_{j}(0)=0\), and \(a_{ij}(x_{i}(t))\) and \(b_{ij}(x_{i}(t))\) represent memristor-based weights. Based on the theories of nonsmooth analysis and linear matrix inequalities, and by using a suitable Lyapunov functional, the exponential passivity of memristive neural networks with time delays was studied.

We note that the time delay \(\tau_{j}\) was assumed constant in (1.2). In fact, in many biological and artificial neural networks, the finite speed of information processing and the inherent communication time between neurons vary with the time t, so a time-varying delay is a more reasonable description of the information processing or communication time. Meanwhile, neural networks may have a spatial extent owing to the presence of parallel pathways of different axonal sizes and lengths, i.e., a distribution of conduction velocities along these pathways, which leads to a distribution of propagation delays over a period of time. This gives rise to another kind of time delay considered in this paper, namely distributed time delays. Recently, both discrete and distributed delays have been taken into account in realistic neural network models [24], and great attention has been paid to the stability analysis of neural networks with both discrete and distributed time-varying delays [25, 26].

Passivity theory, which originates from circuit theory, plays an important role in the analysis and design of switched systems. In passivity theory, passivity means that a system remains internally stable; passivity theory therefore provides a tool for analyzing the stability of control systems and has been applied in many areas. Based on passivity theory, the authors in [27] dealt with sliding mode control for uncertain singular time-delay systems. In [28], the authors designed a mode-dependent state feedback controller by applying passivity theory. Passive controller design based on the passivity analysis of nonlinear systems has become an effective way to solve practical engineering problems, for example, passivity-based control of three-phase AC/DC voltage-source converters; for details, the reader is referred to [29, 30] and the references therein. As state-dependent switched nonlinear systems, memristive neural networks comprise a great number of subsystems. The product of the input and output is regarded as the energy supply of a passive system, which reflects its energy-attenuation character. Passivity analysis of memristive neural networks thus provides a way to understand complex brain functionalities with the adoption of memristor-MOS technology designs [16].

Motivated by the works in [10, 16, 25] and the circuit design in [31–35], the main purpose of this paper is to establish passivity criteria for the given memristive neural networks with mixed time-varying delays and different state-dependent memductance functions. By combining differential inclusions with set-valued maps and constructing a proper Lyapunov-Krasovskii functional, new passivity criteria are derived in terms of linear matrix inequalities, which can be solved efficiently by the Matlab LMI Toolbox.

The rest of this paper is organized as follows. In Section 2, the corresponding delayed neurodynamic equation for the presented memristive circuit is established and preliminaries are given. The theoretical results are derived in Section 3. In Section 4, the validity of the theoretical analysis is discussed through two numerical examples.

2 Model description and preliminaries

In this paper, we consider the following delayed memristor-based neural networks:

$$ \begin{aligned} &\dot{x}_{i}(t)=-x_{i}(t)+\sum _{j=1}^{n}w^{1}_{ij} \bigl(x_{i}(t) \bigr)f_{j} \bigl(x_{j}(t) \bigr)+ \sum_{j=1}^{n}w^{2}_{ij} \bigl(x_{i}(t) \bigr)f_{j} \bigl(x_{j} \bigl(t-h_{j}(t)\bigr) \bigr) \\ &\hphantom{\dot{x}_{i}(t)=}{}+\sum_{j=1}^{n}w^{3}_{ij} \bigl(x_{i}(t) \bigr) \int_{t-r_{j}(t)}^{t}f_{j} \bigl(x_{j}(s) \bigr)\,\mathrm{d}s+u_{i}(t), \\ &y_{i}(t)= f_{i} \bigl(x_{i}(t) \bigr), \quad i=1,2,\ldots,n. \end{aligned} $$
(2.1)

System (2.1) can be implemented by the large-scale integration circuit shown in Figure 1, from which system (2.1) is obtained using Kirchhoff’s current law. In (2.1), \(x_{i}(t)\) is the voltage of the capacitor \(\mathbf{C}_{i}\) at time t, \(u_{i}(t)\) and \(y_{i}(t)\) denote the external input and output, respectively, \(h_{j}(t)\) and \(r_{j}(t)\) represent time-varying delays, \(f_{j}(\cdot)\) is the neuron activation function, \(w^{m}_{ij}(x_{i}(t))\) (\(m=1,2,3\)) represent memristor-based weights, and

$$ w^{m}_{ij}\bigl(x_{i}(t)\bigr)= \frac{\mathbf{W}^{m}_{ij}}{\mathbf{C}_{i}}\times \operatorname{sgn}_{ij}, \qquad \operatorname{sgn}_{ij}= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1, & i\neq j,\\ -1, & i=j, \end{array}\displaystyle \right . $$
(2.2)

in which \(\mathbf{W}^{m}_{ij}\) denotes the memductance of the memristor \(\mathbf{R}^{m}_{ij}\). Here \(\mathbf{R}^{m}_{ij}\) represents the memristor between \(x_{i}(t)\) and the feedback function \(f_{j}(x_{j}(t))\) or \(f_{j} (x_{j}(t-h_{j}(t)) )\).

Figure 1

The circuit of memristive neural networks ( 2.1 ). \(x_{i}(t)\) is the state of the ith subsystem, \(\mathbf{R}^{m}_{ij}\) represents the memristor, \(f_{j}\) is the amplifier, \(\mathbf{R}_{i}\) and \(\mathbf{C}_{i}\) are the resistor and capacitor, \(u_{i}\) is the external input, and \(m=1,2,3\), \(i,j=1,2,\ldots, n\).

Combining this with the physical structure of a memristor device, we see that

$$ \mathbf{W}^{m}_{ij}=\frac{\mathrm{d}\mathbf{q}^{m}_{ij}}{\mathrm{d}\sigma ^{m}_{ij}}, \quad m=1,2,3, $$
(2.3)

where \(\mathbf{q}^{m}_{ij}\) and \(\sigma^{m}_{ij}\) denote charge and magnetic flux corresponding to memristor \(\mathbf{R}^{m}_{ij}\), respectively.

Following the two typical classes of memductance functions pointed out in [16], we discuss the following two cases in this paper.

Case 1: The state-dependent switched memductance functions \(\mathbf {W}^{m}_{ij}\) are given by

$$ \mathbf{W}^{m}_{ij}= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} a^{m}_{ij}, & |\sigma^{m}_{ij}|>\ell^{m}_{ij},\\ b^{m}_{ij}, & |\sigma^{m}_{ij}|< \ell^{m}_{ij}, \end{array}\displaystyle \right . $$
(2.4)

where \(a^{m}_{ij},b^{m}_{ij}\) and \(\ell^{m}_{ij}>0\) are constants, \(i,j=1,2,\ldots,n\).

Case 2: The state-dependent continuous memductance functions \(\mathbf {W}^{m}_{ij}\) are given by

$$ \mathbf{W}^{m}_{ij}=c^{m}_{ij}+3d^{m}_{ij} \bigl(\sigma_{ij}^{m}\bigr)^{2}, $$
(2.5)

where \(c^{m}_{ij}\) and \(d^{m}_{ij}>0\) are constants, \(i,j=1,2,\ldots,n\).

According to the feature of the memristor and its current-voltage characteristic [16], the following two cases occur.

Case 1′: In Case 1, we have

$$w^{m}_{ij}\bigl(x_{i}(t)\bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \hat{w}^{m}_{ij},& \operatorname{sgn}_{ij} \frac{\mathrm {d}f_{j}(x_{j}(t))}{\mathrm{d}t}- \frac{\mathrm {d}x_{i}(t)}{\mathrm{d}t}\leq0,\\ \check{w}^{m}_{ij}, & \operatorname{sgn}_{ij} \frac{\mathrm {d}f_{j}(x_{j}(t))}{\mathrm{d}t}- \frac{\mathrm {d}x_{i}(t)}{\mathrm{d}t}>0, \end{array}\displaystyle \right . $$
(2.6)

where \(m=1\) or 3, and

$$ w^{2}_{ij}\bigl(x_{i}(t)\bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \hat{w}^{2}_{ij}, & \operatorname{sgn}_{ij} \frac{\mathrm {d}f_{j} (x_{j}(t-h_{j}(t)) )}{\mathrm{d}t}- \frac{\mathrm {d}x_{i}(t)}{\mathrm{d}t}\leq0,\\ \check{w}^{2}_{ij}, & \operatorname{sgn}_{ij} \frac{\mathrm{d}f_{j} (x_{j}(t-h_{j}(t)) )}{\mathrm{d}t}- \frac{\mathrm {d}x_{i}(t)}{\mathrm{d}t}>0, \end{array}\displaystyle \right . $$
(2.7)

for \(i,j=1,2,\ldots,n\), where \(\hat{w}^{m}_{ij}\) and \(\check {w}^{m}_{ij}\) are constants.

Case 2′: In Case 2, we have

$$ w^{m}_{ij}\bigl(x_{i}(t)\bigr) \mbox{ is a continuous function, and } \underline {\Lambda}^{m}_{ij} \leq w^{m}_{ij}\bigl(x_{i}(t)\bigr)\leq\overline{ \Lambda}^{m}_{ij} $$
(2.8)

for \(i,j=1,2,\ldots,n\), where \(\underline{\Lambda}^{m}_{ij}\) and \(\overline{\Lambda}^{m}_{ij}\) are constants.

Obviously, under these two cases, the memristor-based neural network (2.1) with different memductance functions is a state-dependent switched system or a state-dependent continuous system, respectively. System (2.1) is established under the following assumptions:

  1. (A1)

    \(f_{i}(\cdot)\) is a monotonically increasing function with saturation; it satisfies

    $$0\leq \frac{f_{i}(\hat{x})-f_{i}(\check{x})}{\hat {x}-\check{x}}\leq k_{i}, \qquad f_{i}(0)=0, \quad i=1,2,\ldots,n, $$

    and

    $$f_{i}(\hat{x})\cdot \bigl(f_{i}(\hat{x})-k_{i} \hat{x} \bigr)\leq0,\qquad f_{i}(0)=0,\quad i=1,2,\ldots,n, $$

    for all \(\hat{x}, \check{x}\in\mathbb{R}\) with \(\hat{x}\neq\check {x}\), where \(k_{i} \) (\(i=1,2,\ldots,n\)) are positive constants.

  2. (A2)

    \(h_{j}(t)\) and \(r_{j}(t)\) are bounded functions satisfying

    $$0< h_{j}(t)\leq\overline{h}, \qquad \dot{h}_{j}(t)\leq\mu \quad\mbox{and}\quad 0< r_{j}(t)\leq\overline{r}. $$

The solutions of the networks discussed below are intended in the Filippov sense throughout this paper. \(\mathbb{R}^{n}\) is the n-dimensional Euclidean space. \(\mathcal{C}([-\tau,0],\mathbb {R}^{n})\) is the Banach space of all continuous functions mapping \([-\tau,0]\) into \(\mathbb{R}^{n}\). \(\|\cdot\|\) denotes the Euclidean norm of a vector and the norm of a matrix induced by it. \(\operatorname{co}\{\widetilde{\Pi},\widehat{\Pi}\}\) denotes the closure of the convex hull generated by real numbers Π̃ and Π̂ or real matrices Π̃ and Π̂. Let \(\overline{w}_{ij}^{m}=\max\{\hat {w}_{ij}^{m},\check{w}_{ij}^{m}\}\), \(\underline{w}_{ij}^{m}=\min\{ \hat{w}_{ij}^{m},\check{w}_{ij}^{m}\}\), \(\widetilde{w}_{ij}^{m}=\max\{|\hat{w}_{ij}^{m}|,|\check {w}_{ij}^{m}|\}\), and \(\widetilde{\Lambda}^{m}_{ij}=\max\{|\underline {\Lambda}^{m}_{ij}|,|\overline{\Lambda}^{m}_{ij}|\}\) for \(i,j=1,2,\ldots ,n\). Denote \(K=\operatorname{diag}(k_{1},k_{2},\ldots,k_{n})\), \(\widetilde {W}^{m}=(\widetilde{w}^{m}_{ij})_{n\times n}\), \(\widetilde{\Lambda }^{m}=(\widetilde{\Lambda}^{m}_{ij})_{n\times n}\). \(I_{n}\) is the \(n\times n\) identity matrix. For a symmetric matrix \(\mathbf{T}\), \(\mathbf{T}>0\) (\(\mathbf{T}<0\)) means that T is positive definite (negative definite). For matrices \(\mathbf{Q}=(q_{ij})_{n\times n}\) and \(\mathbf{H}=(h_{ij})_{n\times n}\), \(\mathbf{Q\gg H}\) (\(\mathbf{Q\ll H}\)) means that \(q_{ij}\geq h_{ij}\) (\(q_{ij}\leq h_{ij}\)) for \(i,j=1,2,\ldots,n\). The interval matrix \([\mathbf{Q},\mathbf{H}]\) is defined for \(\mathbf{Q\ll H}\): \(\mathbf{L}=(l_{ij})_{n\times n}\in[\mathbf{Q},\mathbf{H}]\) means that \(\mathbf{Q}\ll\mathbf{L}\ll\mathbf{H}\), i.e., \(q_{ij}\leq l_{ij}\leq h_{ij}\) for \(i,j=1,2,\ldots,n\). The symmetric terms in a symmetric matrix are denoted by ‘∗’.

In addition, the initial conditions of system (2.1) are assumed to be

$$x_{i}(t)=\phi_{i}(t),\quad t\in[-\tau,0],\tau=\max\{ \overline {h},\overline{r}\},i=1,2,\ldots,n, $$

where \(\phi(t)=(\phi_{1}(t),\phi_{2}(t),\ldots,\phi_{n}(t))\in \mathcal{C}([-\tau,0],\mathbb{R}^{n})\).
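Before turning to the formal definitions, a minimal discrete-time sketch of (2.1) may help fix ideas. The following forward-Euler scheme (step sizes and helper names are our own illustration, not from the original) keeps a history buffer for the discrete-delay and distributed-delay terms and treats the state-dependent weights \(w^{m}\) as user-supplied callables, which covers both Case 1′ and Case 2′:

```python
import numpy as np

def simulate(w, f, h, r, u, phi, T=10.0, dt=1e-3):
    """Forward-Euler sketch of the delayed network (2.1).

    w   : list of three callables, w[m](x) -> (n, n) array, standing in
          for the state-dependent weight matrices w^1, w^2, w^3
    f   : elementwise activation, e.g. np.tanh (so k_i = 1 in (A1))
    h,r : per-neuron delays (constants standing in for h_j(t), r_j(t))
    u   : callable t -> (n,) external input
    phi : callable t -> (n,) initial history on [-tau, 0]
    """
    n = len(h)
    tau = max(max(h), max(r))
    d = int(round(tau / dt))                 # history buffer length
    steps = int(round(T / dt))
    x = np.empty((d + steps + 1, n))
    for k in range(d + 1):                   # fill the history segment
        x[k] = phi((k - d) * dt)
    for k in range(d, d + steps):
        t = (k - d) * dt
        xt = x[k]
        # discrete-delay term f_j(x_j(t - h_j))
        fd = np.array([f(x[k - int(round(h[j] / dt)), j]) for j in range(n)])
        # distributed-delay term: rectangle rule for int_{t-r_j}^t f(x_j(s)) ds
        fi = np.array([dt * sum(f(x[k - l, j])
                                for l in range(int(round(r[j] / dt))))
                       for j in range(n)])
        dx = -xt + w[0](xt) @ f(xt) + w[1](xt) @ fd + w[2](xt) @ fi + u(t)
        x[k + 1] = xt + dt * dx
    return x[d:]                             # trajectory on [0, T]

# Example wiring (illustrative only): piecewise-constant or smooth callables,
# e.g. w = [lambda x: W1, lambda x: W2, lambda x: W3] for constant matrices.
```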

The following definitions are essential for the proof in the sequel.

Definition 1

[36]

Let \(G\subseteq\mathbb{R}^{n}\). The map \(x\mapsto H(x)\) is called a set-valued map from G to \(\mathbb{R}^{n}\) if, to each point x of the set G, there corresponds a nonempty set \(H(x)\subseteq\mathbb{R}^{n}\).

Definition 2

[36]

A set-valued map H with nonempty values is said to be upper semi-continuous at \(x_{0}\in G\subseteq\mathbb{R}^{n}\), if for any open set M containing \(H(x_{0})\), there exists a neighborhood N of \(x_{0}\) such that \(H(N)\subseteq M\). \(H(x)\) is said to have a closed (convex, compact) image if, for each \(x\in G\), \(H(x)\) is closed (convex, compact).

Definition 3

[37]

Consider the differential system \(\dot{x}=h(t,x)\), where \(h(t,x)\) is discontinuous in x. The set-valued map of \(h(t,x)\) is defined as

$$H(t,x)=\bigcap_{\epsilon>0}\bigcap _{\mu(M)=0}\operatorname{co} \bigl[h \bigl(t, B(x,\epsilon)\setminus M \bigr) \bigr], $$

where \(B(x,\epsilon)=\{y:\|y-x\|\leq\epsilon\}\) is the ball of center x and radius ϵ; intersection is taken over all sets M of measure zero and over all \(\epsilon>0\); and \(\mu(M)\) is the Lebesgue measure of the set M.

A Filippov solution of the system \(\dot{x}=h(t,x)\) with initial condition \(x(0)=x_{0}\) is a function \(x(t)\) that is absolutely continuous on any subinterval \([t_{1},t_{2}]\) of \([0,T]\), satisfies \(x(0)=x_{0}\), and satisfies the differential inclusion

$$\dot{x}\in H(t,x) \quad \mbox{for a.a. }t\in[0,T]. $$
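As a standard illustration (not part of the original circuit model), consider the scalar discontinuous field \(h(x)=-\operatorname{sgn}(x)\). Its Filippov set-valued map is

$$H(x)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \{-1\}, & x>0,\\ {[-1,1]}, & x=0,\\ \{1\}, & x< 0, \end{array}\displaystyle \right . $$

so the inclusion \(\dot{x}\in H(x)\) admits the solution that reaches the origin in finite time and stays there (since \(0\in[-1,1]\)), although no classical solution exists at \(x=0\).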

Definition 4

[38]

System (2.1) is said to be passive if there exists a scalar \(\gamma\geq0\) such that, for all \(t_{p}\geq0\) and all solutions of system (2.1) with \(x(0)=0\), the inequality

$$2 \int_{0}^{t_{p}}y^{T}(s)u(s)\,\mathrm{d}s\geq- \gamma \int _{0}^{t_{p}}u^{T}(s)u(s)\,\mathrm{d}s $$

holds, where \(y(t)=(y_{1}(t),y_{2}(t),\ldots ,y_{n}(t))^{T},u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}\).

In Definition 4, the product of the input and output is regarded as the energy supply for the passivity of the system, which reflects the energy-attenuation character of the system. From control theory, we know that passivity keeps a system internally stable. A passive system only consumes energy and produces none; by its nature, passivity thus characterizes the energy consumption of a system. The power flow of a passive system obeys energy conservation [16], i.e.,

$$E_{\mathrm{input}} + E_{\mathrm{initial}} = E_{\mathrm{residual}} +E_{\mathrm {dissipated}}. $$
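Numerically, the inequality in Definition 4 can be checked along sampled trajectories, for instance those produced by the simulation sketch above. A minimal sketch (the helper name and sampling scheme are our own):

```python
import numpy as np

def passivity_margin(y, u, dt, gamma):
    """Evaluate 2*int y^T u + gamma*int u^T u on sampled trajectories.

    y, u : arrays of shape (steps, n), sampled every dt time units.
    A nonnegative return value is consistent with the inequality in
    Definition 4 (necessary evidence along one trajectory, not a proof)."""
    supply = 2.0 * np.trapz(np.sum(y * u, axis=1), dx=dt)   # 2 * int y^T u ds
    budget = gamma * np.trapz(np.sum(u * u, axis=1), dx=dt) # gamma * int u^T u ds
    return supply + budget
```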

Lemma 1

[39, 40]

Let M be a constant matrix. For scalars \(a,b\) and a vector function \(x(t):[a,b]\rightarrow\mathbb {R}^{n}\), the following inequality holds:

$$\biggl[ \int_{a}^{b}x(s)\,\mathrm{d}s \biggr]^{T}M \biggl[ \int_{a}^{b}x(s)\,\mathrm {d}s \biggr]\leq(b-a) \int_{a}^{b}x^{T}(s)Mx(s)\,\mathrm{d}s, $$

where \(a< b\) and \(M>0\).
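For completeness, Lemma 1 (a Jensen-type integral inequality) follows in one line from the Cauchy-Schwarz inequality: writing \(M=M^{1/2}M^{1/2}\) for \(M>0\),

$$\biggl[ \int_{a}^{b}x(s)\,\mathrm{d}s \biggr]^{T}M \biggl[ \int_{a}^{b}x(s)\,\mathrm{d}s \biggr] = \biggl\Vert \int_{a}^{b}M^{1/2}x(s)\,\mathrm{d}s \biggr\Vert ^{2} \leq(b-a) \int_{a}^{b} \bigl\Vert M^{1/2}x(s) \bigr\Vert ^{2}\,\mathrm{d}s =(b-a) \int_{a}^{b}x^{T}(s)Mx(s)\,\mathrm{d}s. $$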

3 Main results

In this section, we present our passivity criteria for system (2.1).

Theorem 3.1

Given scalars \(\mu, \bar{h}>0\) and \(\bar{r}>0\), system (2.1) is passive in the sense of Definition 4 under Case 1′ if there exist matrices \(J_{1}\), \(J_{2}\), \(J_{3}\), \(M_{1}>0\), \(M_{2}>0\), \(M_{3}>0\), \(N_{1}>0\), \(N_{2}>0\), \(N_{3}>0\), positive diagonal matrices \(D=\operatorname{diag}\{d_{1},\ldots,d_{n}\}\), \(\Upsilon=\operatorname{diag}\{\epsilon_{1},\ldots,\epsilon_{n}\}\), \(L=\operatorname{diag}\{\iota_{1},\ldots,\iota_{n}\}\), and appropriately dimensioned matrices \(R_{\zeta}>0\), \(P_{\zeta}\), \(Q_{\zeta}\) (\(\zeta=1,2,\ldots,9\)) satisfying the following LMIs:

$$\begin{aligned}& \begin{bmatrix} J_{1} & J_{2}\\ \ast& J_{3} \end{bmatrix}>0, \end{aligned}$$
(3.1)
$$\begin{aligned}& \begin{bmatrix} \Theta+\widetilde{\Xi}+\widetilde{\Xi}^{T} & \sqrt{\bar{h}}P & \sqrt {\bar{h}}Q\\ \ast& -N_{2} & 0\\ \ast& \ast& -N_{2} \end{bmatrix}< 0, \end{aligned}$$
(3.2)

where

$$\Theta= \begin{bmatrix} \Sigma_{1} & 0 & -J_{2} & J_{1} & J_{3} & K\Upsilon& 0 & 0 & 0\\ \ast& \Sigma_{2} & 0 & 0 & 0 & 0 & KL & 0 & 0\\ \ast& \ast& -M_{1} & 0 & -J_{3} & 0 & 0 & 0 & 0\\ \ast& \ast& \ast& \bar{h}N_{2} & J_{2} & D & 0 & 0 & 0\\ \ast& \ast& \ast& \ast& -N_{1} & 0 & 0 & 0 & 0\\ \ast& \ast& \ast& \ast& \ast& \Sigma_{6} & 0 & 0 & -I\\ \ast& \ast& \ast& \ast& \ast& \ast& \Sigma_{7} & 0 & 0\\ \ast& \ast& \ast& \ast& \ast& \ast& \ast& -N_{3} & 0\\ \ast& \ast& \ast& \ast& \ast& \ast& \ast& \ast& -\gamma I \end{bmatrix}, $$

in which

$$\begin{aligned}& \Sigma_{1}=M_{1}+M_{2}+J_{2}+J^{T}_{2}+ \bar{h}^{2}N_{1},\qquad \Sigma _{2}=-(1- \mu)M_{2}, \qquad\Sigma_{6}=M_{3}+ \bar{r}^{2}N_{3}-2\Upsilon, \\& \Sigma_{7}=-(1-\mu)M_{3}-2L,\\& \widetilde{\Xi }=[P,Q,-P,-Q,0,0,0,0,0]+R\bigl[-I,0,0,-I,0,\widetilde{W}^{1}, \widetilde {W}^{2},\widetilde{W}^{3},I\bigr], \\& P= \bigl[P^{T}_{1},P^{T}_{2},P^{T}_{3},P^{T}_{4},P^{T}_{5},P^{T}_{6},P^{T}_{7},P^{T}_{8},P^{T}_{9} \bigr]^{T}, \\& Q= \bigl[Q^{T}_{1},Q^{T}_{2},Q^{T}_{3},Q^{T}_{4},Q^{T}_{5},Q^{T}_{6},Q^{T}_{7},Q^{T}_{8},Q^{T}_{9} \bigr]^{T}, \\& R= \bigl[R^{T}_{1},R^{T}_{2},R^{T}_{3},R^{T}_{4},R^{T}_{5},R^{T}_{6},R^{T}_{7},R^{T}_{8},R^{T}_{9} \bigr]^{T}>0. \end{aligned}$$

Proof

First of all, by utilizing the theories of differential inclusions and set-valued maps, it follows from (2.1) that

$$ \begin{aligned} &\dot{x}_{i}(t)\in -x_{i}(t)+ \sum_{j=1}^{n}\operatorname{co} \bigl\{ \hat {w}^{1}_{ij},\check{w}^{1}_{ij} \bigr\} f_{j} \bigl(x_{j}(t) \bigr)+\sum _{j=1}^{n}\operatorname{co} \bigl\{ \hat{w}^{2}_{ij},\check{w}^{2}_{ij} \bigr\} f_{j} \bigl(x_{j}\bigl(t-h_{j}(t)\bigr) \bigr) \\ &\hphantom{\dot{x}_{i}(t)\in}{}+\sum_{j=1}^{n} \operatorname{co} \bigl\{ \hat{w}^{3}_{ij}, \check{w}^{3}_{ij} \bigr\} \int_{t-r_{j}(t)}^{t}f_{j} \bigl(x_{j}(s) \bigr)\,\mathrm{d}s+u_{i}(t), \\ &y_{i}(t)= f_{i} \bigl(x_{i}(t) \bigr), \quad i=1,2,\ldots,n, \end{aligned} $$
(3.3)

or, equivalently, there exists \(e^{m}_{ij}\in\operatorname{co} \{\hat {w}^{m}_{ij},\check{w}^{m}_{ij} \}\) such that

$$ \begin{aligned} &\dot{x}_{i}(t)=-x_{i}(t)+ \sum_{j=1}^{n} e^{1}_{ij}f_{j} \bigl(x_{j}(t) \bigr)+\sum_{j=1}^{n}e^{2}_{ij}f_{j} \bigl(x_{j}\bigl(t-h_{j}(t)\bigr) \bigr) \\ &\hphantom{\dot{x}_{i}(t)=}{}+\sum_{j=1}^{n}e^{3}_{ij} \int_{t-r_{j}(t)}^{t}f_{j} \bigl(x_{j}(s) \bigr)\,\mathrm{d}s+u_{i}(t), \\ &y_{i}(t)= f_{i} \bigl(x_{i}(t) \bigr), \quad i=1,2,\ldots,n. \end{aligned} $$
(3.4)

Clearly, for \(i,j=1,2,\ldots,n\),

$$\operatorname{co} \bigl\{ \hat{w}^{m}_{ij}, \check{w}^{m}_{ij} \bigr\} =\bigl[\underline {w}^{m}_{ij},\overline{w}^{m}_{ij}\bigr]. $$

Of course, the above parameters \(e^{m}_{ij}\) (\(i,j=1,2,\ldots,n\)) in (3.4) depend upon the initial condition of system (2.1) and time t. The compact form of system (3.3) or (3.4) is as follows:

$$ \begin{aligned} &\dot{x}(t)\in-x(t)+\operatorname{co} \bigl\{ \widehat{W}^{1},\check{W}^{1} \bigr\} f \bigl(x(t) \bigr)+ \operatorname{co} \bigl\{ \widehat{W}^{2},\check{W}^{2} \bigr\} f \bigl(x\bigl(t-h(t)\bigr) \bigr) \\ &\hphantom{\dot{x}(t)\in}{}+\operatorname{co} \bigl\{ \widehat{W}^{3}, \check{W}^{3} \bigr\} \int_{t-r(t)}^{t}f \bigl(x(s) \bigr)\,\mathrm{d}s+u(t), \\ &y(t)=f \bigl(x(t) \bigr), \end{aligned} $$
(3.5)

or equivalently, there exists \(W^{m}\in\operatorname{co} \{\widehat {W}^{m},\check{W}^{m} \}\) such that

$$ \begin{aligned} &\dot{x}(t)=-x(t)+W^{1}f \bigl(x(t) \bigr)+W^{2}f \bigl(x\bigl(t-h(t)\bigr) \bigr)+W^{3} \int_{t-r(t)}^{t}f \bigl(x(s) \bigr)\,\mathrm{d}s+u(t), \\ &y(t)=f \bigl(x(t) \bigr), \end{aligned} $$
(3.6)

where

$$\begin{aligned}[b] &x(t)= \bigl(x_{1}(t),x_{2}(t),\ldots,x_{n}(t) \bigr)^{T}, \\ &u(t)= \bigl(u_{1}(t),u_{2}(t),\ldots,u_{n}(t) \bigr)^{T}, \\ &y(t)= \bigl(y_{1}(t),y_{2}(t),\ldots,y_{n}(t) \bigr)^{T}, \\ &f\bigl(x(t)\bigr)= \bigl(f_{1}\bigl(x_{1}(t) \bigr), f_{2}\bigl(x_{2}(t)\bigr),\ldots,f_{n} \bigl(x_{n}(t)\bigr) \bigr)^{T}, \\ &\widehat{W}^{m}=\bigl(\hat{w}^{m}_{ij} \bigr)_{n\times n},\qquad \check{W}^{m}=\bigl(\check {w}^{m}_{ij} \bigr)_{n\times n}. \end{aligned} $$

Clearly,

$$\operatorname{co} \bigl\{ \widehat{W}^{m},\check{W}^{m} \bigr\} =\bigl[\underline {W}^{m},\overline{W}^{m}\bigr], $$

where

$$\underline{W}^{m}=\bigl(\underline{w}^{m}_{ij} \bigr)_{n\times n},\qquad\overline {W}^{m}=\bigl( \overline{w}^{m}_{ij}\bigr)_{n\times n}. $$

Second, according to (A1), the inequalities

$$ \begin{aligned} &{-}2\sum_{i=1}^{n} \epsilon _{i}f_{i}\bigl(x_{i}(t)\bigr) \bigl(f_{i}\bigl(x_{i}(t)\bigr)-k_{i}x_{i}(t)\bigr)\geq0, \\ &{-}2\sum_{i=1}^{n}\iota _{i}f_{i}\bigl(x_{i}\bigl(t-h(t)\bigr)\bigr) \bigl(f_{i}\bigl(x_{i}\bigl(t-h(t)\bigr)\bigr)-k_{i}x_{i} \bigl(t-h(t)\bigr)\bigr)\geq0, \end{aligned} $$
(3.7)

hold, that is,

$$ \begin{aligned} &\varphi_{1}(t)=-2f^{T} \bigl(x(t) \bigr)\Upsilon f \bigl(x(t) \bigr)+2f^{T} \bigl(x(t) \bigr) \Upsilon Kx(t)\geq0, \\ &\varphi_{2}(t)=-2f^{T} \bigl(x\bigl(t-h(t)\bigr) \bigr)Lf \bigl(x\bigl(t-h(t)\bigr) \bigr)+2f^{T} \bigl(x\bigl(t-h(t)\bigr) \bigr)LKx\bigl(t-h(t)\bigr)\geq0. \end{aligned} $$
(3.8)

Third, by using the Leibniz-Newton formula and recalling (3.6), we have

$$ \begin{aligned} &\varphi_{3}(t)=2\eta^{T}(t)P \biggl[x(t)-x\bigl(t-h(t)\bigr)- \int_{t-h(t)}^{t}\dot {x}(s)\,\mathrm{d}s \biggr]=0, \\ &\varphi_{4}(t)=2\eta^{T}(t)Q \biggl[x\bigl(t-h(t) \bigr)-x(t-\bar{h})- \int_{t-\bar {h}}^{t-h(t)}\dot{x}(s)\,\mathrm{d}s \biggr]=0, \\ &\varphi_{5}(t)=2\eta^{T}(t)R \biggl[-\dot {x}(t)-x(t)+W^{1}f\bigl(x(t)\bigr)+W^{2}f\bigl(x\bigl(t-h(t) \bigr)\bigr) \\ &\hphantom{\varphi_{5}(t)=}{}+W^{3} \int_{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s+u(t) \biggr]=0, \end{aligned} $$
(3.9)

where

$$\begin{aligned} \eta^{T}(t)={}& \biggl[x^{T}(t),x^{T} \bigl(t-h(t)\bigr),x^{T}(t-\bar{h}),\dot {x}^{T}(t), \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T},f^{T} \bigl(x(t)\bigr), \\ &{}f^{T} \bigl(x\bigl(t-h(t)\bigr) \bigr), \biggl( \int_{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s \biggr)^{T},u^{T}(t) \biggr]. \end{aligned}$$

Define a Lyapunov-Krasovskii functional \(V(x(t))\):

$$ V\bigl(x(t)\bigr)=V_{1}\bigl(x(t)\bigr)+V_{2} \bigl(x(t)\bigr)+V_{3}\bigl(x(t)\bigr)+V_{4}\bigl(x(t) \bigr)+V_{5}\bigl(x(t)\bigr), $$
(3.10)

where

$$\begin{aligned}& \begin{aligned}[b] V_{1}\bigl(x(t)\bigr)={}&2\sum _{i=1}^{n}d_{i} \int_{0}^{x_{i}(t)}f_{i}(s)\,\mathrm{d}s +x^{T}(t)J_{1}x(t)+2x^{T}(t)J_{2} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm {d}s \biggr) \\ &{}+ \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T}J_{3} \biggl( \int_{t-\bar {h}}^{t}x(s)\,\mathrm{d}s \biggr), \end{aligned} \\& \begin{aligned}[b] V_{2}\bigl(x(t)\bigr)={}& \int_{t-\bar{h}}^{t}x^{T}(s)M_{1}x(s)\, \mathrm{d}s+ \int _{t-h(t)}^{t}x^{T}(s)M_{2}x(s)\, \mathrm{d}s\\ &{}+ \int _{t-h(t)}^{t}f^{T}\bigl(x(s) \bigr)M_{3}f\bigl(x(s)\bigr)\,\mathrm{d}s, \end{aligned} \\& V_{3}\bigl(x(t)\bigr)=\bar{h} \int_{-\bar{h}}^{0} \int_{t+\theta }^{t}x^{T}(s)N_{1}x(s) \,\mathrm{d}s\,\mathrm{d}\theta, \\& V_{4}\bigl(x(t)\bigr)= \int_{-\bar{h}}^{0} \int_{t+\theta}^{t}\dot {x}^{T}(s)N_{2} \dot{x}(s)\,\mathrm{d}s\,\mathrm{d}\theta, \\& V_{5}\bigl(x(t)\bigr)=\bar{r} \int_{-\bar{r}}^{0} \int_{t+\theta }^{t}f^{T}\bigl(x(s) \bigr)N_{3}f\bigl(x(s)\bigr)\,\mathrm{d}s\,\mathrm{d}\theta. \end{aligned}$$

From LMI (3.1), we obtain

$$\begin{aligned} &x^{T}(t)J_{1}x(t)+2x^{T}(t)J_{2} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr) + \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T}J_{3} \biggl( \int_{t-\bar {h}}^{t}x(s)\,\mathrm{d}s \biggr) \\ &\quad= \biggl[x^{T}(t), \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T} \biggr] \begin{bmatrix} J_{1} & J_{2}\\ \ast& J_{3} \end{bmatrix} \begin{bmatrix} x(t)\\ \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \end{bmatrix} \\ &\quad\geq0, \end{aligned}$$

so that \(V_{1}(x(t))\geq0\).

Calculating the derivative of \(V_{i}(x(t))\) (\(i=1,2,\ldots,5\)) along the trajectory of (3.6), from Lemma 1 and (A1), we obtain

$$\begin{aligned} \dot{V}_{1}\bigl(x(t)\bigr)={}&2f^{T} \bigl(x(t)\bigr)D\dot{x}(t)+2x^{T}(t)J_{1}\dot {x}(t)+x^{T}(t) \bigl(J_{2}+J^{T}_{2} \bigr)x(t) \\ &{}-2x^{T}(t)J_{2}x(t-\bar{h})+2\dot{x}^{T}(t)J_{2} \biggl( \int_{t-\bar {h}}^{t}x(s)\,\mathrm{d}s \biggr) \\ &{}+2x^{T}(t)J_{3} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)-2x^{T}(t- \bar{h})J_{3} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr), \\ \dot{V}_{2}\bigl(x(t)\bigr)={}&x^{T}(t)M_{1}x(t)-x^{T}(t- \bar{h})M_{1}x(t-\bar{h}) \\ &{}+x^{T}(t)M_{2}x(t)-\bigl(1-\dot{h}(t) \bigr)x^{T}\bigl(t-h(t)\bigr)M_{2}x\bigl(t-h(t)\bigr) \\ &{}+f^{T}\bigl(x(t)\bigr)M_{3}f\bigl(x(t)\bigr)-\bigl(1- \dot {h}(t)\bigr)f^{T}\bigl(x\bigl(t-h(t)\bigr)\bigr)M_{3}f \bigl(x\bigl(t-h(t)\bigr)\bigr) \\ \leq{}&x^{T}(t)M_{1}x(t)-x^{T}(t- \bar{h})M_{1}x(t-\bar{h}) \\ &{}+x^{T}(t)M_{2}x(t)-(1-\mu)x^{T}\bigl(t-h(t) \bigr)M_{2}x\bigl(t-h(t)\bigr) \\ &{}+f^{T}\bigl(x(t)\bigr)M_{3}f\bigl(x(t)\bigr)-(1- \mu)f^{T}\bigl(x\bigl(t-h(t)\bigr)\bigr)M_{3}f\bigl(x \bigl(t-h(t)\bigr)\bigr), \\ \dot{V}_{3}\bigl(x(t)\bigr)={}&x^{T}(t) \bigl( \bar{h}^{2}N_{1}\bigr)x(t)-\bar{h} \int_{t-\bar {h}}^{t}x^{T}(s)N_{1}x(s)\,\mathrm{d}s \\ \leq{}&x^{T}(t) \bigl(\bar{h}^{2}N_{1}\bigr)x(t)- \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T}N_{1} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr), \\ \dot{V}_{4}\bigl(x(t)\bigr)={}&\dot{x}^{T}(t) ( \bar{h}N_{2})\dot{x}(t)- \int_{t-\bar {h}}^{t}\dot{x}^{T}(s)N_{2} \dot{x}(s)\,\mathrm{d}s, \\ \dot{V}_{5}\bigl(x(t)\bigr)={}&f^{T}\bigl(x(t)\bigr) \bigl(\bar{r}^{2}N_{3}\bigr)f\bigl(x(t)\bigr)-\bar{r} \int _{t-\bar{r}}^{t}f^{T}\bigl(x(s) \bigr)N_{3}f\bigl(x(s)\bigr)\,\mathrm{d}s \\ \leq{}&f^{T}\bigl(x(t)\bigr) \bigl(\bar{r}^{2}N_{3} \bigr)f\bigl(x(t)\bigr)-r(t) \int _{t-r(t)}^{t}f^{T}\bigl(x(s) \bigr)N_{3}f\bigl(x(s)\bigr)\,\mathrm{d}s \\ \leq{}&f^{T}\bigl(x(t)\bigr) \bigl(\bar{r}^{2}N_{3} \bigr)f\bigl(x(t)\bigr)- \biggl( \int _{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s \biggr)^{T} N_{3} \biggl( \int _{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s \biggr). \end{aligned}$$
(3.11)

From (3.11), we obtain

$$\begin{aligned} &\dot{V}\bigl(x(t)\bigr)-2y^{T}(t)u(t)-\gamma u^{T}(t)u(t) \\ &\quad\leq\sum_{i=1}^{5} \varphi_{i}(t)+\dot{x}^{T}(t) (\bar{h}N_{2})\dot {x}(t)+2f^{T}\bigl(x(t)\bigr)D\dot{x}(t)+2x^{T}(t)J_{1} \dot{x}(t) \\ &\qquad{}+x^{T}(t) \bigl(J_{2}+J^{T}_{2}+M_{1}+M_{2}+ \bar {h}^{2}N_{1}\bigr)x(t)+f^{T}\bigl(x(t)\bigr) \bigl(M_{3}+\bar{r}^{2}N_{3}\bigr)f\bigl(x(t)\bigr) \\ &\qquad{}+2\dot{x}^{T}(t)J_{2} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)+2x^{T}(t)J_{3} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr) \\ &\qquad{}-2f^{T}\bigl(x(t)\bigr)u(t)-\gamma u^{T}(t)u(t)-2x^{T}(t)J_{2}x(t- \bar {h})-x^{T}(t-\bar{h})M_{1}x(t-\bar{h}) \\ &\qquad{}-2x^{T}(t-\bar{h})J_{3} \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)- \biggl( \int_{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr)^{T}N_{1} \biggl( \int _{t-\bar{h}}^{t}x(s)\,\mathrm{d}s \biggr) \\ &\qquad{}-(1-\mu)x^{T}\bigl(t-h(t)\bigr)M_{2}x\bigl(t-h(t) \bigr)-(1-\mu )f^{T}\bigl(x\bigl(t-h(t)\bigr)\bigr)M_{3}f \bigl(x\bigl(t-h(t)\bigr)\bigr) \\ &\qquad{}- \int_{t-h(t)}^{t}\dot{x}^{T}(s)N_{2} \dot{x}(s)\,\mathrm{d}s- \int_{t-\bar {h}}^{t-h(t)}\dot{x}^{T}(s)N_{2} \dot{x}(s)\,\mathrm{d}s \\ &\qquad{}- \biggl( \int_{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s \biggr)^{T}N_{3} \biggl( \int _{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s \biggr) \\ &\quad\leq\eta^{T}(t) \bigl(\Theta+\Xi+\Xi^{T} +\bar {h}PN^{-1}_{2}P^{T}+\bar{h}QN^{-1}_{2}Q^{T} \bigr)\eta(t) \\ &\qquad{}- \int_{t-h(t)}^{t}\bigl(P^{T} \eta(t)+N_{2}\dot {x}(s)\bigr)^{T}N^{-1}_{2} \bigl(P^{T}\eta(t)+N_{2}\dot{x}(s)\bigr) \,\mathrm{d}s \\ &\qquad{}- \int_{t-\bar{h}}^{t-h(t)}\bigl(Q^{T} \eta(t)+N_{2}\dot {x}(s)\bigr)^{T}N^{-1}_{2} \bigl(Q^{T}\eta(t)+N_{2}\dot{x}(s)\bigr) \,\mathrm{d}s, \end{aligned}$$

where

$$\Xi=[P,Q,-P,-Q,0,0,0,0,0]+R\bigl[-I,0,0,-I,0,W^{1},W^{2},W^{3},I \bigr]. $$

Noting that \(R>0\), \(\widetilde{w}_{ij}^{m}=\max \{|\hat {w}_{ij}^{m}|,|\check{w}_{ij}^{m}| \}\), \(\widetilde{W}^{m}=(\widetilde{w}^{m}_{ij})_{n\times n}\), and \(W^{m}\in\operatorname{co} \{\widehat{W}^{m},\check{W}^{m} \}\), we have \(\Xi \ll\widetilde{\Xi}\). Hence, we obtain

$$\begin{aligned} &\dot{V}\bigl(x(t)\bigr)-2y^{T}(t)u(t)-\gamma u^{T}(t)u(t) \\ &\quad\leq\eta^{T}(t) \bigl(\Theta+\widetilde{\Xi}+\widetilde{ \Xi}^{T} +\bar {h}PN^{-1}_{2}P^{T}+ \bar{h}QN^{-1}_{2}Q^{T}\bigr)\eta(t) \\ &\qquad{}- \int_{t-h(t)}^{t}\bigl(P^{T} \eta(t)+N_{2}\dot {x}(s)\bigr)^{T}N^{-1}_{2} \bigl(P^{T}\eta(t)+N_{2}\dot{x}(s)\bigr) \,\mathrm{d}s \\ &\qquad{}- \int_{t-\bar{h}}^{t-h(t)}\bigl(Q^{T} \eta(t)+N_{2}\dot {x}(s)\bigr)^{T}N^{-1}_{2} \bigl(Q^{T}\eta(t)+N_{2}\dot{x}(s)\bigr) \,\mathrm{d}s. \end{aligned}$$
(3.12)

By applying the Schur complement to (3.2), from (3.12), we get

$$ \dot{V}\bigl(x(t)\bigr)-2y^{T}(t)u(t)-\gamma u^{T}(t)u(t)\leq0. $$
(3.13)
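The Schur complement step uses the standard equivalence: for \(N_{2}>0\),

$$\begin{bmatrix} A & B\\ \ast& -N_{2} \end{bmatrix}< 0 \quad\Longleftrightarrow\quad A+BN^{-1}_{2}B^{T}< 0, $$

applied twice to the last two block rows and columns of (3.2). This converts (3.2) into \(\Theta+\widetilde{\Xi}+\widetilde{\Xi}^{T}+\bar{h}PN^{-1}_{2}P^{T}+\bar{h}QN^{-1}_{2}Q^{T}<0\), which, together with (3.12) and the nonpositivity of the two completed-square integral terms, yields (3.13).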

Integrating (3.13) with respect to t over the time period from 0 to \(t_{p}\), we have

$$2 \int_{0}^{t_{p}}y^{T}(s)u(s)\,\mathrm{d}s\geq V \bigl(x(t_{p})\bigr)-V\bigl(x(0)\bigr)-\gamma \int_{0}^{t_{p}}u^{T}(s)u(s)\,\mathrm{d}s $$

for \(x(0)=0\). Since \(V(x(0))=0\) and \(V(x(t_{p}))\geq0\), we get

$$2 \int_{0}^{t_{p}}y^{T}(s)u(s)\,\mathrm{d}s\geq- \gamma \int _{0}^{t_{p}}u^{T}(s)u(s)\,\mathrm{d}s. $$

Therefore, the memristive neural network (2.1) is passive in the sense of Definition 4. This completes the proof. □

Theorem 3.2

Given scalars \(\mu, \bar{h}>0\) and \(\bar{r}>0\), system (2.1) is passive in the sense of Definition 4 under Case 2′ if there exist matrices \(J_{1}\), \(J_{2}\), \(J_{3}\), \(M_{1}>0\), \(M_{2}>0\), \(M_{3}>0\), \(N_{1}>0\), \(N_{2}>0\), \(N_{3}>0\), positive diagonal matrices \(D=\operatorname{diag}\{d_{1},\ldots,d_{n}\}\), \(\Upsilon=\operatorname{diag}\{\epsilon_{1},\ldots,\epsilon_{n}\}\), \(L=\operatorname{diag}\{\iota_{1},\ldots,\iota_{n}\}\), and appropriately dimensioned matrices \(R_{\zeta}>0\), \(P_{\zeta}\), \(Q_{\zeta}\) (\(\zeta=1,2,\ldots,9\)) satisfying the following LMIs:

$$\begin{aligned}& \begin{bmatrix} J_{1} & J_{2}\\ \ast& J_{3} \end{bmatrix}>0, \end{aligned}$$
(3.14)
$$\begin{aligned}& \begin{bmatrix} \Theta+\Xi+\Xi^{T} & \sqrt{\bar{h}}P & \sqrt{\bar{h}}Q\\ \ast& -N_{2} & 0\\ \ast& \ast& -N_{2} \end{bmatrix}< 0, \end{aligned}$$
(3.15)

where

$$\Theta= \begin{bmatrix} \Sigma_{1} & 0 & -J_{2} & J_{1} & J_{3} & K\Upsilon& 0 & 0 & 0\\ \ast& \Sigma_{2} & 0 & 0 & 0 & 0 & KL & 0 & 0\\ \ast& \ast& -M_{1} & 0 & -J_{3} & 0 & 0 & 0 & 0\\ \ast& \ast& \ast& \bar{h}N_{2} & J_{2} & D & 0 & 0 & 0\\ \ast& \ast& \ast& \ast& -N_{1} & 0 & 0 & 0 & 0\\ \ast& \ast& \ast& \ast& \ast& \Sigma_{6} & 0 & 0 & -I\\ \ast& \ast& \ast& \ast& \ast& \ast& \Sigma_{7} & 0 & 0\\ \ast& \ast& \ast& \ast& \ast& \ast& \ast& -N_{3} & 0\\ \ast& \ast& \ast& \ast& \ast& \ast& \ast& \ast& -\gamma I \end{bmatrix}, $$

in which

$$\begin{aligned}& P= \bigl[P^{T}_{1},P^{T}_{2},P^{T}_{3},P^{T}_{4},P^{T}_{5},P^{T}_{6},P^{T}_{7},P^{T}_{8},P^{T}_{9} \bigr]^{T}, \\& Q= \bigl[Q^{T}_{1},Q^{T}_{2},Q^{T}_{3},Q^{T}_{4},Q^{T}_{5},Q^{T}_{6},Q^{T}_{7},Q^{T}_{8},Q^{T}_{9} \bigr]^{T}, \\& R= \bigl[R^{T}_{1},R^{T}_{2},R^{T}_{3},R^{T}_{4},R^{T}_{5},R^{T}_{6},R^{T}_{7},R^{T}_{8},R^{T}_{9} \bigr]^{T}>0, \\& \Xi=[P,Q,-P,-Q,0,0,0,0,0]+R\bigl[-I,0,0,-I,0,\widetilde{\Lambda }^{1},\widetilde{\Lambda}^{2},\widetilde{ \Lambda}^{3},I\bigr], \\& \Sigma_{1}=M_{1}+M_{2}+J_{2}+J^{T}_{2}+ \bar{h}^{2}N_{1},\qquad \Sigma _{2}=-(1- \mu)M_{2},\qquad \Sigma_{6}=M_{3}+\bar{r}^{2}N_{3}-2 \Upsilon, \\& \Sigma_{7}=-(1-\mu)M_{3}-2L. \end{aligned}$$

Proof

It follows from (2.1) that there exists \(\Lambda^{m}=(\lambda^{m}_{ij})_{n\times n}\) with \(\lambda^{m}_{ij}\in [\underline{\Lambda}^{m}_{ij},\overline {\Lambda}^{m}_{ij} ]\) such that

$$ \begin{aligned} &\dot{x}(t)=-x(t)+\Lambda^{1}f \bigl(x(t) \bigr)+\Lambda^{2}f \bigl(x\bigl(t-h(t)\bigr) \bigr)+ \Lambda^{3} \int_{t-r(t)}^{t}f \bigl(x(s) \bigr)\,\mathrm{d}s+u(t), \\ &y(t)=f \bigl(x(t) \bigr). \end{aligned} $$
(3.16)

By recalling (3.16), we obtain

$$\begin{aligned} &2\eta^{T}(t)R \biggl[-\dot{x}(t)-x(t)+ \Lambda^{1}f\bigl(x(t)\bigr)+\Lambda ^{2}f\bigl(x\bigl(t-h(t) \bigr)\bigr) \\ &\quad{}+\Lambda^{3} \int_{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s+u(t) \biggr]=0. \end{aligned}$$
(3.17)

Consequently, we have

$$\begin{aligned} \varphi_{5}(t)={}&2\eta^{T}(t)R \biggl[- \dot{x}(t)-x(t)+\widetilde{\Lambda }^{1}f\bigl(x(t)\bigr)+\widetilde{ \Lambda}^{2}f\bigl(x\bigl(t-h(t)\bigr)\bigr) \\ &{}+\widetilde{\Lambda}^{3} \int_{t-r(t)}^{t}f\bigl(x(s)\bigr)\,\mathrm{d}s+u(t) \biggr]\geq0. \end{aligned}$$
(3.18)

Then we can complete the proof by following a similar line to the proof of Theorem 3.1. □

Remark 1

In Theorems 3.1 and 3.2, Lyapunov stability theory is applied to the passivity analysis of network (2.1). In fact, the proofs show that passivity is a higher-level abstraction of stability. For the passive system (2.1), the corresponding Lyapunov functional (3.10) can be regarded as a storage function.

Remark 2

In the proof of Theorem 3.1, in order to obtain (3.12) and thus study the passivity of system (2.1), we assume \(R_{\zeta}>0 \) (\(\zeta=1,\ldots, 9\)), i.e., each \(R_{\zeta}\) is a positive definite matrix.

Remark 3

If the input \(u(t)=(u_{1}(t),u_{2}(t),\ldots ,u_{n}(t))^{T}=(0,0,\ldots,0)^{T}\), then stability conditions in terms of linear matrix inequalities for system (2.1) can be derived directly from Theorems 3.1 and 3.2.

Proof

When \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}=(0,0,\ldots ,0)^{T}\), by using the same arguments as in Theorem 3.1, we obtain the following inequality from (3.12):

$$ \dot{V}\bigl(x(t)\bigr)\leq\eta^{T}(t) \bigl(\Theta+ \widetilde{\Xi}+\widetilde{\Xi }^{T} +\bar{h}PN^{-1}_{2}P^{T}+ \bar{h}QN^{-1}_{2}Q^{T}\bigr)\eta(t)< 0. $$
(3.19)

A consequence of (3.19) is that

$$\begin{aligned} &V\bigl(x(t)\bigr)- \int_{t_{0}}^{t}\eta^{T}(s) \bigl(\Theta+ \widetilde{\Xi}+\widetilde {\Xi}^{T} +\bar{h}PN^{-1}_{2}P^{T}+ \bar{h}QN^{-1}_{2}Q^{T}\bigr)\eta(s)\,\mathrm{d}s \\ &\quad\leq V\bigl(x(t_{0})\bigr)< \infty. \end{aligned}$$
(3.20)
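For reference, Barbalat’s lemma [41] states that if \(g(t)\) is uniformly continuous on \([0,\infty)\) and \(\lim_{t\rightarrow\infty}\int_{0}^{t}g(s)\,\mathrm{d}s\) exists and is finite, then \(g(t)\rightarrow0\) as \(t\rightarrow\infty\); here it is applied to the nonnegative integrand \(-\eta^{T}(s)(\Theta+\widetilde{\Xi}+\widetilde{\Xi}^{T}+\bar{h}PN^{-1}_{2}P^{T}+\bar{h}QN^{-1}_{2}Q^{T})\eta(s)\).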

By using Barbalat’s lemma [41], we obtain

$$x(t)\rightarrow0 \quad\mbox{as }t\rightarrow\infty, $$

and this completes the proof of the global attractivity of the origin of system (2.1). Thus, system (2.1) is stable if the input \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T}=(0,0,\ldots,0)^{T}\). □

4 Illustrative examples

In this section, we give two examples to illustrate the feasibility of the theoretical results in Section 3.

Example 4.1

Consider a two-dimensional memristive neural network model,

$$ \begin{aligned} \dot{x}_{1}(t)={}&{-}x_{1}(t)+w^{1}_{11} \bigl(x_{1}(t) \bigr)f_{1} \bigl(x_{1}(t) \bigr)+w^{1}_{12} \bigl(x_{1}(t) \bigr)f_{2} \bigl(x_{2}(t) \bigr)\\ &{}+w^{2}_{11} \bigl(x_{1}(t) \bigr)f_{1} \bigl(x_{1}(t-0.01) \bigr)+w^{2}_{12} \bigl(x_{1}(t) \bigr)f_{2} \bigl(x_{2}(t-0.01) \bigr)\\ &{}+w^{3}_{11} \bigl(x_{1}(t) \bigr) \int_{t-0.02}^{t}f_{1} \bigl(x_{1}(s) \bigr)\,\mathrm{d}s+w^{3}_{12} \bigl(x_{1}(t) \bigr) \int_{t-0.02}^{t}f_{2} \bigl(x_{2}(s) \bigr)\,\mathrm{d}s+u_{1}(t),\\ \dot{x}_{2}(t)={}&{-}x_{2}(t)+w^{1}_{21} \bigl(x_{2}(t) \bigr)f_{1} \bigl(x_{1}(t) \bigr)+w^{1}_{22} \bigl(x_{2}(t) \bigr)f_{2} \bigl(x_{2}(t) \bigr)\\ &{}+w^{2}_{21} \bigl(x_{2}(t) \bigr)f_{1} \bigl(x_{1}(t-1) \bigr)+w^{2}_{22} \bigl(x_{2}(t) \bigr)f_{2} \bigl(x_{2}(t-1) \bigr)\\ &{}+w^{3}_{21} \bigl(x_{2}(t) \bigr) \int_{t-0.05}^{t}f_{1} \bigl(x_{1}(s) \bigr)\,\mathrm{d}s+w^{3}_{22} \bigl(x_{2}(t) \bigr) \int_{t-0.05}^{t}f_{2} \bigl(x_{2}(s) \bigr)\,\mathrm{d}s+u_{2}(t),\\ y_{1}(t)={}& f_{1} \bigl(x_{1}(t) \bigr),\\ y_{2}(t)={}&f_{2} \bigl(x_{2}(t) \bigr), \end{aligned} $$
(4.1)

for \(t\geq0\), where \(f(\rho)=f_{1}(\rho)=f_{2}(\rho)=\tanh(\rho ), \rho\in\mathbb{R}\) and

$$\begin{aligned}& w^{1}_{11} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.03, & - \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ -0.01, & - \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{1}_{12} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.05, & \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ 0.04, & \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{2}_{11} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.05, & - \frac{\mathrm{d}f_{1}(x_{1}(t-0.01))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ -0.03, & - \frac{\mathrm{d}f_{1}(x_{1}(t-0.01))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{2}_{12} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.07, & \frac{\mathrm{d}f_{2}(x_{2}(t-0.01))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ 0.05, & \frac{\mathrm{d}f_{2}(x_{2}(t-0.01))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{3}_{11} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.05, & - \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ -0.03, & - \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{3}_{12} \bigl(x_{1}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.07, & \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}\leq0,\\ 0.05, & \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{1}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{1}_{21} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.02, & \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ 0.01, & \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{1}_{22} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.01, & - \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ -0.005, & - \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{2}_{21} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.07, & \frac{\mathrm{d}f_{1}(x_{1}(t-1))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ 0.05, & \frac{\mathrm{d}f_{1}(x_{1}(t-1))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{2}_{22} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.04, & - \frac{\mathrm{d}f_{2}(x_{2}(t-1))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ -0.03, & - \frac{\mathrm{d}f_{2}(x_{2}(t-1))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . 
\\& w^{3}_{21} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.07, & \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ 0.05, & \frac{\mathrm{d}f_{1}(x_{1}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0, \end{array}\displaystyle \right . \\& w^{3}_{22} \bigl(x_{2}(t) \bigr)= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -0.04, & - \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}\leq0,\\ -0.03, & - \frac{\mathrm{d}f_{2}(x_{2}(t))}{\mathrm{d}t}- \frac{\mathrm{d}x_{2}(t)}{\mathrm{d}t}> 0. \end{array}\displaystyle \right . \end{aligned}$$

In system (4.1), \(\mu=0,\bar{h}=1,\bar{r}=0.05\),

$$\widetilde{W}^{1}= \begin{bmatrix} 0.03 & 0.05\\ 0.02 & 0.01 \end{bmatrix}, \qquad\widetilde{W}^{2}= \begin{bmatrix} 0.05 & 0.07\\ 0.07 & 0.04 \end{bmatrix}, \qquad\widetilde{W}^{3}= \begin{bmatrix} 0.05 & 0.07\\ 0.07 & 0.04 \end{bmatrix}. $$

Solving the linear matrix inequalities (3.1) and (3.2) with the Matlab LMI Toolbox, we obtain a feasible solution:

$$\begin{aligned}& J_{1}= \begin{bmatrix} 39.7553 & -0.1795\\ -0.1795 & 39.9743 \end{bmatrix}, \qquad J_{2}= \begin{bmatrix} -2.5325 & -0.2321\\ -0.2321 & -2.4619 \end{bmatrix},\\& J_{3}= \begin{bmatrix} 10.5831 & 0.0069\\ 0.0069 & 10.6368 \end{bmatrix}, \qquad M_{1}= \begin{bmatrix} 13.3668 & -0.1977\\ -0.1977 & 13.5422 \end{bmatrix},\\& M_{2}= \begin{bmatrix} 11.5221 & -0.2435\\ -0.2435 & 11.7298 \end{bmatrix},\qquad M_{3}= \begin{bmatrix} 9.2709 & -2.8768\\ -2.8768 & 10.3387 \end{bmatrix}, \\& N_{1}= \begin{bmatrix} 16.1051 & -0.2892\\ -0.2892 & 16.3336 \end{bmatrix},\qquad N_{2}= \begin{bmatrix} 22.0759 & 0.2187\\ 0.2187 & 22.0708 \end{bmatrix}, \\& N_{3}= \begin{bmatrix} 23.5298 & 1.4774\\ 1.4774 & 23.3309 \end{bmatrix}, \qquad P_{1}= \begin{bmatrix} -5.0212 & 0.1521\\ 0.1521 & -5.0874 \end{bmatrix},\\& P_{2}= \begin{bmatrix} 1.7440 & -0.1443\\ -0.1443 & 1.8064 \end{bmatrix},\qquad P_{3}= \begin{bmatrix} 2.5659 & -0.0524\\ -0.0524 & 2.5541 \end{bmatrix}, \\& P_{4}= \begin{bmatrix} -2.0337 & -0.1195\\ -0.1195 & -1.9522 \end{bmatrix},\qquad P_{5}= \begin{bmatrix} -5.2088 & -0.1242\\ -0.1242 & -5.2153 \end{bmatrix},\\& P_{6}= \begin{bmatrix} -1.6634 & -1.4740\\ -1.4740 & -1.3179 \end{bmatrix}, \qquad P_{7}= \begin{bmatrix} 2.5353 & 0.2302\\ 0.2302 & 2.6333 \end{bmatrix}, \\& P_{8}= \begin{bmatrix} 2.8426 & -0.3589\\ -0.3589 & 2.9805 \end{bmatrix},\qquad P_{9}= \begin{bmatrix} -0.0203 & -0.5475\\ -0.5475 & -0.0603 \end{bmatrix}, \\& Q_{1}= \begin{bmatrix} 1.2435 & 0.0641\\ 0.0641 & 1.1864 \end{bmatrix}, \qquad Q_{2}= \begin{bmatrix} -3.9316 & 0.0584\\ 0.0584 & -3.9362 \end{bmatrix},\\& Q_{3}= \begin{bmatrix} -2.5472 & 0.0937\\ 0.0937 & -2.5964 \end{bmatrix}, \qquad Q_{4}= \begin{bmatrix} 1.7610 & 0.0784\\ 0.0784 & 1.6908 \end{bmatrix},\\& Q_{5}= \begin{bmatrix} -3.4298 & 0.0244\\ 0.0244 & -3.4416 \end{bmatrix},\qquad Q_{6}= \begin{bmatrix} 0.1114 & 1.4434\\ 1.4434 & -0.2265 \end{bmatrix}, \\& Q_{7}= \begin{bmatrix} -4.1748 & -0.2125\\ -0.2125 & -4.2711 \end{bmatrix},\qquad Q_{8}= \begin{bmatrix} -2.7885 & 0.3045\\ 0.3045 & -2.9166 \end{bmatrix},\\& Q_{9}= \begin{bmatrix} 0.2770 & 0.7041\\ 0.7041 & 0.2438 \end{bmatrix}, \qquad R_{1}= \begin{bmatrix} 21.5155 & -0.5682\\ -0.5682 & 21.7773 \end{bmatrix},\\& R_{2}= \begin{bmatrix} 7.6220 & -0.3787\\ -0.3787 & 7.7055 \end{bmatrix},\qquad R_{3}= \begin{bmatrix} 10.0304 & -0.2798\\ -0.2798 & 10.0726 \end{bmatrix}, \\& R_{4}= \begin{bmatrix} 19.6989 & -0.1794\\ -0.1794 & 19.7887 \end{bmatrix},\qquad R_{5}= \begin{bmatrix} 7.3729 & -0.3152\\ -0.3152 & 7.4458 \end{bmatrix},\\& R_{6}= \begin{bmatrix} 44.6899 & -3.1224\\ -3.1224 & 45.2524 \end{bmatrix}, \qquad R_{7}= \begin{bmatrix} 6.3442 & 1.3620\\ 1.3620 & 6.3481 \end{bmatrix}, \\& R_{8}= \begin{bmatrix} 7.3921 & -0.0597\\ -0.0597 & 7.4681 \end{bmatrix},\qquad R_{9}= \begin{bmatrix} 11.3592 & -1.5493\\ -1.5493 & 11.5804 \end{bmatrix}, \\& D=\operatorname{diag} \{44.3154,44.3154 \},\qquad\Upsilon=\operatorname{diag} \{ 9.7112 , 9.7112 \},\\& L=\operatorname{diag} \{0.9734,0.9734 \}, \qquad K=\operatorname{diag} \{19.6302,19.6302 \} \quad \mbox{and} \quad \gamma= 49.6694. \end{aligned}$$

By applying Theorem 3.1, the passivity can be achieved.
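The feasibility test itself is not tied to Matlab. As a minimal sketch (our own illustration, not the authors' script), LMI (3.1) can be posed in Python with CVXPY; LMI (3.2) is assembled from the blocks of Θ and Ξ̃ with cp.bmat in exactly the same way:

```python
import cvxpy as cp
import numpy as np

n = 2  # network dimension in Example 4.1

# LMI (3.1): [[J1, J2], [J2^T, J3]] > 0.  Declaring the whole block matrix
# as one symmetric variable and slicing out J1, J2, J3 keeps the constraint
# convex and lets CVXPY verify symmetry automatically.
X = cp.Variable((2 * n, 2 * n), symmetric=True)
J1, J2, J3 = X[:n, :n], X[:n, n:], X[n:, n:]

eps = 1e-6  # small margin enforcing strict positive definiteness
constraints = [X >> eps * np.eye(2 * n)]

# LMI (3.2) would be appended analogously: declare M1..M3, N1..N3, the
# positive diagonal D, Upsilon, L, the stacked P, Q, R, and gamma, build
# Theta and Xi_tilde block by block with cp.bmat, and add a constraint
#   cp.bmat([[Theta + Xi + Xi.T, sqrt_h * P, sqrt_h * Q],
#            [sqrt_h * P.T, -N2, Z], [sqrt_h * Q.T, Z, -N2]]) << -eps * I
# with Z a zero block of matching size.

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)  # any SDP-capable solver works
print(prob.status, "\nJ1 =\n", J1.value)  # 'optimal' means feasible
```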

Figures 2 and 3 show the state curves of system (4.1) with input \(u(t)=(2+\sin (t), 2-\sin (t))^{T}\) and \(u(t)=(0,0)^{T}\), respectively. System (4.1) is a state-dependent switched system. From Figure 2, we see that system (4.1) remains internally stable with input \(u(t)=(2+\sin (t), 2-\sin (t))^{T}\). The product of the input and output can be regarded as the energy supply for the system’s passivity, which reflects the energy-attenuation character of system (4.1); that is, the passive system (4.1) does not produce energy. From Figure 3, we see that system (4.1) is stable with input \(u(t)=(0,0)^{T}\). By contrast, passivity can be viewed as a higher-level abstraction of stability.

Figure 2

The state curves of system ( 4.1 ) with input \(\pmb{u(t)=(2+\sin (t),2-\sin (t))^{T}}\) .

Figure 3

The state curves of system ( 4.1 ) with input \(\pmb{u(t)=(0,0)^{T}}\) .

Example 4.2

Consider a two-dimensional memristive neural network model

$$\begin{aligned} &\dot{x}_{1}(t)= -x_{1}(t)+w^{1}_{11} \bigl(x_{1}(t) \bigr)f_{1} \bigl(x_{1}(t) \bigr)+w^{1}_{12} \bigl(x_{1}(t) \bigr)f_{2} \bigl(x_{2}(t) \bigr) \\ &\hphantom{\dot{x}_{1}(t)=}{}+w^{2}_{11} \bigl(x_{1}(t) \bigr)f_{1} \bigl(x_{1}(t-0.01) \bigr)+w^{2}_{12} \bigl(x_{1}(t) \bigr)f_{2} \bigl(x_{2}(t-0.01) \bigr) \\ &\hphantom{\dot{x}_{1}(t)=}{}+w^{3}_{11} \bigl(x_{1}(t) \bigr) \int_{t-0.02}^{t}f_{1} \bigl(x_{1}(s) \bigr)\,\mathrm{d}s+w^{3}_{12} \bigl(x_{1}(t) \bigr) \int_{t-0.02}^{t}f_{2} \bigl(x_{2}(s) \bigr)\,\mathrm{d}s+u_{1}(t), \\ & \begin{aligned}[b] \dot{x}_{2}(t)={}&{-}x_{2}(t)+w^{1}_{21} \bigl(x_{2}(t) \bigr)f_{1} \bigl(x_{1}(t) \bigr)+w^{1}_{22} \bigl(x_{2}(t) \bigr)f_{2} \bigl(x_{2}(t) \bigr) \\ &{}+w^{2}_{21} \bigl(x_{2}(t) \bigr)f_{1} \bigl(x_{1}(t-1) \bigr)+w^{2}_{22} \bigl(x_{2}(t) \bigr)f_{2} \bigl(x_{2}(t-1) \bigr) \\ &{}+w^{3}_{21} \bigl(x_{2}(t) \bigr) \int_{t-0.05}^{t}f_{1} \bigl(x_{1}(s) \bigr)\,\mathrm{d}s+w^{3}_{22} \bigl(x_{2}(t) \bigr) \int_{t-0.05}^{t}f_{2} \bigl(x_{2}(s) \bigr)\,\mathrm{d}s+u_{2}(t), \end{aligned}\\ &y_{1}(t)=f_{1} \bigl(x_{1}(t) \bigr), \\ &y_{2}(t)=f_{2} \bigl(x_{2}(t) \bigr), \end{aligned}$$
(4.2)

for \(t\geq0\), where \(f(\rho)=f_{1}(\rho)=f_{2}(\rho)=\tanh(\rho ), \rho\in\mathbb{R}\) and

$$\begin{aligned}& w^{1}_{11} \bigl(x_{1}(t) \bigr)=0.07\sin \bigl(x_{1}(t)\bigr),\qquad w^{1}_{12} \bigl(x_{1}(t) \bigr)=0.06\sin \bigl(x_{1}(t)\bigr),\\& w^{2}_{11} \bigl(x_{1}(t) \bigr)=0.05\sin \bigl(x_{1}(t)\bigr), \qquad w^{2}_{12} \bigl(x_{1}(t) \bigr)=0.04\sin \bigl(x_{1}(t)\bigr),\\& w^{3}_{11} \bigl(x_{1}(t) \bigr)=0.03\sin \bigl(x_{1}(t)\bigr), \qquad w^{3}_{12} \bigl(x_{1}(t) \bigr)=0.02\sin \bigl(x_{1}(t)\bigr), \\& w^{1}_{21} \bigl(x_{2}(t) \bigr)=0.02\cos \bigl(x_{2}(t)\bigr), \qquad w^{1}_{22} \bigl(x_{2}(t) \bigr)=0.03\cos \bigl(x_{2}(t)\bigr),\\& w^{2}_{21} \bigl(x_{2}(t) \bigr)=0.04\cos \bigl(x_{2}(t)\bigr), \qquad w^{2}_{22} \bigl(x_{2}(t) \bigr)=0.05\cos \bigl(x_{2}(t)\bigr),\\& w^{3}_{21} \bigl(x_{2}(t) \bigr)=0.06\cos \bigl(x_{2}(t)\bigr), \qquad w^{3}_{22} \bigl(x_{2}(t) \bigr)=0.07\cos \bigl(x_{2}(t)\bigr). \end{aligned}$$

In system (4.2), \(\mu=0,\bar{h}=1,\bar{r}=0.05\),

$$\widetilde{\Lambda}^{1}= \begin{bmatrix} 0.07 & 0.06\\ 0.02 & 0.03 \end{bmatrix},\qquad\widetilde{ \Lambda}^{2}= \begin{bmatrix} 0.05 & 0.04\\ 0.04 & 0.05 \end{bmatrix},\qquad\widetilde{ \Lambda}^{3}= \begin{bmatrix} 0.03 & 0.02\\ 0.06 & 0.07 \end{bmatrix}. $$
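These bound matrices are read off directly from the memductance functions above, since \(|\sin|,|\cos|\leq1\). A small sketch (illustrative only, reusable with the simulation sketch of Section 2) wiring the weights of (4.2) into callables and recovering \(\widetilde{\Lambda}^{1}\):

```python
import numpy as np

def W1(x):  # w^1_ij of (4.2); row i depends on the state x_i
    return np.array([[0.07 * np.sin(x[0]), 0.06 * np.sin(x[0])],
                     [0.02 * np.cos(x[1]), 0.03 * np.cos(x[1])]])

def W2(x):
    return np.array([[0.05 * np.sin(x[0]), 0.04 * np.sin(x[0])],
                     [0.04 * np.cos(x[1]), 0.05 * np.cos(x[1])]])

def W3(x):
    return np.array([[0.03 * np.sin(x[0]), 0.02 * np.sin(x[0])],
                     [0.06 * np.cos(x[1]), 0.07 * np.cos(x[1])]])

# Entrywise bounds |sin|, |cos| <= 1 recover the matrix used above:
Lam1 = np.abs(W1(np.array([np.pi / 2, 0.0])))  # [[0.07, 0.06], [0.02, 0.03]]
```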

Solving the linear matrix inequalities (3.14) and (3.15) with the Matlab LMI Toolbox, we obtain a feasible solution:

$$\begin{aligned}& J_{1}= \begin{bmatrix} 371.1466 & -15.2114\\ -15.2114 & 382.7361 \end{bmatrix}, \qquad J_{2}= \begin{bmatrix} -24.2902 & -0.6306\\ -0.6306 & -22.3549 \end{bmatrix},\\& J_{3}= \begin{bmatrix} 83.9129 & -0.9747\\ -0.9747 & 86.4453 \end{bmatrix}, \qquad M_{1}= \begin{bmatrix} 126.4090 & -4.6637\\ -4.6637 & 133.2271 \end{bmatrix},\\& M_{2}= \begin{bmatrix} 121.1250 & -9.2107\\ -9.2107 & 128.1586 \end{bmatrix},\qquad M_{3}= \begin{bmatrix} 63.8557 & -34.3725\\ -34.3725 & 84.6962 \end{bmatrix}, \\& N_{1}= \begin{bmatrix} 135.4674 & -6.0634\\ -6.0634 & 144.1690 \end{bmatrix}, \qquad N_{2}= \begin{bmatrix} 195.6531 & 6.4820\\ 6.4820 & 195.3418 \end{bmatrix},\\& N_{3}= \begin{bmatrix} 220.5610 & 34.2490\\ 34.2490 & 219.5530 \end{bmatrix}, \qquad P_{1}= \begin{bmatrix} -6.1148 & 10.9917\\ 10.9917 & -9.8206 \end{bmatrix},\\& P_{2}= \begin{bmatrix} -2.9314 & -5.5211\\ -5.5211 & -0.5194 \end{bmatrix},\qquad P_{3}= \begin{bmatrix} 9.8546 & 3.7746\\ 3.7746 & 7.6906 \end{bmatrix}, \\& P_{4}= \begin{bmatrix} 11.1276 & 8.3435\\ 8.3435 & 11.1267 \end{bmatrix}, \qquad P_{5}= \begin{bmatrix} -57.7887 & 0.2996\\ 0.2996 & -59.0994 \end{bmatrix},\\& P_{6}= \begin{bmatrix} -14.8188 & -11.3098\\ -11.3098 & -10.3271 \end{bmatrix}, \qquad P_{7}= \begin{bmatrix} 18.3369 & 14.4109\\ 14.4109 & 17.5757 \end{bmatrix},\\& P_{8}= \begin{bmatrix} -5.3001 & -5.4611\\ -5.4611 & -2.5947 \end{bmatrix},\qquad P_{9}= \begin{bmatrix} -29.6055 & -15.2437\\ -15.2437 & -30.1532 \end{bmatrix}, \\& Q_{1}= \begin{bmatrix} -22.9982 & -19.1540\\ -19.1540 & -19.7656 \end{bmatrix}, \qquad Q_{2}= \begin{bmatrix} -13.3155 & 1.9207\\ 1.9207 & -13.3220 \end{bmatrix},\\& Q_{3}= \begin{bmatrix} -8.1153 & -4.4669\\ -4.4669 & -7.3248 \end{bmatrix}, \qquad Q_{4}= \begin{bmatrix} -10.0473 & -13.0968\\ -13.0968 & -8.2307 \end{bmatrix},\\& Q_{5}= \begin{bmatrix} -10.8245 & -1.2061\\ -1.2061 & -10.4430 \end{bmatrix},\qquad Q_{6}= \begin{bmatrix} 0.1971 & 11.2830\\ 11.2830 & -3.7068 \end{bmatrix}, \\& Q_{7}= \begin{bmatrix} -47.3183 & -14.1380\\ -14.1380 & -46.5001 \end{bmatrix},\qquad Q_{8}= \begin{bmatrix} 4.2905 & 4.1752\\ 4.1752 & 1.9804 \end{bmatrix},\\& Q_{9}= \begin{bmatrix} 25.1237 & 27.3041\\ 27.3041 & 19.8900 \end{bmatrix}, \qquad R_{1}= \begin{bmatrix} 238.9001 & 9.8372\\ 9.8372 & 245.0638 \end{bmatrix},\\& R_{2}= \begin{bmatrix} -22.0068 & -29.9246\\ -29.9246 & -14.0065 \end{bmatrix},\qquad R_{3}= \begin{bmatrix} 11.5901 & 0.4819\\ 0.4819 & 10.3971 \end{bmatrix}, \\& R_{4}= \begin{bmatrix} 193.8927 & 15.6661\\ 15.6661 & 193.2969 \end{bmatrix},\qquad R_{5}= \begin{bmatrix} 3.6074 & -0.6677\\ -0.6677 & 4.6803 \end{bmatrix},\\& R_{6}= \begin{bmatrix} 450.8130 & -24.9075\\ -24.9075 & 454.1943 \end{bmatrix}, \qquad R_{7}= \begin{bmatrix} 71.3226 & 62.7422\\ 62.7422 & 63.2145 \end{bmatrix},\\& R_{8}= \begin{bmatrix} -1.6244 & 0.3223\\ 0.3223 & 0.0841 \end{bmatrix},\qquad R_{9}= \begin{bmatrix} 137.8729 & -39.0824\\ -39.0824 & 144.6838 \end{bmatrix}, \\& D=\operatorname{diag} \{437.6439,437.6439 \},\qquad\Upsilon=\operatorname{diag} \{ 95.6349,95.6349 \},\\& L=\operatorname{diag} \{17.1388,17.1388 \}, \qquad K=\operatorname{diag} \{185.0730,185.0730 \} \quad \mbox{and} \quad \gamma=506.7072. \end{aligned}$$

By applying Theorem 3.2, the passivity can be achieved.

Figures 4 and 5 show the state curves of system (4.2) with input \(u(t)=(1+\sin (t), 1-\sin (t))^{T}\) and \(u(t)=(0,0)^{T}\), respectively. System (4.2) is a state-dependent continuous system. From Figure 4, we see that system (4.2) remains internally stable with input \(u(t)=(1+\sin (t), 1-\sin (t))^{T}\). The product of the input and output can be regarded as the energy supply for the system’s passivity, which reflects the energy-attenuation character of system (4.2); that is, the passive system (4.2) does not produce energy. From Figure 5, we see that system (4.2) is stable with input \(u(t)=(0,0)^{T}\). By contrast, passivity can be viewed as a higher-level abstraction of stability.

Figure 4

The state curves of system ( 4.2 ) with input \(\pmb{u(t)=(1+\sin (t),1-\sin (t))^{T}}\) .

Figure 5

The state curves of system ( 4.2 ) with input \(\pmb{u(t)=(0,0)^{T}}\) .

5 Concluding remarks

In this paper, the circuit design and passivity problems have been discussed for a class of memristor-based neural networks with mixed time-varying delays and different state-dependent memductance functions. The main contributions of this paper lie in the following aspects: (i) A delayed memristive neurodynamic system is established for the designed circuit. Different from the models of [9–11, 42], system (2.1) contains not only a transmission delay but also a distributed time delay, which is caused by the spatial extent of neural networks owing to the presence of parallel pathways or a distribution of propagation delays over a period of time; i.e., the model considered in this paper is more realistic. (ii) By employing the theories of differential inclusions and set-valued maps, delay-dependent criteria in terms of linear matrix inequalities are obtained for the passivity of the memristive neural networks. (iii) The conditions are expressed in terms of linear matrix inequalities, which can easily be checked via the Matlab LMI Toolbox.

References

  1. Chua, LO: Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507-519 (1971)
  2. Strukov, DB, Snider, GS, Stewart, DR, Williams, RS: The missing memristor found. Nature 453, 80-83 (2008)
  3. Pershin, YV, Di Ventra, M: Experimental demonstration of associative memory with memristive neural networks. Neural Netw. 23(7), 881-886 (2010)
  4. Li, R, Cao, J: Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl. Math. Comput. 278, 54-69 (2016)
  5. Wang, FZ, Helian, N, Wu, SN, Yang, X, Guo, YK, Lim, G, Rashid, MM: Delayed switching applied to memristor neural networks. J. Appl. Phys. 111(7), 07E317 (2012)
  6. Wang, L, Li, H, Duan, S, Huang, T, Wang, H: Pavlov associative memory in a memristive neural network and its circuit implementation. Neurocomputing 171, 23-29 (2016)
  7. Wang, W, Li, L, Peng, H, Kurths, J, Xiao, J, Yang, Y: Anti-synchronization control of memristive neural networks with multiple proportional delays. Neural Process. Lett. 43, 269-283 (2016)
  8. Wang, W, Li, L, Peng, H, Xiao, J, Yang, Y: Synchronization control of memristor-based recurrent neural networks with perturbations. Neural Netw. 53, 8-14 (2014)
  9. Wu, AL, Zeng, ZG: Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 23(12), 1919-1929 (2012)
  10. Wu, AL, Zeng, ZG: Exponential passivity of memristive neural networks with time delays. Neural Netw. 49, 11-18 (2014)
  11. Wu, AL, Zeng, ZG, Fu, CJ: Dynamic analysis of memristive neural system with unbounded time-varying delays. J. Franklin Inst. 351, 3032-3041 (2014)
  12. Yang, S, Li, C, Huang, T: Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control. Neural Netw. 75, 162-172 (2016)
  13. Zhang, GD, Shen, Y, Wang, LM: Global anti-synchronization of a class of chaotic memristive neural networks with time-varying delays. Neural Netw. 46, 1-8 (2013)
  14. Zhang, ZX, Mou, SS, Lam, J, Gao, HJ: New passivity criteria for neural networks with time-varying delay. Neural Netw. 22(7), 864-868 (2009)
  15. Zhu, S, Shen, Y, Chen, GC: Exponential passivity of neural networks with time-varying delay and uncertainty. Phys. Lett. A 375(2), 136-142 (2010)
  16. Wu, AL, Zeng, ZG: Passivity analysis of memristive neural networks with different memductance functions. Commun. Nonlinear Sci. Numer. Simul. 19, 274-285 (2014)
  17. Balasubramaniam, P, Nagamani, G: Global robust passivity analysis for stochastic fuzzy interval neural networks with time-varying delays. Expert Syst. Appl. 39(1), 732-742 (2012)
  18. Li, HY, Lam, J, Cheung, KC: Passivity criteria for continuous-time neural networks with mixed time-varying delays. Appl. Math. Comput. 218, 11062-11074 (2012)
  19. Zhu, J, Zhang, QL, Yuan, ZH: Delay-dependent passivity criterion for discrete-time delayed standard neural network model. Neurocomputing 73(7-9), 1384-1393 (2010)
  20. Zhu, S, Shen, Y: Passivity analysis of stochastic delayed neural networks with Markovian switching. Neurocomputing 74(10), 1754-1761 (2011)
  21. Balasubramaniam, P, Nagamani, G: Passivity analysis of neural networks with Markovian jumping parameters and interval time-varying delays. Nonlinear Anal. Hybrid Syst. 4(4), 853-864 (2010)
  22. Balasubramaniam, P, Nagamani, G: Passivity analysis for uncertain stochastic neural networks with discrete interval and distributed time-varying delays. J. Syst. Eng. Electron. 21(4), 688-697 (2010)
  23. Balasubramaniam, P, Nagamani, G: A delay decomposition approach to delay-dependent passivity analysis for interval neural networks with time-varying delay. Neurocomputing 74(10), 1646-1653 (2011)
  24. Song, QK, Cao, JD: Robust stability in Cohen-Grossberg neural network with both time-varying and distributed delays. Neural Process. Lett. 27(2), 179-196 (2008)
  25. Hua, CC, Yang, X, Yan, J, Guan, XP: New stability criteria for neural networks with time-varying delays. Appl. Math. Comput. 218, 5035-5042 (2012)
  26. Yang, R, Gao, H, Lam, J, Shi, P: New stability criteria for neural networks with distributed and probabilistic delays. Circuits Syst. Signal Process. 28(4), 505-522 (2009)
  27. Wu, L, Zheng, WX: Passivity-based sliding mode control of uncertain singular time-delay systems. Automatica 45(9), 2120-2127 (2009)
  28. Ma, C, Zeng, Q, Zhang, L, Zhu, Y: Passivity and passification for Markov jump genetic regulatory networks with time-varying delays. Neurocomputing 136, 321-326 (2014)
  29. Lee, T: Lagrangian modeling and passivity-based control of three-phase AC/DC voltage-source converters. IEEE Trans. Ind. Electron. 51(4), 892-902 (2004)
  30. Wang, JH: Passivity-Based Control Theory and Its Applications. Publishing House of Electronics Industry, Beijing (2010)
  31. Wang, W, Li, L, Peng, H, Kurths, J, Xiao, J, Yang, Y: Finite-time anti-synchronization control of memristive neural networks with stochastic perturbations. Neural Process. Lett. 43, 49-63 (2016)
  32. Wen, S, Bao, G, Zeng, Z, Chen, Y, Huang, T: Global exponential synchronization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 195-203 (2013)
  33. Wen, S, Huang, T, Zeng, Z, Chen, Y, Li, P: Circuit design and exponential stabilization of memristive neural networks. Neural Netw. 63, 48-56 (2014)
  34. Wen, S, Zeng, Z, Huang, T, Yu, X: Noise cancellation of memristive neural networks. Neural Netw. 60, 74-83 (2014)
  35. Wu, H, Han, X, Wang, L, Wang, Y, Fang, B: Exponential passivity of memristive neural networks with mixed time-varying delays. J. Franklin Inst. 353, 688-712 (2016)
  36. Aubin, J, Frankowska, H: Set-Valued Analysis. Birkhäuser, Boston (2009)
  37. Filippov, A: Differential Equations with Discontinuous Right Hand Sides. Kluwer Academic, Boston (1988)
  38. Li, C, Liao, X: Passivity analysis of neural networks with time delay. IEEE Trans. Circuits Syst. II, Express Briefs 52(8), 471-475 (2005)
  39. Gu, K, Kharitonov, V, Chen, J: Stability of Time-Delay Systems. Birkhäuser, Basel (2003)
  40. Shu, Z, Lam, J: Exponential estimates and stabilization of uncertain singular systems with discrete and distributed delays. Int. J. Control 81(6), 865-882 (2008)
  41. Gopalsamy, K: Stability and Oscillations in Delay Differential Equations of Population Dynamics. Kluwer Academic, Dordrecht (1992)
  42. Guo, Z, Wang, J, Yan, Z: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158-172 (2013)

Acknowledgements

The authors would like to thank all the anonymous reviewers and the editors for their helpful advice and hard work. This work was supported by NNSF of China (11371368, 11071254) and HEBNSF of China (A2014506015).

Author information

Corresponding author

Correspondence to Jian Liu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JL carried out the theoretical calculation, numerical simulation, and drafting of the manuscript. RX participated in the theoretical calculation and helped to draft the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Liu, J., Xu, R. Passivity analysis of memristive neural networks with mixed time-varying delays and different state-dependent memductance functions. Adv Differ Equ 2016, 245 (2016). https://doi.org/10.1186/s13662-016-0971-7

