

Global asymptotic stability of piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays

Abstract

In this paper, the problem of global asymptotic stability analysis is addressed for piecewise homogeneous Markovian jump BAM neural networks with mixed time delays. By constructing a suitable Lyapunov functional that accounts for mode-dependent discrete delays and applying the linear matrix inequality (LMI) method, a novel sufficient condition is obtained that guarantees the stability of the considered system. A numerical example is provided to demonstrate the feasibility and effectiveness of the proposed results.

1 Introduction

As is well known, bidirectional associative memory (BAM) neural networks were originally introduced by Kosko [1–3]. They are a class of two-layer heteroassociative networks composed of neurons arranged in two layers, the U-layer and the V-layer. Generally speaking, the neurons in one layer are fully interconnected with the neurons in the other layer, while there are no interconnections among neurons within the same layer. In addition, the addressable memories or patterns of BAM neural networks can be stored and retrieved through a two-way associative search. For these reasons, BAM neural networks have been widely studied both in theory and in applications; see [4–13]. It is therefore meaningful and important to study BAM neural networks.

Recently, a great deal of research has been devoted to the stability analysis of dynamical systems [14–25]. It is worth noting that Markovian jump systems have received increasing attention in the mathematics and control research communities, so the study of Markovian jumps is of great significance and value both theoretically and practically. Much work has been done on Markov processes and Markov chains in the literature, and the issues of stability and control have been well investigated; see, for example, [14–20] and the references therein. The stability analysis problem was investigated in [17] for stochastic high-order Markovian jumping neural networks with mixed time delays. In [18], the authors made the first attempt to deal with the \(H_{\infty}\) estimation problem for discrete-time piecewise homogeneous Markov jump linear systems, where the time-varying transition probabilities (TPs) were considered to be finite piecewise homogeneous, with variations of two types: arbitrary variations and stochastic variations. The \(H_{\infty}\) filtering problem for piecewise homogeneous Markovian jump nonlinear systems was studied in [19], where a mode-dependent filter was obtained. Very recently, the stochastic stability of piecewise homogeneous Markovian jump neural networks with mixed time delays was investigated in [20], but the time-varying delays in [20] are independent of the Markovian jump mode. To the best of our knowledge, no results have been reported for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time delays.

This constitutes the motivation for the present research. In this paper, we deal with the stability problem for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time delays. By employing the Lyapunov method with mode-dependent discrete delays and some inequality techniques, sufficient conditions are derived for the global asymptotic stability in the mean square of such networks. An illustrative example is also provided to show the effectiveness of the obtained results.

2 Model description and preliminaries

In this paper, we consider BAM neural networks with discrete and distributed time-varying delays described by

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt}=-Cx(t)+A_{1}f(y(t))+A_{2}f(y(t-\tau_{1}(t)))+A_{3}\int _{t-d_{1}(t)}^{t}f(y(s))\,ds,\\ \frac{dy(t)}{dt}=-Dy(t)+B_{1}g(x(t))+B_{2}g(x(t-\tau_{2}(t)))+B_{3}\int _{t-d_{2}(t)}^{t}g(x(s))\,ds, \end{array}\displaystyle \right . $$
(1)

with initial values

$$ \left \{ \textstyle\begin{array}{@{}l} x_{i}(s)=\phi_{1i}(s),\quad s\in[-\mu,0], i=1,2,\ldots,n,\\ y_{j}(s)=\phi_{2j}(s),\quad s\in[-\mu,0], j=1,2,\ldots,n, \end{array}\displaystyle \right . $$

where \(x(t)=[x_{1}(t),x_{2}(t),\ldots,x_{n}(t)]^{\intercal}\) and \(y(t)=[y_{1}(t),y_{2}(t),\ldots,y_{n}(t)]^{\intercal}\) are the state vectors, n is the number of units in the neural networks, \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) and \(D=\operatorname{diag}(d_{1},d_{2},\ldots,d_{n})\) are diagonal matrices with positive entries \(c_{i}>0\) and \(d_{i}>0\); \(A_{1}=(a_{ij}^{(1)})_{n\times n}\) and \(B_{1}=(b_{ij}^{(1)})_{n\times n}\) are the synaptic connection matrices, \(A_{2}=(a_{ij}^{(2)})_{n\times n}\) and \(B_{2}=(b_{ij}^{(2)})_{n\times n}\) are the discretely delayed connection weight matrices, \(A_{3}=(a_{ij}^{(3)})_{n\times n}\) and \(B_{3}=(b_{ij}^{(3)})_{n\times n}\) are the distributively delayed connection weight matrices, \(f(y)=(f_{1}(y_{1}),f_{2}(y_{2}),\ldots,f_{n}(y_{n}))^{\intercal}\) and \(g(x)=(g_{1}(x_{1}),g_{2}(x_{2}),\ldots,g_{n}(x_{n}))^{\intercal}\) are the activation functions, and \(\tau_{i}(t)\) and \(d_{i}(t)\) (\(i=1,2\)) are the discrete and distributed time-varying delays, respectively, satisfying \(0\leq d_{i}(t)\leq d_{i}\), \(0\leq\dot{d}_{i}(t)\leq d_{iu}\), \(0\leq\tau_{i}(t)\leq\tau_{i}\), \(0\leq\dot{\tau}_{i}(t)\leq \tau_{iu}\) (\(i=1,2\)). The initial condition is \(\phi=(\phi_{1}^{\intercal},\phi_{2}^{\intercal})^{\intercal}\in C_{\digamma_{0}}^{2}([-\mu,0],\Re^{2n})\), where \(C_{\digamma_{0}}^{2}([-\mu,0],\Re^{2n})\) denotes the family of all bounded, \(\digamma_{0}\)-measurable, \(C([-\mu,0],\Re^{2n})\)-valued random variables satisfying \(\|\phi\|=\sup_{-\mu\leq s\leq0}E|\phi(s)|^{2}<\infty\), E denotes the mathematical expectation, and \(\mu\triangleq\max(d,\tau)\) with \(d\triangleq\max(d_{1},d_{2})\), \(\tau\triangleq\max(\tau_{1},\tau_{2})\).

The activation functions \(g_{i}(\cdot)\) and \(f_{i}(\cdot)\) (\(i=1,2,\ldots,n\)) are assumed to be nondecreasing, bounded, and globally Lipschitz; that is,

$$\begin{aligned}& 0\leq{\frac{g_{j}(\xi_{1})-g_{j}(\xi_{2})}{\xi_{1}-\xi_{2}}}\leq l_{j},\quad g_{j}(0)=0, \end{aligned}$$
(2)
$$\begin{aligned}& 0\leq{\frac{f_{j}(\xi_{1})-f_{j}(\xi_{2})}{\xi_{1}-\xi_{2}}}\leq m_{j},\quad f_{j}(0)=0, \end{aligned}$$
(3)

for all \(\xi_{1},\xi_{2}\in\Re\), \(\xi_{1}\neq\xi_{2}\) (\(j=1,2,\ldots,n\)), where \(l_{j}>0\) and \(m_{j}>0\) (\(j=1,2,\ldots,n\)). Denote \(L=\operatorname{diag}(l_{1},l_{2},\ldots,l_{n})\) and \(M=\operatorname{diag}(m_{1},m_{2},\ldots,m_{n})\).
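As a quick illustration (ours, not part of the original paper), the standard choice \(f_{j}=\tanh\) satisfies (3) with \(m_{j}=1\), which can be spot-checked numerically with NumPy:

```python
import numpy as np

# Spot-check the sector condition (3) for f = tanh with m_j = 1:
# 0 <= (f(xi1) - f(xi2)) / (xi1 - xi2) <= 1 for xi1 != xi2, and f(0) = 0.
rng = np.random.default_rng(0)
xi1 = rng.uniform(-5.0, 5.0, 10_000)
xi2 = rng.uniform(-5.0, 5.0, 10_000)
mask = xi1 != xi2
slope = (np.tanh(xi1[mask]) - np.tanh(xi2[mask])) / (xi1[mask] - xi2[mask])
assert slope.min() >= 0.0 and slope.max() <= 1.0 and np.tanh(0.0) == 0.0
print("tanh satisfies (3) with m_j = 1 on all sampled pairs")
```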

Now, based on BAM neural networks (1) and fixing a probability space \((\Omega,\digamma,\mathcal{P})\), we introduce the following Markovian jump BAM neural networks with mixed time delays:

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt}=-C(r_{t})x(t)+A_{1}(r_{t})f(y(t))+A_{2}(r_{t})f(y(t-\tau _{1}(t,r_{t})))\\ \hphantom{\frac{dx(t)}{dt}=}{}+A_{3}(r_{t})\int_{t-d_{1}(t)}^{t}f(y(s))\,ds,\\ \frac{dy(t)}{dt}=-D(r_{t})y(t)+B_{1}(r_{t})g(x(t))+B_{2}(r_{t})g(x(t-\tau _{2}(t,r_{t})))\\ \hphantom{\frac{dy(t)}{dt}=}{}+B_{3}(r_{t})\int_{t-d_{2}(t)}^{t}g(x(s))\,ds. \end{array}\displaystyle \right . $$
(4)

For convenience, each possible value of \(r_{t}\) is denoted by i, \(i\in S_{1}\), in what follows. Then we have

$$\begin{aligned}& C_{i}=C(r_{t}), \qquad A_{1i}=A_{1}(r_{t}), \qquad A_{2i}=A_{2}(r_{t}),\qquad A_{3i}=A_{3}(r_{t}), \\& D_{i}=D(r_{t}),\qquad B_{1i}=B_{1}(r_{t}), \qquad B_{2i}=B_{2}(r_{t}),\qquad B_{3i}=B_{3}(r_{t}), \\& 0\leq\tau_{1}(t,r_{t})=\tau_{1i}(t)\leq \tau_{1i}\leq\tau_{1},\qquad \dot{\tau}_{1i}(t)\leq \tau_{1u}, \\& 0\leq\tau_{2}(t,r_{t})=\tau_{2i}(t)\leq \tau_{2i}\leq\tau_{2}, \qquad\dot{\tau}_{2i}(t)\leq \tau_{2u}. \end{aligned}$$

The process \(\{r_{t},t\geq0\}\) is described by a Markov chain with finite state space \(S_{1}=\{1,2,\ldots,s\}\), and its transition probability matrix \(\Pi^{(\delta_{t+h})}\triangleq[\pi_{ij}^{(\delta _{t+h})}]_{s\times s}\) is given by

$$ Pr\{r_{t+h}=j\mid r_{t}=i\}= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \pi_{ij}^{(\delta_{t+h})}h+o(h), & j\neq i,\\ 1+\pi_{ij}^{(\delta_{t+h})}h+o(h), & j=i, \end{array}\displaystyle \right . $$
(5)

where \(h>0\) and \(\lim_{h\rightarrow0}o(h)/h=0\); \(\pi_{ij}^{(\delta_{t+h})}>0\), for \(j\neq i\), is the transition rate from mode i at time t to mode j at time \(t+h\), and \(\pi_{ii}^{(\delta_{t+h})}=-\sum_{j=1,j\neq i}^{s}\pi_{ij}^{(\delta_{t+h})}\). In this study, we assume that \(\delta_{t}\) varies in another finite set \(S_{2}=\{1,2,\ldots,l\}\) with transition probability matrix \(\Lambda\triangleq[q_{mn}]_{l\times l}\) given by

$$ Pr\{\delta_{t+h}=n\mid\delta_{t}=m\}= \left \{ \textstyle\begin{array}{@{}l@{\quad}l} q_{mn}h+o(h), & n\neq m,\\ 1+q_{mn}h+o(h), & n=m, \end{array}\displaystyle \right . $$
(6)

where \(h>0\) and \(\lim_{h\rightarrow0}o(h)/h=0\); \(q_{mn}>0\), for \(n\neq m\), is the transition rate from mode m at time t to mode n at time \(t+h\), and \(q_{mm}=-\sum_{n=1,n\neq m}^{l}q_{mn}\).
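To make the two-level switching concrete, the short simulation below (our illustrative sketch, not part of the original analysis) generates a sample path of the pair \((r_{t},\delta_{t})\) by competing exponential clocks, using the transition probability matrices of the numerical example in Section 4:

```python
import numpy as np

# Sample path of the piecewise homogeneous pair (r_t, delta_t): delta_t is a
# homogeneous chain with generator Lam; r_t jumps according to Pi[m], the
# generator selected by the current mode m of delta_t.
rng = np.random.default_rng(1)
Lam = np.array([[-0.7, 0.3, 0.4], [0.4, -1.3, 0.9], [0.7, 0.9, -1.6]])
Pi = [np.array([[-1.5, 1.5], [1.2, -1.2]]),
      np.array([[-1.7, 1.7], [0.5, -0.5]]),
      np.array([[-0.9, 0.9], [1.6, -1.6]])]

def next_jump(gen, state):
    """Exponential holding time in `state`, then a jump drawn from the rates."""
    rates = gen[state].copy()
    rates[state] = 0.0
    total = rates.sum()
    return rng.exponential(1.0 / total), rng.choice(len(rates), p=rates / total)

t, r, m, T = 0.0, 0, 0, 10.0
while t < T:
    h_r, r_new = next_jump(Pi[m], r)   # clock of r_t under the current Pi^(m)
    h_m, m_new = next_jump(Lam, m)     # clock of delta_t
    if h_r < h_m:                      # the earlier clock fires; memorylessness
        t, r = t + h_r, r_new          # lets us redraw both clocks every event
    else:
        t, m = t + h_m, m_new
    print(f"t = {t:6.3f}   r_t = {r + 1}   delta_t = {m + 1}")
```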

Now, we are ready to introduce the notion of homogeneousness.

Definition 2.1

A finite Markov process \(r_{t}\in S_{1}\) is said to be homogeneous (respectively, nonhomogeneous) if for all \(t\geq 0\) the transition probability satisfies \(Pr\{r_{t+h}=j\mid r_{t}=i\}=\pi_{ij}\) (respectively, \(Pr\{r_{t+h}=j\mid r_{t}=i\}=\pi_{ij}(t)\)), where \(\pi_{ij}\) (or \(\pi_{ij}(t)\)) denotes a probability function.

Remark 1

In this paper, according to the definitions of homogeneousness and nonhomogeneousness, the Markov chain \(\delta_{t}\) is homogeneous, while the Markov chain \(r_{t}\) is nonhomogeneous (piecewise homogeneous, since its transition rates are governed by \(\delta_{t}\)).

Next, we will introduce several lemmas which will be essential in proving our conclusion in Section 3.

Lemma 2.1

[26]

For any constant matrix \(M>0\), any scalars a and b with \(a< b\), and a vector function \(x(t):[a,b]\rightarrow R^{n}\) such that the integrals concerned are well defined, the following holds:

$$ \biggl[ \int_{a}^{b}x(s)\,ds \biggr]^{\intercal}M \biggl[ \int_{a}^{b}x(s)\,ds \biggr]\leq(b-a) \int _{a}^{b}x(s)^{\intercal} Mx(s)\,ds. $$
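A numerical sanity check of Lemma 2.1 on a discretized trajectory (our illustration; NumPy assumed):

```python
import numpy as np

# Compare both sides of Jensen's inequality (Lemma 2.1) with Riemann sums.
rng = np.random.default_rng(2)
a, b, N, n = 0.0, 2.0, 4000, 3
W = rng.standard_normal((n, n))
M = W @ W.T + n * np.eye(n)                       # a positive definite M
s = np.linspace(a, b, N)
ds = (b - a) / N
x = np.stack([np.sin(3 * s), np.cos(s), s ** 2], axis=1)   # x : [a,b] -> R^3

v = x.sum(axis=0) * ds                            # approximates int_a^b x(s) ds
lhs = v @ M @ v
rhs = (b - a) * ds * np.einsum('ti,ij,tj->', x, M, x)      # (b-a) int x'Mx ds
assert lhs <= rhs + 1e-9
print(f"lhs = {lhs:.4f} <= rhs = {rhs:.4f}")
```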

Lemma 2.2

(Schur complement [27])

Let there be given constant matrices \(Z_{1}\), \(Z_{2}\), \(Z_{3}\), where \(Z_{1}=Z_{1}^{\intercal}\) and \(Z_{2}=Z_{2}^{\intercal}>0\). Then \(Z_{1}+Z_{3}^{\intercal}Z_{2}^{-1}Z_{3}<0\) if and only if \(\bigl [ {\scriptsize\begin{matrix}{} Z_{1} & Z_{3}^{\intercal} \cr Z_{3} & -Z_{2} \end{matrix}} \bigr ]<0\) or \(\bigl [{\scriptsize\begin{matrix}{} -Z_{2} & Z_{3} \cr Z_{3}^{\intercal} & Z_{1} \end{matrix}} \bigr ]<0\).
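The equivalence in Lemma 2.2 is easy to verify on random data (again our illustration, not part of the original development):

```python
import numpy as np

# Check one instance of the Schur complement equivalence of Lemma 2.2.
rng = np.random.default_rng(3)
n = 4
W = rng.standard_normal((n, n))
Z1 = -(W @ W.T + 5.0 * np.eye(n))                 # Z1 = Z1' (negative definite)
V = rng.standard_normal((n, n))
Z2 = V @ V.T + np.eye(n)                          # Z2 = Z2' > 0
Z3 = 0.1 * rng.standard_normal((n, n))

neg_def = lambda A: np.linalg.eigvalsh(A).max() < 0.0
left = neg_def(Z1 + Z3.T @ np.linalg.solve(Z2, Z3))
block = np.block([[Z1, Z3.T], [Z3, -Z2]])
assert left == neg_def(block)                     # both sides agree
print("Schur complement equivalence holds on this instance:", left)
```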

3 Main results

In this section, a set of conditions are derived to guarantee the global asymptotic stability in the mean square of the BAM neural networks (4).

Theorem 3.1

For any given scalars \(d_{1}\), \(d_{2}\), \(\tau_{1}\), \(\tau_{2}\), \(\tau_{1u}\), and \(\tau_{2u}\), the BAM neural networks in (4) are globally asymptotically stable in the mean square if there exist \(P_{ji,m}>0\), \(Q_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} Q_{ji,m}^{1} & Q_{ji,m}^{2} \cr \ast& Q_{ji,m}^{3}\end{matrix}} \bigr ]>0\), \(R_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} R_{ji,m}^{1} & R_{ji,m}^{2} \cr \ast& R_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(W_{j}=\bigl [ {\scriptsize\begin{matrix}{} W_{j}^{1} & W_{j}^{2} \cr \ast& W_{j}^{3} \end{matrix}} \bigr ]>0\), \(Q_{j}=\bigl [ {\scriptsize\begin{matrix}{} Q_{j}^{1} & Q_{j}^{2} \cr \ast& Q_{j}^{3} \end{matrix}} \bigr ]>0\) (\(j=1,2\)), \(X_{ji,m}>0\), \(Y_{ji,m}>0\), \(E_{ji,m}>0\), \(F_{ji,m}>0\), \(S_{ji,m}\) (\(j=1,2\)), \(X_{i}>0\), \(Y_{i}>0\), \(E_{i}>0\), \(F_{i}>0\) (\(i=3,4\)), and any matrices \(K_{i}\) (\(i=1,2,3,4,5,6\)) with appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \begin{bmatrix} \Xi& \Upsilon_{1} & \Upsilon_{2} \\ \ast& -\Gamma_{1} & 0\\ \ast& \ast& -\Gamma_{2} \end{bmatrix} < 0, \end{aligned}$$
(7)
$$\begin{aligned}& \begin{bmatrix} X_{1i,m} & S_{1i,m} \\ \ast& X_{1i,m} \end{bmatrix}\geq0, \qquad \begin{bmatrix} Y_{1i,m} & S_{2i,m} \\ \ast& Y_{1i,m} \end{bmatrix}\geq0, \end{aligned}$$
(8)
$$\begin{aligned}& \sum_{n=1}^{l}q_{mn}R_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}R_{1j,m} \leq W_{1}, \qquad \sum_{n=1}^{l}q_{mn}R_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}R_{2j,m} \leq W_{2}, \end{aligned}$$
(9)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}X_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}X_{1j,m} \leq X_{2i,m}+X_{4}, \\ &\tau_{2} \Biggl[\sum_{n=1}^{l}q_{mn}X_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}X_{2j,m} \Biggr]\leq X_{3}, \end{aligned} \end{aligned}$$
(10)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}Y_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}Y_{1j,m} \leq Y_{2i,m}+Y_{4}, \\ & \tau_{1} \Biggl[\sum_{n=1}^{l}q_{mn}Y_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}Y_{2j,m} \Biggr]\leq Y_{3}, \end{aligned} \end{aligned}$$
(11)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}E_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}E_{1j,m} \leq E_{2i,m}+E_{4}, \\ & d_{1} \Biggl[\sum_{n=1}^{l}q_{mn}E_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}E_{2j,m} \Biggr]\leq E_{3}, \end{aligned} \end{aligned}$$
(12)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}F_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}F_{1j,m} \leq F_{2i,m}+F_{4}, \\ & d_{2} \Biggl[\sum_{n=1}^{l}q_{mn}F_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}F_{2j,m} \Biggr]\leq F_{3}, \end{aligned} \end{aligned}$$
(13)
$$\begin{aligned}& \pi_{ii}^{(m)}Q_{1i,m}+\sum _{n=1}^{l}q_{mn}Q_{1i,n}\leq0, \qquad \pi_{ii}^{(m)}Q_{2i,m}+\sum _{n=1}^{l}q_{mn}Q_{2i,n} \leq0, \end{aligned}$$
(14)
$$\begin{aligned}& \sum_{j=1,j\neq i}^{s}\pi_{ij}^{(m)}Q_{1j,m}+Q_{1} \leq0,\qquad \sum_{j=1,j\neq i}^{s} \pi_{ij}^{(m)}Q_{2j,m}+Q_{2} \leq0, \end{aligned}$$
(15)

where

$$\begin{aligned}& \begin{aligned}[b] \Xi_{1,1}={}&{-}P_{1i,m}C_{i}-C_{i}P_{1i,m}+R_{1i,m}^{1}-X_{1i,m}+ \tau_{2}W_{1}^{1}+\tau_{2}Q_{1}^{1}+Q_{1i,m}^{1} \\ &{}+\sum_{n=1}^{l}q_{mn}P_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}P_{1j,m}, \end{aligned} \\& \Xi_{1,2}=X_{1i,m}-S_{1i,m}, \qquad \Xi_{1,3}=S_{1i,m}, \\& \Xi_{1,4}=R_{1i,m}^{2}+LK_{1}+Q_{1i,m}^{2}+ \tau_{2}W_{1}^{2}+\tau_{2}Q_{1}^{2}, \qquad \Xi_{1,11}=P_{1i,m}A_{1i}, \\& \Xi_{1,12}=P_{1i,m}A_{2i},\qquad \Xi_{1,14}=P_{1i,m}A_{3i}, \qquad \Xi_{2,2}=-2X_{1i,m}+S_{1i,m}+S_{1i,m}^{\intercal}-(1- \tau _{2u})Q_{1i,m}^{1}, \\& \Xi_{2,3}=-S_{1i,m}+X_{1i,m},\qquad \Xi_{2,5}=-(1-\tau_{2u})Q_{1i,m}^{2}+LK_{2}, \\& \Xi_{3,3}=-X_{1i,m}-R_{1i,m}^{1},\qquad \Xi_{3,6}=LK_{3}-R_{1i,m}^{2}, \\& \Xi_{4,4}=R_{1i,m}^{3}+\tau_{2}W_{1}^{3}+Q_{1i,m}^{3}-2K_{1}+ \tau_{2}Q_{1}^{3}+d_{2}^{2}F_{1i,m}+ \frac{d_{2}^{3}}{2}F_{2i,m}+\frac {d_{2}^{3}}{2}F_{4} + \frac{d_{2}^{3}}{3!}F_{3}, \\& \Xi_{4,8}=B_{1i}^{\intercal}P_{2i,m}, \qquad \Xi_{5,5}=-(1-\tau_{2u})Q_{1i,m}^{3}-2K_{2}, \qquad\Xi_{5,8}=B_{2i}^{\intercal }P_{2i,m}, \\& \Xi_{6,6}=-2K_{3}-R_{1i,m}^{3},\qquad \Xi_{7,7}=-F_{1i,m},\qquad \Xi_{7,8}=B_{3i}^{\intercal}P_{2i,m}, \\& \begin{aligned}[b] \Xi_{8,8}={}&{-}P_{2i,m}D_{i}-D_{i}P_{2i,m}+R_{2i,m}^{1}-Y_{1i,m} +\tau_{1}W_{2}^{1}+\tau_{1}Q_{2}^{1}+Q_{2i,m}^{1} \\ &{}+ \sum_{n=1}^{l}q_{mn}P_{2i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}P_{2j,m}, \end{aligned} \\& \Xi_{8,9}=Y_{1i,m}-S_{2i,m},\qquad \Xi_{8,10}=S_{2i,m},\qquad \Xi_{8,11}=R_{2i,m}^{2}+MK_{4}+Q_{2i,m}^{2}+ \tau_{1}W_{2}^{2}+\tau _{1}Q_{2}^{2}, \\& \Xi_{9,9}=-2Y_{1i,m}+S_{2i,m}+S_{2i,m}^{\intercal}-(1- \tau_{1u})Q_{2i,m}^{1},\qquad \Xi_{9,10}=-S_{2i,m}+Y_{1i,m}, \\& \Xi_{9,12}=-(1-\tau_{1u})Q_{2i,m}^{2}+MK_{5}, \qquad \Xi_{10,10}=-Y_{1i,m}-R_{2i,m}^{1}, \qquad \Xi_{10,13}=MK_{6}-R_{2i,m}^{2}, \\& \Xi_{11,11}=R_{2i,m}^{3}+\tau_{1}W_{2}^{3}+Q_{2i,m}^{3}-2K_{4}+ \tau _{1}Q_{2}^{3} +d_{1}^{2}E_{1i,m}+ \frac{d_{1}^{3}}{2}E_{2i,m}+\frac{d_{1}^{3}}{2}E_{4} + \frac{d_{1}^{3}}{3!}E_{3}, \\& \Xi_{12,12}=-(1-\tau_{1u})Q_{2i,m}^{3}-2K_{5}, \qquad \Xi_{13,13}=-2K_{6}-R_{2i,m}^{3}, \qquad \Xi_{14,14}=-E_{1i,m}, \\& \Gamma_{1}=\tau_{2}^{2}X_{1i,m}+ \frac{\tau_{2}^{3}}{2}X_{2i,m}+\frac {\tau_{2}^{3}}{3!}X_{3} + \frac{\tau_{2}^{3}}{2}X_{4},\qquad \Gamma_{2}= \tau_{1}^{2}Y_{1i,m}+\frac{\tau_{1}^{3}}{2}Y_{2i,m}+ \frac {\tau_{1}^{3}}{3!}Y_{3} +\frac{\tau_{1}^{3}}{2}Y_{4}, \\& \Upsilon_{1}= \bigl[-C_{i}\quad 0\quad 0\quad 0\quad 0 \quad 0\quad 0\quad 0\quad 0\quad 0\quad A_{1i}^{\intercal }\quad A_{2i}^{\intercal}\quad 0\quad A_{3i}^{\intercal} \bigr]^{\intercal}, \\& \Upsilon _{2}= \bigl[0\quad 0\quad 0\quad B_{1i}^{\intercal} \quad B_{2i}^{\intercal}\quad 0\quad B_{3i}^{\intercal } \quad -D_{i}\quad 0\quad 0\quad 0\quad 0\quad 0\quad 0 \bigr]^{\intercal}. \end{aligned}$$

Proof

Consider the following Lyapunov-Krasovskii functional:

$$ V(t,x_{t},y_{t},r_{t}, \delta_{t})=\sum_{i=1}^{5}V_{i}(t,x_{t},y_{t},r_{t}, \delta_{t}), $$
(16)

where

$$\begin{aligned} V_{1}(t,x_{t},y_{t},r_{t}, \delta_{t})={}&x^{\intercal}(t)P_{1r_{t},\delta _{t}}x(t)+y^{\intercal}(t)P_{2r_{t},\delta_{t}}y(t), \\ V_{2}(t,x_{t},y_{t},r_{t}, \delta_{t})={}& \int_{t-\tau_{2}}^{t}\eta _{1}^{\intercal}(s)R_{1r_{t},\delta_{t}} \eta_{1}(s)\,ds+ \int_{t-\tau_{1}}^{t}\eta _{2}^{\intercal}(s)R_{2r_{t},\delta_{t}} \eta_{2}(s)\,ds \\ &{}+ \int_{-\tau_{2}}^{0} \int_{t+\beta}^{t}\eta_{1}^{\intercal}(s)W_{1} \eta _{1}(s)\,ds\,d\beta+ \int_{-\tau_{1}}^{0} \int_{t+\beta}^{t}\eta_{2}^{\intercal}(s)W_{2} \eta _{2}(s)\,ds\,d\beta, \\ V_{3}(t,x_{t},y_{t},r_{t}, \delta_{t})={}& \int_{t-\tau _{2}(t,r_{t})}^{t}\eta_{1}^{\intercal}(s)Q_{1r_{t},\delta_{t}} \eta_{1}(s)\,ds+ \int _{t-\tau_{1}(t,r_{t})}^{t}\eta_{2}^{\intercal}(s)Q_{2r_{t},\delta_{t}} \eta _{2}(s)\,ds \\ &{}+ \int_{-\tau_{2}}^{0} \int_{t+\theta}^{t}\eta_{1}^{\intercal }(s)Q_{1} \eta_{1}(s)\,ds\,d\theta+ \int_{-\tau_{1}}^{0} \int_{t+\theta }^{t}\eta_{2}^{\intercal}(s)Q_{2} \eta_{2}(s)\,ds\,d\theta, \\ V_{4}(t,x_{t},y_{t},r_{t}, \delta_{t})={}&\tau_{2} \int_{-\tau_{2}}^{0} \int _{t+\theta}^{t}\dot{x}^{\intercal}(s)X_{1r_{t},\delta_{t}} \dot {x}(s)\,ds\,d\theta \\ &{}+\tau_{2} \int_{-\tau_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {x}^{\intercal}(s)X_{2r_{t},\delta_{t}} \dot{x}(s)\,ds\,d\beta\,d\theta \\ &{}+ \int_{-\tau_{2}}^{0} \int_{\delta}^{0} \int_{\theta}^{0} \int_{t+\beta }^{t}\dot{x}^{\intercal}(s)X_{3} \dot{x}(s)\,ds\,d\beta\,d\theta\, d\delta \\ &{}+\tau _{2} \int_{-\tau_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {x}^{\intercal}(s)X_{4} \dot{x}(s)\,ds\,d\beta\,d\theta \\ &{}+\tau_{1} \int_{-\tau_{1}}^{0} \int_{t+\theta}^{t}\dot{y}^{\intercal }(s)Y_{1r_{t},\delta_{t}} \dot{y}(s)\,ds\,d\theta \\ &{}+\tau_{1} \int_{-\tau_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {y}^{\intercal}(s)Y_{2r_{t},\delta_{t}} \dot{y}(s)\,ds\,d\beta\,d\theta \\ &{}+ \int_{-\tau_{1}}^{0} \int_{\delta}^{0} \int_{\theta}^{0} \int_{t+\beta }^{t}\dot{y}^{\intercal}(s)Y_{3} \dot{y}(s)\,ds\,d\beta\,d\theta\,d\delta \\ &{}+\tau _{1} \int_{-\tau_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {y}^{\intercal}(s)Y_{4} \dot{y}(s)\,ds\,d\beta\,d\theta, \\ V_{5}(t,x_{t},y_{t},r_{t}, \delta_{t})={}&d_{2} \int_{-d_{2}}^{0} \int _{t+\theta}^{t}g^{\intercal} \bigl(x(s) \bigr)F_{1r_{t},\delta_{t}}g \bigl(x(s) \bigr)\,ds\,d\theta \\ &{}+d_{2} \int_{-d_{2}}^{0} \int_{\theta}^{0} \int _{t+\beta}^{t}g^{\intercal} \bigl(x(s) \bigr)F_{2r_{t},\delta_{t}}g \bigl(x(s) \bigr)\,ds\,d\beta\,d\theta \\ &{}+ \int_{-d_{2}}^{0} \int_{\delta}^{0} \int_{\theta}^{0} \int_{t+\beta }^{t}g^{\intercal} \bigl(x(s) \bigr)F_{3}g \bigl(x(s) \bigr)\,ds\,d\beta\,d\theta\, d\delta \\ &{}+d_{2} \int_{-d_{2}}^{0} \int_{\theta}^{0} \int _{t+\beta}^{t}g^{\intercal} \bigl(x(s) \bigr)F_{4}g \bigl(x(s) \bigr)\,ds\,d\beta\,d\theta \\ &{}+d_{1} \int_{-d_{1}}^{0} \int_{t+\theta}^{t}f^{\intercal } \bigl(y(s) \bigr)E_{1r_{t},\delta_{t}}f \bigl(y(s) \bigr)\,ds\,d\theta \\ &{}+d_{1} \int_{-d_{1}}^{0} \int_{\theta}^{0} \int _{t+\beta}^{t}f^{\intercal} \bigl(y(s) \bigr)E_{2r_{t},\delta_{t}}f \bigl(y(s) \bigr)\,ds\,d\beta\,d\theta \\ &{}+ \int_{-d_{1}}^{0} \int_{\delta}^{0} \int _{\theta}^{0} \int_{t+\beta}^{t}f^{\intercal} \bigl(y(s) \bigr)E_{3}f \bigl(y(s) \bigr)\,ds\,d\beta\,d\theta\,d\delta \\ &{}+d_{1} \int_{-d_{1}}^{0} \int_{\theta}^{0} \int _{t+\beta}^{t}f^{\intercal} \bigl(y(s) \bigr)E_{4}f \bigl(y(s) \bigr)\,ds\,d\beta\,d\theta. \end{aligned}$$

Denote \(\eta_{1}(t)=[x^{\intercal}(t),g^{\intercal}(x(t))]^{\intercal}\) and \(\eta_{2}(t)=[y^{\intercal}(t),f^{\intercal}(y(t))]^{\intercal}\).

Define the infinitesimal generator \(\mathcal{L}\) of the Markov process acting on \(V(t,x_{t},y_{t},r_{t},\delta_{t})\) (with \(r_{t}=i\), \(\delta_{t}=m\)) as follows:

$$\begin{aligned} \mathcal{L}V(x_{t},y_{t},i,m)={}&\lim _{h\rightarrow0^{+}}\frac{1}{h} \bigl\{ \mathbb{E} \bigl\{ V(t+h,x_{t+h},y_{t+h},r_{t+h},\delta_{t+h}) \mid x_{t},y_{t},r_{t}=i,\delta_{t}=m \bigr\} \\ &{}-V(t,x_{t},y_{t},r_{t}=i,\delta _{t}=m) \bigr\} . \end{aligned}$$
(17)

Then, for each \(i\in S_{1}\), \(m\in S_{2}\), the stochastic differential of V along the trajectory of system (4) is given by

$$\begin{aligned} &\mathcal{L}V_{1}(x_{t},y_{t},i,m) \\ &\quad=2x^{\intercal}(t)P_{1i,m}\dot{x}(t)+ 2y^{\intercal}(t)P_{2i,m} \dot{y}(t)+x^{\intercal}(t) \Biggl[\sum_{n=1}^{l}q_{mn}P_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}P_{1j,m} \Biggr]x(t) \\ &\qquad{}+y^{\intercal}(t) \Biggl[\sum_{n=1}^{l}q_{mn}P_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}P_{2j,m} \Biggr]y(t), \end{aligned}$$
(18)
$$\begin{aligned} &\mathcal{L}V_{2}(x_{t},y_{t},i,m) \\ &\quad=\eta_{1}^{\intercal}(t)R_{1i,m}\eta _{1}(t)-\eta_{1}^{\intercal}(t-\tau_{2})R_{1i,m} \eta_{1}(t-\tau_{2}) + \int_{t-\tau_{2}}^{t}\eta_{1}^{\intercal}(s) \Biggl[\sum_{n=1} ^{l}q_{mn}R_{1i,n} \\ &\qquad{}+\sum_{j=1}^{s} \pi_{ij}^{(m)}R_{1j,m} \Biggr]\eta_{1}(s)\,ds + \tau_{2}\eta_{1}^{\intercal}(t)W_{1} \eta_{1}(t)- \int_{t-\tau _{2}}^{t}\eta_{1}^{\intercal}(s)W_{1} \eta_{1}(s)\,ds \\ &\qquad{}+\eta_{2}^{\intercal}(t)R_{2i,m} \eta_{2}(t)-\eta_{2}^{\intercal}(t-\tau _{1})R_{2i,m} \eta_{2}(t-\tau_{1}) + \int_{t-\tau_{1}}^{t}\eta_{2}^{\intercal}(s) \Biggl[\sum_{n=1}^{l} q_{mn}R_{2i,n} \\ &\qquad{}+\sum_{j=1}^{s} \pi_{ij}^{(m)}R_{2j,m} \Biggr]\eta_{2}(s)\,ds + \tau_{1}\eta_{2}^{\intercal}(t)W_{2} \eta_{2}(t)- \int_{t-\tau _{1}}^{t}\eta_{2}^{\intercal}(s)W_{2} \eta_{2}(s)\,ds, \end{aligned}$$
(19)
$$\begin{aligned} &\mathcal{L}V_{3}(x_{t},y_{t},i,m) \\ &\quad=\lim_{h\rightarrow0^{+}}\frac {1}{h}\mathbb{E} \biggl[ \int_{t+h-\tau_{2}(t+h,r_{t+h})}^{t+h}\eta _{1}^{\intercal}(s)Q_{1r_{t+h},\delta_{t+h}} \eta_{1}(s)\,ds - \int_{t-\tau_{2i}(t)}^{t}\eta_{1}^{\intercal}(s)Q_{1i,m} \eta _{1}(s)\,ds \biggr] \\ &\qquad{}+\lim_{h\rightarrow0^{+}}\frac{1}{h}\mathbb{E} \biggl[ \int_{t+h-\tau _{1}(t+h,r_{t+h})}^{t+h}\eta_{2}^{\intercal}(s)Q_{2r_{t+h},\delta _{t+h}} \eta_{2}(s)\,ds - \int_{t-\tau_{1i}(t)}^{t}\eta_{2}^{\intercal}(s)Q_{2i,m} \eta _{2}(s)\,ds \biggr] \\ &\qquad{}+\tau_{2}\eta_{1}^{\intercal}(t)Q_{1} \eta_{1}(t)- \int_{t-\tau _{2}}^{t}\eta_{1}^{\intercal}(s)Q_{1} \eta_{1}(s)\,ds+ \tau_{1}\eta_{2}^{\intercal}(t)Q_{2} \eta_{2}(t)- \int_{t-\tau _{1}}^{t}\eta_{2}^{\intercal}(s)Q_{2} \eta_{2}(s)\,ds \\ &\quad=\lim_{h\rightarrow0^{+}}\frac{1}{h}\mathbb {E} \Biggl[ \int_{t+h-\tau_{2i}(t+h)}^{t+h}\eta_{1}^{\intercal}(s)Q_{1i,m} \eta_{1}(s)\,ds - \int_{t-\tau_{2i}(t)}^{t}\eta_{1}^{\intercal}(s)Q_{1i,m} \eta_{1}(s)\,ds \\ &\qquad{}+\sum_{j=1}^{s} \bigl( \pi_{ij}^{(m)}h+o(h) \bigr) \int_{t+h-\tau _{2j}(t+h)}^{t+h}\eta_{1}^{\intercal}(s)Q_{1j,m} \eta_{1}(s)\,ds \\ &\qquad{}+\sum_{n=1}^{l} \bigl(q_{mn}h+o(h) \bigr) \int_{t+h-\tau_{2i}(t+h)}^{t+h}\eta _{1}^{\intercal}(s)Q_{1i,n} \eta_{1}(s)\,ds \Biggr] \\ &\qquad{}+\lim_{h\rightarrow0^{+}}\frac{1}{h}\mathbb{E} \Biggl[ \int_{t+h-\tau _{1i}(t+h)}^{t+h}\eta_{2}^{\intercal}(s)Q_{2i,m} \eta_{2}(s)\,ds - \int_{t-\tau_{1i}(t)}^{t}\eta_{2}^{\intercal}(s)Q_{2i,m} \eta_{2}(s)\,ds \\ &\qquad{}+\sum_{j=1}^{s} \bigl( \pi_{ij}^{(m)}h+o(h) \bigr) \int_{t+h-\tau _{1j}(t+h)}^{t+h}\eta_{2}^{\intercal}(s)Q_{2j,m} \eta_{2}(s)\,ds \\ &\qquad{}+\sum_{n=1}^{l} \bigl(q_{mn}h+o(h) \bigr) \int_{t+h-\tau_{1i}(t+h)}^{t+h}\eta _{2}^{\intercal}(s)Q_{2i,n} \eta_{2}(s)\,ds \Biggr] \\ &\qquad{}+\tau_{2}\eta_{1}^{\intercal}(t)Q_{1} \eta_{1}(t)- \int_{t-\tau _{2}}^{t}\eta_{1}^{\intercal}(s)Q_{1} \eta_{1}(s)\,ds+ \tau_{1}\eta_{2}^{\intercal}(t)Q_{2} \eta_{2}(t)- \int_{t-\tau _{1}}^{t}\eta_{2}^{\intercal}(s)Q_{2} \eta_{2}(s)\,ds \\ &\quad\leq\eta_{1}^{\intercal}(t)Q_{1i,m}\eta _{1}(t)-(1-\tau_{2u})\eta_{1}^{\intercal} \bigl(t-\tau_{2i}(t) \bigr)Q_{1i,m}\eta _{1} \bigl(t- \tau_{2i}(t) \bigr) \\ &\qquad{}+\pi_{ii}^{(m)} \int_{t-\tau_{2i}(t)}^{t}\eta_{1}^{\intercal }(s)Q_{1i,m} \eta_{1}(s)\,ds \\ &\qquad{}+\sum_{j=1,j\neq i}^{s} \pi_{ij}^{(m)} \int_{t-\tau_{2}}^{t}\eta_{1}^{\intercal }(s)Q_{1j,m} \eta_{1}(s)\,ds +\sum_{n=1}^{l}q_{mn} \int_{t-\tau_{2i}(t)}^{t}\eta_{1}^{\intercal }(s)Q_{1i,n} \eta_{1}(s)\,ds \\ &\qquad{}+\eta_{2}^{\intercal}(t)Q_{2i,m} \eta_{2}(t)-(1-\tau_{1u})\eta _{2}^{\intercal} \bigl(t-\tau_{1i}(t) \bigr)Q_{2i,m}\eta_{2} \bigl(t- \tau_{1i}(t) \bigr) \\ &\qquad{}+\pi_{ii}^{(m)} \int_{t-\tau_{1i}(t)}^{t}\eta_{2}^{\intercal }(s)Q_{2i,m} \eta_{2}(s)\,ds +\sum_{j=1,j\neq i}^{s} \pi_{ij}^{(m)} \int_{t-\tau_{1}}^{t}\eta_{2}^{\intercal }(s)Q_{2j,m} \eta_{2}(s)\,ds \\ &\qquad{}+\sum_{n=1}^{l}q_{mn} \int_{t-\tau_{1i}(t)}^{t}\eta_{2}^{\intercal }(s)Q_{2i,n} \eta_{2}(s)\,ds +\tau_{2}\eta_{1}^{\intercal}(t)Q_{1} \eta_{1}(t)- \int_{t-\tau _{2}}^{t}\eta_{1}^{\intercal}(s)Q_{1} \eta_{1}(s)\,ds \\ &\qquad{}+\tau_{1}\eta_{2}^{\intercal}(t)Q_{2} \eta_{2}(t)- \int_{t-\tau _{1}}^{t}\eta_{2}^{\intercal}(s)Q_{2} \eta_{2}(s)\,ds, \end{aligned}$$
(20)
$$\begin{aligned} &\mathcal{L}V_{4}(x_{t},y_{t},i,m) \\ &\quad=\dot{x}^{\intercal}(t) \biggl(\tau _{2}^{2}X_{1i,m}+ \frac{\tau_{2}^{3}}{2}X_{2i,m} +\frac{\tau_{2}^{3}}{3!}X_{3} + \frac{\tau_{2}^{3}}{2}X_{4} \biggr)\dot{x}(t)-\tau_{2} \int_{t-\tau _{2}}^{t}\dot{x}^{\intercal}(s)X_{1i,m} \dot{x}(s)\,ds \\ &\qquad{}-\tau_{2} \int_{-\tau_{2}}^{0} \int_{t+\theta }^{t}\dot{x}^{\intercal}(s)X_{2i,m} \dot{x}(s)\,ds\,d\theta -\tau_{2} \int_{-\tau_{2}}^{0} \int_{t+\theta}^{t}\dot{x}^{\intercal }(s)X_{4} \dot{x}(s)\,ds\,d\theta \\ &\qquad{}- \int_{-\tau_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {x}^{\intercal}(s)X_{3} \dot{x}(s)\,ds\,d\beta\,d\theta \\ &\qquad{}+\tau_{2} \int_{-\tau_{2}}^{0} \int_{t+\theta }^{t}\dot{x}^{\intercal}(s) \Biggl[\sum _{n=1}^{l}q_{mn}X_{1i,n} + \sum_{j=1}^{s}\pi_{ij}^{(m)}X_{1j,m} \Biggr]\dot{x}(s)\,ds\,d\theta \\ &\qquad{}+\tau_{2} \int_{-\tau_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {x}^{\intercal}(s) \Biggl[\sum _{n=1}^{l}q_{mn}X_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}X_{2j,m} \Biggr]\dot{x}(s)\,ds\,d\beta\,d\theta \\ &\qquad{}+\dot{y}^{\intercal}(t) \biggl(\tau_{1}^{2}Y_{1i,m}+ \frac{\tau _{1}^{3}}{2}Y_{2i,m}+\frac{\tau_{1}^{3}}{3!}Y_{3} + \frac{\tau_{1}^{3}}{2}Y_{4} \biggr)\dot{y}(t)-\tau_{1} \int_{t-\tau _{1}}^{t}\dot{y}^{\intercal}(s)Y_{1i,m} \dot{y}(s)\,ds \\ &\qquad{}-\tau_{1} \int_{-\tau_{1}}^{0} \int_{t+\theta }^{t}\dot{y}^{\intercal}(s)Y_{2i,m} \dot{y}(s)\,ds\,d\theta -\tau_{1} \int_{-\tau_{1}}^{0} \int_{t+\theta}^{t}\dot{y}^{\intercal }(s)Y_{4} \dot{y}(s)\,ds\,d\theta \\ &\qquad{}- \int_{-\tau_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {y}^{\intercal}(s)Y_{3} \dot{y}(s)\,ds\,d\beta\,d\theta \\ &\qquad{}+\tau_{1} \int_{-\tau_{1}}^{0} \int_{t+\theta }^{t}\dot{y}^{\intercal}(s) \Biggl[\sum _{n=1}^{l}q_{mn}Y_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}Y_{1j,m} \Biggr]\dot{y}(s)\,ds\,d\theta \\ &\qquad{}+\tau_{1} \int_{-\tau_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}\dot {y}^{\intercal}(s) \Biggl[\sum _{n=1}^{l}q_{mn}Y_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}Y_{2j,m} \Biggr]\dot{y}(s)\,ds\,d\beta\,d\theta, \end{aligned}$$
(21)
$$\begin{aligned} &\mathcal{L}V_{5}(x_{t},y_{t},i,m) \\ &\quad=f^{\intercal } \bigl(y(t) \bigr) \biggl(d_{1}^{2}E_{1i,m}+ \frac{d_{1}^{3}}{2}E_{2i,m}+\frac{d_{1}^{3}}{2}E_{4} + \frac{d_{1}^{3}}{3!}E_{3} \biggr)f \bigl(y(t) \bigr) \\ &\qquad{}-d_{1} \int_{t-d_{1}}^{t}f^{\intercal } \bigl(y(s) \bigr)E_{1i,m}f \bigl(y(s) \bigr)\,ds \\ &\qquad{}-d_{1} \int_{-d_{1}}^{0} \int_{t+\theta}^{t}f^{\intercal } \bigl(y(s) \bigr)E_{2i,m}f \bigl(y(s) \bigr)\,ds\,d\theta \\ &\qquad{}-d_{1} \int_{-d_{1}}^{0} \int_{t+\theta}^{t}f^{\intercal } \bigl(y(s) \bigr)E_{4}f \bigl(y(s) \bigr)\,ds\,d\theta - \int_{-d_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}f^{\intercal } \bigl(y(s) \bigr)E_{3}f \bigl(y(s) \bigr)\,ds\,d\beta\,d\theta \\ &\qquad{}+d_{1} \int_{-d_{1}}^{0} \int_{t+\theta }^{t}f^{\intercal} \bigl(y(s) \bigr) \Biggl[ \sum_{n=1}^{l}q_{mn}E_{1i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}E_{1j,m} \Biggr]f \bigl(y(s) \bigr)\,ds\,d\theta \\ &\qquad{}+d_{1} \int_{-d_{1}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}f^{\intercal } \bigl(y(s) \bigr) \Biggl[ \sum_{n=1}^{l}q_{mn}E_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}E_{2j,m} \Biggr]f \bigl(y(s) \bigr)\,ds\,d\beta\,d\theta \\ &\qquad{}+g^{\intercal} \bigl(x(t) \bigr) \biggl(d_{2}^{2}F_{1i,m}+ \frac {d_{2}^{3}}{2}F_{2i,m}+\frac{d_{2}^{3}}{2}F_{4} + \frac{d_{2}^{3}}{3!}F_{3} \biggr)g \bigl(x(t) \bigr) \\ &\qquad{}-d_{2} \int_{t-d_{2}}^{t}g^{\intercal } \bigl(x(s) \bigr)F_{1i,m}g \bigl(x(s) \bigr)\,ds -d_{2} \int_{-d_{2}}^{0} \int_{t+\theta}^{t}g^{\intercal } \bigl(x(s) \bigr)F_{2i,m}g \bigl(x(s) \bigr)\,ds\,d\theta \\ &\qquad{}-d_{2} \int_{-d_{2}}^{0} \int_{t+\theta}^{t}g^{\intercal } \bigl(x(s) \bigr)F_{4}g \bigl(x(s) \bigr)\,ds\,d\theta - \int_{-d_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}g^{\intercal } \bigl(x(s) \bigr)F_{3}g \bigl(x(s) \bigr)\,ds\,d\beta\,d\theta \\ &\qquad{}+d_{2} \int_{-d_{2}}^{0} \int_{t+\theta}^{t}g^{\intercal} \bigl(x(s) \bigr) \Biggl[ \sum_{n=1}^{l}q_{mn}F_{1i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}F_{1j,m} \Biggr]g \bigl(x(s) \bigr)\,ds\,d\theta \\ &\qquad{}+d_{2} \int_{-d_{2}}^{0} \int_{\theta}^{0} \int_{t+\beta}^{t}g^{\intercal } \bigl(x(s) \bigr) \Biggl[ \sum_{n=1}^{l}q_{mn}F_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}F_{2j,m} \Biggr]g \bigl(x(s) \bigr)\,ds\,d\beta\,d\theta. \end{aligned}$$
(22)

Denote

$$ \begin{aligned} &\sigma_{1}(t)= \int_{t-\tau_{2i}(t)}^{t}\dot{x}(s)\,ds,\qquad\sigma_{2}(t)= \int _{t-\tau_{2}}^{t-\tau_{2i}(t)}\dot{x}(s)\,ds, \\ &\sigma_{3}(t)= \int_{t-\tau_{1i}(t)}^{t}\dot{y}(s)\,ds,\qquad\sigma_{4}(t)= \int _{t-\tau_{1}}^{t-\tau_{1i}(t)}\dot{y}(s)\,ds. \end{aligned} $$
(23)

Next, applying a method similar to that of [19] to the integral terms in (21), when \(0<\tau_{1i}(t)<\tau_{1}\) and \(0<\tau_{2i}(t)<\tau_{2}\), Jensen's inequality yields

$$\begin{aligned} \tau_{2} \int_{t-\tau_{2}}^{t}\dot{x}^{\intercal}(s)X_{1i,m} \dot {x}(s)\,ds={}&\tau_{2} \int_{t-\tau_{2i}(t)}^{t}\dot{x}^{\intercal }(s)X_{1i,m} \dot{x}(s)\,ds +\tau_{2} \int_{t-\tau_{2}}^{t-\tau_{2i}(t)}\dot{x}^{\intercal }(s)X_{1i,m} \dot{x}(s)\,ds \\ \geq{}&\frac{\tau_{2}}{\tau_{2i}(t)}\sigma_{1}^{\intercal }(t)X_{1i,m} \sigma_{1}(t)+\frac{\tau_{2}}{\tau_{2}-\tau_{2i}(t)}\sigma _{2}^{\intercal}(t)X_{1i,m} \sigma_{2}(t) \\ ={}&\sigma_{1}^{\intercal}(t)X_{1i,m}\sigma_{1}(t)+ \frac{\tau_{2}-\tau _{2i}(t)}{\tau_{2i}(t)}\sigma_{1}^{\intercal}(t)X_{1i,m} \sigma_{1}(t) \\ &{}+\sigma_{2}^{\intercal}(t)X_{1i,m}\sigma_{2}(t)+ \frac{\tau_{2i}(t)}{\tau_{2}-\tau_{2i}(t)}\sigma_{2}^{\intercal }(t)X_{1i,m} \sigma_{2}(t). \end{aligned}$$
(24)

By the reciprocally convex approach [21], if inequality (8) holds, then the following inequality holds:

$$ \left .\begin{bmatrix} \sqrt{\frac{\tau_{2}-\tau_{2i}(t)}{\tau_{2i}(t)}}\sigma_{1}(t) \\ -\sqrt{\frac{\tau_{2i}(t)}{\tau_{2}-\tau_{2i}(t)}} \sigma_{2}(t) \end{bmatrix} \right .^{\intercal} \begin{bmatrix} X_{1i,m} & S_{1i,m} \\ \ast& X_{1i,m} \end{bmatrix} \begin{bmatrix} \sqrt{\frac{\tau_{2}-\tau_{2i}(t)}{\tau_{2i}(t)}}\sigma_{1}(t) \\ -\sqrt{\frac{\tau_{2i}(t)}{\tau_{2}-\tau_{2i}(t)}} \sigma_{2}(t) \end{bmatrix} \geq0, $$
(25)

which implies

$$\begin{aligned} &\frac{\tau_{2}-\tau_{2i}(t)}{\tau_{2i}(t)}\sigma_{1}^{\intercal }(t)X_{1i,m} \sigma_{1}(t) +\frac{\tau_{2i}(t)}{\tau_{2}-\tau_{2i}(t)}\sigma_{2}^{\intercal }(t)X_{1i,m} \sigma_{2}(t) \\ &\quad\geq\sigma_{1}^{\intercal}(t)S_{1i,m} \sigma_{2}(t)+\sigma_{2}^{\intercal }(t)S_{1i,m}^{\intercal} \sigma_{1}(t). \end{aligned}$$
(26)

Then, combining (24) and (26), we get

$$\begin{aligned} &\tau_{2} \int_{t-\tau_{2}}^{t}\dot{x}^{\intercal}(s)X_{1i,m} \dot {x}(s)\,ds \\ &\quad\geq \sigma_{1}^{\intercal}(t)X_{1i,m} \sigma_{1}(t)+\sigma_{2}^{\intercal }(t)X_{1i,m} \sigma_{2}(t) +\sigma_{1}^{\intercal}(t)S_{1i,m} \sigma_{2}(t)+\sigma_{2}^{\intercal }(t)S_{1i,m}^{\intercal} \sigma_{1}(t) \\ &\quad= \left .\begin{bmatrix} \sigma_{1}(t) \\ \sigma_{2}(t) \end{bmatrix} \right .^{\intercal} \begin{bmatrix} X_{1i,m} & S_{1i,m} \\ \ast& X_{1i,m} \end{bmatrix} \begin{bmatrix} \sigma_{1}(t) \\ \sigma_{2}(t) \end{bmatrix}. \end{aligned}$$
(27)

It should be noted that when \(\tau_{2i}(t)=0\) or \(\tau_{2i}(t)=\tau_{2}\), we have \(\sigma_{1}(t)=0\) or \(\sigma_{2}(t)=0\), respectively, so inequality (27) still holds. Clearly, (27) implies

$$ -\tau_{2} \int_{t-\tau_{2}}^{t}\dot{x}^{\intercal}(s)X_{1i,m} \dot {x}(s)\,ds\leq\mathcal{X}^{\intercal}(t)\Pi_{1}\mathcal{X}(t), $$
(28)

where \(\mathcal{X}(t)=[ x^{\intercal}(t) \ x^{\intercal}(t-\tau_{2i}(t)) \ x^{\intercal}(t-\tau _{2}) ]^{\intercal}\),

$$\Pi_{1}= \begin{bmatrix} -X_{1i,m} & X_{1i,m}-S_{1i,m} & S_{1i,m} \\ \ast& -2X_{1i,m}+S_{1i,m}+S_{1i,m}^{\intercal} & -S_{1i,m}+X_{1i,m} \\ \ast& \ast& -X_{1i,m} \end{bmatrix}. $$

Similarly, we also have

$$ -\tau_{1} \int_{t-\tau_{1}}^{t}\dot{y}^{\intercal}(s)Y_{1i,m} \dot {y}(s)\,ds\leq\mathcal{Y}^{\intercal}(t)\Pi_{2}\mathcal{Y}(t), $$
(29)

where \(\mathcal{Y}(t)=[ y^{\intercal}(t) \ y^{\intercal}(t-\tau_{1i}(t)) \ y^{\intercal}(t-\tau _{1}) ]^{\intercal}\),

$$\Pi_{2}= \begin{bmatrix} -Y_{1i,m} & Y_{1i,m}-S_{2i,m} & S_{2i,m} \\ \ast& -2Y_{1i,m}+S_{2i,m}+S_{2i,m}^{\intercal} & -S_{2i,m}+Y_{1i,m} \\ \ast& \ast& -Y_{1i,m} \end{bmatrix}. $$
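The bound (26) behind \(\Pi_{1}\) and \(\Pi_{2}\) can likewise be sanity-checked numerically; the snippet below (our illustration, NumPy assumed) samples random vectors and weights \(\alpha\) playing the role of \(\tau_{2i}(t)/\tau_{2}\):

```python
import numpy as np

# Whenever [[X, S], [S', X]] >= 0 as in (8), the reciprocally convex bound (26)
# holds for every alpha in (0, 1); sample it on random data.
rng = np.random.default_rng(4)
n = 3
W = rng.standard_normal((n, n))
X = W @ W.T + np.eye(n)                           # X > 0 with lambda_min >= 1
S = 0.2 * rng.standard_normal((n, n))             # small S keeps the block PSD
block = np.block([[X, S], [S.T, X]])
assert np.linalg.eigvalsh(block).min() >= 0.0     # condition (8)

for alpha in np.linspace(0.01, 0.99, 25):
    s1, s2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs = (1 - alpha) / alpha * s1 @ X @ s1 + alpha / (1 - alpha) * s2 @ X @ s2
    rhs = s1 @ S @ s2 + s2 @ S.T @ s1
    assert lhs >= rhs - 1e-9
print("reciprocally convex bound (26) verified on all samples")
```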

Using (2) and (3), and noting that each term \(g_{j}(x_{j})[l_{j}x_{j}-g_{j}(x_{j})]\) and \(f_{j}(y_{j})[m_{j}y_{j}-f_{j}(y_{j})]\) is nonnegative, we have, for any positive diagonal matrices \(K_{j}\) (\(j=1,2,3,4,5,6\)):

$$\begin{aligned}& 2x^{\intercal}(t)LK_{1}g \bigl(x(t) \bigr)-2g^{\intercal} \bigl(x(t) \bigr)K_{1}g \bigl(x(t) \bigr)\geq0, \end{aligned}$$
(30)
$$\begin{aligned}& 2x^{\intercal} \bigl(t-\tau_{2i}(t) \bigr)LK_{2}g \bigl(x \bigl(t-\tau_{2i}(t) \bigr) \bigr)-2g^{\intercal } \bigl(x \bigl(t- \tau_{2i}(t) \bigr) \bigr)K_{2}g \bigl(x \bigl(t- \tau_{2i}(t) \bigr) \bigr)\geq0, \end{aligned}$$
(31)
$$\begin{aligned}& 2x^{\intercal}(t-\tau_{2})LK_{3}g \bigl(x(t- \tau_{2}) \bigr)-2g^{\intercal } \bigl(x(t-\tau_{2}) \bigr)K_{3}g \bigl(x(t-\tau_{2}) \bigr)\geq0, \end{aligned}$$
(32)
$$\begin{aligned}& 2y^{\intercal}(t)MK_{4}f \bigl(y(t) \bigr)-2f^{\intercal} \bigl(y(t) \bigr)K_{4}f \bigl(y(t) \bigr)\geq0, \end{aligned}$$
(33)
$$\begin{aligned}& 2y^{\intercal} \bigl(t-\tau_{1i}(t) \bigr)MK_{5}f \bigl(y \bigl(t-\tau_{1i}(t) \bigr) \bigr)-2f^{\intercal } \bigl(y \bigl(t- \tau_{1i}(t) \bigr) \bigr)K_{5}f \bigl(y \bigl(t- \tau_{1i}(t) \bigr) \bigr)\geq0, \end{aligned}$$
(34)
$$\begin{aligned}& 2y^{\intercal}(t-\tau_{1})MK_{6}f \bigl(y(t- \tau_{1}) \bigr)-2f^{\intercal } \bigl(y(t-\tau_{1}) \bigr)K_{6}f \bigl(y(t-\tau_{1}) \bigr)\geq0. \end{aligned}$$
(35)

Here, by Lemma 2.1 and the bound \(d_{i}(t)\leq d_{i}\), the integral terms \(-d_{1}\int _{t-d_{1}}^{t}f^{\intercal}(y(s))E_{1i,m}f(y(s))\,ds\) and \(-d_{2}\int_{t-d_{2}}^{t}g^{\intercal}(x(s))F_{1i,m}g(x(s))\,ds\) can be estimated, respectively, as

$$\begin{aligned}& -d_{1} \int_{t-d_{1}}^{t}f^{\intercal} \bigl(y(s) \bigr)E_{1i,m}f \bigl(y(s) \bigr)\,ds \leq - \biggl[ \int_{t-d_{1}(t)}^{t}f \bigl(y(s) \bigr)\,ds \biggr]^{\intercal}E_{1i,m} \biggl[ \int _{t-d_{1}(t)}^{t}f \bigl(y(s) \bigr)\,ds \biggr], \end{aligned}$$
(36)
$$\begin{aligned}& -d_{2} \int_{t-d_{2}}^{t}g^{\intercal} \bigl(x(s) \bigr)F_{1i,m}g \bigl(x(s) \bigr)\,ds \leq - \biggl[ \int_{t-d_{2}(t)}^{t}g \bigl(x(s) \bigr)\,ds \biggr]^{\intercal}F_{1i,m} \biggl[ \int _{t-d_{2}(t)}^{t}g \bigl(x(s) \bigr)\,ds \biggr]. \end{aligned}$$
(37)

Then it follows from (18)-(37) that

$$ \mathcal{L}V(x_{t},y_{t},i,m)\leq \left .\begin{bmatrix} \zeta_{1}(t) \\ \zeta_{2}(t) \end{bmatrix} \right .^{\intercal} \bigl[\Xi+ \Upsilon_{1}^{\intercal}\Gamma_{1}\Upsilon_{1}+ \Upsilon _{2}^{\intercal}\Gamma_{2}\Upsilon_{2} \bigr] \begin{bmatrix} \zeta_{1}(t) \\ \zeta_{2}(t) \end{bmatrix}. $$
(38)

Here

$$\begin{aligned}& \zeta_{1}(t)= \biggl[x^{\intercal}(t)\quad x^{\intercal}\bigl(t-\tau _{2i}(t)\bigr)\quad x^{\intercal}(t-\tau_{2})\quad g^{\intercal} \bigl(x(t)\bigr)\quad g^{\intercal }\bigl(x\bigl(t-\tau_{2i}(t)\bigr)\bigr)\quad g^{\intercal}\bigl(x(t-\tau_{2})\bigr) \\& \hphantom{\zeta_{1}(t)=}{} \int _{t-d_{2}(t)}^{t}g^{\intercal}\bigl(x(s)\bigr)\,ds \biggr]^{\intercal}, \\& \zeta_{2}(t)= \biggl[y^{\intercal}(t)\quad y^{\intercal}\bigl(t-\tau _{1i}(t)\bigr)\quad y^{\intercal}(t-\tau_{1})\quad f^{\intercal} \bigl(y(t)\bigr)\quad f^{\intercal }\bigl(y\bigl(t-\tau_{1i}(t)\bigr)\bigr)\quad f^{\intercal}\bigl(y(t-\tau_{1})\bigr) \\& \hphantom{\zeta_{2}(t)=}{} \int _{t-d_{1}(t)}^{t}f^{\intercal}\bigl(y(s)\bigr)\,ds \biggr]^{\intercal}, \\& \bigl[\Xi+\Upsilon_{1}^{\intercal} \Gamma_{1} \Upsilon_{1}+\Upsilon _{2}^{\intercal} \Gamma_{2} \Upsilon_{2} \bigr]< 0. \end{aligned}$$
(39)

Applying the Schur complement (Lemma 2.2) shows that (39) is equivalent to (7), namely

$$ \begin{bmatrix} \Xi& \Upsilon_{1} & \Upsilon_{2} \\ \ast& -\Gamma_{1} & 0\\ \ast& \ast& -\Gamma_{2} \end{bmatrix} < 0, $$

which holds by assumption. Hence \(\mathcal{L}V(x_{t},y_{t},i,m)<0\), and the system (4) is globally asymptotically stable in the mean square. This completes the proof. □
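From a computational standpoint, the conditions of Theorem 3.1 are coupled LMIs that any semidefinite programming toolbox can test. The fragment below is a hedged sketch of that workflow in CVXPY (our illustration, not the authors' code; the cvxpy and scs packages are assumed): it encodes only conditions (8) and (9), for \(s=2\) and \(l=3\) with the transition data of the example in Section 4; the full matrix Ξ of (7) would be assembled in the same indexed fashion.

```python
import cvxpy as cp
import numpy as np

# Pose conditions (8)-(9) of Theorem 3.1 as an LMI feasibility problem.
n, s, l = 2, 2, 3
Lam = np.array([[-0.7, 0.3, 0.4], [0.4, -1.3, 0.9], [0.7, 0.9, -1.6]])  # q_mn
Pi = [np.array([[-1.5, 1.5], [1.2, -1.2]]),
      np.array([[-1.7, 1.7], [0.5, -0.5]]),
      np.array([[-0.9, 0.9], [1.6, -1.6]])]                             # pi^(m)

X1 = {(i, m): cp.Variable((n, n), symmetric=True) for i in range(s) for m in range(l)}
S1 = {(i, m): cp.Variable((n, n)) for i in range(s) for m in range(l)}
R1 = {(i, m): cp.Variable((2 * n, 2 * n), symmetric=True) for i in range(s) for m in range(l)}
W1 = cp.Variable((2 * n, 2 * n), symmetric=True)

cons = [W1 >> 1e-6 * np.eye(2 * n)]
for i in range(s):
    for m in range(l):
        cons += [X1[i, m] >> 1e-6 * np.eye(n), R1[i, m] >> 1e-6 * np.eye(2 * n)]
        # LMI (8): [[X1, S1], [S1', X1]] >= 0, carried by a symmetric block Z.
        Z = cp.Variable((2 * n, 2 * n), symmetric=True)
        cons += [Z >> 0, Z[:n, :n] == X1[i, m], Z[:n, n:] == S1[i, m],
                 Z[n:, n:] == X1[i, m]]
        # Condition (9): sum_n q_mn R_{1i,n} + sum_j pi_ij^(m) R_{1j,m} <= W1.
        coupled = sum(Lam[m, k] * R1[i, k] for k in range(l)) \
            + sum(Pi[m][i, j] * R1[j, m] for j in range(s))
        U = cp.Variable((2 * n, 2 * n), symmetric=True)
        cons += [U == W1 - coupled, U >> 0]

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("status:", prob.status)  # 'optimal' means these sampled LMIs are feasible
```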

Remark 2

In [19], the authors carried out excellent work on piecewise homogeneous Markovian jump neural networks; their main contribution is the stochastic stability analysis of a class of continuous-time neural networks with time-varying transition probabilities and mixed time delays. However, there have been no corresponding results for piecewise homogeneous Markovian jump BAM neural network systems, although in applications the study of such BAM networks is essential.

Specifically, when there is no distributed delay, the system (4) reduces to

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{dx(t)}{dt}=-C(r_{t})x(t)+A_{1}(r_{t})f(y(t))+A_{2}(r_{t})f(y(t-\tau _{1}(t,r_{t}))),\\ \frac{dy(t)}{dt}=-D(r_{t})y(t)+B_{1}(r_{t})g(x(t))+B_{2}(r_{t})g(x(t-\tau _{2}(t,r_{t}))). \end{array}\displaystyle \right . $$
(40)

Consider the following Lyapunov functional for the above BAM neural networks:

$$\begin{aligned} V(t,x_{t},y_{t},r_{t}, \delta_{t})={}&V_{1}(t,x_{t},y_{t},r_{t}, \delta_{t}) +V_{2}(t,x_{t},y_{t},r_{t}, \delta_{t})+V_{3}(t,x_{t},y_{t},r_{t}, \delta _{t}) \\ &{}+V_{4}(t,x_{t},y_{t},r_{t}, \delta_{t}), \end{aligned}$$
(41)

where \(V_{1}(t,x_{t},y_{t},r_{t},\delta_{t})\), \(V_{2}(t,x_{t},y_{t},r_{t},\delta_{t})\), \(V_{3}(t,x_{t},y_{t},r_{t},\delta_{t})\), and \(V_{4}(t,x_{t},y_{t},r_{t},\delta_{t})\) are defined as in equation (16). Along similar lines to the proof of Theorem 3.1, we obtain the following corollary.

Corollary 3.1

For any given scalars \(\tau_{1}\), \(\tau_{2}\), \(\tau_{1u}\), and \(\tau_{2u}\), the BAM neural network (40) is globally asymptotically stable in the mean square if there exist \(P_{ji,m}>0\), \(Q_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} Q_{ji,m}^{1} & Q_{ji,m}^{2} \cr \ast& Q_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(R_{ji,m}=\bigl [ {\scriptsize\begin{matrix}{} R_{ji,m}^{1} & R_{ji,m}^{2} \cr \ast& R_{ji,m}^{3} \end{matrix}} \bigr ]>0\), \(W_{j}=\bigl [ {\scriptsize\begin{matrix}{} W_{j}^{1} & W_{j}^{2} \cr \ast& W_{j}^{3} \end{matrix}} \bigr ]>0\), \(Q_{j}=\bigl [ {\scriptsize\begin{matrix}{} Q_{j}^{1} & Q_{j}^{2} \cr \ast& Q_{j}^{3} \end{matrix}} \bigr ]>0 \) (\(j=1,2\)), \(X_{ji,m}>0\), \(Y_{ji,m}>0\), \(S_{ji,m}\) (\(j=1,2\)), \(X_{i}>0\), \(Y_{i}>0\) (\(i=3,4\)), and any matrices \(K_{i}\) (\(i=1,2,3,4,5,6\)) with appropriate dimensions such that the following LMIs hold:

$$\begin{aligned}& \begin{bmatrix} \Xi& \Upsilon_{1} & \Upsilon_{2} \\ \ast& -\Gamma_{1} & 0\\ \ast& \ast& -\Gamma_{2} \end{bmatrix} < 0, \end{aligned}$$
(42)
$$\begin{aligned}& \begin{bmatrix} X_{1i,m} & S_{1i,m} \\ \ast& X_{1i,m} \end{bmatrix}\geq0, \qquad \begin{bmatrix} Y_{1i,m} & S_{2i,m} \\ \ast& Y_{1i,m} \end{bmatrix}\geq0, \end{aligned}$$
(43)
$$\begin{aligned}& \sum_{n=1}^{l}q_{mn}R_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}R_{1j,m} \leq W_{1},\qquad \sum_{n=1}^{l}q_{mn}R_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}R_{2j,m} \leq W_{2}, \end{aligned}$$
(44)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}X_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}X_{1j,m} \leq X_{2i,m}+X_{4}, \\ &\tau_{2} \Biggl[\sum _{n=1}^{l}q_{mn}X_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}X_{2j,m} \Biggr]\leq X_{3}, \end{aligned} \end{aligned}$$
(45)
$$\begin{aligned}& \begin{aligned} &\sum_{n=1}^{l}q_{mn}Y_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}Y_{1j,m} \leq Y_{2i,m}+Y_{4},\\ & \tau_{1} \Biggl[\sum _{n=1}^{l}q_{mn}Y_{2i,n}+ \sum_{j=1}^{s}\pi _{ij}^{(m)}Y_{2j,m} \Biggr]\leq Y_{3}, \end{aligned} \end{aligned}$$
(46)
$$\begin{aligned}& \pi_{ii}^{(m)}Q_{1i,m}+\sum _{n=1}^{l}q_{mn}Q_{1i,n}\leq0, \qquad \pi _{ii}^{(m)}Q_{2i,m}+\sum _{n=1}^{l}q_{mn}Q_{2i,n} \leq0, \end{aligned}$$
(47)
$$\begin{aligned}& \sum_{j=1,j\neq i}^{s}\pi_{ij}^{(m)}Q_{1j,m}+Q_{1} \leq0, \qquad \sum_{j=1,j\neq i}^{s} \pi_{ij}^{(m)}Q_{2j,m}+Q_{2} \leq0, \end{aligned}$$
(48)

where

$$\begin{aligned}& \begin{aligned}[b] \Xi_{1,1}={}&{-}P_{1i,m}C_{i}-C_{i}P_{1i,m}+R_{1i,m}^{1}-X_{1i,m}+ \tau_{2}W_{1}^{1} +\tau_{2}Q_{1}^{1}+Q_{1i,m}^{1}\\ &{}+ \sum_{n=1}^{l}q_{mn}P_{1i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}P_{1j,m}, \end{aligned} \\& \Xi_{1,2}=X_{1i,m}-S_{1i,m},\qquad \Xi_{1,3}=S_{1i,m},\qquad \Xi_{1,4}=R_{1i,m}^{2}+LK_{1}+Q_{1i,m}^{2}+ \tau_{2}W_{1}^{2}+\tau_{2}Q_{1}^{2}, \\& \Xi_{1,10}=P_{1i,m}A_{1i}, \\& \Xi_{1,11}=P_{1i,m}A_{2i},\qquad \Xi_{2,2}=-2X_{1i,m}+S_{1i,m}+S_{1i,m}^{\intercal}-(1- \tau _{2u})Q_{1i,m}^{1}, \\& \Xi_{2,3}=-S_{1i,m}+X_{1i,m},\qquad \Xi_{2,5}=-(1-\tau_{2u})Q_{1i,m}^{2}+LK_{2}, \qquad \Xi_{3,3}=-X_{1i,m}-R_{1i,m}^{1}, \\& \Xi_{3,6}=LK_{3}-R_{1i,m}^{2},\qquad \Xi_{4,4}=R_{1i,m}^{3}+\tau_{2}W_{1}^{3}+Q_{1i,m}^{3}-2K_{1}+ \tau_{2}Q_{1}^{3},\qquad \Xi_{4,7}=B_{1i}^{\intercal}P_{2i,m}, \\& \Xi_{5,5}=-(1-\tau_{2u})Q_{1i,m}^{3}-2K_{2}, \qquad \Xi_{5,7}=B_{2i}^{\intercal}P_{2i,m},\qquad \Xi_{6,6}=-2K_{3}-R_{1i,m}^{3}, \\& \begin{aligned}[b] \Xi_{7,7}={}&{-}P_{2i,m}D_{i}-D_{i}P_{2i,m}+R_{2i,m}^{1}-Y_{1i,m} +\tau_{1}W_{2}^{1}+\tau_{1}Q_{2}^{1}+Q_{2i,m}^{1}\\ &{}+ \sum_{n=1}^{l}q_{mn}P_{2i,n}+ \sum_{j=1}^{s}\pi_{ij}^{(m)}P_{2j,m}, \end{aligned} \\& \Xi_{7,8}=Y_{1i,m}-S_{2i,m},\qquad \Xi_{7,9}=S_{2i,m},\qquad \Xi_{7,10}=R_{2i,m}^{2}+MK_{4}+Q_{2i,m}^{2}+ \tau_{1}W_{2}^{2}+\tau _{1}Q_{2}^{2}, \\& \Xi_{8,8}=-2Y_{1i,m}+S_{2i,m}+S_{2i,m}^{\intercal}-(1- \tau_{1u})Q_{2i,m}^{1},\qquad \Xi_{8,9}=-S_{2i,m}+Y_{1i,m}, \\& \Xi_{8,11}=-(1-\tau_{1u})Q_{2i,m}^{2}+MK_{5}, \qquad \Xi_{9,9}=-Y_{1i,m}-R_{2i,m}^{1}, \qquad \Xi_{9,12}=MK_{6}-R_{2i,m}^{2}, \\& \Xi_{10,10}=R_{2i,m}^{3}+\tau_{1}W_{2}^{3}+Q_{2i,m}^{3}-2K_{4}+ \tau _{1}Q_{2}^{3},\qquad \Xi_{11,11}=-(1- \tau_{1u})Q_{2i,m}^{3}-2K_{5}, \\& \Xi_{12,12}=-2K_{6}-R_{2i,m}^{3}, \\& \Gamma_{1}=\tau_{2}^{2}X_{1i,m}+ \frac{\tau_{2}^{3}}{2}X_{2i,m}+\frac {\tau_{2}^{3}}{3!}X_{3} + \frac{\tau_{2}^{3}}{2}X_{4},\qquad \Gamma_{2}= \tau_{1}^{2}Y_{1i,m}+\frac{\tau_{1}^{3}}{2}Y_{2i,m}+ \frac {\tau_{1}^{3}}{3!}Y_{3} +\frac{\tau_{1}^{3}}{2}Y_{4}, \\& \Upsilon_{1}= \bigl[-C_{i}\quad 0\quad 0\quad 0\quad 0 \quad 0\quad 0\quad 0\quad 0\quad A_{1i}^{\intercal}\quad A_{2i}^{\intercal}\quad 0 \bigr]^{\intercal}, \\& \Upsilon_{2}= \bigl[0\quad 0\quad 0\quad B_{1i}^{\intercal} \quad B_{2i}^{\intercal }\quad 0\quad -D_{i}\quad 0\quad 0 \quad 0\quad 0\quad 0 \bigr]^{\intercal}. \end{aligned}$$

4 Examples

In this section, we give a numerical example to show the effectiveness of the proposed conditions. Consider the BAM neural networks (4) with two system modes (\(s=2\)) and the following parameters, where the second group of matrices (\(C_{21}\), \(D_{21}\), \(A_{21}\), etc.) corresponds to the second mode:

$$\begin{aligned}& C_{1}= \begin{bmatrix} 2.2 & 0 \\ 0 & 2.5 \end{bmatrix},\qquad D_{1}= \begin{bmatrix} 2.2 & 0 \\ 0 & 2.5 \end{bmatrix},\qquad A_{1}= \begin{bmatrix} -1.5 & 0.5 \\ 0.3 & -1.2 \end{bmatrix},\\& A_{2}= \begin{bmatrix} 0.2 & 0.4 \\ -0.3 & 1.5 \end{bmatrix}, \qquad A_{3}= \begin{bmatrix} 1.2 & 0.2 \\ -0.8 & 0.5 \end{bmatrix},\qquad B_{1}= \begin{bmatrix} -1.5 & 1.8 \\ 0.5 & -0.9 \end{bmatrix},\\& B_{2}= \begin{bmatrix} 0.3 & -0.3 \\ -1.2 & -0.5 \end{bmatrix},\qquad B_{3}= \begin{bmatrix} 1.7 & 0.8 \\ 0.3 & -2.6 \end{bmatrix}, \qquad C_{21}= \begin{bmatrix} 1.4 & 0 \\ 0 & 2.8 \end{bmatrix},\\& D_{21}= \begin{bmatrix} 1.4 & 0 \\ 0 & 2.8 \end{bmatrix},\qquad A_{21}= \begin{bmatrix} -0.5 & 1.8 \\ -0.2 & -0.3 \end{bmatrix},\qquad A_{22}= \begin{bmatrix} -0.8 & 0.4 \\ -0.7 & -0.6 \end{bmatrix}, \\& A_{23}= \begin{bmatrix} 0.2 & 0.1 \\ 0.7 & 0.8 \end{bmatrix},\qquad B_{21}= \begin{bmatrix} 0.9 & 0.5 \\ -0.3 & 0.6 \end{bmatrix},\qquad B_{22}= \begin{bmatrix} 1.6 & -0.2 \\ 1.1 & 0.5 \end{bmatrix},\\& B_{23}= \begin{bmatrix} 0.9 & 1.3 \\ 1.6 & -0.2 \end{bmatrix}, \qquad L=M= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \end{aligned}$$

and the activation functions are taken as follows:

$$f \bigl(y(t) \bigr)=\tanh \bigl(-y(t) \bigr),\qquad g \bigl(x(t) \bigr)=0.25 \times \bigl(\bigl|x(t)+1\bigr|-\bigl|x(t)-1\bigr| \bigr). $$

In this example, we take \(\tau_{1}=1.7531\), \(\tau_{2}=1.2551\), \(\tau_{1u}=0.5060\), \(\tau_{2u}=0.6991\), and \(d_{1}=d_{2}=0.8\). The discrete delays are \(\tau_{1}(t)=1.2+0.5\cos(t)\) and \(\tau_{2}(t)=0.6+0.6\sin(t)\), and the distributed delays are \(d_{1}(t)=d_{2}(t)=0.8\cos^{2}(t)\).

The transition probability matrices of \(r_{t}\), one for each mode of \(\delta_{t}\), are

$$ \Pi^{1}= \begin{bmatrix} -1.5 & 1.5 \\ 1.2 & -1.2 \end{bmatrix},\qquad \Pi^{2}= \begin{bmatrix} -1.7 & 1.7 \\ 0.5 & -0.5 \end{bmatrix},\qquad \Pi^{3}= \begin{bmatrix} -0.9 & 0.9 \\ 1.6 & -1.6 \end{bmatrix}, $$

and the transition probability matrix of \(\delta_{t}\) is

$$\Lambda= \begin{bmatrix} -0.7 & 0.3 & 0.4 \\ 0.4 & -1.3 & 0.9 \\ 0.7 & 0.9 & -1.6 \end{bmatrix}. $$
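The trajectories in Figures 1 and 2 can be reproduced, up to plotting, with a simple forward Euler scheme. The sketch below (ours; NumPy assumed) freezes the modes at \((r_{t},\delta_{t})=(1,1)\) and treats the second group of matrices above (\(C_{21}\), \(D_{21}\), \(A_{21},\ldots\)) as the mode-2 parameters, which is our reading of the notation; setting mode = 1 in the script gives the Figure 2 scenario.

```python
import numpy as np

# Forward Euler simulation of (4) with the example data, modes held fixed.
dt, T = 1e-3, 10.0
C = [np.diag([2.2, 2.5]), np.diag([1.4, 2.8])]
D = [np.diag([2.2, 2.5]), np.diag([1.4, 2.8])]
A1 = [np.array([[-1.5, 0.5], [0.3, -1.2]]), np.array([[-0.5, 1.8], [-0.2, -0.3]])]
A2 = [np.array([[0.2, 0.4], [-0.3, 1.5]]), np.array([[-0.8, 0.4], [-0.7, -0.6]])]
A3 = [np.array([[1.2, 0.2], [-0.8, 0.5]]), np.array([[0.2, 0.1], [0.7, 0.8]])]
B1 = [np.array([[-1.5, 1.8], [0.5, -0.9]]), np.array([[0.9, 0.5], [-0.3, 0.6]])]
B2 = [np.array([[0.3, -0.3], [-1.2, -0.5]]), np.array([[1.6, -0.2], [1.1, 0.5]])]
B3 = [np.array([[1.7, 0.8], [0.3, -2.6]]), np.array([[0.9, 1.3], [1.6, -0.2]])]
f = lambda y: np.tanh(-y)
g = lambda x: 0.25 * (np.abs(x + 1) - np.abs(x - 1))
tau1 = lambda t: 1.2 + 0.5 * np.cos(t)            # discrete delays
tau2 = lambda t: 0.6 + 0.6 * np.sin(t)
dd = lambda t: 0.8 * np.cos(t) ** 2               # distributed delay d_1 = d_2

mode = 0                                          # r_t = delta_t = 1
hist = int(2.0 / dt)                              # history window >= mu
steps = int(T / dt)
x = np.zeros((steps + hist, 2))
y = np.zeros((steps + hist, 2))
x[:hist] = [-0.7, 1.5]                            # constant initial history
y[:hist] = [-0.4, 1.2]

for k in range(hist, steps + hist - 1):
    t = (k - hist) * dt
    kd = max(1, int(dd(t) / dt))
    int_f = f(y[k - kd:k]).sum(axis=0) * dt       # distributed-delay integrals
    int_g = g(x[k - kd:k]).sum(axis=0) * dt
    y_del = y[k - int(tau1(t) / dt)]              # y(t - tau_1(t))
    x_del = x[k - int(tau2(t) / dt)]              # x(t - tau_2(t))
    x[k + 1] = x[k] + dt * (-C[mode] @ x[k] + A1[mode] @ f(y[k])
                            + A2[mode] @ f(y_del) + A3[mode] @ int_f)
    y[k + 1] = y[k] + dt * (-D[mode] @ y[k] + B1[mode] @ g(x[k])
                            + B2[mode] @ g(x_del) + B3[mode] @ int_g)

print("state at t = T:", x[-1], y[-1])            # plot x, y to mirror Figures 1-2
```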

Figure 1 shows the state response of system (4) in the first mode (\(r_{t}=1\), \(\delta_{t}=1\)) with the initial condition \([x_{1}(t),x_{2}(t),y_{1}(t),y_{2}(t)]^{\intercal }=[-0.7,1.5,-0.4,1.2]^{\intercal}\), and Figure 2 shows the state response in the second mode (\(r_{t}=2\), \(\delta_{t}=2\)) with the initial condition \([x_{1}(t),x_{2}(t),y_{1}(t),y_{2}(t)]^{\intercal }=[0.8,0.1,-0.3,-0.5]^{\intercal}\). This example demonstrates the effectiveness of the proposed results.

Figure 1: The state response of the model in the first mode in the example.

Figure 2: The state response of the model in the second mode in the example.

5 Conclusions

In this paper, based on Lyapunov-Krasovskii functionals and some inequality techniques, we have investigated the problem of global asymptotic stability for piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays. A linear matrix inequality (LMI) method has been developed to solve this problem, and the sufficient condition has been established in terms of LMIs. A numerical example has been given to demonstrate the usefulness of the derived LMI-based stability conditions.

References

  1. Kosko, B: Neural Networks and Fuzzy Systems: A Dynamical System Approach to Machine Intelligence. Prentice-Hall, Englewood Cliffs (1992)

  2. Kosko, B: Adaptive bidirectional associative memories. Appl. Opt. 26, 4947-4960 (1987)

  3. Kosko, B: Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 18, 49-60 (1988)

  4. Lakshmanan, S, Park, JH, Lee, TH, Jung, HY, Rakkiyappan, R: Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl. Math. Comput. 219, 9408-9423 (2013)

  5. Zhu, QX, Li, XD, Yang, XS: Exponential stability for stochastic reaction-diffusion BAM neural networks with time-varying and distributed delays. Appl. Math. Comput. 217, 6078-6091 (2011)

  6. Park, JH, Park, CH, Kwon, OM, Lee, SM: A new stability criterion for bidirectional associative memory neural networks of neutral-type. Appl. Math. Comput. 199, 716-722 (2008)

  7. Park, JH, Kwon, OM: On improved delay-dependent criterion for global stability of bidirectional associative memory neural networks with time-varying delays. Appl. Math. Comput. 199, 435-446 (2008)

  8. Yang, WG: Existence of an exponential periodic attractor of periodic solutions for general BAM neural networks with time-varying delays and impulses. Appl. Math. Comput. 219, 569-582 (2012)

  9. Bao, HB, Cao, JD: Exponential stability for stochastic BAM networks with discrete and distributed delays. Appl. Math. Comput. 218, 6188-6199 (2012)

  10. Senan, S, Arik, S, Liu, D: New robust stability results for bidirectional associative memory neural networks with multiple time delays. Appl. Math. Comput. 218, 11472-11482 (2012)

  11. Ali, M, Balasubramaniam, P: Robust stability of uncertain fuzzy Cohen-Grossberg BAM neural networks with time-varying delays. Expert Syst. Appl. 36, 10583-10588 (2009)

  12. Zhu, Q, Rakkiyappan, R, Chandrasekar, A: Stochastic stability of Markovian jump BAM neural networks with leakage delays and impulse control. Neurocomputing 136, 136-151 (2014)

  13. Wang, G, Cao, J, Xu, M: Stability analysis for stochastic BAM neural networks with Markovian jumping parameters. Neurocomputing 72(16-18), 3901-3906 (2009)

  14. Zhang, BY, Li, YM: Exponential \(L_{2}-L_{\infty}\) filtering for distributed delay systems with Markovian jumping parameters. Signal Process. 93, 206-216 (2013)

  15. Syed Ali, M, Arik, S, Saravankumar, R: Delay-dependent stability criteria of uncertain Markovian jump neural networks with discrete interval and distributed time-varying delays. Neurocomputing 158, 167-173 (2015)

  16. Wu, ZG, Park, JH, Su, HY, Chu, J: Delay-dependent passivity for singular Markov jump systems with time-delays. Commun. Nonlinear Sci. Numer. Simul. 18, 669-681 (2013)

  17. Liu, Y, Wang, Z, Liu, X: An LMI approach to stability analysis of stochastic high-order Markovian jumping neural networks with mixed time delays. Nonlinear Anal. Hybrid Syst. 2, 110-120 (2008)

  18. Zhang, LX: \(H_{\infty}\) estimation for discrete-time piecewise homogeneous Markov jump linear systems. Automatica 45, 2570-2576 (2009)

  19. Ding, YC, Zhu, H, Zhong, SM, Zhang, YP, Zeng, Y: \(H_{\infty}\) filtering for a class of piecewise homogeneous Markovian jump nonlinear systems. Math. Probl. Eng. (2012). doi:10.1155/2012/716474

  20. Wu, Z, Park, JH, Su, HY, Chu, J: Stochastic stability analysis of piecewise homogeneous Markovian jump neural networks with mixed time-delays. J. Franklin Inst. 349, 2136-2150 (2012)

  21. Park, PG, Ko, JW, Jeong, C: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47, 235-238 (2011)

  22. Cheng, J, Wang, H, Chen, S, Liu, Z, Yang, J: Robust delay-derivative-dependent state-feedback control for a class of continuous-time system with time-varying delays. Neurocomputing 173(3), 827-834 (2016)

  23. Cheng, J, Zhong, S, Zhong, Q, Zhu, H, Du, Y: Finite-time boundedness of state estimation for neural networks with time-varying delays. Neurocomputing 129, 257-264 (2014)

  24. Lu, R, Wu, H, Bai, J: New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays. J. Franklin Inst. 351(3), 1386-1399 (2014)

  25. Zhang, D, Cai, WJ, Xie, LH, Wang, QG: Non-fragile distributed filtering for T-S fuzzy systems in sensor networks. IEEE Trans. Fuzzy Syst. 23, 1883-1890 (2015). http://dx.doi.org/10.1109/TFUZZ.2014.2367101

  26. Sun, J, Liu, GP, Chen, J: Delay-dependent stability and stabilization of neutral time-delay systems. Int. J. Robust Nonlinear Control 19, 1364-1375 (2009)

  27. Boyd, S, El Ghaoui, L, Feron, E, Balakrishnan, V: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994)


Acknowledgements

The authors would like to thank the associate editor and the anonymous reviewers for their detailed comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 61533006), the National Natural Science Foundation of China (Grant No. 61273015), and the Project of the key research base of Humanities and Social Sciences in Universities of Sichuan Province (Grant No. NYJ20150604).

Author information


Corresponding author

Correspondence to Yuanhua Du.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

WW drafted the manuscript. YD helped to draft manuscript. SZ, JX, and NZ checked the manuscript. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Wen, W., Du, Y., Zhong, S. et al. Global asymptotic stability of piecewise homogeneous Markovian jump BAM neural networks with discrete and distributed time-varying delays. Adv Differ Equ 2016, 60 (2016). https://doi.org/10.1186/s13662-016-0758-x

Download citation

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13662-016-0758-x

Keywords