
Almost sure stability of the delayed Markovian jump RDNNs

Abstract

In this paper, we consider the almost sure stability of delayed reaction–diffusion neural networks (RDNNs) with Markovian jump parameters and Dirichlet boundary conditions. By constructing a new Lyapunov functional and utilizing some inequality techniques, we give sufficient conditions ensuring almost sure stability. The criteria also ensure almost sure global exponential stability when the input is equal to zero. Two numerical examples are given to demonstrate the effectiveness of the proposed approach.

Introduction

During the past few decades, neural networks (NNs) have been successfully applied in many areas such as signal processing, image processing, pattern recognition, fault diagnosis, associative memory, and combinatorial optimization [1–3]. Sontag [4] first introduced the concept of input-to-state stability (ISS). The theory of ISS plays a central role in modern control theory, in particular, in the robust stabilization of nonlinear systems, the design of nonlinear observers, the analysis of large-scale networks, etc. [4–9]. Roughly speaking, the ISS property means that no matter what the initial state is, if the external input is small, then the state must eventually be small. In recent years, many interesting results on the ISS properties of various systems, such as discrete systems, switched systems, and hybrid systems, have been reported [10–12]. The ISS property implies not only that the unforced system is asymptotically stable at the equilibrium, but also that its behavior remains bounded when its inputs are bounded. It also offers an effective way to tackle the stabilization of nonlinear control systems in the presence of various uncertainties arising from observer design, new small-gain theorems, and control engineering applications [7–9]. Against this research background, the ISS properties of NNs have been considered in recent years. It is well known that NNs are often affected by noise, such as perturbations in control or errors in observation. Thus, NNs are required not only to be stable, but also to have the ISS property. Therefore, finding sufficient conditions guaranteeing the ISS of NNs is an important and meaningful research topic. A great number of results on this topic have appeared in the literature [13–17]. For instance, Sanchez and Perez [13] first proposed the ISS properties and presented some matrix-norm conditions on ISS for NNs.
Ahn [14] considered a passivity-based learning law to investigate the ISS of a class of switched Hopfield neural networks with time delay. Some LMI sufficient conditions guaranteeing ISS were proposed using the Lyapunov function method [16]. In [17], two new criteria on the ISS of NNs with time-varying delays were given.

Many pattern formation and wave propagation phenomena that appear in nature can be described by systems of coupled nonlinear differential equations, generally known as reaction–diffusion equations [18]. These wave propagation phenomena are exhibited by systems belonging to very different scientific disciplines. Therefore, the reaction–diffusion effects cannot be neglected in either biological or man-made NNs, especially when electrons are moving in a nonuniform electromagnetic field, so we must consider that the activations vary in space as well as in time. Recently, stability and synchronization criteria for NNs involving diffusion and time-varying delays have been obtained in [18–31]. Moreover, note that many systems, such as biological networks and man-made NNs, are described by partial differential equations with Dirichlet boundary conditions [19, 21, 23, 32] instead of Neumann boundary conditions. For example, in [29], the global exponential synchronization stability in an array of linearly diffusively coupled delayed RDNNs was studied by adding an impulsive controller to a small fraction of nodes. In [30], the authors discussed the sampled-data synchronization of a class of RDNNs with Dirichlet boundary conditions; unlike other studies, a sampled-data controller with stochastic sampling was designed to synchronize the concerned delayed RDNNs. In [31], the global asymptotic sampled-data synchronization problem of an array of N randomly coupled RDNNs with Markovian jumping parameters and mixed delays was investigated; the jump parameters are determined by a continuous-time discrete-state Markovian chain, and the mixed time delays under consideration comprise both discrete and distributed delays. Hence, it is necessary to study the ISS of delayed RDNNs with Dirichlet boundary conditions.

Over the past few years, stability analysis for stochastic systems with Markovian jump parameters, Brownian motion defined on a complete probability space, or stochastic disturbances has been widely investigated [31, 33–37]. Markovian jump systems can be described by a set of linear systems with the transitions among the models determined by a Markovian chain taking values in a finite mode set. Applications of this kind of system can be found in economic systems, power systems, solar-powered systems, battle management in command, control, and communication systems, etc. NNs in real life often exhibit a phenomenon of information latching. It is recognized that a way of dealing with this information-latching problem is to extract finite-state representations (also called modes or clusters). In fact, such NNs with information latching may have finitely many modes, the modes may switch (or jump) from one to another at different times, and the switching (or jumping) between two arbitrarily different modes can be governed by a Markovian chain. Hence, NNs with Markovian jump parameters are of great significance in modeling a class of NNs with finite modes. Recently, some results on the stability, estimation, and control problems related to such systems have been reported in the literature [35–40].

To the best of the authors’ knowledge, there are few results, if any, concerning the ISS issues for Markovian jump RDNNs with mixed time-varying delays and Dirichlet boundary conditions. Integrating mixed time-varying delays, Markovian jump parameters, and diffusion effects into the study of almost sure ISS for NNs requires a more complicated analysis, which is very important in both theory and applications. In the present paper, we give some sufficient conditions for the almost sure ISS of RDNNs with mixed delays and Markovian jump parameters. It is a challenging and interesting problem how to develop Lyapunov methods for the ISS problem of mixed delayed stochastic RDNNs with Markovian jump parameters; as far as we know, this extension has not been reported in the literature at the present stage. Compared with the existing results, the main contributions of this paper can be summarized as follows: first, we make the first attempt to address the ISS analysis for a class of RDNNs with mixed delays and Markovian jump parameters; second, we apply the well-known Hardy–Poincaré inequality and the Lyapunov method to investigate the ISS properties of mixed delayed stochastic RDNNs with Markovian jump parameters; third, the established algebraic criteria for the ISS of such systems are new in accounting for mixed delays, Markovian jump parameters, and reaction–diffusion terms. We conclude that both the reaction–diffusion coefficients and the regional feature affect the almost sure ISS. The provided ISS criteria apply to Dirichlet boundary conditions and involve the regional feature, the reaction–diffusion coefficients, and the first eigenvalue of the Dirichlet Laplacian. Finally, two examples are employed to demonstrate the usefulness of the obtained results.

The structure of this paper is outlined as follows. In Sect. 2, we introduce some preliminaries and lemmas. In Sect. 3, we state the main results. In Sect. 4, we present two numerical examples to illustrate the results and, finally, make some conclusions in Sect. 5.

Model description and preliminaries

To begin with, we introduce some notation. Let Ω be an open domain in the space \(R^{m}\) containing the origin and radially bounded by π, with smooth boundary ∂Ω and \(\operatorname{mes}\Omega> 0\), where mesΩ is the measure of the set Ω. By \(L^{2} ( \Omega )\) we denote the space of real Lebesgue-measurable functions defined on Ω; it is a Banach space with the \(L_{2}\)-norm \(\Vert \eta ( t,x ) \Vert _{2} = [ \sum_{i = 1}^{n} \Vert \eta_{i} ( t,x ) \Vert _{2}^{2} ]^{\frac{1}{2}}\), where \(\eta ( t,x ) = ( \eta_{1} ( t,x ),\ldots,\eta_{n} ( t,x ) )^{T}\), \(\Vert \eta_{i} ( t,x ) \Vert _{2} = ( \int_{\Omega} \vert \eta_{i} ( t,x ) \vert ^{2}\,dx )^{1 / 2}\), and \(\vert \cdot \vert \) denotes the absolute value. By \(C = C( ( - \infty,0 ] \times\Omega,R^{n} )\) we denote the Banach space of continuous functions mapping the set \(( - \infty,0 ] \times\Omega\) into \(R^{n}\) with the norm \(\Vert \varphi \Vert = \sqrt{\sum_{i = 1}^{n} \Vert \varphi_{i} \Vert _{2}^{2}}\), where \(\varphi ( t,x ) = ( \varphi_{1} ( t,x ),\ldots,\varphi_{n} ( t,x ) )^{T}\), \(\Vert \varphi_{i} \Vert _{2} = \sqrt{\int_{\Omega} \vert \varphi_{i} ( \cdot,x ) \vert _{\tau}^{2}\,dx}\), and \(\vert \varphi_{i} ( \cdot,x ) \vert _{\tau} = \sup_{ - \infty< s \le0} \vert \varphi_{i} ( s,x ) \vert \). Let \(( \Omega,F,\{F_{t}\}_{t\geq0},P )\) be a complete probability space with a filtration \(\{ F_{t} \}_{t \ge0}\) satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). By \(L_{F_{0}}^{p} ( ( - \infty,0 ] \times \Omega;R^{n} )\) we denote the family of all \(F_{0}\)-measurable \(C ( ( - \infty,0 ] \times\Omega;R^{n} )\)-valued random variables \(\varphi= \{ \varphi ( s,x ): - \infty< s \le0,x \in\Omega \}\) such that \(E \Vert \varphi \Vert _{2}^{2} < + \infty\), where \(E \{ \cdot \}\) stands for the mathematical expectation operator with respect to the given probability measure P.
By κ we denote the class of continuous strictly increasing functions μ from \(R^{ +} \) to \(R^{ +} \) with \(\mu ( 0 ) = 0\). Let \(\kappa_{\infty} \) denote the class of functions \(\mu\in\kappa\) with \(\mu ( r ) \to\infty\) as \(r \to\infty\). Functions in κ and \(\kappa_{\infty} \) are called class κ and \(\kappa_{\infty} \) functions, respectively. In this note, a function \(\beta:R^{ +} \times R^{ +} \to R^{ +} \) is said to be of class κL if, for each fixed t, the mapping \(\beta ( \cdot,t )\) is of class κ and, for each fixed s, the function \(\beta ( s,t )\) is decreasing to zero in t as \(t \to\infty\). By \(L_{\infty}^{n} ( \Omega )\) we denote the class of functions \(v ( t,x ): R^{ +} \times\Omega\to R^{ +} \) with the supremum norm \(\Vert v ( t,x ) \Vert _{\Omega} = \sup_{t \ge0} \Vert v ( t,x ) \Vert < \infty\).
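As an illustration (ours, not the paper’s), the standard examples \(\beta ( s,t ) = s e^{ - t}\) of a class κL function and \(\gamma ( s ) = 2s\) of a class \(\kappa_{\infty}\) function can be checked numerically against the defining properties on a sample grid:

```python
# Hypothetical sanity check (not from the paper): beta(s, t) = s*exp(-t)
# is class kappa-L, and gamma(s) = 2*s is class kappa-infinity.
import math

def beta(s, t):
    return s * math.exp(-t)

def gamma(s):
    return 2.0 * s

# For each fixed t, beta(., t) is class kappa: beta(0, t) = 0, strictly increasing.
for t in (0.0, 1.0, 5.0):
    assert beta(0.0, t) == 0.0
    vals = [beta(s, t) for s in (0.1, 0.5, 1.0, 2.0)]
    assert all(a < b for a, b in zip(vals, vals[1:]))

# For each fixed s, beta(s, .) decreases to zero as t grows.
for s in (0.5, 1.0):
    vals = [beta(s, t) for t in (0.0, 1.0, 10.0, 50.0)]
    assert all(a > b for a, b in zip(vals, vals[1:])) and vals[-1] < 1e-6

# gamma is strictly increasing, gamma(0) = 0, and unbounded.
assert gamma(0.0) == 0.0 and gamma(1.0) < gamma(10.0)
```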

Consider the following delayed RDNNs with Markovian jump parameters:

$$ \begin{aligned}[b] \frac{\partial u_{i} ( t,x )}{\partial t} ={}& \sum _{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t,x )}{\partial x_{l}} \biggr) - a_{i} \bigl( r ( t ) \bigr)u_{i} ( t,x ) + \sum_{j = 1}^{n} w_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &+ \sum_{j = 1}^{n} h_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ),x \bigr) \bigr) + \sum_{j = 1}^{n} b_{ij} \bigl( r ( t ) \bigr) \int_{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s,x ) \bigr)\,ds \\ &+ v_{i} ( t,x ),\quad t \ge0,x \in\Omega, \end{aligned} $$
(1)

where \(x = ( x_{1},\ldots,x_{m} )^{T} \in\Omega\), \(u_{i} ( t,x )\) represents the state of the ith neuron at time t and in space x; the diagonal matrix \(A ( r ( t ) ) = \operatorname{diag} ( a_{1} ( r ( t ) ),\ldots,a_{n} ( r ( t ) ) )\) has positive entries \(a_{i} ( r ( t ) ) > 0\), \(B ( r ( t ) ) = ( b_{ij} ( r ( t ) ) )_{n \times n}\), \(W ( r ( t ) ) = ( w_{ij} ( r ( t ) ) )_{n \times n}\), and \(H ( r ( t ) ) = ( h_{ij} ( r ( t ) ) )_{n \times n}\) are the interconnection matrices representing the weight coefficients of the neurons, \(g_{j}\) denotes the activation function of the jth neuron at time t and in space x, \(v ( t,x ) = ( v_{1} ( t,x ),v_{2} ( t,x ),\ldots,v_{n} ( t,x ) )^{T}\) denotes an external input vector to the neurons, \(\tau_{j} ( t )\) are time-varying delays of the NNs satisfying \(0 \le\tau_{j} ( t ) \le\tau\) and \(\dot{\tau}_{j} ( t ) \le\mu< 1\), the smooth functions \(D_{il} = D_{il} ( t,x,u ) \ge0\) stand for the transmission diffusion operators along the ith neuron, and \(k_{ij} ( \cdot )\) are delay kernels.

Let \(\{ r ( t ),t \ge0 \}\) be a right-continuous Markovian chain on the probability space taking values in the finite space \(S = \{ 1,2,\ldots,N \}\) with generator \(\Gamma= ( \gamma_{ij} )_{N \times N}\) given by

$$P \bigl\{ r ( t + \delta ) = j|r ( t ) = i \bigr\} = \left \{ \textstyle\begin{array}{l@{\quad}l} \gamma_{ij}\delta+ o ( \delta ),& \mbox{if } i \ne j, \\ 1 + \gamma_{ij}\delta+ o ( \delta ), &\mbox{if } i = j, \end{array}\displaystyle \right . $$

with \(\delta> 0\) and \(\lim_{\delta\to0}o ( \delta ) / \delta= 0\), where \(\gamma_{ij} \ge0\) is the transition rate from i to j for \(i \ne j\), and \(\gamma_{ii} = - \sum_{j \ne i} \gamma_{ij}\). It is known that almost every sample path of \(r ( t )\) is a right-continuous step function with a finite number of simple jumps in any finite subinterval of \(R^{ +} \).
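The transition mechanism above can be made concrete with a short simulation sketch (our own illustration, using a hypothetical two-mode generator): the holding time in mode i is exponentially distributed with rate \(- \gamma_{ii}\), and the next mode \(j \ne i\) is drawn with probability \(\gamma_{ij} / ( - \gamma_{ii} )\).

```python
# Illustrative sketch (not part of the paper): simulating a right-continuous
# sample path of r(t) driven by a generator Gamma.
import random

def simulate_chain(gamma, r0, t_end, rng=random.Random(0)):
    t, state = 0.0, r0
    path = [(0.0, r0)]                      # (jump time, new mode)
    while True:
        rate = -gamma[state][state]          # total exit rate of current mode
        if rate <= 0.0:                      # absorbing mode: no more jumps
            break
        t += rng.expovariate(rate)           # exponential holding time
        if t >= t_end:
            break
        u, acc = rng.random() * rate, 0.0    # pick next mode prop. to gamma[state][j]
        for j, g in enumerate(gamma[state]):
            if j == state:
                continue
            acc += g
            if u <= acc:
                state = j
                break
        path.append((t, state))
    return path

Gamma = [[-3.0, 3.0], [4.0, -4.0]]           # hypothetical 2-mode generator, rows sum to 0
path = simulate_chain(Gamma, 0, 10.0)
```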

System (1) is supplemented with the following Dirichlet boundary conditions and initial value:

$$ \begin{gathered} u_{i} ( t,x ) = 0,\quad ( t,x ) \in [ 0, + \infty ) \times\partial\Omega, \\ u_{i} ( s,x ) = \varphi_{i} ( s,x ), \quad( s,x ) \in ( - \infty,0 ] \times\Omega, \end{gathered} $$
(2)

where \(\varphi (s,x) = ( \varphi_{1}(s,x),\ldots,\varphi_{n}(s,x) )^{T}\) with given bounded and continuous functions \(\varphi_{i} ( s,x )\).

We denote \(u_{i} ( t,x ) = u_{i} ( t )\), \(\varphi_{i} ( s,x ) = \varphi_{i} ( s )\), \(v_{i} ( t,x ) = v_{i} ( t )\) if no confusion occurs.

To obtain our main results, we assume that the following conditions hold.

  1. (A1)

    There exist positive constants \(L_{j}\) such that, for all \(\eta_{1},\eta_{2} \in R\),

    $$0 \le\frac{g_{j} ( \eta_{1} ) - g_{j} ( \eta_{2} )}{\eta_{1} - \eta_{2}} \le L_{j}. $$
  2. (A2)

    The delay kernels \(k_{ij} ( \cdot ):[ 0, + \infty ) \to [ 0, + \infty )\) (\(i,j = 1,2,\ldots,n\)) are real-valued nonnegative continuous functions that satisfy the following conditions:

    1. (i)

      \(\int_{0}^{ + \infty} k_{ij} ( s )\,ds = 1\),

    2. (ii)

      \(\int_{0}^{ + \infty} sk_{ij} ( s )\,ds < + \infty\),

    3. (iii)

      \(\int_{0}^{ + \infty} sk_{ij} ( s )e^{\xi s} \,ds < + \infty\), where ξ is a positive constant.
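For instance (a hedged example of ours, not taken from the paper), the exponential kernel \(k_{ij} ( s ) = a e^{ - as}\) with \(a > \xi > 0\) satisfies all three conditions: the mass is 1, the first moment is \(1 / a\), and the exponentially weighted moment equals \(a / ( a - \xi )^{2}\). A crude quadrature check:

```python
# Hedged example (our construction): verify (A2)(i)-(iii) for k(s) = a*exp(-a*s)
# with hypothetical parameters a = 2, xi = 0.5 (a > xi).
import math

a, xi = 2.0, 0.5

def k(s):
    return a * math.exp(-a * s)

def integrate(f, T=40.0, n=100000):
    # trapezoid rule on [0, T]; the exponentially decaying tail beyond T is negligible
    h = T / n
    total = 0.5 * (f(0.0) + f(T)) + sum(f(i * h) for i in range(1, n))
    return h * total

mass    = integrate(k)                                        # (i):   should be 1
moment  = integrate(lambda s: s * k(s))                       # (ii):  should be 1/a
exp_mom = integrate(lambda s: s * k(s) * math.exp(xi * s))    # (iii): should be a/(a-xi)^2

assert abs(mass - 1.0) < 1e-3
assert abs(moment - 1.0 / a) < 1e-3
assert abs(exp_mom - a / (a - xi) ** 2) < 1e-3
```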

Remark 1

By assumptions (A1) and (A2), when \(v ( t )\) is given, it is not difficult to prove that there exists a unique equilibrium point \(u^{*}\) for system (1)–(2) based on Mawhin’s continuation theorem [40].

Definition 1

System (1)–(2) is said to be input-to-state stable if there exist a class κL function β and a class κ function γ such that, for any initial state φ and any bounded input \(v ( t )\), the solution \(u ( t )\) exists for all \(t \ge0\) and satisfies

$$ E \bigl\Vert u ( t ) \bigr\Vert _{2} \le \beta \bigl( E \Vert \varphi \Vert ,t \bigr) + \gamma \bigl( \bigl\Vert v ( t ) \bigr\Vert _{\Omega} \bigr). $$
(3)

Remark 2

Inequality (3) guarantees that for any bounded input \(v ( t )\), the state \(u ( t )\) remains bounded. That is, if the delayed RDNNs are globally input-to-state stable, then their state remains bounded whenever the inputs are bounded. Hence, the delayed RDNNs are bounded-input bounded-output stable.

Lemma 1

([39, 41] Hardy–Poincaré inequality)

Let \(\Omega\subset R^{m}\) (\(m \ge3 \)) be a bounded open set containing the origin. Then

$$\begin{gathered} \int_{\Omega} \vert \nabla u \vert ^{2} \,dx - \frac{ ( m - 2 )^{2}}{4} \int_{\Omega} \frac{u^{2}}{ \vert x \vert ^{2}} \,dx \ge\frac{\Lambda_{2}}{R_{\Omega}^{2}} \int_{\Omega} u^{2} \,dx, \\ \quad u \in H_{0}^{1} ( \Omega ) = \biggl\{ y\Big|y \in L^{2} ( \Omega ),y|_{\partial\Omega} = 0,D_{i}y = \frac{\partial y}{\partial x_{i}} \in L^{2} ( \Omega ),1 \le i \le m \biggr\} ,\end{gathered} $$

\(\Lambda_{2} = 5.783\dots\) is the first eigenvalue of the Dirichlet Laplacian of the unit disk in \(R^{2}\), and \(R_{\Omega} \) is the radius of the ball \(\Omega^{ *} \subset R^{m}\) centered at the origin having the same measure as Ω.
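A quick numerical aside (our check, not part of the lemma): \(\Lambda_{2}\) is the square of the first positive zero \(j_{0,1} \approx2.40483\) of the Bessel function \(J_{0}\), so \(\Lambda_{2} = j_{0,1}^{2} \approx5.7832\), matching the constant quoted above. A self-contained series-plus-bisection computation:

```python
# Our verification sketch: compute Lambda_2 = j_{0,1}^2, the square of the
# first positive zero of the Bessel function J_0.
import math

def J0(x):
    # power series J0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2; converges fast for small x
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)   # ratio of consecutive series terms
        total += term
    return total

# J0 changes sign on [2, 3]; bisect to locate the first zero
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J0(lo) * J0(mid) <= 0.0:
        hi = mid
    else:
        lo = mid

Lambda2 = (0.5 * (lo + hi)) ** 2
assert abs(Lambda2 - 5.7832) < 1e-3   # agrees with Lemma 1's constant
```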

Remark 3

In this paper, we employ different inequalities to deal with the reaction–diffusion terms, and consequently we are convinced that diffusion does contribute to the stability analysis of RDNNs. The Hardy–Poincaré inequality is an important result and has been widely utilized in the study of partial differential equations [39, 41]. Lemma 1 is introduced mainly to estimate the reaction–diffusion terms. In [23, 27, 29], there is a similar estimate for the reaction–diffusion terms; however, \(u \in C_{0}^{1} ( \Omega )\) is required there, whereas here we only suppose that \(u \in H_{0}^{1} ( \Omega )\). Obviously, the requirement \(u \in C_{0}^{1} ( \Omega )\) is stronger than \(u \in H_{0}^{1} ( \Omega )\).

Main results

Theorem 1

Suppose that (A1)–(A2) hold. System (1)–(2) is almost surely ISS if there exist constants \(q_{i} ( i ) > 0\) for any \(r ( t ) = i \in S\), \(i,j = 1,2,\ldots,n\), such that

$$ \begin{aligned}[b] &{-} \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \\ &\quad\quad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} + \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1 \\ &\quad{}< 0. \end{aligned} $$
(4)

Here \(\Xi= \frac{\underline{\alpha} ( m - 2 )^{2}}{2\pi^{2}} + \frac{2\underline{\alpha} \Lambda_{2}}{R_{\Omega}^{2}}\), \(\underline{\alpha} = \min \{ D_{il},i = 1,\ldots,n;l = 1,\ldots,m \} > 0\), and π is a radial bound of the open domain Ω.
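To see how criterion (4) is checked in practice, here is a hedged numerical sketch (our own construction, with made-up parameter values): for a single neuron and a single mode (\(n = 1\), \(S = \{ 1 \}\), so \(\gamma_{11} = 0\) and all cross-neuron and cross-mode sums collapse), the left-hand side of (4) can be evaluated directly.

```python
# Hedged numerical sketch (not the authors' example): evaluate Xi and the
# left-hand side of criterion (4) for a hypothetical 1-neuron, 1-mode system.
import math

m, R_Omega, pi_bound = 3, 1.0, math.pi   # hypothetical domain data (radial bound pi)
D_min = 0.5                              # underline-alpha = min over D_il
Lam2 = 5.783                             # first Dirichlet eigenvalue from Lemma 1

Xi = D_min * (m - 2) ** 2 / (2 * pi_bound ** 2) + 2 * D_min * Lam2 / R_Omega ** 2

a, w, h, b = 3.0, 0.2, 0.1, 0.1          # a_1(1), w_11(1), h_11(1), b_11(1)
L, mu, tau, alpha = 1.0, 0.3, 0.5, 0.1   # Lipschitz bound L_1, delay data, trial rate

# n = 1, S = {1}: sums over j != i vanish and gamma_11 = 0.
lhs = (-Xi - 2 * a + 2 * abs(w) * L + abs(h) * L + abs(b)
       + abs(b) * L ** 2
       + abs(h) / (1 - mu) * math.exp(2 * alpha * tau) * L
       + 1)
assert lhs < 0   # criterion (4) holds for this toy parameter set
```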

Proof

If condition (4) holds, then we can choose a positive number ε (may be very small) such that, for \(i = 1,2,\ldots,n\),

$$ \begin{aligned}[b] & {-} \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \\ &\qquad{}+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} + \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1 + \varepsilon\\ &\quad{}< 0. \end{aligned} $$
(5)

Consider the following functions:

$$ \begin{aligned}[b] F_{i} ( y_{i} ) ={}& 2y_{i} - \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} k_{ji} ( s )e^{2y_{i}s}\,ds + \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} \\ &+ \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1. \end{aligned} $$
(6)

From (6) and (A2) we obtain that \(F_{i} ( 0 ) < - \varepsilon< 0\) and that \(F_{i} ( y_{i} )\) is continuous for \(y_{i} \in [ 0, + \infty )\); moreover, \(F_{i} ( y_{i} ) \to+ \infty\) as \(y_{i} \to+ \infty\), and thus there exists a constant \(\alpha_{i} \in ( 0, + \infty )\) such that

$$ \begin{aligned}[b] F_{i} ( \alpha_{i} ) ={}& 2 \alpha_{i} - \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i}^{2} \bigl\vert b_{ji} ( i ) \bigr\vert \int_{0}^{ + \infty} k_{ji} ( s )e^{2\alpha_{i}s}\,ds + \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j}}{q_{i}}L_{i} \\&+ \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1 \\={}& 0. \end{aligned} $$
(7)

Let \(\alpha= \min_{1 \le i \le n} \{ \alpha_{i} \}\). Clearly, \(\alpha> 0\), and we can get

$$ \begin{aligned}[b] F_{i} ( \alpha ) ={}& 2\alpha- \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} k_{ji} ( s )e^{2\alpha s}\,ds + \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j}}{q_{i}}L_{i} \\ &+ \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1 \\ \le{}&0. \end{aligned} $$
(8)

Given \(\varphi\in L_{F_{0}}^{p}( ( - \infty,0 ] \times \Omega;R^{n} )\), fix the system mode \(i \in S\) arbitrarily. Let \(\phi_{j} ( t ) = t - \tau_{j} ( t )\). Since the derivative \(\dot{\phi}_{j} ( t ) = 1 - \dot{\tau}_{j} ( t ) \ge1 - \mu> 0\), \(\phi_{j} ( t )\) has an inverse function. We denote this inverse function by \(\phi_{j}^{ - 1} ( t )\). Construct the Lyapunov functional

$$\begin{aligned}[b] V \bigl( t,u ( t ),r ( t ) = i \bigr) ={}& \int_{\Omega} \sum_{i = 1}^{n} q_{i} ( i ) \Biggl[ e^{2\alpha t}u_{i} ( t )^{2} \\& + \frac{1}{1 - \mu} \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \int_{t - \tau_{j} ( t )}^{t} u_{j} ( s )^{2}e^{2\alpha ( s + \tau_{j} ( \phi _{j}^{ - 1} ( s ) ) )} \,ds\\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \int_{0}^{ + \infty} k_{ij} ( s ) \int_{t - s}^{t} g_{j} \bigl( u_{j} ( z ) \bigr)^{2} e^{2\alpha ( z + s )}\,dz\,ds \Biggr]\,dx.\end{aligned} $$
(9)

Along the solutions of model (1), we have

$$\begin{aligned} LV \bigl( t,u ( t ),r ( t ) = i \bigr) ={}& \lim_{\Delta\to0^{ +}} \frac{1}{\Delta} \bigl[ E \bigl\{ V \bigl( t + \Delta,u ( t + \Delta ),r ( t + \Delta ) \bigr)|u ( t ),r ( t ) = i \bigr\} \\ &- V \bigl( t,u ( t ),r ( t ) = i \bigr) \bigr] \\ ={}& \int_{\Omega} e^{2\alpha t}\sum_{i = 1}^{n} q_{i} ( i ) \Biggl\{ 2u_{i} ( t ) \Biggl[ \sum _{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t )}{\partial x_{l}} \biggr) - a_{i} ( i )u_{i} ( t ) \\ &+ \sum_{j = 1}^{n} w_{ij} ( i )g_{j} \bigl( u_{j} ( t ) \bigr) + \sum _{j = 1}^{n} h_{ij} ( i )g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ) \bigr) \bigr) \\ & + \sum_{j = 1}^{n} b_{ij} ( i ) \int_{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s ) \bigr)\,ds + v_{i} ( t ) \Biggr] + 2\alpha u_{i}^{2} ( t ) \\ &+ \sum_{j = 1}^{n} \frac{ \vert h_{ij} ( i ) \vert }{1 - \mu} L_{j} \bigl[ e^{2\alpha\tau} u_{j} ( t )^{2} - ( 1 - \mu )u_{j} \bigl( t - \tau_{j} ( t ) \bigr)^{2} \bigr] \\&+ e^{2\alpha t}\sum_{j = 1}^{n} \gamma_{ij}q_{i} ( j )u_{i} ( t )^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \biggl[ \int _{0}^{ + \infty} e^{2\alpha s}k_{ij} ( s )g_{j} \bigl( u_{j} ( t ) \bigr)^{2}\,ds \\& - \int_{0}^{ + \infty} k_{ij} ( s ) g_{j} \bigl( u_{j} ( t - s ) \bigr)^{2}\,ds \biggr] \Biggr\} \,dx \\ \le{}& \int_{\Omega} e^{2\alpha t}\sum_{i = 1}^{n} q_{i} ( i ) \Biggl\{ \Biggl[ 2u_{i} ( t )\sum _{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t )}{\partial x_{l}} \biggr) - 2a_{i} ( i )u_{i} ( t )^{2} \\ &+ 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i}u_{i} ( t )^{2} + 2\sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert \bigl\vert g_{j} \bigl( u_{j} ( t ) \bigr) \bigr\vert \\&+ 2 \bigl\vert u_{i} ( t ) \bigr\vert \sum _{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \bigl\vert u_{j} \bigl( t - \tau_{j} ( t ) \bigr) \bigr\vert \\ & + 2 \bigl\vert u_{i} ( t ) \bigr\vert \sum _{j = 1}^{n} 
\bigl\vert b_{ij} ( i ) \bigr\vert \int_{ - \infty}^{t} k_{ij} ( t - s ) \bigl\vert g_{j} \bigl( u_{j} ( s ) \bigr) \bigr\vert \,ds + 2 \bigl\vert u_{i} ( t ) \bigr\vert v_{i} ( t ) \Biggr] \\ &+ 2\alpha u_{i} ( t )^{2}+ \sum_{j = 1}^{n} \frac{ \vert h_{ij} ( i ) \vert }{1 - \mu} L_{j} \bigl[ e^{2\alpha\tau} u_{j} ( t )^{2} - ( 1 - \mu )u_{j} \bigl( t - \tau_{j} ( t ) \bigr)^{2} \bigr] \\ &+ \sum_{j = 1}^{n} \gamma_{ij}q_{i} ( j )u_{i} ( t )^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \biggl[ \int _{0}^{ + \infty} k_{ij} ( s )e^{2\alpha s} \bigl\vert g_{j} \bigl( u_{j} ( t ) \bigr) \bigr\vert ^{2}\,ds \\& - \int_{0}^{ + \infty} k_{ij} ( s ) \bigl\vert g_{j} \bigl( u_{j} ( t - s ) \bigr) \bigr\vert ^{2} \,ds \biggr] \Biggr\} \,dx. \end{aligned}$$
(10)

From Young’s inequality and (A2), we obtain

$$ 2\sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert \bigl\vert g_{j} \bigl( u_{j} ( t ) \bigr) \bigr\vert \le\sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \bigl\vert g_{j} \bigl( u_{j} ( t ) \bigr) \bigr\vert ^{2} $$
(11)

and

$$ \begin{aligned}[b] &2 \bigl\vert u_{i} ( t ) \bigr\vert \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \int_{ - \infty}^{t} k_{ij} ( t - s ) \bigl\vert g_{j} \bigl( u_{j} ( s,x ) \bigr) \bigr\vert \,ds \\ &\quad\le\sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \int_{ - \infty}^{t} k_{ij} ( t - s ) \bigl\vert g_{j} \bigl( u_{j} ( s,x ) \bigr) \bigr\vert ^{2}\,ds \end{aligned} . $$
(12)

Applying the Green formula, the Dirichlet boundary condition, and Lemma 1, we have

$$ \begin{aligned}[b] 2 \int_{\Omega} \sum_{l = 1}^{m} u_{i} ( t )\frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t )}{\partial x_{l}} \biggr)\,dx &= - 2\sum_{l = 1}^{m} \int_{\Omega} D_{il} \biggl( \frac{\partial u_{i} ( t )}{\partial x_{l}} \biggr)^{2}\,dx \\ &\le- \biggl( \frac{\underline{\alpha} ( m - 2 )^{2}}{2\pi^{2}} + \frac{2\underline{\alpha} \Lambda_{2}}{R_{\Omega}^{2}} \biggr) \int _{\Omega} u_{i} ( t )^{2}\,dx\\& = - \Xi \int_{\Omega} u_{i} ( t )^{2}\,dx. \end{aligned} $$
(13)

By (11)–(13) and (A2) we derive

$$\begin{aligned} LV ( t,u,i ) \le{}& \int_{\Omega} e^{2\alpha t}\sum_{i = 1}^{n} q_{i} ( i ) \Biggl\{ \Biggl[ - \Xi u_{i} ( t )^{2} - 2a_{i} ( i )u_{i} ( t )^{2} \\ &+ 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i}u_{i} ( t )^{2} + \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert L_{j}^{2} \bigl\vert u_{j} ( t ) \bigr\vert ^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \bigl( \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + \bigl\vert u_{j} \bigl( t - \tau_{j} ( t ) \bigr) \bigr\vert ^{2} \bigr) + \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \bigl\vert u_{i} ( t ) \bigr\vert ^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \int_{ - \infty}^{t} k_{ij} ( t - s ) \bigl\vert g_{j} \bigl( u_{j} ( s ) \bigr) \bigr\vert ^{2}\,ds + \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + v_{i} ( t )^{2} \Biggr] \\& + \sum _{j = 1}^{n} \frac{ \vert h_{ij} ( i ) \vert }{1 - \mu} L_{j}e^{2\alpha \tau} \bigl\vert u_{j} ( t ) \bigr\vert ^{2} \\ &- \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} \bigl\vert u_{j} \bigl( t - \tau_{j} ( t ) \bigr) \bigr\vert ^{2} + \sum_{j = 1}^{n} \gamma_{ij}q_{i} ( j )u_{i} ( t )^{2} + 2\alpha u_{i} ( t )^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \biggl[ \int _{0}^{ + \infty} k_{ij} ( s )e^{2\alpha s} \bigl\vert g_{j} \bigl( u_{j} ( t ) \bigr) \bigr\vert ^{2}\,ds \\& - \int_{0}^{ + \infty} k_{ij} ( s ) \bigl\vert g_{j} \bigl( u_{j} ( t - s ) \bigr) \bigr\vert ^{2} \,ds \biggr] \Biggr\} \,dx \\ ={}& \int_{\Omega} e^{2\alpha t}\sum_{i = 1}^{n} q_{i} ( i ) \Biggl[ \Biggl( - \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert \\& + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} \\ &+ \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} + \sum _{j = 1}^{n} 
\bigl\vert b_{ij} ( i ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} k_{ji} ( s )e^{2\alpha s}\,ds \\ &+ \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} + 1 + 2 \alpha+ \sum_{j = 1}^{n} \gamma_{ij}q_{i} ( j ) \Biggr) \bigl\vert u_{i} ( t ) \bigr\vert ^{2} + v_{i} ( t )^{2} \Biggr] \,dx. \end{aligned}$$
(14)

It follows from Dynkin’s formula and (14) that

$$ \begin{aligned}[b] EV ( t,u,i ) \le{}& EV \bigl( 0,\varphi ( 0 ),i \bigr) + \Biggl\{ \int_{0}^{t} e^{2\alpha\xi} \sum _{i = 1}^{n} q_{i} ( i ) \Biggl[ \Biggl( - \Xi- 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} \\ &+ \sum_{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum _{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \\&+ \sum _{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} k_{ji} ( s )e^{2\alpha s}\,ds \\ &+ \sum_{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} + 1 + 2\alpha+ \sum_{j = 1}^{n} \gamma_{ij}q_{i} ( j ) \Biggr)E \bigl\Vert u_{i} ( \xi ) \bigr\Vert _{2}^{2} \Biggr]d\xi \Biggr\} \\ &+ \frac{n}{2\alpha} E \bigl\Vert v ( t ) \bigr\Vert _{\Omega}^{2} \bigl( e^{2\alpha t} - 1 \bigr) .\end{aligned} $$
(15)

Since

$$ V ( t,u,i ) \ge\sum_{i = 1}^{n} q_{i} ( i )e^{2\alpha t} \bigl\Vert u_{i} ( t ) \bigr\Vert _{2}^{2} \ge \min_{1 \le i \le n} \bigl\{ q_{i} ( i ) \bigr\} e^{2\alpha t}\sum_{i = 1}^{n} \bigl\Vert u_{i} ( t ) \bigr\Vert _{2}^{2},\quad t \ge0, $$
(16)

and

$$\begin{aligned} V \bigl( 0,\varphi ( 0 ),0 \bigr) ={}& \int_{\Omega} \sum_{i = 1}^{n} q_{i} ( i ) \Biggl[ \varphi_{i} ( 0 )^{2} + \frac{1}{1 - \mu} \sum_{j = 1}^{n} \bigl\vert h_{ij} ( 0 ) \bigr\vert L_{j} \int_{ - \tau_{j} ( 0 )}^{0} u_{j}^{2} ( s,x )e^{2\alpha ( s + \tau_{j} ( \psi_{j}^{ - 1} ( s ) ) )} \,ds \\ &+ \sum_{j = 1}^{n} \bigl\vert b_{ij} ( 0 ) \bigr\vert \int_{0}^{ + \infty} k_{ij} ( s ) \int_{ - s}^{0} g_{j} \bigl( u_{j} ( z,x ) \bigr)^{2} e^{2\alpha ( z + s )}\,dz\,ds \Biggr]\,dx \\ \le{}&\max_{1 \le i \le n} \bigl\{ q_{i} ( 0 ) \bigr\} \sum _{i = 1}^{n} \Biggl\{ \bigl\Vert \varphi_{i} ( 0 ) \bigr\Vert _{2}^{2} \\ &+ \sum _{j = 1}^{n} \bigl\vert b_{ij} ( 0 ) \bigr\vert L_{j}^{2} \int_{0}^{ + \infty} k_{ij} ( s ) \biggl[ \int_{ - s}^{0} \bigl\Vert u_{j} ( z,x ) \bigr\Vert _{2}^{2}e^{2\alpha ( z + s )}\,dz \biggr]\,ds \\ &+ \frac{1}{1 - \mu} \sum_{j = 1}^{n} \bigl\vert h_{ij} ( 0 ) \bigr\vert L_{j} \int_{ - \tau}^{0} \bigl\Vert u_{i} ( s ) \bigr\Vert _{2}^{2}e^{2\alpha ( s + \tau _{j} ( \psi_{j}^{ - 1} ( s ) ) )} \,ds \Biggr\} \\ \le{}&\max_{1 \le i \le n} \bigl\{ q_{i} ( 0 ) \bigr\} \Biggl\{ 1 + \max_{1 \le i \le n} \Biggl\{ \sum_{j = 1}^{n} \bigl\vert b_{ji} ( 0 ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} se^{2\alpha s}k_{ji} ( s )\,ds \Biggr\} \\ & + \frac{\tau e^{2\alpha\tau}}{1 - \mu} \sum _{j = 1}^{n} \bigl\vert h_{ij} ( 0 ) \bigr\vert L_{j} \Biggr\} \Vert \varphi_{i} \Vert _{2}^{2}, \end{aligned}$$
(17)

combining (4) and (15)–(17), we derive

$$ \begin{aligned}[b] E \bigl[ \bigl\Vert u ( t ) \bigr\Vert _{2} \bigr] \le \biggl( \frac{\max_{1 \le i \le n} \{ q_{i} ( 0 ) \}}{\min_{1 \le i \le n} \{ q_{i} ( i ) \}} \biggr)^{1 / 2}e^{ - \alpha t} \Biggl\{ 1 + \max_{1 \le i \le n} \Biggl\{ \sum _{j = 1}^{n} \bigl\vert b_{ji} ( 0 ) \bigr\vert L_{i}^{2} \int_{0}^{ + \infty} se^{2\alpha s}k_{ji} ( s )\,ds \Biggr\} \\ + \frac{\tau e^{2\alpha\tau}}{1 - \mu} \sum_{j = 1}^{n} \bigl\vert h_{ij} ( 0 ) \bigr\vert L_{j} \Biggr\} ^{1 / 2}E \bigl[ \Vert \varphi \Vert \bigr] + \biggl( \frac{n}{2\alpha\min_{1 \le i \le n} \{ q_{i} ( i ) \}} \biggr)^{1 / 2}E \bigl[ \bigl\Vert v ( t ) \bigr\Vert _{\Omega} \bigr]. \end{aligned} $$
(18)

Hence, from (3) we conclude that system (1)–(2) is almost surely ISS. This completes the proof of Theorem 1. □

Remark 4

In this paper, we are concerned with Markovian jump RDNNs with Dirichlet boundary conditions. The results are expressed as a set of inequalities. These conditions are easy to verify, and our results play an important role in the design and applications of almost surely ISS networks. It is worth mentioning that the effect of the reaction–diffusion terms is captured through the Hardy–Poincaré inequality, which is used in Theorem 1 for the first time in this setting. Moreover, we can observe an interesting fact: as long as the diffusion coefficients \(D_{il}\) in system (1) are large enough, (4) is always satisfied. This shows that sufficiently large diffusion can always render system (1)–(2) almost surely ISS.

Remark 5

If we do not consider Markov jump parameters, that is, the Markov chain \(\{ r ( t ),t \ge0 \}\) takes only the value 1 (i.e., \(S = \{ 1 \}\)), then, for simplicity, we write \(a_{i} ( 1 ) = a_{i}\), \(w_{ij} ( 1 ) = w_{ij}\), \(h_{ij} ( 1 ) = h_{ij}\), \(b_{ij} ( 1 ) = b_{ij}\). Then system (1) reduces to the following deterministic delayed RDNNs:

$$\begin{aligned}[b] \frac{\partial u_{i} ( t,x )}{\partial t} ={}& \sum_{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il}\frac{\partial u_{i} ( t,x )}{\partial x_{l}} \biggr) - a_{i}u_{i} ( t,x ) + \sum_{j = 1}^{n} w_{ij}g_{j} \bigl( u_{j} ( t,x ) \bigr)\\ &+ \sum _{j = 1}^{n} h_{ij}g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ),x \bigr) \bigr) + \sum_{j = 1}^{n} b_{ij} \int_{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s,x ) \bigr)\,ds\\ & + v_{i} ( t ),\quad t \ge0,x \in\Omega. \end{aligned} $$
(19)

It is worth pointing out that particular cases of system (19) were studied in [19, 23].

The next theorem shows that the equilibrium solution of system (19) is ISS. The proof of Theorem 2 is similar to that of Theorem 1 and thus is omitted.

Theorem 2

Suppose that (A1)–(A2) hold. System (19) with (2) is ISS if there exist constants \(q_{i} > 0\) for any \(i,j = 1,2,\ldots,n\) such that

$$ \begin{aligned}[b] &{-} \Xi- 2a_{i} + 2 \vert w_{ii} \vert L_{i} + \sum_{j = 1,j \ne i}^{n} \vert w_{ij} \vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j}}{q_{i}} \vert w_{ji} \vert L_{i}^{2} + \sum_{j = 1}^{n} \vert h_{ij} \vert L_{j} + \sum_{j = 1}^{n} \vert b_{ij} \vert \\ &\quad{}+ \sum_{j = 1}^{n} \frac{q_{j}}{q_{i}} \vert b_{ji} \vert L_{i}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{ji} \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j}}{q_{i}}L_{i} + 1 < 0. \end{aligned} $$
(20)
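As an aside, criterion (20) can be checked numerically once all parameters are fixed. The sketch below is not part of the paper: it uses 0-based indices, takes \(q_{i} = 1\), borrows the mode-1 matrices of Example 1 purely as placeholder data, and treats Ξ as an input set to the arbitrary value 4.0 (in the paper, Ξ is determined by the diffusion coefficients and the Hardy–Poincaré inequality).

```python
import numpy as np

def theorem2_lhs(i, W, H, B, a, L, q, Xi, alpha, tau, mu):
    """Left-hand side of criterion (20) for neuron i (0-based)."""
    n = len(a)
    val = -Xi - 2*a[i] + 2*abs(W[i, i])*L[i]
    val += sum(abs(W[i, j]) for j in range(n) if j != i)
    val += sum(q[j]/q[i]*abs(W[j, i])*L[i]**2 for j in range(n) if j != i)
    val += sum(abs(H[i, j])*L[j] for j in range(n))
    val += sum(abs(B[i, j]) for j in range(n))
    val += sum(q[j]/q[i]*abs(B[j, i])*L[i]**2 for j in range(n))
    val += sum(abs(H[j, i])/(1 - mu)*np.exp(2*alpha*tau)*q[j]/q[i]*L[i]
               for j in range(n))
    return val + 1.0

# Placeholder data (mode-1 matrices of Example 1; Xi = 4.0 is arbitrary).
W = np.array([[0.2, -0.1], [0.1, -0.3]])
H = np.array([[0.2, 0.0], [0.0, 0.1]])
B = np.array([[0.0, 1.0], [1.0, -1.0]])
a = np.array([3.0, 2.0])
L = np.ones(2)
q = np.ones(2)
vals = [theorem2_lhs(i, W, H, B, a, L, q,
                     Xi=4.0, alpha=0.5, tau=np.log(2), mu=0.5)
        for i in range(2)]
stable = all(v < 0 for v in vals)  # (20) must hold for every i
```

With these placeholder values, both left-hand sides are negative, so (20) would hold; an actual verification requires the true value of Ξ.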

Remark 6

Theorem 1 reduces to an almost sure exponential stability criterion for delayed RDNNs with Markovian jump parameters if \(v ( t ) = 0\). Similarly, Theorem 2 becomes an exponential stability criterion for delayed RDNNs when \(v ( t ) = 0\). In [34], the authors employed the Lyapunov direct method to study the almost sure stability of Itô stochastic reaction–diffusion systems with Brownian motion defined on a complete probability space, including asymptotic stability in probability and almost sure exponential stability. In addition, the stability criteria in [34] are independent of the reaction–diffusion coefficients and the regional feature. Compared with [34], this paper studies the ISS of a class of RDNNs with mixed delays and Markovian jump parameters. Furthermore, the given ISS criteria apply under Dirichlet boundary conditions and involve the regional feature, the reaction–diffusion coefficients, and the first eigenvalue of the Dirichlet Laplacian.

Some well-known NN models are particular cases of model (1). Ignoring the reaction–diffusion terms in system (1)–(2), system (1) reduces to the following delayed NNs:

$$ \begin{aligned}[b] &\begin{aligned}du_{i} ( t ) = {}&\Biggl[ - a_{i} \bigl( r ( t ) \bigr)u_{i} ( t ) + \sum _{j = 1}^{n} w_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} ( t ) \bigr) + \sum _{j = 1}^{n} h_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ) \bigr) \bigr) \\& + \sum_{j = 1}^{n} b_{ij} \bigl( r ( t ) \bigr) \int_{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s ) \bigr)\,ds + v_{i} ( t ) \Biggr]\,dt,\quad t \ge0,\end{aligned} \\ &u_{i} ( s ) = \varphi_{i} ( s ),\quad s \in ( - \infty,0 ].\end{aligned} $$
(21)
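To illustrate the ISS behavior that model (21) captures, the following crude forward-Euler simulation (not from the paper) integrates a single-mode version of (21) with the distributed-delay term dropped (\(b_{ij} = 0\)), a constant delay, and purely illustrative parameter values; the bounded input should yield a bounded trajectory.

```python
import numpy as np

def simulate(T=20.0, dt=1e-3):
    # Illustrative single-mode parameters (not the paper's data).
    a = np.array([3.0, 2.0])
    W = np.array([[0.2, -0.1], [0.1, -0.3]])
    H = np.array([[0.2, 0.0], [0.0, 0.1]])
    g = np.tanh
    tau = 1.0                          # constant delay for simplicity
    d = int(tau / dt)                  # delay measured in steps
    steps = int(T / dt)
    u = np.zeros((steps + d + 1, 2))
    u[:d + 1] = 0.5                    # constant initial history on [-tau, 0]
    for k in range(d, steps + d):
        t = (k - d) * dt
        v = np.array([np.sin(t), np.cos(2*t)])            # bounded input
        du = -a*u[k] + W @ g(u[k]) + H @ g(u[k - d]) + v  # right-hand side
        u[k + 1] = u[k] + dt*du                           # forward-Euler step
    return u

traj = simulate()  # trajectory remains bounded for this bounded input
```

This is only a qualitative sketch of the bounded-input, bounded-state behavior; it does not replace the Lyapunov analysis.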

As a consequence of Theorems 1 and 2, we get the following results.

Corollary 1

Assume that (A1) and (A2) are satisfied. System (21) is almost sure ISS if there exist constants \(q_{i} ( i ) > 0\) for any \(r ( t ) = i \in S\), \(i,j = 1,2,\ldots,n\), such that

$$ \begin{aligned}[b] &{-} 2a_{i} ( i ) + 2 \bigl\vert w_{ii} ( i ) \bigr\vert L_{i} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{ij} ( i ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert w_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{ij} ( i ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{ij} ( i ) \bigr\vert \\ &\qquad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( i )}{q_{i} ( i )} \bigl\vert b_{ji} ( i ) \bigr\vert L_{i}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{ji} ( i ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( i )}{q_{i} ( i )}L_{i} + \sum _{j = 1}^{n} \gamma_{ij}q_{i} ( j ) + 1\\&\quad < 0. \end{aligned} $$
(22)
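Criterion (22) differs from (20) mainly through the generator term \(\sum_{j} \gamma_{ij} q_{i} ( j )\) and the absence of Ξ (no diffusion in (21)). A numerical sketch (not from the paper; 0-based indices, mode-indexed parameter lists, illustrative data only) is:

```python
import numpy as np

def corollary1_lhs(i, k, W, H, B, a, L, q, Gamma, alpha, tau, mu):
    """LHS of criterion (22) for neuron i in mode k (both 0-based).
    W[k], H[k], B[k], a[k], q[k] hold the mode-k parameters."""
    n, S = len(L), len(W)
    val = -2*a[k][i] + 2*abs(W[k][i, i])*L[i]
    val += sum(abs(W[k][i, j]) for j in range(n) if j != i)
    val += sum(q[k][j]/q[k][i]*abs(W[k][j, i])*L[i]**2
               for j in range(n) if j != i)
    val += sum(abs(H[k][i, j])*L[j] for j in range(n))
    val += sum(abs(B[k][i, j]) for j in range(n))
    val += sum(q[k][j]/q[k][i]*abs(B[k][j, i])*L[i]**2 for j in range(n))
    val += sum(abs(H[k][j, i])/(1 - mu)*np.exp(2*alpha*tau)*q[k][j]/q[k][i]*L[i]
               for j in range(n))
    val += sum(Gamma[k][m]*q[m][i] for m in range(S))  # Markov generator term
    return val + 1.0

# Illustrative data (matrices borrowed from Example 1, q_i(k) = 1).
W = [np.array([[0.2, -0.1], [0.1, -0.3]]), np.array([[0.1, 0.0], [0.1, -0.1]])]
H = [np.array([[0.2, 0.0], [0.0, 0.1]]), np.array([[0.5, -0.1], [0.3, 0.1]])]
B = [np.array([[0.0, 1.0], [1.0, -1.0]]), np.array([[1.0, 0.0], [-1.0, 1.0]])]
a = [np.array([3.0, 2.0]), np.array([3.0, 1.5])]
q = [np.ones(2), np.ones(2)]
Gamma = np.array([[-0.1, 0.1], [0.2, -0.2]])
val = corollary1_lhs(0, 0, W, H, B, a, np.ones(2), q, Gamma,
                     alpha=0.5, tau=np.log(2), mu=0.5)  # negative for this pair
```

Note that (22) must hold for every neuron–mode pair; with these particular placeholder values it fails for some pairs (Example 1 relies on the additional diffusion term −Ξ), so the data above only illustrate how the criterion is evaluated, not a verified stability case.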

Corollary 2

Assume that (A1) and (A2) are satisfied. System (21) is ISS if there exist constants \(q_{i} > 0\) for any \(i,j = 1,2,\ldots,n\) such that

$$ \begin{aligned}[b] &{-} 2a_{i} + 2 \vert w_{ii} \vert L_{i} + \sum_{j = 1,j \ne i}^{n} \vert w_{ij} \vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j}}{q_{i}} \vert w_{ji} \vert L_{i}^{2} + \sum_{j = 1}^{n} \vert h_{ij} \vert L_{j} + \sum_{j = 1}^{n} \vert b_{ij} \vert \\ &\quad+ \sum_{j = 1}^{n} \frac{q_{j}}{q_{i}} \vert b_{ji} \vert L_{i}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{ji} \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j}}{q_{i}}L_{i} + 1 < 0. \end{aligned} $$
(23)

Remark 7

Our model (21) is more general than some well-studied NNs. When \(b_{ij} = 0\), model (21) reduces to the model studied in [15], where criteria for the ISS of NNs with time-varying delays were presented. Corollary 2 in this paper is much less conservative than the results in [15]. Moreover, our results depend on the Markovian jump parameters and can be easily checked by simple computation. To the best of the authors’ knowledge, little work has been reported so far on the almost sure ISS of NNs with Markovian jump parameters and mixed time-varying delays.

Illustrative examples

We present two examples to illustrate the usefulness of our main results. Our aim is to examine the almost sure ISS of the given RDNNs with Markovian jump parameters and mixed time delays.

Example 1

Consider the two-neuron delayed RDNNs with Markovian jump parameters

$$ \begin{aligned}[b] &\begin{aligned}\frac{\partial u_{i} ( t,x )}{\partial t} ={}& \sum _{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t,x )}{\partial x_{l}} \biggr) - a_{i} \bigl( r ( t ) \bigr)u_{i} ( t,x ) + \sum_{j = 1}^{n} w_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &+ \sum_{j = 1}^{n} h_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ),x \bigr) \bigr) + \sum_{j = 1}^{n} b_{ij} \bigl( r ( t ) \bigr) \int _{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s,x ) \bigr)\,ds \\ &+ v_{i} ( t ),\quad t \ge0,x \in\Omega, \end{aligned} \\ & u_{i} ( t,x ) = 0,\quad ( t,x ) \in [ 0, + \infty ) \times\partial\Omega,\qquad u_{i} ( s,x ) = \varphi_{i} ( s,x ), \quad ( s,x ) \in ( - \infty,0 ] \times\Omega, \end{aligned}$$
(24)

where \(\Omega= \{ ( x_{1},\ldots,x_{4} )^{T}| - 1 < x_{k} < 1,k = 1,\ldots,4 \} \subset R^{4}\), \(k_{ij} ( s ) = se^{ - s}\), \(n = 2\), \(m = 3\), \(\pi= 2\), \(R_{\Omega} = 2\), \(\mu= 0.5\), \(\Lambda_{2} = 5.783\), \(\tau= \ln2\), \(\alpha= 0.5\), \(g_{j} ( \eta ) = \frac{1}{2} ( \vert \eta+ 1 \vert - \vert \eta- 1 \vert )\), \(L_{j} = 1\), \(D_{il} = 1\), \(i,j = 1,2\), \(l = 1,\ldots,4\), \(\tau_{1} ( t ) = \tau_{2} ( t ) = 0.5(1 + \sin t)\), \(v ( t ) = [ \begin{array}{c@{\ }c} \sin t & \cos2t \end{array} ]^{T}\), and the generator of the Markov chain and parameters are

$$\begin{aligned}& \Gamma= \left( \textstyle\begin{array}{c@{\quad}c} - 0.1 & 0.1 \\ 0.2 & - 0.2 \end{array}\displaystyle \right),\qquad A ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 3 & 0 \\ 0 & 2 \end{array}\displaystyle \right),\qquad A ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 3 & 0 \\ 0 & 1.5 \end{array}\displaystyle \right), \\ & W ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & - 0.1 \\ 0.1 & - 0.3 \end{array}\displaystyle \right),\qquad W ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.1 & 0 \\ 0.1 & - 0.1 \end{array}\displaystyle \right),\\ & H ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & 0 \\ 0 & 0.1 \end{array}\displaystyle \right),\quad\quad H ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.5 & - 0.1 \\ 0.3 & 0.1 \end{array}\displaystyle \right), \\& B ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0 & 1 \\ 1 & - 1 \end{array}\displaystyle \right),\qquad B ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ - 1 & 1 \end{array}\displaystyle \right). \end{aligned}$$

Then a simple computation yields

$$\begin{gathered} - \Xi- 2a_{1} ( 1 ) + 2 \bigl\vert w_{11} ( 1 ) \bigr\vert L_{1} \\ \qquad{}+ \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{1j} ( 1 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( 1 )}{q_{1} ( 1 )} \bigl\vert w_{j1} ( 1 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{1j} ( 1 ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{1j} ( 1 ) \bigr\vert \\ \quad\quad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( 1 )}{q_{1} ( 1 )} \bigl\vert b_{j1} ( 1 ) \bigr\vert L_{1}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{j1} ( 1 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 1 )}{q_{1} ( 1 )}L_{1} + \sum _{j = 1}^{n} \gamma_{1j}q_{1} ( j ) + 1\\\quad = - 6.21 < 0, \\ - \Xi- 2a_{2} ( 1 ) + 2 \bigl\vert w_{22} ( 1 ) \bigr\vert L_{2} \\\qquad{}+ \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{2j} ( 1 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( 1 )}{q_{2} ( 1 )} \bigl\vert w_{j2} ( 1 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{2j} ( 1 ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{2j} ( 1 ) \bigr\vert \\ \quad\quad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( 1 )}{q_{2} ( 1 )} \bigl\vert b_{j2} ( 1 ) \bigr\vert L_{2}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{j2} ( 1 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 1 )}{q_{2} ( 1 )}L_{2} + \sum _{j = 1}^{n} \gamma_{2j}q_{2} ( j ) + 1 \\\quad= - 3.61 < 0. 
\\ - \Xi- 2a_{1} ( 2 ) + 2 \bigl\vert w_{11} ( 2 ) \bigr\vert L_{1}\\\qquad{} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{1j} ( 2 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac {q_{j} ( 2 )}{q_{1} ( 2 )} \bigl\vert w_{j1} ( 2 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{1j} ( 2 ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{1j} ( 2 ) \bigr\vert \\ \qquad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( 2 )}{q_{1} ( 2 )} \bigl\vert b_{j1} ( 2 ) \bigr\vert L_{1}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{j1} ( 2 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 2 )}{q_{1} ( 2 )}L_{1} + \sum _{j = 1}^{n} \gamma_{1j}q_{1} ( j ) + 1\\\quad = - 4.31 < 0, \\ - \Xi- 2a_{2} ( 2 ) + 2 \bigl\vert w_{22} ( 2 ) \bigr\vert L_{2}\\\qquad{} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{2j} ( 2 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac {q_{j} ( 2 )}{q_{2} ( 2 )} \bigl\vert w_{j2} ( 2 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{2j} ( 2 ) \bigr\vert L_{j} + \sum_{j = 1}^{n} \bigl\vert b_{2j} ( 2 ) \bigr\vert \\ \qquad{}+ \sum_{j = 1}^{n} \frac{q_{j} ( 2 )}{q_{2} ( 2 )} \bigl\vert b_{j2} ( 2 ) \bigr\vert L_{2}^{2} + \sum _{j = 1}^{n} \frac{ \vert h_{j2} ( 2 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 2 )}{q_{2} ( 2 )}L_{2} + \sum _{j = 1}^{n} \gamma_{2j}q_{2} ( j ) + 1 \\\quad= - 3.21 < 0. \end{gathered} $$

According to Theorem 1, system (24) is almost surely ISS. Figures 1–6 show the states of RDNNs (24) in the presence of input and illustrate the ISS feature that the behavior remains bounded when the inputs are bounded. The evolutions of the states shown in Figs. 1–6 also demonstrate that system (24) is almost surely ISS, whereas Fig. 7 shows the Markovian switching sequence.

Figure 1

The state surface of \(u_{1}(x_{1},0.1,-0.2,t)\)

Figure 2

The state surface of \(u_{2}(x_{1},0.1,-0.2,t)\)

Figure 3

The state surface of \(u_{1}(0.4,x_{2},0.1,t)\)

Figure 4

The state surface of \(u_{2}(0.4,x_{2},0.1,t)\)

Figure 5

The state surface of \(u_{1}(0.1,0.1,x_{3},t)\)

Figure 6

The state surface of \(u_{2}(0.1,0.1,x_{3},t)\)

Figure 7

The Markov switching sequence

Example 2

Consider the two-neuron delayed RDNNs with Markovian jump parameters

$$\begin{aligned}& \begin{aligned}\frac{\partial u_{i} ( t,x )}{\partial t} = {}&\sum _{l = 1}^{m} \frac{\partial}{\partial x_{l}} \biggl( D_{il} \frac{\partial u_{i} ( t,x )}{\partial x_{l}} \biggr) - a_{i} \bigl( r ( t ) \bigr)u_{i} ( t,x ) + \sum_{j = 1}^{n} w_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} ( t,x ) \bigr) \\ &+ \sum_{j = 1}^{n} h_{ij} \bigl( r ( t ) \bigr)g_{j} \bigl( u_{j} \bigl( t - \tau_{j} ( t ),x \bigr) \bigr) + \sum_{j = 1}^{n} b_{ij} \bigl( r ( t ) \bigr) \int _{ - \infty}^{t} k_{ij} ( t - s )g_{j} \bigl( u_{j} ( s,x ) \bigr)\,ds \\ &+ v_{i} ( t ),\quad t \ge0,x \in\Omega,\end{aligned} \\& u_{i} ( t,x ) = 0,\quad ( t,x ) \in [ 0, + \infty ) \times\partial\Omega,\qquad u_{i} ( s,x ) = \varphi_{i} ( s,x ),\quad ( s,x ) \in ( - \infty,0 ] \times\Omega, \end{aligned}$$
(25)

where \(\Omega= \{ ( x_{1},\ldots,x_{4} )^{T}| - 1 < x_{k} < 1,k = 1,\ldots,4 \} \subset R^{4}\), \(k_{ij} ( s ) = se^{ - s}\), \(n = 2\), \(m = 3\), \(\pi= 2\), \(R_{\Omega} = 2\), \(\mu= 0.5\), \(\Lambda_{2} = 5.783\), \(\tau= \ln2\), \(\alpha= 0.5\), \(g_{j} ( \eta ) = \tanh ( \eta )\), \(x = ( x_{1},0.1, - 0.2 )\), \(L_{j} = 1\), \(D_{il} = 1\), \(i,j = 1,2\), \(l = 1,\ldots,4\), \(\tau_{1} ( t ) = \tau_{2} ( t ) = 0.5(1 + \sin t)\), \(v ( t ) = [ \begin{array}{c@{\ }c} \sin2t & \cos t \end{array} ]^{T}\), and the generator of the Markov chain and parameters are

$$\begin{gathered} \Gamma= \left( \textstyle\begin{array}{c@{\quad}c} - 0.1 & 0.1 \\ 0.2 & - 0.2 \end{array}\displaystyle \right),\qquad A ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 2 & 0 \\ 0 & 1.5 \end{array}\displaystyle \right),\qquad A ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 2 & 0 \\ 0 & 1.8 \end{array}\displaystyle \right),\\W ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & - 0.1 \\ 0.1 & - 0.3 \end{array}\displaystyle \right), \qquad W ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & 0 \\ 0.1 & - 0.2 \end{array}\displaystyle \right),\\H ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & 0 \\ 0 & 0.1 \end{array}\displaystyle \right),\qquad H ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0.2 & - 0.1 \\ 0.3 & 0.1 \end{array}\displaystyle \right),\\B ( 1 ) = \left( \textstyle\begin{array}{c@{\quad}c} 0 & 1 \\ 1 & - 1 \end{array}\displaystyle \right), \qquad B ( 2 ) = \left( \textstyle\begin{array}{c@{\quad}c} 1 & 0 \\ - 1 & 1 \end{array}\displaystyle \right).\end{gathered} $$

Then a simple computation yields

$$\begin{aligned}& - \Xi- 2a_{1} ( 1 ) + 2 \bigl\vert w_{11} ( 1 ) \bigr\vert L_{1} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{1j} ( 1 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( 1 )}{q_{1} ( 1 )} \bigl\vert w_{j1} ( 1 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{1j} ( 1 ) \bigr\vert L_{j} \\& \qquad+ \sum_{j = 1}^{n} \bigl\vert b_{1j} ( 1 ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( 1 )}{q_{1} ( 1 )} \bigl\vert b_{j1} ( 1 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \frac{ \vert h_{j1} ( 1 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 1 )}{q_{1} ( 1 )}L_{1} + \sum _{j = 1}^{n} \gamma_{1j}q_{1} ( j ) + 1 \\& \quad= - 1.41 < 0, \\& - \Xi- 2a_{2} ( 1 ) + 2 \bigl\vert w_{22} ( 1 ) \bigr\vert L_{2} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{2j} ( 1 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac{q_{j} ( 1 )}{q_{2} ( 1 )} \bigl\vert w_{j2} ( 1 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{2j} ( 1 ) \bigr\vert L_{j} \\& \qquad{}+ \sum_{j = 1}^{n} \bigl\vert b_{2j} ( 1 ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( 1 )}{q_{2} ( 1 )} \bigl\vert b_{j2} ( 1 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \frac{ \vert h_{j2} ( 1 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 1 )}{q_{2} ( 1 )}L_{2} + \sum _{j = 1}^{n} \gamma_{2j}q_{2} ( j ) + 1 \\& \quad= - 0.71 < 0. 
\\& - \Xi- 2a_{1} ( 2 ) + 2 \bigl\vert w_{11} ( 2 ) \bigr\vert L_{1} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{1j} ( 2 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac {q_{j} ( 2 )}{q_{1} ( 2 )} \bigl\vert w_{j1} ( 2 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{1j} ( 2 ) \bigr\vert L_{j} \\& \qquad{}+ \sum_{j = 1}^{n} \bigl\vert b_{1j} ( 2 ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( 2 )}{q_{1} ( 2 )} \bigl\vert b_{j1} ( 2 ) \bigr\vert L_{1}^{2} + \sum_{j = 1}^{n} \frac{ \vert h_{j1} ( 2 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 2 )}{q_{1} ( 2 )}L_{1} + \sum _{j = 1}^{n} \gamma_{1j}q_{1} ( j ) + 1 \\& \quad= - 0.51 < 0, \\& - \Xi- 2a_{2} ( 2 ) + 2 \bigl\vert w_{22} ( 2 ) \bigr\vert L_{2} + \sum _{j = 1,j \ne i}^{n} \bigl\vert w_{2j} ( 2 ) \bigr\vert + \sum_{j = 1,j \ne i}^{n} \frac {q_{j} ( 2 )}{q_{2} ( 2 )} \bigl\vert w_{j2} ( 2 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \bigl\vert h_{2j} ( 2 ) \bigr\vert L_{j} \\& \qquad{}+ \sum_{j = 1}^{n} \bigl\vert b_{2j} ( 2 ) \bigr\vert + \sum_{j = 1}^{n} \frac{q_{j} ( 2 )}{q_{2} ( 2 )} \bigl\vert b_{j2} ( 2 ) \bigr\vert L_{2}^{2} + \sum_{j = 1}^{n} \frac{ \vert h_{j2} ( 2 ) \vert }{1 - \mu} e^{2\alpha\tau} \frac{q_{j} ( 2 )}{q_{2} ( 2 )}L_{2} + \sum _{j = 1}^{n} \gamma_{2j}q_{2} ( j ) + 1 \\& \quad= - 0.91 < 0. \end{aligned}$$

According to Theorem 1, system (25) is almost surely ISS. Figures 8 and 9 show the states of RDNNs (25) in the presence of input and illustrate the ISS feature that the behavior remains bounded when the inputs are bounded; they also demonstrate that system (25) is almost surely ISS. Figures 10 and 11 show the states of RDNNs (25) in the absence of input. Therefore we can see that these RDNNs possess bounded-input bounded-output stability and exponential stability.

Figure 8

The transient state of \(u_{1} ( x,t )\) for RDNNs (25) in the presence of input \(v ( t )\)

Figure 9

The transient state of \(u_{2} ( x,t )\) for RDNNs (25) in the presence of input \(v ( t )\)

Figure 10

The transient state of \(u_{1} ( x,t )\) for RDNNs (25) in the absence of input \(v ( t )\)

Figure 11

The transient state of \(u_{2} ( x,t )\) for RDNNs (25) in the absence of input \(v ( t )\)

Conclusions

In this paper, we have considered the almost sure ISS problem for Markovian jump delayed NNs with reaction–diffusion terms and Dirichlet boundary conditions. Sufficient criteria have been obtained guaranteeing the almost sure ISS of the considered systems. Different inequalities are employed to deal with the reaction–diffusion terms, and consequently the diffusion terms do contribute to the ISS analysis of RDNNs. The well-known Hardy–Poincaré inequality from partial differential equations is used for the first time in the ISS problem. Two numerical examples have demonstrated the effectiveness of the proposed approach.

References

1. Kwon, O.M., Park, J.H.: Exponential stability analysis for uncertain neural networks with interval time-varying delays. Appl. Math. Comput. 212, 530–541 (2009)
2. Yang, X., Cao, J.: Exponential synchronization of delayed neural networks with discontinuous activations. IEEE Trans. Circuits Syst. I 60(9), 2431–2439 (2013)
3. Huang, C.X., Cao, J.D.: Convergence dynamics of stochastic Cohen–Grossberg neural networks with unbounded distributed delays. IEEE Trans. Neural Netw. 22, 561–572 (2011)
4. Sontag, E.D.: Smooth stabilization implies coprime factorization. IEEE Trans. Autom. Control 34, 435–443 (1989)
5. Sontag, E.D., Wang, Y.: New characterizations of input-to-state stability. IEEE Trans. Autom. Control 41, 1283–1294 (1996)
6. Jiang, Z.P., Wang, Y.: Input-to-state stability for discrete-time nonlinear systems. Automatica 37(6), 857–869 (2001)
7. Hong, Y.G., Jiang, Z.P., Feng, G.: Finite-time input-to-state stability and applications to finite-time control design. SIAM J. Control Optim. 48(7), 4395–4418 (2010)
8. Sontag, E.D.: On the input-to-state stability property. Eur. J. Control 1(1), 24–36 (1995)
9. Sontag, E.D., Wang, Y.: On characterizations of the input-to-state stability property. Syst. Control Lett. 24(5), 351–359 (1995)
10. Dashkovskiy, S., Mironchenko, A.: Input-to-state stability of nonlinear impulsive systems. SIAM J. Control Optim. 51(3), 1962–1987 (2012)
11. Dashkovskiy, S., Ruffer, B., Wirth, F.: Small gain theorems for large scale systems and construction of ISS Lyapunov functions. SIAM J. Control Optim. 48(6), 4089–4118 (2010)
12. Freeman, R.A., Kokotovic, P.V.: Robust Nonlinear Control Design: State-Space and Lyapunov Techniques. Birkhäuser, Boston (2008)
13. Sanchez, E.N., Perez, J.P.: Input-to-state stability (ISS) analysis for dynamic neural networks. IEEE Trans. Circuits Syst. I 46, 1395–1398 (1999)
14. Ahn, C.K.: Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay. Inf. Sci. 180, 4582–4594 (2010)
15. Zhu, S., Shen, Y.: Two algebraic criteria for input-to-state stability of recurrent neural networks with time-varying delays. Neural Comput. Appl. 22, 1163–1169 (2013). https://doi.org/10.1007/s00521-012-0882-9
16. Ahn, C.K.: Robust stability of recurrent neural networks with ISS learning algorithm. Nonlinear Dyn. 65, 413–419 (2011)
17. Yang, Z., Zhou, W., Huang, T.: Exponential input-to-state stability of recurrent neural networks with multiple time-varying delays. Cogn. Neurodyn. 8, 47–54 (2014). https://doi.org/10.1007/s11571-013-9258-9
18. Serrano-Gotarredona, T., Linares-Barranco, B.: Log-domain implementation of complex dynamics reaction–diffusion neural networks. IEEE Trans. Neural Netw. 14, 1337–1355 (2003)
19. Lu, J.G.: Robust global exponential stability for interval reaction–diffusion Hopfield neural networks with distributed delays. IEEE Trans. Circuits Syst. II 54, 1115–1119 (2007)
20. Zhang, W., Li, J., Xing, K., Ding, C.: Synchronization for distributed parameter NNs with mixed delays via sampled-data control. Neurocomputing 175, 265–277 (2016)
21. Wang, Z.S., Zhang, H.G., Li, P.: An LMI approach to stability analysis of reaction–diffusion Cohen–Grossberg neural networks concerning Dirichlet boundary conditions and distributed delays. IEEE Trans. Syst. Man Cybern. B 40, 1596–1606 (2010)
22. Wang, Z., Zhang, H.: Global asymptotic stability of reaction–diffusion Cohen–Grossberg neural network with continuously distributed delays. IEEE Trans. Neural Netw. 21, 39–49 (2010)
23. Lu, J.G., Lu, L.J.: Global exponential stability and periodicity of reaction–diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions. Chaos Solitons Fractals 39, 1538–1549 (2009)
24. Zhang, W., Li, J., Ding, C., Xing, K.: pth moment exponential stability of hybrid delayed reaction–diffusion Cohen–Grossberg neural networks. Neural Process. Lett. 46, 83–111 (2017)
25. Zhang, W.: Passivity analysis of spatially and temporally BAM neural networks with the Neumann boundary conditions. Bound. Value Probl. 2015, Article ID 174 (2015). https://doi.org/10.1186/s13661-015-0435-0
26. Balasubramaniam, P., Vidhya, C.: Exponential stability of stochastic reaction–diffusion uncertain fuzzy neural networks with mixed delays and Markovian jumping parameters. Expert Syst. Appl. 39, 3109–3115 (2012)
27. Zhang, X., Wu, S., Li, K.: Delay-dependent exponential stability for impulsive Cohen–Grossberg neural networks with time-varying delays and reaction–diffusion terms. Commun. Nonlinear Sci. Numer. Simul. 16, 1524–1532 (2011)
28. Zhang, W., Li, J., Chen, M.: Pinning adaptive synchronization analysis of linearly coupled delayed RDNNs with unknown time-varying coupling strengths. Adv. Differ. Equ. 2014, Article ID 146 (2014)
29. Yang, X., Cao, J., Yang, Z.: Synchronization of coupled reaction–diffusion neural networks with time-varying delays via pinning-impulsive controller. SIAM J. Control Optim. 51(5), 3486–3510 (2013)
30. Rakkiyappan, R., Dharani, S., Zhu, Q.: Synchronization of reaction–diffusion neural networks with time-varying delays via stochastic sampled-data controller. Nonlinear Dyn. 79(1), 485–500 (2015)
31. Rakkiyappan, R., Dharani, S.: Sampled-data synchronization of randomly coupled reaction–diffusion neural networks with Markovian jumping and mixed delays using multiple integral approach. Neural Comput. Appl. 28(3), 449–462 (2017)
32. Schiaffino, A., Tesei, A.: Competition systems with Dirichlet boundary conditions. J. Math. Biol. 15, 93–105 (1982)
33. Mao, X.: Exponential Stability of Stochastic Differential Equations. Marcel Dekker, New York (1994)
34. Luo, Q., Zhang, Y.: Almost sure exponential stability of stochastic reaction diffusion systems. Nonlinear Anal. 71, e487–e493 (2009)
35. Rakkiyappan, R., Zhu, Q., Chandrasekar, A.: Stability of stochastic neural networks of neutral type with Markovian jumping parameters: a delay-fractioning approach. J. Franklin Inst. 351(3), 1553–1570 (2014)
36. Ali, M.S., Gunasekaran, N., Zhu, Q.: State estimation of T–S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst. 306, 87–104 (2017)
37. Wang, G., Li, Z., Zhang, Q., Yang, C.: Robust finite-time stability and stabilization of uncertain Markovian jump systems with time-varying delay. Appl. Math. Comput. 293, 377–393 (2017)
38. Shi, G., Ma, Q.: Synchronization of stochastic Markovian jump neural networks with reaction–diffusion terms. Neurocomputing 77, 275–280 (2012)
39. Brezis, H., Vazquez, J.L.: Blow-up solutions of some nonlinear elliptic problems. Rev. Mat. Univ. Complut. Madr. 10, 443–469 (1997)
40. Lou, X., Cui, B.: Stochastic stability analysis for delayed neural networks of neutral type with Markovian jump parameters. Chaos Solitons Fractals 39(5), 2188–2197 (2009)
41. Zhang, Y., Luo, Q.: Global exponential stability of impulsive delayed reaction–diffusion neural networks via Hardy–Poincaré inequality. Neurocomputing 83, 198–204 (2012)

Acknowledgements

The authors are grateful to anonymous referees for their excellent suggestions, which greatly improved the presentation of the paper.

Funding

This work is partially supported by the National Natural Science Foundation of China under Grants No. 61573013 and the Special research projects in Shaanxi Province Department of Education under Grant No. 17JK0824.

Author information

Contributions

The authors declare that the study was realized in collaboration with the same responsibility. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Weiyuan Zhang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhang, W., Li, J. Almost sure stability of the delayed Markovian jump RDNNs. Adv Differ Equ 2018, 248 (2018). https://doi.org/10.1186/s13662-018-1685-9


Keywords

  • Reaction–diffusion neural networks
  • Almost sure stability
  • Delayed
  • Markovian jump