Open Access

Dynamic analysis of stochastic fuzzy Cohen-Grossberg neural networks with time-varying delays

Advances in Difference Equations 2015, 2015:196

https://doi.org/10.1186/s13662-015-0431-9

Received: 30 September 2014

Accepted: 2 March 2015

Published: 26 June 2015

Abstract

This paper is concerned with the problem of stochastic stability for a class of fuzzy Cohen-Grossberg neural networks in which the interconnections and delays are time-varying. Based on a Lyapunov function and the Itô differential formula, a set of novel sufficient conditions for the pth moment exponential stability of the equilibrium of the system is derived. Moreover, an illustrative example is given to demonstrate the effectiveness of the obtained results.

Keywords

fuzzy Cohen-Grossberg neural networks; global pth moment exponential stability; time-varying delays; Itô differential formula

1 Introduction

Cohen-Grossberg neural networks [1] have recently been extensively studied and applied in many different fields such as associative memory, signal processing, and optimization. In such applications, it is of prime importance to ensure that the designed neural networks are stable [2]. In practice, due to the finite speed of the switching and transmission of signals, time delays do exist in a working network and thus should be incorporated into the model equation [3–12]. In addition to the delay effects, studies have also focused intensively on stochastic models [13–18]. It has been realized that synaptic transmission is a noisy process brought about by random fluctuations from the release of neurotransmitters and other probabilistic causes, so it is of great significance to consider stochastic effects on the stability of neural networks described by stochastic functional differential equations.

Stochastic effects constitute another source of disturbances or uncertainties in real systems. A lot of dynamical systems have variable structures subject to stochastic abrupt changes, which may result from abrupt phenomena such as stochastic failures and repairs of the components, changes in the interconnections of subsystems or sudden environment switching. Therefore, stochastic perturbations should be taken into account when modeling neural networks. In recent years, the dynamic analysis of stochastic systems (including neural networks) with delays has been an attractive topic for many researchers, and a large number of stability criteria of these systems have been reported; see e.g. [19–26] and the references therein.

In this paper, I would like to integrate fuzzy operations into Cohen-Grossberg neural networks. Speaking of fuzzy operations, Yang and Yang [27] first introduced fuzzy cellular neural networks (FCNNs), combining those operations with cellular neural networks. So far researchers have found that FCNNs are useful in image processing, and some results have been reported on the stability and periodicity of FCNNs [26–34].

However, to the best of my knowledge, few authors have considered the pth moment exponential stability and almost sure exponential stability of stochastic nonautonomous fuzzy Cohen-Grossberg neural networks. In fact, in electronic circuit implementations, assuming a constant connection matrix and constant delays is unrealistic. Therefore, in this sense, a time-varying connection matrix and time-varying delays are better candidates for modeling neural information processing.

Motivated by the above discussions, this paper is concerned with the following stochastic fuzzy Cohen-Grossberg neural networks with time-varying delays:
$$\begin{aligned} d x_{i}(t) =&-a_{i}\bigl(x_{i}(t) \bigr)\Biggl[b_{i}\bigl(x_{i}(t)\bigr)-\sum _{j=1}^{n}c_{ij}(t)f_{j} \bigl(x_{j}(t)\bigr)-\bigwedge_{j=1}^{n} \alpha _{ij}(t)g_{j}\bigl(x_{j}\bigl(t- \tau_{j}(t)\bigr)\bigr) \\ &{} -\bigvee_{j=1}^{n} \beta_{ji}(t)g_{j}\bigl(x_{j}\bigl(t- \tau_{j}(t)\bigr)\bigr)+I_{i}(t) \Biggr]\, dt+\sum _{j=1}^{n}\sigma_{ij}\bigl(x_{j}(t) \bigr)\, d\omega_{j}(t). \end{aligned}$$
(1)
For \(i=1,2,\ldots,n\), where n corresponds to the number of units in the neural network, \(x_{i}(t)\) corresponds to the state of the ith neuron. \(f_{j}(\cdot)\), \(g_{j}(\cdot)\) are signal transmission functions. \(\tau_{j}(t)\) corresponds to the time delay required in processing and satisfies \(0\le\tau_{j}(t)\le\tau\) (τ is a constant). \(a_{i}(x_{i}(t))\) represents an amplification function at time t. \(b_{i}(x_{i}(t))\) is an appropriately behaved function at time t such that the solutions of model (1) remain bounded. \(c_{ij}(t)\) represents the elements of the feedback template. \(I_{i}(t)=\tilde{I}_{i}(t)+\bigwedge_{j=1}^{n} T_{ij}(t)u_{j}(t)+\bigvee_{j=1}^{n} H_{ij}(t)u_{j}(t)\). \(\alpha_{ij}(t)\), \(\beta_{ij}(t)\), \(T_{ij}(t)\), and \(H_{ij}(t)\) are elements of the fuzzy feedback MIN template, the fuzzy feedback MAX template, the fuzzy feed-forward MIN template, and the fuzzy feed-forward MAX template, respectively; \(\bigwedge\) and \(\bigvee\) denote the fuzzy AND and fuzzy OR operations, respectively. \(u_{j}(t)\) denotes the external input of the jth neuron, and \(\tilde{I}_{i}(t)\) is the external bias of the ith unit. \(\sigma_{ij}(\cdot)\) is the diffusion coefficient, and \(\sigma_{i}=(\sigma _{i1},\sigma_{i2},\ldots,\sigma_{in})\). \(\omega(t)=(\omega_{1}(t),\omega_{2}(t),\ldots,\omega_{n}(t))^{T}\) is an n-dimensional Brownian motion defined on a complete probability space \((\Omega,F,\{F_{t}\}_{t\ge0},P)\) with a filtration \(\{F_{t}\}_{t\ge0}\) satisfying the usual conditions (i.e., it is right continuous and \(F_{0}\) contains all P-null sets).
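As a concrete illustration (not taken from the paper), a two-neuron instance of model (1) can be simulated with the Euler-Maruyama scheme. All parameter values below, the linear choice \(b_{i}(x)=5x\), the constant delay \(\tau_{j}(t)=0.1\), and the diagonal linear noise \(\sigma_{ij}(x)=0.1x\delta_{ij}\) are illustrative assumptions for this sketch only.

```python
import numpy as np

# Hypothetical two-neuron instance of model (1), Euler-Maruyama scheme.
rng = np.random.default_rng(0)
n, dt, T = 2, 1e-3, 5.0
steps, lag = int(T / dt), int(0.1 / dt)  # constant delay tau_j(t) = 0.1

a = lambda x: 2.0 + np.cos(x)            # amplification a_i(x), in [1, 3]
b = lambda x: 5.0 * x                    # well-behaved b_i, here linear
f = g = np.tanh                          # Lipschitz signal transmissions
C = np.array([[0.1, 0.2], [0.2, 0.1]])   # feedback template c_ij
A = np.array([[0.3, 0.1], [0.1, 0.3]])   # fuzzy MIN template alpha_ij
B = np.array([[0.2, 0.1], [0.1, 0.2]])   # fuzzy MAX template beta_ij
I = np.zeros(n)                          # external input (zero here)
sig = 0.1                                # diagonal noise sigma_ii(x) = 0.1 x

hist = np.ones((steps + lag + 1, n))     # constant initial segment phi = 1
for k in range(lag, steps + lag):
    x, xd = hist[k], hist[k - lag]       # current and delayed states
    fuzzy_and = np.min(A * g(xd), axis=1)  # wedge_j alpha_ij g_j(x_j(t-tau))
    fuzzy_or = np.max(B * g(xd), axis=1)   # vee_j  beta_ij  g_j(x_j(t-tau))
    drift = -a(x) * (b(x) - C @ f(x) - fuzzy_and - fuzzy_or + I)
    noise = sig * x * rng.normal(size=n) * np.sqrt(dt)
    hist[k + 1] = x + drift * dt + noise

print(hist[-1])  # the state decays toward the equilibrium at the origin
```

With these parameters the drift is dominated by the strong decay term \(b_{i}(x)=5x\), so trajectories contract toward the origin, the equilibrium of this particular instance.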

Obviously, model (1) is quite general, and it includes several well-known neural network models as special cases, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks [12, 24]. There are at least three different types of stochastic stability to describe the limiting behaviors of stochastic differential equations: stability in probability, moment stability, and almost sure stability (see [23, 35]). When designing an associative memory neural network, we should make the convergence speed as high as possible to ensure the quick convergence of the network operation. Therefore, pth moment (\(p\geq2\)) exponential stability and almost sure exponential stability are the most useful concepts, as they imply that the solutions tend to the trivial solution exponentially fast. This motivates us to study the pth moment exponential stability of system (1).

The rest of this paper is organized as follows. In Section 2, the basic assumptions and preliminaries are introduced. In Section 3, the criterion for the pth moment (\(p\geq2\)) exponential stability of system (1) is derived by using the Lyapunov function method and an Itô differential inequality. An illustrative example is given in Section 4. Conclusions are drawn in Section 5.

2 Preliminaries and some assumptions

For convenience, we introduce several notations. Let \(C=C((-\infty ,0],R^{n})\) be the Banach space of continuous functions mapping \((-\infty,0]\) into \(R^{n}\) with the topology of uniform convergence. For any \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\in R^{n}\), we define \(\|x(t)\|=\|x(t)\|_{p}=(\sum_{i=1}^{n}|x_{i}(t)|^{p})^{\frac{1}{p}}\) (\(1< p<\infty\)).
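The vector p-norm above can be evaluated directly from its definition; a minimal sketch in plain Python (the function name `p_norm` is ours):

```python
def p_norm(x, p=2):
    """Vector p-norm ||x||_p = (sum_i |x_i|^p)^(1/p), for 1 < p < infinity."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

print(p_norm([3.0, 4.0], p=2))  # 5.0, the Euclidean norm of (3, 4)
```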

The initial conditions for system (1) are \(x(s)=\varphi(s)\), \(-\tau\leq s\leq0\), \(\varphi\in L_{F_{0}}^{p}((-\tau,0],R^{n})\), where \(L_{F_{0}}^{p}((-\tau,0],R^{n})\) denotes the family of all \(F_{0}\)-measurable \(R^{n}\)-valued stochastic processes \(\varphi (s)\), \(-\tau\leq s\leq0\), such that \(\int_{-\tau}^{0} E[|\varphi(s)|^{p}]\, ds<\infty\).

Throughout the paper, we make the following assumptions.
  1. (A1)
    There exist positive constants \(\underline{a}_{i}\), \(\overline {a}_{i}\) such that
    $$ 0< \underline{a}_{i}\le a_{i}(x)< \overline{a}_{i},\quad \forall x\in R, i=1, 2, \ldots, n. $$
    (2)
     
  2. (A2)
    The signal transmission functions \(f_{j}(\cdot)\), \(g_{j}(\cdot)\) (\(j=1,2,\ldots, n\)) are Lipschitz continuous on R with Lipschitz constants \(\mu_{j}\) and \(\nu_{j}\), namely, for any \(u, v\in R\),
    $$\bigl\vert f_{j}(u)-f_{j}(v)\bigr\vert \leq \mu_{j}|u-v| , \qquad \bigl\vert g_{j}(u)-g_{j}(v) \bigr\vert \leq\nu_{j}|u-v|,\qquad f_{j}(0)=g_{j}(0)=0. $$
     
  3. (A3)
    \(b_{i}(\cdot)\in C(R,R)\) and there exist positive constants \(h_{i}\) such that
    $$\frac{b_{i}(u)-b_{i}(v)}{u-v}\ge h_{i},\quad \forall u\neq v, i=1,2,\ldots,n. $$
     
  4. (A4)
    For \(\sigma(x(t))=(\sigma_{ij}(x_{j}(t)))_{n\times n}\) (\(i,j=1,2,\ldots,n\)), there exist nonnegative numbers \(s_{i}\), \(i=1,2,\ldots,n\), such that
    $$ \operatorname{trace}\bigl[\sigma^{T}(x)\sigma(x)\bigr]\le \sum_{i=1}^{n}s_{i}x_{i}^{2}. $$
    (3)
     

Remark 2.1

The activation functions are generally assumed to be continuous, differentiable, and monotonically increasing, such as the functions of sigmoid type. These restrictive conditions are no longer needed in this paper. Instead, only the Lipschitz condition is imposed in assumption (A2). Note that the type of activation functions in (A2) has already been used in numerous papers.

If \(V(t,x)\in C^{2,1}([-\tau,\infty)\times R^{n};R^{+})\), then according to the Itô formula, we can define an operator LV associated with (1) as
$$\begin{aligned} \begin{aligned} LV(t,x)={}& V_{t}(t,x)+\sum_{i=1}^{n}V_{x_{i}}(t,x) \Biggl\{ -a_{i}\bigl(x_{i}(t)\bigr) \Biggl[b_{i}\bigl(x_{i}(t)\bigr)-\sum_{j=1}^{n}c_{ij}(t)f_{j} \bigl(x_{j}(t)\bigr) \\ &{}-\bigwedge_{j=1}^{n} \alpha_{ij}(t) g_{j}\bigl(x_{j}\bigl(t-\tau _{j}(t)\bigr)\bigr)-\bigvee_{j=1}^{n} \beta_{ij}(t)g_{j}\bigl(x_{j}\bigl(t- \tau_{j}(t)\bigr)\bigr)+I_{i}(t) \Biggr]\Biggr\} \\ &{}+\frac{1}{2}\operatorname{trace}\bigl[\sigma^{T}V_{xx}(t,x) \sigma\bigr], \end{aligned} \end{aligned}$$
where
$$V_{t}(t,x)=\frac{\partial V(t,x)}{\partial t},\qquad V_{x_{i}}(t,x)= \frac {\partial V(t,x)}{\partial x_{i}},\qquad V_{xx}(t,x)= \biggl(\frac{\partial^{2} V(t,x)}{\partial x_{i}\, \partial x_{j}} \biggr)_{n\times n}. $$

Definition 2.1

The equilibrium \(x^{*}\) of system (1) is said to be globally pth moment exponentially stable if there exist positive constants \(M\ge1\), \(\lambda>0\) such that
$$ E\bigl(\bigl\Vert x(t)-x^{*}\bigr\Vert ^{p}\bigr)\le M \bigl\Vert \varphi-x^{*}\bigr\Vert _{L^{p}}^{p}e^{-\lambda(t-t_{0})}, \quad t>t_{0}, \forall x_{0}\in R^{n}, $$
(4)
where \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\) is any solution of model (1) and \(p \geq2\) is a constant; when \(p=2\), this is usually called exponential stability in mean square.

Lemma 2.1

[34]

Suppose x and y are two states of system (1). Then we have
$$\Biggl\vert \bigwedge_{j=1}^{n} \alpha_{ij}(t)g_{j}(x) -\bigwedge _{j=1}^{n}\alpha_{ij}(t)g_{j}(y) \Biggr\vert \leq\sum_{j=1}^{n}\bigl\vert \alpha_{ij}(t)\bigr\vert \bigl\vert g_{j}(x) -g_{j}(y)\bigr\vert $$
and
$$\Biggl\vert \bigvee_{j=1}^{n} \beta_{ij}(t)g_{j}(x) -\bigvee_{j=1}^{n} \beta_{ij}(t)g_{j}(y)\Biggr\vert \leq\sum _{j=1}^{n}\bigl\vert \beta _{ij}(t)\bigr\vert \bigl\vert g_{j}(x)-g_{j}(y)\bigr\vert . $$
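Lemma 2.1 can be spot-checked numerically. The sketch below samples random templates and transmitted signals and verifies the MIN-operator bound (the MAX case is symmetric); all sampled values are illustrative:

```python
import random

random.seed(1)
n = 4
for _ in range(1000):
    alpha = [random.uniform(-2, 2) for _ in range(n)]   # alpha_ij(t)
    gx = [random.uniform(-1, 1) for _ in range(n)]      # g_j(x) values
    gy = [random.uniform(-1, 1) for _ in range(n)]      # g_j(y) values
    # left side: |min_j alpha_j g_j(x) - min_j alpha_j g_j(y)|
    lhs = abs(min(a * u for a, u in zip(alpha, gx))
              - min(a * v for a, v in zip(alpha, gy)))
    # right side: sum_j |alpha_j| |g_j(x) - g_j(y)|
    rhs = sum(abs(a) * abs(u - v) for a, u, v in zip(alpha, gx, gy))
    assert lhs <= rhs + 1e-12
print("Lemma 2.1 (MIN case) holds on all sampled instances")
```

The check passes because the min operator is 1-Lipschitz: the two minima can differ by at most the largest single-term difference, which the right-hand sum dominates.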

Lemma 2.2

If \(a_{i}\ge0\) (\(i =1,2,\ldots, p\)) denote p nonnegative real numbers, then
$$ a_{1}a_{2}\cdots a_{p}\le \frac{a_{1}^{p}+a_{2}^{p}+\cdots+a_{p}^{p}}{p}, $$
(5)
where \(p\ge1\) denotes an integer. A particular form of (5), obtained by taking \(a_{1}\) repeated \(p-1\) times, is
$$a_{1}^{p-1}a_{2}\le\frac{(p-1)a_{1}^{p}}{p}+ \frac{a_{2}^{p}}{p}. $$
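Lemma 2.2 is the arithmetic-geometric mean inequality applied to pth powers; a randomized spot-check of inequality (5) (with the number of factors equal to p) and of its particular form:

```python
import random

random.seed(2)
for _ in range(1000):
    p = random.randint(1, 6)
    a = [random.uniform(0, 5) for _ in range(p)]   # p nonnegative reals
    prod = 1.0
    for ai in a:
        prod *= ai
    # inequality (5): product of p numbers vs mean of their pth powers
    assert prod <= sum(ai ** p for ai in a) / p + 1e-6
    # particular form: a1^(p-1) a2 <= ((p-1) a1^p + a2^p) / p
    a1, a2 = a[0], a[-1]
    assert a1 ** (p - 1) * a2 <= ((p - 1) * a1 ** p + a2 ** p) / p + 1e-6
print("Lemma 2.2 verified on all sampled instances")
```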

3 Main results

In this section, we consider the existence of the equilibrium of system (1) and its global pth moment exponential stability.

Theorem 3.1

Under conditions (A1)-(A4), if there exist a positive diagonal matrix \(D=\operatorname{diag}(d_{1}, d_{2},\ldots,d_{n})\) and two constants \(N_{2}>0\), \(0< u<1\), such that
$$0< N_{2}\le N_{2}(t)\le uN_{1}(t),\quad t\ge t_{0}, $$
where
$$\begin{aligned}& N_{1}(t)=\min_{1\le i\le n}\Biggl\{ p\underline{a}_{i}h_{i}- \sum_{j=1}^{n}\overline{a}_{i}(p-1) \bigl\vert c_{ij}(t)\bigr\vert \mu_{j}-\sum _{j=1}^{n}\overline {a}_{j}\bigl\vert c_{ji}(t)\bigr\vert \mu_{j} \\& \hphantom{N_{1}(t)=}{} -\sum_{j=1}^{n} \overline{a}_{i}(p-1) \bigl(\bigl\vert \alpha_{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr) \nu_{j} \\& \hphantom{N_{1}(t)=}{}-\sum_{j=1}^{n} \frac{(p-1)(p-2)}{2}s_{j}-\sum_{j=1}^{n} \frac {d_{j}}{d_{i}}(p-1)s_{i}\Biggr\} , \\& N_{2}(t)=\max_{1\le i\le n}\sum _{j=1}^{n}\frac{d_{j}}{d_{i}}\overline {a}_{i}\bigl(\bigl\vert \alpha_{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr)\nu_{j}, \end{aligned}$$
then \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots, x_{n}^{*})^{T}\) is a unique equilibrium which is globally pth moment exponentially stable, where \(p\ge2\) denotes a positive constant. When \(p=2\), the equilibrium \(x^{*}\) of system (1) is exponentially stable in mean square.

Proof

The proof of the existence and uniqueness of the equilibrium for system (1) is similar to that in [33], so we omit it.

Suppose that \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots,x_{n}^{*})^{T}\) is the unique equilibrium of system (1). Set \(y_{i}(t)=x_{i}(t)-x_{i}^{*}\), \(\tilde{\sigma}_{ij}(y_{j}(t))=\sigma _{ij}(y_{j}(t)+x_{j}^{*})-\sigma_{ij}(x_{j}^{*})\), then system (1) can be transformed into the following equation:
$$\begin{aligned} d y_{i}(t) =&-a_{i}\bigl(y_{i}(t)+x_{i}^{*} \bigr)\Biggl[b_{i}\bigl(y_{i}(t)+x_{i}^{*} \bigr)-b_{i}\bigl(x_{i}^{*}\bigr) \\ &{}-\sum_{j=1}^{n}c_{ij}(t) \bigl(f_{j}\bigl(y_{j}(t)+x_{j}^{*} \bigr)-f_{j}\bigl(x_{j}^{*}\bigr)\bigr) \\ &{}-\Biggl(\bigwedge_{j=1}^{n} \alpha_{ij}(t)g_{j}\bigl(y_{j}\bigl(t- \tau_{j}(t)\bigr)+x_{j}^{*}\bigr) -\bigwedge _{j=1}^{n}\alpha_{ij}(t)g_{j} \bigl(x_{j}^{*}\bigr)\Biggr) \\ &{}-\Biggl(\bigvee_{j=1}^{n} \beta_{ji}(t)g_{j}\bigl(y_{j}\bigl(t- \tau_{j}(t)\bigr)+x_{j}^{*}\bigr) -\bigvee _{j=1}^{n}\beta_{ji}(t)g_{j} \bigl(x_{j}^{*}\bigr)\Biggr)\Biggr]\, dt \\ &{}+\sum_{j=1}^{n}\tilde{ \sigma}_{ij}\bigl(y_{j}(t)\bigr)\, d\omega_{j}(t), \quad t\geq t_{0}, i=1,2,\ldots,n. \end{aligned}$$
(6)
Consider the following Lyapunov function:
$$ V(t,y)=\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p}=\sum _{i=1}^{n}d_{i}\bigl\vert x_{i}(t)-x_{i}^{*}\bigr\vert ^{p},\quad p\ge2. $$
(7)
Calculating the operator \(LV(t,y(t))\) along system (6) and using Lemma 2.2, we obtain
$$\begin{aligned} LV\bigl(t,y(t)\bigr) =&p\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p-1}\operatorname{sgn} \bigl\{ y_{i}(t)\bigr\} \Biggl\{ -a_{i}\bigl(y_{i}(t)+x_{i}^{*} \bigr) \Biggl[b_{i}\bigl(y_{i}(t)+x_{i}^{*} \bigr)-b_{i}\bigl(x_{i}^{*}\bigr) \\ &{} -\sum_{j=1}^{n}c_{ij}(t) \bigl(f_{j}\bigl(y_{j}(t)+x_{j}^{*} \bigr)-f_{j}\bigl(x_{j}^{*}\bigr)\bigr) \\ &{}- \Biggl(\bigwedge_{j=1}^{n} \alpha_{ij}(t)g_{j}\bigl(y_{j}\bigl(t- \tau_{j}(t)\bigr)+x_{j}^{*}\bigr) -\bigwedge _{j=1}^{n}\alpha_{ij}(t)g_{j} \bigl(x_{j}^{*}\bigr) \Biggr) \\ &{}- \Biggl(\bigvee_{j=1}^{n} \beta_{ji}(t)g_{j}\bigl(y_{j}\bigl(t- \tau_{j}(t)\bigr)+x_{j}^{*}\bigr) -\bigvee _{j=1}^{n}\beta_{ji}(t)g_{j} \bigl(x_{j}^{*}\bigr) \Biggr)\Biggr]\Biggr\} \\ &{}+\frac{p(p-1)}{2}\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p-2}\sum _{j=1}^{n}\tilde {\sigma}_{ij}^{2} \bigl(y_{j}(t)\bigr) \\ \le&-p\sum_{i=1}^{n}d_{i}\bigl\vert y_{i}(t)\bigr\vert ^{p-1}a_{i} \bigl(y_{i}(t)+x_{i}^{*}\bigr)h_{i}y_{i}(t) \operatorname {sgn}\bigl\{ y_{i}(t)\bigr\} \\ &{}+p\sum_{i=1}^{n}d_{i}\bigl\vert y_{i}(t)\bigr\vert ^{p-1}a_{i} \bigl(y_{i}(t)+x_{i}^{*}\bigr)\Biggl\{ \sum _{j=1}^{n}\bigl\vert c_{ij}(t)\bigr\vert \bigl\vert f_{j}\bigl(y_{j}(t)\bigr)\bigr\vert \\ &{}+\sum_{j=1}^{n}\bigl\vert \alpha_{ij}(t)\bigr\vert \bigl\vert g_{j} \bigl(y_{j}\bigl(t-\tau_{j}(t)\bigr)+x_{j}^{*} \bigr)-g_{j}\bigl(x_{j}^{*}\bigr)\bigr\vert \\ &{}+\sum_{j=1}^{n}\bigl\vert \beta_{ij}(t)\bigr\vert \bigl\vert g_{j} \bigl(y_{j}\bigl(t-\tau_{j}(t)\bigr)+x_{j}^{*} \bigr)-g_{j}\bigl(x_{j}^{*}\bigr)\bigr\vert \Biggr\} \\ &{}+\frac{p(p-1)}{2}\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p-2}\sum _{j=1}^{n}\tilde {\sigma}_{ij}^{2} \bigl(y_{j}(t)\bigr) \\ \le&-p\sum_{i=1}^{n}d_{i}\bigl\vert y_{i}(t)\bigr\vert ^{p-1}\underline{a}_{i}h_{i} \bigl\vert y_{i}(t)\bigr\vert +p\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p-1}\overline{a}_{i} \sum_{j=1}^{n}\bigl\vert c_{ij}(t)\bigr\vert \mu _{j}\bigl\vert y_{j}(t)\bigr\vert \\ &{}+p\sum_{i=1}^{n}d_{i}\bigl\vert y_{i}(t)\bigr\vert ^{p-1}\overline{a}_{i} \sum_{j=1}^{n}\bigl(\bigl\vert \alpha _{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr)\nu_{j}\bigl\vert y_{j}\bigl(t-\tau_{j}(t)\bigr)\bigr\vert \\ &{}+\frac{p(p-1)}{2}\sum_{i=1}^{n}d_{i} \bigl\vert y_{i}(t)\bigr\vert ^{p-2}\sum _{j=1}^{n}s_{j}y_{j}^{2}(t) \\ =&-\sum_{i=1}^{n}d_{i}\Biggl\{ p \underline{a}_{i}h_{i} -\sum_{j=1}^{n} \overline{a}_{i}(p-1)\bigl\vert c_{ij}(t)\bigr\vert \mu_{j}-\sum_{j=1}^{n}\overline {a}_{j}\bigl\vert c_{ji}(t)\bigr\vert \mu_{j} \\ &{}-\sum_{j=1}^{n}\overline{a}_{i}(p-1) \bigl(\bigl\vert \alpha_{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr) \nu_{j}-\sum _{j=1}^{n}\frac{(p-1)(p-2)}{2}s_{j} \\ &{}-\sum_{j=1}^{n}\frac{d_{j}}{d_{i}}(p-1)s_{i} \Biggr\} \bigl\vert y_{i}(t)\bigr\vert ^{p} \\ &{}+\sum_{i=1}^{n}d_{i}\sum _{j=1}^{n}\frac{d_{j}}{d_{i}} \overline{a}_{i}\bigl(\bigl\vert \alpha _{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr)\nu_{j} \bigl\vert y_{i}\bigl(t-\tau_{i}(t)\bigr)\bigr\vert ^{p} \\ \le&-N_{1}(t)V\bigl(t,y(t)\bigr)+N_{2}(t)\sup _{t-\tau\le s\le t}{V\bigl(s,y(s)\bigr)}, \end{aligned}$$
(8)
where
$$\begin{aligned}& N_{1}(t)=\min_{1\le i\le n}\Biggl\{ p\underline{a}_{i}h_{i}- \sum_{j=1}^{n}\overline{a}_{i}(p-1) \bigl\vert c_{ij}(t)\bigr\vert \mu_{j}-\sum _{j=1}^{n}\overline {a}_{j}\bigl\vert c_{ji}(t)\bigr\vert \mu_{j} \\& \hphantom{N_{1}(t)=}{} -\sum_{j=1}^{n} \overline{a}_{i}(p-1) \bigl(\bigl\vert \alpha_{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr) \nu_{j}-\sum_{j=1}^{n} \frac{(p-1)(p-2)}{2}s_{j} -\sum_{j=1}^{n} \frac{d_{j}}{d_{i}}(p-1)s_{i}\Biggr\} , \\& N_{2}(t)=\max_{1\le i\le n}\sum _{j=1}^{n}\frac{d_{j}}{d_{i}}\overline {a}_{i}\bigl(\bigl\vert \alpha_{ij}(t)\bigr\vert +\bigl\vert \beta_{ij}(t)\bigr\vert \bigr)\nu_{j}. \end{aligned}$$
Applying the Ito formula, for \(t\geq t_{0}\), we obtain
$$\begin{aligned}& V\bigl(t+\delta,y(t+\delta)\bigr)-V\bigl(t,y(t)\bigr) \\& \quad =\int_{t}^{t+\delta}LV\bigl(s,y(s)\bigr)\, ds+ \int_{t}^{t+\delta}V_{y}\bigl(s,y(s)\bigr) \sigma \bigl(s,y(s)\bigr)\, d\omega(s). \end{aligned}$$
(9)
Since \(E [\int_{t}^{t+\delta}V_{y}(s,y(s))\sigma(s,y(s))\, d\omega(s) ]=0\), taking expectations on both sides of equality (9) and applying inequality (8) yields
$$\begin{aligned}& E\bigl(V\bigl(t+\delta,y(t+\delta)\bigr)\bigr)-E\bigl(V \bigl(t,y(t)\bigr)\bigr) \\& \quad \le\int_{t}^{t+\delta} \Bigl[-N_{1}(t)E \bigl(V\bigl(s,y(s)\bigr)\bigr)+N_{2}(t)E \Bigl(\sup _{s-\tau\le\theta\le s}V\bigl(\theta,y(\theta)\bigr) \Bigr) \Bigr]\, ds. \end{aligned}$$
(10)
The upper right Dini derivative \(D^{+}\) is defined by
$$ D^{+}E\bigl(V\bigl(t,y(t)\bigr)\bigr)=\limsup_{\delta\rightarrow0^{+}} \frac{E(V(t+\delta ,y(t+\delta)))-E(V(t,y(t)))}{\delta}. $$
(11)
Denote \(z(t)=E(V(t,y(t)))\); then (10) leads directly to
$$ D^{+}z(t)\le-N_{1}(t)z(t)+N_{2}(t)\sup_{t-\tau\le s\le t}z(s). $$
(12)
Hence, from Lemma 3.2 of [22], we have
$$z(t)\le \Bigl(\sup_{t_{0}-\tau\le s\le t_{0}}z(s) \Bigr)e^{-\lambda(t-t_{0})}. $$
Namely,
$$E \bigl[\bigl\Vert x(t)-x^{*}\bigr\Vert ^{p} \bigr]\le M \bigl\Vert \varphi-x^{*}\bigr\Vert _{L^{p}}^{p}e^{-\lambda(t-t_{0})},\quad t\ge t_{0}, $$
where \(M=\frac{\max_{1\le i\le n}\{d_{i}\}}{\min_{1\le i\le n}\{d_{i}\}}\ge1\) and λ is the unique positive solution of the following equation:
$$\lambda= N_{1}(t)-N_{2}(t)e^{\lambda\tau}. $$
Therefore the equilibrium \(x^{*}\) of system (1) is pth moment exponentially stable. The proof is completed. □
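The decay rate λ can be obtained numerically. The sketch below treats N1 and N2 as constants (their time-invariant bounds) and solves \(\lambda = N_{1}-N_{2}e^{\lambda\tau}\) by bisection; the values \(N_{1}=19.97\), \(N_{2}=10\), \(\tau=0.4\) anticipate Example 4.1 and are otherwise illustrative. Since \(\phi(\lambda)=N_{1}-N_{2}e^{\lambda\tau}-\lambda\) is strictly decreasing with \(\phi(0)=N_{1}-N_{2}>0\), the positive root is unique.

```python
import math

def decay_rate(N1, N2, tau, tol=1e-12):
    """Unique positive root of  lambda = N1 - N2 * exp(lambda * tau),
    assuming 0 < N2 < N1 (phi below is strictly decreasing, phi(0) > 0)."""
    phi = lambda lam: N1 - N2 * math.exp(lam * tau) - lam
    lo, hi = 0.0, N1                      # phi(0) > 0 > phi(N1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

lam = decay_rate(N1=19.97, N2=10.0, tau=0.4)
print(lam)  # the exponential decay rate lambda for these constants
```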
In particular, suppose that \(c_{ij}(t)=c_{ij}\), \(\alpha_{ij}(t)=\alpha _{ij}\), \(\beta_{ij}(t)=\beta_{ij}\) (\(i,j=1,2,\ldots,n\)); then system (1) becomes the following stochastic fuzzy Cohen-Grossberg neural network with constant connection weights and time-varying delays:
$$\begin{aligned} d x_{i}(t) =&-a_{i}\bigl(x_{i}(t) \bigr)\Biggl[b_{i}\bigl(x_{i}(t)\bigr)-\sum _{j=1}^{n}c_{ij}f_{j} \bigl(x_{j}(t)\bigr)-\bigwedge_{j=1}^{n} \alpha_{ij}g_{j}\bigl(x_{j}\bigl(t-\tau _{j}(t)\bigr)\bigr) \\ &{} -\bigvee_{j=1}^{n} \beta_{ji}g_{j}\bigl(x_{j}\bigl(t- \tau_{j}(t)\bigr)\bigr)+I_{i}(t) \Biggr]\, dt+\sum _{j=1}^{n}\sigma_{ij}\bigl(x_{j}(t) \bigr)\, d\omega_{j}(t). \end{aligned}$$
(13)
For (13), we have the following corollary by Theorem 3.1.

Corollary 3.1

If assumptions (A1)-(A4) hold and there are constants \(N_{i}>0\) (\(i=1,2\)) and \(0< u<1\) such that
$$0< N_{2}< uN_{1}, $$
where
$$\begin{aligned}& N_{1}=\min_{1\le i\le n}\Biggl\{ p\underline{a}_{i}h_{i}- \sum_{j=1}^{n}\overline{a}_{i}(p-1) \vert c_{ij}\vert \mu_{j}-\sum _{j=1}^{n}\overline {a}_{j}\vert c_{ji}\vert \mu_{j} \\& \hphantom{N_{1}=}{}-\sum_{j=1}^{n} \overline{a}_{i}(p-1) \bigl(\vert \alpha_{ij}\vert + \vert \beta_{ij}\vert \bigr) \nu_{j}-\sum _{j=1}^{n}\frac{(p-1)(p-2)}{2}s_{j}-\sum _{j=1}^{n}\frac {d_{j}}{d_{i}}(p-1)s_{i} \Biggr\} , \\& N_{2}=\max_{1\le i\le n}\sum_{j=1}^{n} \frac{d_{j}}{d_{i}}\overline{a}_{i}\bigl(\vert \alpha _{ij} \vert +\vert \beta_{ij}\vert \bigr)\nu_{j}, \end{aligned}$$
then the unique equilibrium \(x^{*}=(x_{1}^{*},x_{2}^{*},\ldots, x_{n}^{*})^{T}\) of system (13) is globally pth moment exponentially stable.

Remark 3.1

The delay functions \(\tau_{j}(t)\) considered in this paper only need to be bounded and can be nondifferentiable. This generalizes some published results in [20]. It should be noted that the stability of system (1) depends on the magnitude of the noise; therefore, stochastic noise fluctuation is a very important aspect in designing a stable network and should be considered adequately.

Remark 3.2

Compared with [20, 21], the method in this paper does not resort to the semimartingale convergence theorem. Since system (1) does not require the delays to be constant, and, furthermore, the model is nonautonomous and includes fuzzy operations, the results obtained in [12, 14, 20–23] are not applicable to system (1). This implies that the results of this paper are essentially new and complement some corresponding known ones.

4 An example

Example 4.1

Consider the following stochastic fuzzy Cohen-Grossberg neural network with time-varying delays:
$$ \left \{ \textstyle\begin{array}{l} d x_{1}(t)=-(3+\cos x_{1}(t)) [11x_{1}(t)-c_{11}(t)f_{1}(x_{1}(t))-c_{12}(t)f_{2}(x_{2}(t)) \\ \hphantom{d x_{1}(t)=}{}-\bigwedge_{j=1}^{2}\alpha_{1j}(t)g_{j}(x_{j}(t-\tau_{j}(t)))+I_{1}(t) +\bigwedge_{j=1}^{2}T_{1j}(t)u_{j}(t) \\ \hphantom{d x_{1}(t)=}{}-\bigvee_{j=1}^{2}\beta_{1j}(t)g_{j}(x_{j}(t-\tau_{j}(t))) +\bigvee_{j=1}^{2}H_{1j}(t)u_{j}(t)]\, dt \\ \hphantom{d x_{1}(t)=}{}+\sigma_{11}(x_{1}(t))\, d\omega_{1}+\sigma_{12}(x_{2}(t))\, d\omega_{2} , \\ d x_{2}(t)=-(2+\sin x_{2}(t)) [17x_{2}(t)-c_{21}(t)f_{1}(x_{1}(t))-c_{22}(t)f_{2}(x_{2}(t)) \\ \hphantom{d x_{2}(t)=}{}-\bigwedge_{j=1}^{2}\alpha_{2j}(t) g_{j}(x_{j}(t-\tau_{j}(t)))+I_{2}(t)+\bigwedge_{j=1}^{2}T_{2j}(t)u_{j}(t) \\ \hphantom{d x_{2}(t)=}{}-\bigvee_{j=1}^{2}\beta_{2j}(t)g_{j}(x_{j}(t-\tau_{j}(t))) +\bigvee_{j=1}^{2}H_{2j}(t)u_{j}(t)]\, dt \\ \hphantom{d x_{2}(t)=}{}+\sigma_{21}(x_{1}(t))\, d\omega_{1}+\sigma_{22}(x_{2}(t))\, d\omega_{2}, \end{array}\displaystyle \right . $$
(14)
where
$$\begin{aligned}& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} c_{11}(t) & c_{12}(t) \\ c_{21}(t)& c_{22}(t) \end{array}\displaystyle \right ) = \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} 0.1 & 0.7\\ 0.6 & 0.3 \end{array}\displaystyle \right ),\qquad \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \alpha_{11}(t) & \alpha_{12}(t) \\ \alpha_{21}(t)& \alpha_{22}(t) \end{array}\displaystyle \right ) = \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \frac{5}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{3}{4} \end{array}\displaystyle \right ), \\& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \beta_{11}(t) &\beta_{12}(t) \\ \beta_{21}(t) &\beta_{22}(t) \end{array}\displaystyle \right ) = \left ( \textstyle\begin{array}{@{}c@{\quad}c@{}} \frac{1}{3} &\frac{1}{4} \\ \frac{2}{3} &\frac{3}{4} \end{array}\displaystyle \right ), \\& f_{i}(r)= g_{i}(r)=\frac{1}{2}\bigl(\vert r+1 \vert -\vert r-1\vert \bigr),\qquad \tau_{j}(t)=0.3|\sin t|+0.1, \quad i,j=1,2, \\& \sigma_{11}(x)=0.2x, \qquad \sigma_{12}(x)=0.3x,\qquad \sigma_{21}(x)=0.1x, \qquad \sigma _{22}(x)=0.2x, \\& T_{ij}(t)=H_{ij}(t)=u_{j}(t)=0.8+2t,\qquad I_{i}(t)=2+3t\quad (i,j=1,2). \end{aligned}$$
Obviously, system (14) satisfies assumptions (A1)-(A3) with
$$\begin{aligned}& \underline{a}_{1}=2,\qquad \overline{a}_{1}=4,\qquad \underline{a}_{2}=1, \qquad \overline {a}_{2}=3, \\& h_{1}=11,\qquad h_{2}=17,\qquad \mu_{i}= \nu_{i}=1\quad (i=1,2). \end{aligned}$$
It can easily be checked that assumption (A4) is satisfied with \(s_{1}=0.05\), \(s_{2}=0.13\). Let \(p=2\). It is easy to compute \(N_{1}= 19.97\), \(N_{2}=10\). There exists a positive number \(0< u=0.6<1\) such that \(0< N_{2}=10< uN_{1}=0.6\times19.97=11.98\). Clearly, all conditions of Corollary 3.1 are satisfied. Thus system (14) has a unique equilibrium point \(x^{*}\), which is globally exponentially stable in mean square.
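The constants in Corollary 3.1 can be reproduced numerically. The sketch below assumes \(d_{1}=d_{2}=1\) (a choice not stated in the example) and \(p=2\); depending on how the noise term \(\sum_{j}(d_{j}/d_{i})(p-1)s_{i}\) is grouped, the resulting \(N_{1}\) may differ slightly from the quoted 19.97, but \(N_{2}=10\) and the stability condition \(N_{2}< uN_{1}\) hold either way.

```python
import numpy as np

p, n = 2, 2
a_low = np.array([2.0, 1.0])             # lower bounds of a_i
a_bar = np.array([4.0, 3.0])             # upper bounds of a_i
h = np.array([11.0, 17.0])
C = np.array([[0.1, 0.7], [0.6, 0.3]])
A = np.array([[5/3, 1/4], [1/3, 3/4]])   # alpha_ij
B = np.array([[1/3, 1/4], [2/3, 3/4]])   # beta_ij
mu = nu = np.array([1.0, 1.0])
s = np.array([0.05, 0.13])
d = np.array([1.0, 1.0])                 # assumed diagonal weights d_i

N1_terms = []
for i in range(n):
    t = p * a_low[i] * h[i]
    t -= a_bar[i] * (p - 1) * np.sum(np.abs(C[i]) * mu)
    t -= np.sum(a_bar * np.abs(C[:, i]) * mu)
    t -= a_bar[i] * (p - 1) * np.sum((np.abs(A[i]) + np.abs(B[i])) * nu)
    t -= (p - 1) * (p - 2) / 2 * np.sum(s)          # vanishes for p = 2
    t -= np.sum(d / d[i]) * (p - 1) * s[i]
    N1_terms.append(t)
N1 = min(N1_terms)
N2 = max(np.sum((d / d[i]) * a_bar[i] * (np.abs(A[i]) + np.abs(B[i])) * nu)
         for i in range(n))
print(N1, N2)   # N2 = 10; the condition N2 < 0.6 * N1 is satisfied
```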

5 Conclusions

In this paper, we have studied the existence, uniqueness, and pth moment exponential stability of the equilibrium point for stochastic fuzzy Cohen-Grossberg neural networks with time-varying delays. The sufficient conditions established here are easily verified and are correlated with the parameters and the magnitude of the noise of system (1). The obtained criteria can be applied to design globally mean square exponentially stable fuzzy Cohen-Grossberg neural networks.

Declarations

Acknowledgements

The author would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions, which have greatly improved the quality of this paper.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Faculty of Mathematics and Physics, Huaiyin Institute of Technology, Huai’an, P.R. China

References

  1. Cohen, M, Grossberg, S: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 13, 815-826 (1983)
  2. Liang, J, Cao, J: Global output convergence of recurrent neural networks with distributed delays. Nonlinear Anal., Real World Appl. 8, 187-197 (2007)
  3. Wang, L, Zou, X: Harmless delays in Cohen-Grossberg neural networks. Physica D 170, 162-173 (2002)
  4. Ye, H, Michel, AN, Wang, K: Qualitative analysis of Cohen-Grossberg neural networks with multiple delays. Phys. Rev. E 51, 2611-2618 (1995)
  5. Lu, W, Chen, T: New conditions on global stability of Cohen-Grossberg neural networks. Neural Comput. 15, 1173-1189 (2003)
  6. Chen, T, Rong, L: Delay-independent stability analysis of Cohen-Grossberg neural networks. Phys. Lett. A 317, 436-449 (2003)
  7. Cao, J, Liang, J: Boundedness and stability for Cohen-Grossberg neural network with time-varying delays. J. Math. Anal. Appl. 296, 665-685 (2004)
  8. Cao, J, Li, X: Stability in delayed Cohen-Grossberg neural networks: LMI optimization approach. Physica D 212, 54-65 (2005)
  9. Arik, S, Orman, Z: Global stability analysis of Cohen-Grossberg neural networks with time varying delays. Phys. Lett. A 341, 410-421 (2005)
  10. Yuan, K, Cao, J: An analysis of global asymptotic stability of delayed Cohen-Grossberg neural networks via nonsmooth analysis. IEEE Trans. Circuits Syst. I 52, 1854-1861 (2005)
  11. Song, Q, Cao, J: Stability analysis of Cohen-Grossberg neural network with both time-varying and continuously distributed delays. J. Comput. Appl. Math. 197, 188-203 (2006)
  12. Liao, X, Mao, X: Exponential stability and instability of stochastic neural networks. Stoch. Anal. Appl. 14, 165-185 (1996)
  13. Blythe, S, Mao, X, Liao, X: Stability of stochastic delay neural networks. J. Franklin Inst. 338, 481-495 (2001)
  14. Wan, L, Sun, J: Mean square exponential stability of delayed Hopfield neural networks. Phys. Lett. A 343, 306-318 (2005)
  15. Wen, F, Yang, X: Skewness of return distribution and coefficient of risk premium. J. Syst. Sci. Complex. 22, 360-371 (2009)
  16. Park, JH, Kwon, OM: Analysis on global stability of stochastic neural networks of neutral type. Mod. Phys. Lett. B 22, 3159-3170 (2008)
  17. Park, JH, Kwon, OM: Synchronization of neural networks of neutral type with stochastic perturbation. Mod. Phys. Lett. B 23, 1743-1751 (2009)
  18. Park, JH, Lee, SM, Jung, HY: LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks. J. Optim. Theory Appl. 143, 357-367 (2009)
  19. Liu, Y, Wang, Z, Liu, X: On global exponential stability of generalized stochastic neural networks with mixed time-delays. Neurocomputing 70, 314-326 (2006)
  20. Zhao, H, Ding, N: Dynamic analysis of stochastic Cohen-Grossberg neural networks with time delays. Appl. Math. Comput. 183, 464-470 (2006)
  21. Huang, C, Chen, P, He, Y, Huang, L, Tan, W: Almost sure exponential stability of delayed Hopfield neural networks. Appl. Math. Lett. 21, 701-705 (2008)
  22. Huang, C, He, Y, Chen, P: Dynamic analysis of stochastic recurrent neural networks. Neural Process. Lett. 27, 267-276 (2008)
  23. Huang, C, Cao, J: Stochastic dynamics of nonautonomous Cohen-Grossberg neural networks. Abstr. Appl. Anal. 2011, Article ID 297147 (2011)
  24. Wang, Z, Liu, Y, Fraser, K, Liu, X: Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 354, 288-297 (2008)
  25. Song, Q, Wang, Z: Stability analysis of impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. Physica A 387, 3314-3326 (2008)
  26. Xiong, P, Huang, L: On pth moment exponential stability of stochastic fuzzy cellular neural networks with time-varying delays and impulses. Adv. Differ. Equ. 2013, 172 (2013)
  27. Yang, T, Yang, L: The global stability of fuzzy cellular neural networks. IEEE Trans. Circuits Syst. I 43, 880-883 (1996)
  28. Yang, T, Yang, T, Wu, C, Chua, L: Fuzzy cellular neural networks: theory. In: Proc. of IEEE Int. Workshop on Cellular Neural Networks Appl., pp. 181-186 (1996)
  29. Yang, T, Yang, T, Wu, C, Chua, L: Fuzzy cellular neural networks: applications. In: Proc. of IEEE Int. Workshop on Cellular Neural Networks Appl., pp. 225-230 (1996)
  30. Huang, T: Exponential stability of fuzzy cellular neural networks with distributed delay. Phys. Lett. A 351, 48-52 (2006)
  31. Huang, T: Exponential stability of delayed fuzzy cellular neural networks with diffusion. Chaos Solitons Fractals 31, 658-664 (2007)
  32. Zhang, Q, Xiang, R: Global asymptotic stability of fuzzy cellular neural networks with time-varying delays. Phys. Lett. A 372, 3971-3977 (2008)
  33. Zhang, Q, Yang, L: Exponential p-stability of impulsive stochastic fuzzy cellular neural networks with mixed delays. WSEAS Trans. Math. 12, 490-499 (2011)
  34. Yuan, K, Cao, J, Deng, J: Exponential stability and periodic solutions of fuzzy cellular neural networks with time-varying delays. Neurocomputing 69, 1619-1627 (2006)
  35. Mao, X: Stochastic Differential Equations and Their Applications. Horwood, Chichester (1997)

Copyright

© Bao; licensee Springer. 2015
