Open Access

Delay-probability-distribution-dependent stability criteria for discrete-time stochastic neural networks with random delays

Advances in Difference Equations 2013, 2013:314

https://doi.org/10.1186/1687-1847-2013-314

Received: 20 June 2013

Accepted: 10 September 2013

Published: 8 November 2013

Abstract

The problem of delay-probability-distribution-dependent robust stability for a class of discrete-time stochastic neural networks (DSNNs) with time-varying delays and parameter uncertainties is investigated. The information on the probability distribution of the delay is taken into account and transformed into parameter matrices of the transformed DSNN model, in which the time-varying delay is characterized by introducing a Bernoulli stochastic variable. By constructing an augmented Lyapunov-Krasovskii functional and introducing some analysis techniques, novel delay-distribution-dependent conditions under which the DSNN is robustly globally exponentially stable in the mean square are derived. Finally, a numerical example is provided to demonstrate the reduced conservatism and the effectiveness of the proposed methods.

Keywords

discrete-time stochastic neural networks; discrete time-varying delays; delay-probability-distribution-dependent; robust exponential stability; LMIs

1 Introduction

In the past few decades, neural networks (NNs) have received considerable attention owing to their potential applications in a variety of areas such as signal processing [1], pattern recognition [2], static image processing [3], associative memory [4], combinatorial optimization [5], and so on. In recent years, the stability problem of time-delay NNs has become a topic of great theoretical and practical importance, since inherent time delays and unavoidable parameter uncertainties are common to many biological and artificial NNs because of the finite speed of information processing as well as parameter fluctuations in hardware implementations. Various results have been achieved in the stability analysis of NNs with time-varying delays and parameter uncertainties; please refer to [6-15] and the references therein.

The majority of the existing results have been limited to continuous-time and deterministic NNs. On the one hand, in the implementation and application of NNs, discrete-time neural networks (DNNs) play a more important role than their continuous-time counterparts in today's digital world. To be more specific, DNNs can ideally preserve the dynamical characteristics, functional similarity, and even the physical or biological reality of continuous-time NNs under mild restrictions. On the other hand, when modeling real NN systems, stochastic disturbance is probably the main source of performance degradation of the implemented NN. Thus, research on the dynamical behavior of discrete-time stochastic neural networks (DSNNs) with time-varying delays and parameter uncertainties is necessary. Recently, stability analysis for DSNNs with time-varying delays and parameter uncertainties has received more and more interest, and some stability criteria have been proposed in [16-20]. In [16], Liu and his coauthors studied a class of DSNNs with time-varying delays and parameter uncertainties and proposed some delay-dependent sufficient conditions guaranteeing global robust exponential stability by using the Lyapunov method and the linear matrix inequality technique. Employing a similar technique to that in [16], the result obtained in [16] was improved by Zhang et al. in [17] and by Luo and his coauthors in [18].

In practice, the time-varying delay in some NNs often occurs in a stochastic fashion [21-26]. That is, the time-varying delay in some NNs may be subject to probabilistic measurement delays. In some NNs, the output signal of a node is transferred to another node through multiple branches with arbitrary, random time delays, whose probabilities can often be estimated by statistical methods such as the normal distribution, uniform distribution, Poisson distribution, or Bernoulli random binary distribution. In most of the existing references on DSNNs, the deterministic time-delay case was considered, and the stability criteria were derived based only on the information of the variation range of the time delay [16-20], or on the variation range together with the time delays themselves [17, 27]. However, it often occurs in real systems that the maximum value of the delay is very large while the probability of the delay taking such a large value is very small; considering only the variation range of the time delay may then lead to a more conservative result. Yet, as far as we know, little attention has been paid to the stability of DSNNs with stochastic time delay when both the variation range and the probability distribution of the time delay are considered. More recently, in [28], some sufficient conditions on robust global exponential stability for a class of DSNNs involving parameter uncertainties and stochastic delays were derived. Nevertheless, the robust global exponential stability analysis problem for uncertain DSNNs with random delay has not been adequately investigated and remains challenging.

In this paper, some new improved delay-probability-distribution-dependent stability criteria, which guarantee the robust global exponential stability of discrete-time stochastic neural networks with time-varying delay, are obtained by constructing a novel augmented Lyapunov-Krasovskii functional. These new conditions are less conservative than those obtained in [16-18] and [28]. A numerical example is also provided to illustrate the improvement of the proposed criteria.

The notations are quite standard. Throughout this paper, N^+ stands for the set of nonnegative integers; R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript 'T' denotes the transpose, and the notation X ≥ Y (respectively, X > Y) means that X and Y are symmetric matrices and that X − Y is positive semi-definite (respectively, positive definite). ‖·‖ is the Euclidean norm in R^n. I is the identity matrix with appropriate dimensions. If A is a matrix, ‖A‖ denotes its operator norm, i.e., ‖A‖ = sup{‖Ax‖ : ‖x‖ = 1} = √(λ_max(A^T A)), where λ_max(A) (respectively, λ_min(A)) means the largest (respectively, smallest) eigenvalue of A. Moreover, let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). E{·} stands for the mathematical expectation operator with respect to the given probability measure P. The asterisk ∗ in a matrix is used to denote a term that is induced by symmetry. Matrices, if not explicitly stated, are assumed to have compatible dimensions. N[a, b] = {a, a + 1, …, b}. Sometimes, the arguments of a function will be omitted in the analysis when no confusion arises.

2 Problem formulation and preliminaries

Consider the following n-neurons parameter uncertainties DSNN with time-varying delays:
x(k + 1) = (A + ΔA(k))x(k) + (B + ΔB(k))f(x(k)) + (D + ΔD(k))g(x(k − τ(k))) + σ(k, x(k), x(k − τ(k)))ω(k),
(1)
where x(k) = [x_1(k), x_2(k), …, x_n(k)]^T ∈ R^n denotes the state vector associated with the n neurons, and the positive integer τ(k) denotes the time-varying delay, satisfying τ_m ≤ τ(k) ≤ τ_M, k ∈ N^+, where τ_m and τ_M are known positive integers. The initial condition associated with model (1) is given by
x(k) = ϕ(k),  k ∈ N[−τ_M, 0].
(2)
The diagonal matrix A = diag ( a 1 , a 2 , , a n ) with | a i | < 1 is the state feedback coefficient matrix, B = ( b i j ) n × n and D = ( d i j ) n × n are the connection weight matrix and the delayed connection weight matrix, respectively, f ( x ( k ) ) = [ f 1 ( x 1 ( k ) ) , f 2 ( x 2 ( k ) ) , , f n ( x n ( k ) ) ] T and g ( x ( k ) ) = [ g 1 ( x 1 ( k ) ) , g 2 ( x 2 ( k ) ) , , g n ( x n ( k ) ) ] T denote the neuron activation functions, σ ( k , x ( k ) , x ( k τ ( k ) ) ) is the noise intensity function vector, Δ A ( k ) , Δ B ( k ) and Δ D ( k ) denote the parameter uncertainties which satisfy the following condition:
[ Δ A ( k ) Δ B ( k ) Δ D ( k ) ] = M F ( k ) [ E a E b E d ] ,
(3)
where M, E a , E b , E d are known real constant matrices with appropriate dimensions, and F ( k ) is an unknown time-varying matrix which satisfies
F^T(k)F(k) ≤ I,  ∀k ∈ N^+.
(4)
ω(k) is a scalar Wiener process (Brownian motion) on (Ω, F, {F_t}_{t≥0}, P) with
E(ω(k)) = 0,  E(ω²(k)) = 1,  E(ω(i)ω(j)) = 0  (i ≠ j).
(5)
Assumption 1 The neuron activation functions in system (1), f_i(·) and g_i(·), i = 1, 2, …, n, are bounded and satisfy the following conditions: for all ξ_1, ξ_2 ∈ R with ξ_1 ≠ ξ_2,
γ_i^− ≤ (f_i(ξ_1) − f_i(ξ_2))/(ξ_1 − ξ_2) ≤ γ_i^+,  ϱ_i^− ≤ (g_i(ξ_1) − g_i(ξ_2))/(ξ_1 − ξ_2) ≤ ϱ_i^+,  f_i(0) = g_i(0) = 0,  i = 1, 2, …, n,
(6)

where γ_i^−, γ_i^+, ϱ_i^− and ϱ_i^+ are known constants.

Remark 1 The constants γ_i^−, γ_i^+, ϱ_i^−, ϱ_i^+ in Assumption 1 are allowed to be positive, negative, or zero. Hence, the functions f(x(k)) and g(x(k)) may be non-monotonic and are more general than the usual sigmoid functions and the Lipschitz-type conditions commonly used in the recent literature.
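As an illustrative check (not part of the original paper), the sector condition of Assumption 1 can be estimated numerically for a concrete activation. The sketch below takes f_2(s) = tanh(0.4s), one of the activations used in Section 4, whose difference quotients should lie in (0, 0.4]:

```python
import numpy as np

def sector_bounds(f, xs):
    """Empirical bounds on the difference quotients (f(a)-f(b))/(a-b)."""
    quotients = [(f(a) - f(b)) / (a - b)
                 for a in xs for b in xs if a != b]
    return min(quotients), max(quotients)

# f2 from the example in Section 4: slopes of tanh(0.4 s) lie in (0, 0.4]
f2 = lambda s: np.tanh(0.4 * s)
xs = np.linspace(-10.0, 10.0, 201)
lo, hi = sector_bounds(f2, xs)
print(lo, hi)  # lo slightly above 0, hi slightly below 0.4
```

Here the constants of Assumption 1 for f_2 are γ_2^− = 0 and γ_2^+ = 0.4, consistent with Γ_1 and Γ_2 computed in Section 4.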

Assumption 2 σ(k, x(k), x(k − τ(k))): R × R^n × R^n → R^n is a continuous function and is assumed to satisfy
σ^T σ ≤ [x^T(k), x^T(k − τ(k))] G [x^T(k), x^T(k − τ(k))]^T,
(7)
where
G = ( G_1  G_2
      ∗    G_3 ).
Remark 2 Choosing G_1 = ρ_1 I, G_2 = 0 and G_3 = ρ_2 I, we find that (7) reduces to
σ^T σ ≤ ρ_1 ‖x(k)‖² + ρ_2 ‖x(k − τ(k))‖²,
(8)

where ρ_1 > 0, ρ_2 > 0 are known constant scalars. Thus, condition (8) is a special case of condition (7). It should be pointed out that the robust delay-distribution-dependent stability criteria for DSNNs with time-varying delay derived from (7) are generally less conservative than those derived from (8).

Assumption 3 For any τ_0 with τ_m ≤ τ_0 < τ_M, assume that τ(k) takes values in [τ_m, τ_0] or (τ_0, τ_M]. To describe the probability distribution of the time-varying delay, two sets and two mapping functions are defined:
Ω_1 = {k | τ(k) ∈ [τ_m, τ_0]},  Ω_2 = {k | τ(k) ∈ (τ_0, τ_M]},
(9)
τ_1(k) = τ(k) if k ∈ Ω_1 and τ_1(k) = τ_m otherwise;  τ_2(k) = τ(k) if k ∈ Ω_2 and τ_2(k) = τ_0 otherwise.
(10)

It is obvious that Ω_1 ∪ Ω_2 = N^+ and Ω_1 ∩ Ω_2 = ∅ (the empty set). It is easy to check that k ∈ Ω_1 implies that the event τ(k) ∈ [τ_m, τ_0] takes place, and k ∈ Ω_2 means that τ(k) ∈ (τ_0, τ_M] happens.

Define a stochastic variable as
α(k) = 1 if k ∈ Ω_1, and α(k) = 0 if k ∈ Ω_2.
(11)
Assumption 4 α ( k ) is a Bernoulli distributed sequence with
Prob{α(k) = 1} = α_0,  Prob{α(k) = 0} = ᾱ_0 = 1 − α_0,
(12)

where α 0 is a constant.

Remark 3 From Assumption 4, it is easy to see that
E{α(k)} = α_0,  E{α(k)ᾱ(k)} = 0,  E{α(k) − α_0} = 0,  E{(α(k) − α_0)²} = α_0 ᾱ_0,  E{α²(k)} = α_0.
(13)
By Assumptions 3 and 4, system (1) can be rewritten as
x(k + 1) = (A + ΔA(k))x(k) + (B + ΔB(k))f(x(k)) + α(k)(D + ΔD(k))g(x(k − τ_1(k))) + (1 − α(k))(D + ΔD(k))g(x(k − τ_2(k))) + α(k)σ(k, x(k), x(k − τ_1(k)))ω(k) + (1 − α(k))σ(k, x(k), x(k − τ_2(k)))ω(k).
(14)

Assumption 5 Assume that, for any k ∈ N^+, α(k) is independent of ω(k).

Remark 4 It is noted that the binary stochastic variable was first introduced in [23] and then successfully used in [25, 26, 28]. By introducing the new functions τ_1(k) and τ_2(k) and the stochastic variable sequence α(k), system (1) is transformed into (14). In (14), the probabilistic effects of the time delay have been translated into the parameter matrices of the transformed system. Stochastic stability criteria based on the new model (14) can then be derived, which reveal the relationship between the stability of the system, the variation range of the time delay, and the probability distribution parameter.
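The delay-splitting construction (9)-(12) can be illustrated numerically. The following sketch is an illustration only (the delay sequence and the signal x are arbitrary, not the paper's data): it builds α(k), τ_1(k), τ_2(k) from a random delay sequence, checks that the decomposition used in (14) reproduces the original delayed term, and checks α(k)ᾱ(k) = 0 as noted in (13):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_m, tau_0, tau_M = 1, 2, 12                    # delay bounds as in Section 4
tau = rng.integers(tau_m, tau_M + 1, size=1000)   # arbitrary delay sequence

# (11): indicator of tau(k) falling in [tau_m, tau_0]
alpha = (tau <= tau_0).astype(int)
# (10): the two mapped delays
tau1 = np.where(alpha == 1, tau, tau_m)
tau2 = np.where(alpha == 0, tau, tau_0)

# arbitrary signal; the decomposition in (14) reproduces the delayed term
x = rng.standard_normal(1000 + tau_M)
k = np.arange(1000) + tau_M
original = x[k - tau]
split = alpha * x[k - tau1] + (1 - alpha) * x[k - tau2]
assert np.allclose(original, split)
assert np.all(alpha * (1 - alpha) == 0)   # alpha(k) * (1 - alpha(k)) = 0, cf. (13)

# empirical estimate of alpha_0 = Prob{tau(k) <= tau_0} (Assumption 4)
print(alpha.mean())
```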

For brevity of the following analysis, we denote A + ΔA(k), B + ΔB(k), D + ΔD(k) and 1 − α(k) by A_k, B_k, D_k and ᾱ(k), respectively. Then (14) can be rearranged as
x(k + 1) = A_k x(k) + B_k f(x(k)) + α(k)D_k g(x(k − τ_1(k))) + ᾱ(k)D_k g(x(k − τ_2(k))) + α(k)σ(k, x(k), x(k − τ_1(k)))ω(k) + ᾱ(k)σ(k, x(k), x(k − τ_2(k)))ω(k).
(15)

It is obvious that x ( k ) = 0 is a trivial solution of DSNN (15).

The following definition and lemmas are needed to conclude our main results.

Definition 2.1 [16]

The DSNN (1) is said to be robustly exponentially stable in the mean square if there exist constants α > 0 and μ ∈ (0, 1) such that every solution of the DSNN (1) satisfies
E{‖x(k)‖²} ≤ α μ^k max_{−τ_M ≤ i ≤ 0} E{‖x(i)‖²},  k ∈ N^+,
(16)

for all parameter uncertainties satisfying the admissible condition.

Lemma 2.1 [28]

Given constant matrices Ω_1, Ω_2 and Ω_3 with appropriate dimensions, where Ω_1 = Ω_1^T and Ω_2 = Ω_2^T > 0, it holds that Ω_1 + Ω_3^T Ω_2^{−1} Ω_3 < 0 if and only if
( Ω_1  Ω_3^T
  ∗    −Ω_2 ) < 0  or  ( −Ω_2  Ω_3
                          ∗     Ω_1 ) < 0.
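Lemma 2.1 is the standard Schur complement. A quick numerical illustration (a sketch with randomly generated matrices, not from the paper) checks both directions of the equivalence on one example:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
O3 = rng.standard_normal((n, n))
O2 = 2.0 * np.eye(n)                                  # Omega_2 = Omega_2^T > 0
O1 = -(O3.T @ np.linalg.inv(O2) @ O3) - np.eye(n)     # makes the Schur complement = -I

# Omega_1 + Omega_3^T Omega_2^{-1} Omega_3 < 0 ...
schur = O1 + O3.T @ np.linalg.inv(O2) @ O3
# ... iff the block matrix [[Omega_1, Omega_3^T], [Omega_3, -Omega_2]] < 0
block = np.block([[O1, O3.T], [O3, -O2]])

print(np.linalg.eigvalsh(schur).max(), np.linalg.eigvalsh(block).max())
# both maxima are negative, i.e. both matrices are negative definite
```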

Lemma 2.2 [28]

Let x, y ∈ R^n and ε > 0. Then we have
x^T y + y^T x ≤ ε^{−1} x^T x + ε y^T y.
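Lemma 2.2 (a form of Young's inequality) can likewise be checked numerically; this is an illustration with arbitrary vectors, following from (ε^{−1/2}x − ε^{1/2}y)^T(ε^{−1/2}x − ε^{1/2}y) ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(5), rng.standard_normal(5)
for eps in (0.1, 1.0, 10.0):
    lhs = 2.0 * x @ y                        # x^T y + y^T x
    rhs = (x @ x) / eps + eps * (y @ y)      # eps^{-1} x^T x + eps y^T y
    assert lhs <= rhs + 1e-12
```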

3 Robust global exponential stability of DSNNs in the mean square

In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following, we define
Γ_1 = diag{γ_1^− γ_1^+, γ_2^− γ_2^+, …, γ_n^− γ_n^+},  Γ_2 = diag{(γ_1^− + γ_1^+)/2, (γ_2^− + γ_2^+)/2, …, (γ_n^− + γ_n^+)/2},
Γ_3 = diag{ϱ_1^−, ϱ_2^−, …, ϱ_n^−},  Γ_4 = diag{ϱ_1^+, ϱ_2^+, …, ϱ_n^+},
Γ_5 = diag{ϱ_1^− ϱ_1^+, ϱ_2^− ϱ_2^+, …, ϱ_n^− ϱ_n^+},  Γ_6 = diag{(ϱ_1^− + ϱ_1^+)/2, (ϱ_2^− + ϱ_2^+)/2, …, (ϱ_n^− + ϱ_n^+)/2}.
Theorem 3.1 For given positive integers τ_m, τ_M with τ_m ≤ τ_0 < τ_M, under Assumptions 1-5, the DSNN (15) is globally exponentially stable in the mean square if there exist symmetric positive-definite matrices P, Q_1, Q_2, Z_1, Z_2 with appropriate dimensions, positive-definite diagonal matrices H, R, S, T, Λ_1, Λ_2, Λ_3, Λ_4, and two positive constants ε, λ such that the following two matrix inequalities hold:
P ≤ λI,
(17)
Ξ = ( Ξ_11  α_0λG_2  ᾱ_0λG_5  Ξ_14  Ξ_15  Ξ_16  Ξ_17
      ∗     Ξ_22     0        0     0     Ξ_26  0
      ∗     ∗        Ξ_33     0     0     0     Ξ_37
      ∗     ∗        ∗        Ξ_44  0     Ξ_46  Ξ_47
      ∗     ∗        ∗        ∗     Ξ_55  0     0
      ∗     ∗        ∗        ∗     ∗     Ξ_66  0
      ∗     ∗        ∗        ∗     ∗     ∗     Ξ_77 ) < 0,
(18)
where
Ξ_11 = A_k^T P A_k − P + α_0 λG_1 + ᾱ_0 λG_4 − 2(τ_M − τ_0 + 1)Γ_3 H + 2(τ_M − τ_0 + 1)Γ_4 R + (τ_0 − τ_m + 1)Q_1 + (τ_M − τ_0 + 1)Q_2 − 2(τ_0 − τ_m + 1)Γ_3 S + 2(τ_0 − τ_m + 1)Γ_4 T − Γ_1 Λ_1 − Γ_5 Λ_2,
Ξ_14 = A_k^T P B_k + Γ_2 Λ_1,
Ξ_15 = (τ_M − τ_0 + 1)H − (τ_M − τ_0 + 1)R + (τ_0 − τ_m + 1)S − (τ_0 − τ_m + 1)T + Γ_6 Λ_2,
Ξ_16 = α_0 A_k^T P D_k,  Ξ_17 = ᾱ_0 A_k^T P D_k,
Ξ_22 = α_0 λG_3 + 2Γ_3 S − 2Γ_4 T − Q_1 − Γ_5 Λ_3,  Ξ_26 = −S + T + Γ_6 Λ_3,
Ξ_33 = ᾱ_0 λG_6 + 2Γ_3 H − 2Γ_4 R − Q_2 − Γ_5 Λ_4,  Ξ_37 = −H + R + Γ_6 Λ_4,
Ξ_44 = B_k^T P B_k − Λ_1,  Ξ_46 = α_0 B_k^T P D_k,  Ξ_47 = ᾱ_0 B_k^T P D_k,
Ξ_55 = (τ_0 − τ_m + 1)Z_1 + (τ_M − τ_0 + 1)Z_2 − Λ_2,
Ξ_66 = α_0 D_k^T P D_k − Z_1 − Λ_3,  Ξ_77 = ᾱ_0 D_k^T P D_k − Z_2 − Λ_4.
Proof We construct the following Lyapunov-Krasovskii functional candidate for system (15):
V(k, x(k)) = Σ_{i=1}^{7} V_i(k, x(k)),
(19)
where
V_1(k, x(k)) = x^T(k) P x(k),
V_2(k, x(k)) = 2 Σ_{i=−τ_M+1}^{−τ_0+1} Σ_{j=k+i−1}^{k−1} { [g(x(j)) − Γ_3 x(j)]^T H x(j) + [Γ_4 x(j) − g(x(j))]^T R x(j) },
V_3(k, x(k)) = 2 Σ_{i=−τ_0+1}^{−τ_m+1} Σ_{j=k+i−1}^{k−1} { [g(x(j)) − Γ_3 x(j)]^T S x(j) + [Γ_4 x(j) − g(x(j))]^T T x(j) },
V_4(k, x(k)) = Σ_{i=k−τ_1(k)}^{k−1} x^T(i) Q_1 x(i) + Σ_{i=−τ_0+1}^{−τ_m} Σ_{j=k+i}^{k−1} x^T(j) Q_1 x(j),
V_5(k, x(k)) = Σ_{i=k−τ_2(k)}^{k−1} x^T(i) Q_2 x(i) + Σ_{i=−τ_M+1}^{−τ_0} Σ_{j=k+i}^{k−1} x^T(j) Q_2 x(j),
V_6(k, x(k)) = Σ_{i=k−τ_1(k)}^{k−1} g^T(x(i)) Z_1 g(x(i)) + Σ_{i=τ_m}^{τ_0−1} Σ_{j=k−i}^{k−1} g^T(x(j)) Z_1 g(x(j)),
V_7(k, x(k)) = Σ_{i=k−τ_2(k)}^{k−1} g^T(x(i)) Z_2 g(x(i)) + Σ_{i=τ_0}^{τ_M−1} Σ_{j=k−i}^{k−1} g^T(x(j)) Z_2 g(x(j)).
Denote X = {x(k), x(k − 1), …, x(k − τ(k))}. Calculating the difference of V(k, x(k)) and taking the mathematical expectation, by (5), Assumption 4 and Remark 3, we have
E{ΔV_1(k, x(k))} = E{E{V_1(k + 1, x(k + 1)) | X} − V_1(k, x(k))}
= E{ x^T(k)(A_k^T P A_k − P)x(k) + 2x^T(k)A_k^T P B_k f(x(k)) + 2α_0 x^T(k)A_k^T P D_k g(x(k − τ_1(k))) + 2ᾱ_0 x^T(k)A_k^T P D_k g(x(k − τ_2(k))) + f^T(x(k))B_k^T P B_k f(x(k)) + 2α_0 f^T(x(k))B_k^T P D_k g(x(k − τ_1(k))) + 2ᾱ_0 f^T(x(k))B_k^T P D_k g(x(k − τ_2(k))) + α_0 g^T(x(k − τ_1(k)))D_k^T P D_k g(x(k − τ_1(k))) + ᾱ_0 g^T(x(k − τ_2(k)))D_k^T P D_k g(x(k − τ_2(k))) + α_0 σ^T(k, x(k), x(k − τ_1(k)))P σ(k, x(k), x(k − τ_1(k))) + ᾱ_0 σ^T(k, x(k), x(k − τ_2(k)))P σ(k, x(k), x(k − τ_2(k))) }.
(20)
It is very easy to check from Assumption 2 and (17) that
α_0 σ^T(k, x(k), x(k − τ_1(k))) P σ(k, x(k), x(k − τ_1(k))) ≤ α_0 λ [x^T(k), x^T(k − τ_1(k))] ( G_1  G_2
  ∗    G_3 ) [x^T(k), x^T(k − τ_1(k))]^T,
(21)
ᾱ_0 σ^T(k, x(k), x(k − τ_2(k))) P σ(k, x(k), x(k − τ_2(k))) ≤ ᾱ_0 λ [x^T(k), x^T(k − τ_2(k))] ( G_4  G_5
  ∗    G_6 ) [x^T(k), x^T(k − τ_2(k))]^T,
(22)
E{ΔV_2(k)} = E{E{V_2(k + 1, x(k + 1)) | X} − V_2(k, x(k))}
= 2E{ Σ_{i=−τ_M+1}^{−τ_0+1} { [g(x(k)) − Γ_3 x(k)]^T H x(k) + [Γ_4 x(k) − g(x(k))]^T R x(k) } − Σ_{i=k−τ_M}^{k−τ_0} { [g(x(i)) − Γ_3 x(i)]^T H x(i) + [Γ_4 x(i) − g(x(i))]^T R x(i) } }
≤ E{ 2(τ_M − τ_0 + 1)[g(x(k)) − Γ_3 x(k)]^T H x(k) + 2(τ_M − τ_0 + 1)[Γ_4 x(k) − g(x(k))]^T R x(k) − 2[g(x(k − τ_2(k))) − Γ_3 x(k − τ_2(k))]^T H x(k − τ_2(k)) − 2[Γ_4 x(k − τ_2(k)) − g(x(k − τ_2(k)))]^T R x(k − τ_2(k)) },
(23)
E{ΔV_3(k)} = E{E{V_3(k + 1, x(k + 1)) | X} − V_3(k, x(k))}
≤ E{ 2(τ_0 − τ_m + 1)[g(x(k)) − Γ_3 x(k)]^T S x(k) + 2(τ_0 − τ_m + 1)[Γ_4 x(k) − g(x(k))]^T T x(k) − 2[g(x(k − τ_1(k))) − Γ_3 x(k − τ_1(k))]^T S x(k − τ_1(k)) − 2[Γ_4 x(k − τ_1(k)) − g(x(k − τ_1(k)))]^T T x(k − τ_1(k)) },
(24)
E{ΔV_4(k)} = E{E{V_4(k + 1, x(k + 1)) | X} − V_4(k, x(k))}
= E{ ( Σ_{i=k+1−τ_1(k+1)}^{k} − Σ_{i=k−τ_1(k)}^{k−1} ) x^T(i)Q_1 x(i) + Σ_{i=−τ_0+1}^{−τ_m} ( Σ_{j=k+i+1}^{k} − Σ_{j=k+i}^{k−1} ) x^T(j)Q_1 x(j) }
= E{ x^T(k)Q_1 x(k) − x^T(k − τ_1(k))Q_1 x(k − τ_1(k)) + ( Σ_{i=k+1−τ_1(k+1)}^{k−1} − Σ_{i=k−τ_1(k)+1}^{k−1} ) x^T(i)Q_1 x(i) + Σ_{i=−τ_0+1}^{−τ_m} ( x^T(k)Q_1 x(k) − x^T(k + i)Q_1 x(k + i) ) }
≤ E{ (τ_0 − τ_m + 1)x^T(k)Q_1 x(k) − Σ_{i=k−τ_0+1}^{k−τ_m} x^T(i)Q_1 x(i) + ( Σ_{i=k+1−τ_0}^{k−1} − Σ_{i=k−τ_m+1}^{k−1} ) x^T(i)Q_1 x(i) − x^T(k − τ_1(k))Q_1 x(k − τ_1(k)) }
= E{ (τ_0 − τ_m + 1)x^T(k)Q_1 x(k) − x^T(k − τ_1(k))Q_1 x(k − τ_1(k)) },
(25)
E{ΔV_5(k)} = E{E{V_5(k + 1, x(k + 1)) | X} − V_5(k, x(k))} ≤ E{ (τ_M − τ_0 + 1)x^T(k)Q_2 x(k) − x^T(k − τ_2(k))Q_2 x(k − τ_2(k)) },
(26)
E{ΔV_6(k)} = E{E{V_6(k + 1, x(k + 1)) | X} − V_6(k, x(k))}
= E{ ( Σ_{i=k+1−τ_1(k+1)}^{k} − Σ_{i=k−τ_1(k)}^{k−1} ) g^T(x(i))Z_1 g(x(i)) + Σ_{i=τ_m}^{τ_0−1} ( Σ_{j=k−i+1}^{k} − Σ_{j=k−i}^{k−1} ) g^T(x(j))Z_1 g(x(j)) }
= E{ g^T(x(k))Z_1 g(x(k)) − g^T(x(k − τ_1(k)))Z_1 g(x(k − τ_1(k))) + ( Σ_{i=k+1−τ_1(k+1)}^{k−1} − Σ_{i=k−τ_1(k)+1}^{k−1} ) g^T(x(i))Z_1 g(x(i)) + Σ_{i=τ_m}^{τ_0−1} ( g^T(x(k))Z_1 g(x(k)) − g^T(x(k − i))Z_1 g(x(k − i)) ) }
≤ E{ (τ_0 − τ_m + 1)g^T(x(k))Z_1 g(x(k)) − Σ_{i=k−τ_0+1}^{k−τ_m} g^T(x(i))Z_1 g(x(i)) + ( Σ_{i=k+1−τ_0}^{k−1} − Σ_{i=k−τ_m+1}^{k−1} ) g^T(x(i))Z_1 g(x(i)) − g^T(x(k − τ_1(k)))Z_1 g(x(k − τ_1(k))) }
= E{ (τ_0 − τ_m + 1)g^T(x(k))Z_1 g(x(k)) − g^T(x(k − τ_1(k)))Z_1 g(x(k − τ_1(k))) },
(27)
E{ΔV_7(k)} = E{E{V_7(k + 1, x(k + 1)) | X} − V_7(k, x(k))} ≤ E{ (τ_M − τ_0 + 1)g^T(x(k))Z_2 g(x(k)) − g^T(x(k − τ_2(k)))Z_2 g(x(k − τ_2(k))) }.
(28)
From (6), it follows that
(f_i(x_i(k)) − γ_i^+ x_i(k))(f_i(x_i(k)) − γ_i^− x_i(k)) ≤ 0,  i = 1, 2, …, n,
which are equivalent to
[x^T(k), f^T(x(k))] ( γ_i^− γ_i^+ e_i e_i^T   −((γ_i^− + γ_i^+)/2) e_i e_i^T
  ∗                   e_i e_i^T ) [x^T(k), f^T(x(k))]^T ≤ 0,
(29)

where e_i denotes the unit column vector with a 1 in its ith entry and zeros elsewhere.

Then, from (6) and (29), for any matrix Λ_1 = diag{λ_11, λ_12, …, λ_1n} > 0, it follows that
[x^T(k), f^T(x(k))] ( Γ_1 Λ_1   −Γ_2 Λ_1
  ∗        Λ_1 ) [x^T(k), f^T(x(k))]^T ≤ 0.
(30)
Similarly, for any matrices Λ_i = diag{λ_i1, λ_i2, …, λ_in} > 0, i = 2, 3, 4, we get the following inequalities:
[x^T(k), g^T(x(k))] ( Γ_5 Λ_2   −Γ_6 Λ_2
  ∗        Λ_2 ) [x^T(k), g^T(x(k))]^T ≤ 0,
(31)
[x^T(k − τ_1(k)), g^T(x(k − τ_1(k)))] ( Γ_5 Λ_3   −Γ_6 Λ_3
  ∗        Λ_3 ) [x^T(k − τ_1(k)), g^T(x(k − τ_1(k)))]^T ≤ 0,
(32)
[x^T(k − τ_2(k)), g^T(x(k − τ_2(k)))] ( Γ_5 Λ_4   −Γ_6 Λ_4
  ∗        Λ_4 ) [x^T(k − τ_2(k)), g^T(x(k − τ_2(k)))]^T ≤ 0.
(33)
Then from (19) to (33), we have
E{ΔV(k)} ≤ E{ζ^T(k) Ξ ζ(k)},
(34)
where
ζ^T(k) = [x^T(k), x^T(k − τ_1(k)), x^T(k − τ_2(k)), f^T(x(k)), g^T(x(k)), g^T(x(k − τ_1(k))), g^T(x(k − τ_2(k)))].
Since Ξ < 0 , from (34), we can conclude that
E{ΔV(k)} ≤ λ_max(Ξ) E{‖x(k)‖²}.
(35)
It is easy to derive that
E{V(k)} ≤ μ_1 E{‖x(k)‖²} + μ_2 Σ_{i=k−τ_M}^{k−1} E{‖x(i)‖²},
(36)
where
μ_1 = λ_max(P),
μ_2 = (τ_M − τ_0 + 1)[2γ(λ_max(H) + λ_max(R)) + ϱ λ_max(Z_2)] + (τ_0 − τ_m + 1)[2γ(λ_max(S) + λ_max(T)) + ϱ λ_max(Z_1)] + (τ_0 − τ_m + 1)λ_max(Q_1) + (τ_M − τ_0 + 1)λ_max(Q_2),
with
γ = max_{1 ≤ i ≤ n}{|γ_i^−|, |γ_i^+|},  ϱ = max_{1 ≤ i ≤ n}{|ϱ_i^−|, |ϱ_i^+|}.
For any θ > 1 , it follows from (35) and (36) that
E{θ^{k+1} V(k + 1) − θ^k V(k)} = θ^{k+1} E{ΔV(k)} + θ^k(θ − 1)E{V(k)} ≤ θ^k[(θ − 1)μ_1 + θλ_max(Ξ)]E{‖x(k)‖²} + θ^k(θ − 1)μ_2 Σ_{i=k−τ_M}^{k−1} E{‖x(i)‖²}.
(37)
Furthermore, for any integer N τ M + 1 , summing up both sides of (37) from 0 to N 1 with respect to k, we have
θ^N E{V(N)} − E{V(0)} ≤ ((θ − 1)μ_1 + θλ_max(Ξ)) Σ_{k=0}^{N−1} θ^k E{‖x(k)‖²} + μ_2(θ − 1) Σ_{k=0}^{N−1} Σ_{i=k−τ_M}^{k−1} θ^k E{‖x(i)‖²}.
(38)
Note that for τ M 1 , it is easy to compute that
Σ_{k=0}^{N−1} Σ_{i=k−τ_M}^{k−1} θ^k E{‖x(i)‖²} ≤ ( Σ_{i=−τ_M}^{−1} Σ_{k=0}^{i+τ_M} + Σ_{i=0}^{N−1−τ_M} Σ_{k=i+1}^{i+τ_M} + Σ_{i=N−τ_M}^{N−1} Σ_{k=i+1}^{N−1} ) θ^k E{‖x(i)‖²} ≤ τ_M θ^{τ_M} sup_{−τ_M ≤ i ≤ 0} E{‖x(i)‖²} + τ_M θ^{τ_M} Σ_{i=0}^{N−1} θ^i E{‖x(i)‖²}.
(39)
Then from (38) and (39), one has
θ^N E{V(N)} ≤ E{V(0)} + τ_M θ^{τ_M}(θ − 1)μ_2 sup_{−τ_M ≤ i ≤ 0} E{‖x(i)‖²} + [(θ − 1)μ_1 + θλ_max(Ξ) + τ_M θ^{τ_M}(θ − 1)μ_2] Σ_{k=0}^{N−1} θ^k E{‖x(k)‖²}.
(40)
Let μ = max { μ 1 , μ 2 } . From (36), it is obvious that
E{V(0)} ≤ μ sup_{−τ_M ≤ i ≤ 0} E{‖x(i)‖²}.
(41)
In addition, by (19), we can get
E{V(N)} ≥ λ_min(P) E{‖x(N)‖²}.
(42)
In addition, it can be verified that there exists a scalar θ 0 > 1 such that
(θ_0 − 1)μ_1 + θ_0 λ_max(Ξ) + τ_M θ_0^{τ_M}(θ_0 − 1)μ_2 = 0.
(43)
Substituting (41)-(43) into (40), we obtain
E{‖x(N)‖²} ≤ ((μ + τ_M θ_0^{τ_M}(θ_0 − 1)μ_2)/λ_min(P)) (1/θ_0)^N sup_{−τ_M ≤ i ≤ 0} E{‖x(i)‖²}.
(44)

By Definition 2.1, the DSNN (15) is globally exponentially stable in the mean square. This completes the proof. □

Remark 5 In Theorem 3.1, the free-weighting matrices R, H, S, T are introduced by constructing the new Lyapunov functional (19). On the one hand, in (19), the useful information of the time delays is considered sufficiently. On the other hand, the terms V_2(k, x(k)) and V_3(k, x(k)) make full use of the information of the activation function g(x(k)), which makes this stability criterion generally less conservative than those obtained in [16-18, 28]. However, because of the parameter uncertainties contained in (18), it is difficult to use Theorem 3.1 directly to determine the stability of the DSNN (15). Thus, it is necessary to give another criterion as follows.

Theorem 3.2 For given positive integers τ_m, τ_M with τ_m ≤ τ_0 < τ_M, under Assumptions 1-5, the DSNN (15) is robustly globally exponentially stable in the mean square if there exist symmetric positive-definite matrices P, Q_1, Q_2, Z_1, Z_2 with appropriate dimensions, positive-definite diagonal matrices H, R, S, T, Λ_1, Λ_2, Λ_3, Λ_4, and positive constants ε, λ such that the following two LMIs hold:
P ≤ λI,
(45)
Ξ̃ = ( Ξ̃_11  α_0λG_2  ᾱ_0λG_5  Ξ̃_14  Ξ̃_15  Ξ̃_16  Ξ̃_17  A^T P      0
       ∗      Ξ̃_22    0        0      0      Ξ̃_26  0      0          0
       ∗      ∗       Ξ̃_33     0      0      0      Ξ̃_37  0          0
       ∗      ∗       ∗        Ξ̃_44   0      Ξ̃_46  Ξ̃_47  B^T P      0
       ∗      ∗       ∗        ∗      Ξ̃_55   0      0      0          0
       ∗      ∗       ∗        ∗      ∗      Ξ̃_66   0      α_0 D^T P  0
       ∗      ∗       ∗        ∗      ∗      ∗      Ξ̃_77   ᾱ_0 D^T P  0
       ∗      ∗       ∗        ∗      ∗      ∗      ∗      −P         P M
       ∗      ∗       ∗        ∗      ∗      ∗      ∗      ∗          −εI ) < 0,
(46)
where
Ξ̃_11 = ε E_a^T E_a − P + α_0 λG_1 + ᾱ_0 λG_4 − 2(τ_M − τ_0 + 1)Γ_3 H + 2(τ_M − τ_0 + 1)Γ_4 R + (τ_0 − τ_m + 1)Q_1 + (τ_M − τ_0 + 1)Q_2 − 2(τ_0 − τ_m + 1)Γ_3 S + 2(τ_0 − τ_m + 1)Γ_4 T − Γ_1 Λ_1 − Γ_5 Λ_2,
Ξ̃_14 = ε E_a^T E_b + Γ_2 Λ_1,
Ξ̃_15 = (τ_M − τ_0 + 1)H − (τ_M − τ_0 + 1)R + (τ_0 − τ_m + 1)S − (τ_0 − τ_m + 1)T + Γ_6 Λ_2,
Ξ̃_16 = α_0 ε E_a^T E_d,  Ξ̃_17 = ᾱ_0 ε E_a^T E_d,
Ξ̃_22 = α_0 λG_3 + 2Γ_3 S − 2Γ_4 T − Q_1 − Γ_5 Λ_3,  Ξ̃_26 = −S + T + Γ_6 Λ_3,
Ξ̃_33 = ᾱ_0 λG_6 + 2Γ_3 H − 2Γ_4 R − Q_2 − Γ_5 Λ_4,  Ξ̃_37 = −H + R + Γ_6 Λ_4,
Ξ̃_44 = ε E_b^T E_b − Λ_1,  Ξ̃_46 = α_0 ε E_b^T E_d,  Ξ̃_47 = ᾱ_0 ε E_b^T E_d,
Ξ̃_55 = (τ_0 − τ_m + 1)Z_1 + (τ_M − τ_0 + 1)Z_2 − Λ_2,
Ξ̃_66 = α_0 ε E_d^T E_d − Z_1 − Λ_3,  Ξ̃_77 = ᾱ_0 ε E_d^T E_d − Z_2 − Λ_4.
Proof We show that Ξ < 0 in (18) implies that Ξ + η T P 1 η < 0 , where
Ξ = ( Ξ 11 α 0 λ G 2 α ¯ 0 λ G 5 Γ 2 Λ 1 Ξ 15 0 0 Ξ 22 0 0 0 Ξ 26 0 Ξ 33 0 0 0 Ξ 37 Λ 1 0 0 0 Ξ 55 0 0 Z 1 Λ 3 0 Z 2 Λ 4 ) 0 , Ξ 11 = P + α 0 λ G 1 + α ¯ 0 λ G 4 2 ( τ M τ 0 + 1 ) Γ 3 T H Ξ 11 = + 2 ( τ M τ 0 + 1 ) Γ 4 T R + ( τ 0 τ m + 1 ) Q 1 + ( τ M τ 0 + 1 ) Q 2 , η = [ P A k , 0 , 0 , P B k , 0 , α k P D k , α ¯ k P D k ] .
According to Lemma 2.1, Ξ + η T P 1 η < 0 is equivalent to
( Ξ  η^T
  ∗  −P ) = ( Ξ  η_1^T
              ∗  −P ) + ( 0  η_2^T
                          ∗  0 ) < 0
(47)
with
η 1 = [ P A , 0 , 0 , P B , 0 , α 0 P D , α ¯ 0 P D ] , η 2 = [ P Δ A ( k ) , 0 , 0 , P Δ B ( k ) , 0 , α 0 P Δ D ( k ) , α ¯ 0 P Δ D ( k ) ] .
From Lemma 2.2, we can get
( 0  η_2^T
  ∗  0 ) = ϖ_1 M F(k) ϖ_2 + ϖ_2^T F^T(k) M^T ϖ_1^T ≤ ε^{−1} ϖ_1 M M^T ϖ_1^T + ε ϖ_2^T ϖ_2,
(48)
where
ϖ 1 = [ 0 , 0 , 0 , 0 , 0 , 0 , 0 , P ] , ϖ 2 = [ E a , 0 , 0 , E b , 0 , α 0 E d , α ¯ 0 E d , 0 ] .
Combining (47) with (48), we have
( ( Ξ  η_1^T
    ∗  −P ) + ε ϖ_2^T ϖ_2   ϖ_1 M
  ∗                          −εI ) < 0,

which implies that (46) holds. This completes the proof. □
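The bounding step (48) combines Lemma 2.2 with F^T(k)F(k) ≤ I. The following numerical sketch illustrates this step with small random matrices standing in for ϖ_1 M and ϖ_2 (an illustration only, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
M_ = rng.standard_normal((n, n))   # stands in for varpi_1 M
E = rng.standard_normal((n, n))    # stands in for varpi_2
# random F with F^T F <= I (scaled below unit spectral norm)
F = rng.standard_normal((n, n))
F /= (np.linalg.norm(F, 2) + 0.1)

for eps in (0.5, 1.0, 2.0):
    lhs = M_ @ F @ E + (M_ @ F @ E).T               # the uncertain term and its transpose
    rhs = (M_ @ M_.T) / eps + eps * (E.T @ E)       # the bound from (48)
    gap = np.linalg.eigvalsh(rhs - lhs).min()
    assert gap >= -1e-10    # rhs - lhs is positive semidefinite
```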

Remark 6 When α(k) ≡ 1, the DSNN (15) reduces to (1), which has been well investigated in [16-18]. By setting G_i = ρ_i I, i = 1, 3, 4, 6, and G_2 = G_5 = 0 in Theorem 3.2 and deleting the fifth row and the corresponding fifth column of (46), we can obtain a stability condition for system (1), which can easily be seen to be equivalent to Theorem 3.2 in [28].

If the stochastic term and the parameter uncertainties are removed from (15), then (15) reduces to
x(k + 1) = A x(k) + B f(x(k)) + α(k) D g(x(k − τ_1(k))) + ᾱ(k) D g(x(k − τ_2(k))),
(49)

and we get the following result.

Corollary 3.1 For given positive integers τ_m, τ_M with τ_m ≤ τ_0 < τ_M, under Assumptions 1-5, the DSNN (49) is globally exponentially stable in the mean square if there exist symmetric positive-definite matrices P, Q_1, Q_2, Z_1, Z_2 with appropriate dimensions and positive-definite diagonal matrices H, R, S, T, Λ_1, Λ_2, Λ_3, Λ_4 such that the following LMI holds:
Ξ̌ = ( Ξ̌_11  0      0      Ξ̌_14  Ξ_15  Ξ̌_16  Ξ̌_17
       ∗     Ξ̌_22   0      0     0     Ξ_26  0
       ∗     ∗      Ξ̌_33   0     0     0     Ξ_37
       ∗     ∗      ∗      Ξ̌_44  0     Ξ̌_46  Ξ̌_47
       ∗     ∗      ∗      ∗     Ξ_55  0     0
       ∗     ∗      ∗      ∗     ∗     Ξ̌_66  0
       ∗     ∗      ∗      ∗     ∗     ∗     Ξ̌_77 ) < 0,
(50)
where
Ξ̌_11 = A^T P A − P − 2(τ_M − τ_0 + 1)Γ_3 H + 2(τ_M − τ_0 + 1)Γ_4 R + (τ_0 − τ_m + 1)Q_1 + (τ_M − τ_0 + 1)Q_2 − Γ_1 Λ_1 − Γ_5 Λ_2 − 2(τ_0 − τ_m + 1)Γ_3 S + 2(τ_0 − τ_m + 1)Γ_4 T,
Ξ̌_14 = A^T P B + Γ_2 Λ_1,  Ξ̌_16 = α_0 A^T P D,  Ξ̌_17 = ᾱ_0 A^T P D,
Ξ̌_22 = 2Γ_3 S − 2Γ_4 T − Q_1 − Γ_5 Λ_3,  Ξ̌_33 = 2Γ_3 H − 2Γ_4 R − Q_2 − Γ_5 Λ_4,
Ξ̌_44 = B^T P B − Λ_1,  Ξ̌_46 = α_0 B^T P D,  Ξ̌_47 = ᾱ_0 B^T P D,
Ξ̌_66 = α_0 D^T P D − Z_1 − Λ_3,  Ξ̌_77 = ᾱ_0 D^T P D − Z_2 − Λ_4.

4 Example

In this section, a numerical example will be presented to show the effectiveness of the main results derived in Section 3. For the convenience of comparison, let us consider the DSNN (15) with the following parameters:
A = ( 0.1  0
      0    0.4 ),  B = ( 0.1  0.2
                         0    0.1 ),  D = ( 0    0.2
                                            0.2  0.1 ),  M = ( 0.02  0
                                                               0.1   0.01 ),
E_a = ( 0.1  0
        0    0.01 ),  E_b = ( 0.01  0.1
                              0.02  0 ),  E_d = ( 0.1   0
                                                  0.01  0.05 ),
G_1 = G_3 = G_4 = G_6 = 0.16 I,  G_2 = G_5 = 0_{2×2},
f_1(s) = sin(0.2s) 0.6cos(s),  f_2(s) = tanh(0.4s),  g_1(s) = tanh(0.83s) + 0.6cos(s),  g_2(s) = tanh(0.2s).
It is easy to verify that
Γ_1 = diag(−0.64, 0),  Γ_2 = diag(0, 0.2),  Γ_3 = diag(−0.6, 0),  Γ_4 = diag(1, 0.2),  Γ_5 = diag(−0.6, 0),  Γ_6 = diag(0.2, 0.1).
For τ_m = 1, τ_0 = 2, τ_M = 12 and α_0 = 0.89, by using the Matlab LMI toolbox, we can find a set of feasible solutions of the LMIs (45) and (46) in Theorem 3.2, listed as follows:
P = ( 161.8985  0.5616
      0.5616    161.5996 ),  Z_1 = ( 8.5888  2.3625
                                     2.3625  14.7045 ),  Z_2 = ( 0.6886  0.3231
                                                                 0.3231  3.0148 ),
Q_1 = ( 9.8745  0.0872
        0.0872  13.4307 ),  Q_2 = ( 1.2293  0.0546
                                    0.0546  1.0868 ),
H = diag(0.6389, 4.4731),  R = diag(1.0193, 5.0362),  S = diag(7.2292, 25.9788),  T = diag(6.6790, 25.6866),
Λ_1 = diag(5.9270, 71.5861),  Λ_2 = diag(27.6014, 78.8797),  Λ_3 = diag(11.6106, 28.3651),  Λ_4 = diag(0.9885, 7.1557),
ε = 65.4665,  λ = 162.6924.
Therefore, for all admissible parameter uncertainties and external perturbations, the DSNN (15) is globally exponentially stable in the mean square sense. For τ_m = 1, τ_0 = 2 and α_0 = 0.89, the upper bound of the time-varying delay obtained by [28] is 8, while Theorem 3.2 in this paper gives τ_M = 12. What is more, when τ_m = 1, τ_0 = 2 and α_0 = 0.1, 0.2, 0.3, 0.4, Theorem 3.2 yields an upper bound τ_M of 4 in each case, whereas the LMIs (31), (32) in [28] have no feasible solutions. A further comparison is listed in Table 1, from which one can see that the criterion proposed in Theorem 3.2 is less conservative than those obtained in [28]. It is also less conservative than those obtained in [16-18], which ignore the probability distribution of the time delay.
Table 1 For given $\tau_m = 1$, $\tau_0 = 2$, allowable upper bounds $\tau_M$ for different probability distributions of the time delay

$\alpha_0$        0.5   0.6   0.7   0.72   0.8   0.89   0.95   0.99   1
By [28]           3     3     3     4      5     8      16     22     +∞
By Theorem 3.2    5     5     6     6      7     12     24     117    +∞

Remark 7 From this example, we can see that the stability conditions in this paper depend not only on the time-delay interval, but also on the delays themselves, the variation interval and the probability distribution of the delay, which distinguishes them from traditional delay-dependent stability conditions.
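The Bernoulli delay model underlying Table 1 can be illustrated by a short simulation. This is a sketch with names of our own choosing; the paper does not specify how the delay is drawn within each interval, so uniform sampling over the integers of each interval is assumed here. The Bernoulli variable $\alpha(k)$, with $\operatorname{Prob}\{\alpha(k)=1\} = \alpha_0$, selects between the short-delay interval $[\tau_m, \tau_0]$ and the long-delay interval $(\tau_0, \tau_M]$:

```python
import random

def sample_delay(alpha0, tau_m, tau_0, tau_M, rng):
    """Draw one delay tau(k): with probability alpha0 from [tau_m, tau_0],
    otherwise from (tau_0, tau_M] (uniform within each interval, an assumption)."""
    if rng.random() < alpha0:                 # alpha(k) = 1: short-delay interval
        return rng.randint(tau_m, tau_0)
    return rng.randint(tau_0 + 1, tau_M)      # alpha(k) = 0: long-delay interval

# Parameters from the example: tau_m = 1, tau_0 = 2, tau_M = 12, alpha0 = 0.89.
rng = random.Random(0)
samples = [sample_delay(0.89, 1, 2, 12, rng) for _ in range(100000)]
short = sum(1 for t in samples if t <= 2) / len(samples)
print(f"empirical Prob{{tau in [1, 2]}} = {short:.3f}")  # close to alpha0 = 0.89
```

The empirical frequency of the short-delay interval converges to $\alpha_0$, which is precisely the distribution information that the delay-probability-distribution-dependent criteria exploit.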

5 Conclusions

In this paper, the robust delay-probability-distribution-dependent stochastic stability problem for a class of DSNNs with parameter uncertainties has been studied. By means of the LMI technique combined with Lyapunov stability theory, a new augmented Lyapunov-Krasovskii functional has been constructed, and some novel sufficient conditions ensuring robust global exponential stability in the mean square sense have been derived. Compared with previous works in the literature, the new criteria derived in this paper are less conservative. A numerical example has been given to demonstrate the validity of these new sufficient conditions.

Declarations

Acknowledgements

The authors would like to thank the editor, the associate editor and the anonymous referees for their detailed comments and valuable suggestions which considerably improved the presentation of this paper. The work of Xia Zhou is supported by the National Natural Science Foundation of China (No. 11226140) and the Anhui Provincial Colleges and Universities Natural Science Foundation (No. KJ2013Z267). The work of Yong Ren is supported by the National Natural Science Foundation of China (No. 10901003 and 11126238), the Distinguished Young Scholars of Anhui Province (No. 1108085J08), the Key Project of Chinese Ministry of Education (No. 211077) and the Anhui Provincial Natural Science Foundation (No. 10040606Q30).

Authors’ Affiliations

(1)
School of Mathematics and Computational Science, Fuyang Teachers College
(2)
College of Mathematical Sciences, University of Electronic Science and Technology of China
(3)
Department of Mathematics, Anhui Normal University

References

  1. Suaha FBM, Ahmad M, Taib MN: Applications of artificial neural network on signal processing of optical fibre pH sensor based on bromophenol blue doped with sol-gel film. Sens. Actuators B, Chem. 2003, 90: 182-188. 10.1016/S0925-4005(03)00026-1
  2. Anagun AS: A neural network applied to pattern recognition in statistical process control. Comput. Ind. Eng. 1998, 35: 185-188.
  3. Wilson CL, Watson CI, Paek EG: Effect of resolution and image quality on combined optical and neural network fingerprint matching. Pattern Recognit. 2000, 33: 317-331. 10.1016/S0031-3203(99)00052-7
  4. Miyoshi S, Yanai HF, Okada M: Associative memory by recurrent neural networks with delay elements. Neural Netw. 2004, 17: 55-63. 10.1016/S0893-6080(03)00207-7
  5. Ding Z, Leung H, Zhu Z: A study of the transiently chaotic neural network for combinatorial optimization. Math. Comput. Model. 2002, 36: 1007-1020. 10.1016/S0895-7177(02)00254-6
  6. Shao JL, Huang TZ, Wang XP: Further analysis on global robust exponential stability of neural networks with time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2012, 17: 1117-1124. 10.1016/j.cnsns.2011.08.022
  7. Mathiyalagan K, Sakthivel R, Marshal S: New robust exponential stability results for discrete-time switched fuzzy neural networks with time delays. Comput. Math. Appl. 2012, 64: 2926-2938. 10.1016/j.camwa.2012.08.008
  8. Hou LL, Zong GD, Wu YQ: Robust exponential stability analysis of discrete-time switched Hopfield neural networks with time delay. Nonlinear Anal. Hybrid Syst. 2011, 5: 525-534. 10.1016/j.nahs.2010.10.014
  9. Wang WQ, Zhong SM, Nguang SK, Liu F: Novel delay-dependent stability criterion for uncertain genetic regulatory networks with interval time-varying delays. Neurocomputing 2013, 121: 170-178.
  10. Faydasicok O, Arik S: Robust stability analysis of a class of neural networks with discrete time delays. Neural Netw. 2012, 29-30: 52-59.
  11. Deng FQ, Hua MG, Liu XZ, Peng YJ, Fei JT: Robust delay-dependent exponential stability for uncertain stochastic neural networks with mixed delays. Neurocomputing 2011, 74: 1503-1509. 10.1016/j.neucom.2010.08.027
  12. Wang T, Zhang C, Fei SM: Further stability criteria on discrete-time delayed neural networks with distributed delay. Neurocomputing 2013, 111: 195-203.
  13. Mahmoud MS, Ismail A: Improved results on robust exponential stability criteria for neutral-type delayed neural networks. Appl. Math. Comput. 2010, 217: 3011-3019. 10.1016/j.amc.2010.08.034
  14. Wang WQ, Zhong SM: Stochastic stability analysis of uncertain genetic regulatory networks with mixed time-varying delays. Neurocomputing 2012, 82: 143-156.
  15. Zhu S, Shen Y: Robustness analysis for connection weight matrices of global exponential stability of stochastic recurrent neural networks. Neural Netw. 2013, 38: 17-22.
  16. Liu YR, Wang ZD, Liu XH: Robust stability of discrete-time stochastic neural networks with time-varying delays. Neurocomputing 2008, 71: 823-833. 10.1016/j.neucom.2007.03.008
  17. Zhang YJ, Xu SY, Zeng ZP: Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay. Neurocomputing 2009, 72: 3343-3351. 10.1016/j.neucom.2009.01.014
  18. Luo MZ, Zhong SM, Wang RJ, Kang W: Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays. Appl. Math. Comput. 2009, 209: 305-313. 10.1016/j.amc.2008.12.084
  19. Gao M, Cui BT: Global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays. Appl. Math. Model. 2009, 33: 1270-1284. 10.1016/j.apm.2008.01.019
  20. Udpin S, Niamsup P: Robust stability of discrete-time LPD neural networks with time-varying delay. Commun. Nonlinear Sci. Numer. Simul. 2009, 14: 3914-3924. 10.1016/j.cnsns.2008.08.018
  21. Balasubramaniam P, Vembarasan V, Rakkiyappan R: Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays. Commun. Nonlinear Sci. Numer. Simul. 2011, 16: 2109-2129. 10.1016/j.cnsns.2010.08.024
  22. Lou XY, Ye Q, Cui BT: Exponential stability of genetic regulatory networks with random delays. Neurocomputing 2010, 73: 759-769. 10.1016/j.neucom.2009.10.006
  23. Ray A: Output feedback control under randomly varying distributed delays. J. Guid. Control Dyn. 1994, 17: 701-711. 10.2514/3.21258
  24. Nilsson J, Bernhardsson B, Wittenmark B: Stochastic analysis and control of real-time systems with random time delays. Automatica 1998, 34: 57-64. 10.1016/S0005-1098(97)00170-2
  25. Yue D, Zhang YJ, Tian EG, Peng C: Delay-distribution-dependent exponential stability criteria for discrete-time recurrent neural networks with stochastic delay. IEEE Trans. Neural Netw. 2008, 19: 1299-1306.
  26. Tang Y, Fang JA, Xia M, Yu DM: Delay-distribution-dependent stability of stochastic discrete-time neural networks with randomly mixed time-varying delays. Neurocomputing 2009, 72: 3830-3838.
  27. Wu ZG, Su HY, Chu J, Zhou WN: Improved result on stability analysis of discrete stochastic neural networks with time delay. Phys. Lett. A 2009, 373: 1546-1552. 10.1016/j.physleta.2009.02.056
  28. Zhang YJ, Yue D, Tian EG: Robust delay-distribution-dependent stability of discrete-time stochastic neural networks with time-varying delay. Neurocomputing 2009, 72: 1265-1273. 10.1016/j.neucom.2008.01.028

Copyright

© Zhou et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.