Research | Open Access
New approach to state estimator for discrete-time BAM neural networks with time-varying delay
- Saibing Qiu^{1, 2},
- Xinge Liu^{1} (corresponding author) and
- Yanjun Shu^{1}
https://doi.org/10.1186/s13662-015-0498-3
© Qiu et al. 2015
Received: 30 January 2015
Accepted: 12 May 2015
Published: 19 June 2015
Abstract
In this paper, state estimation for discrete-time BAM neural networks with time-varying delay is discussed. Under a weaker assumption on the activation functions, by constructing a novel Lyapunov-Krasovskii functional (LKF), a set of sufficient conditions is derived in terms of linear matrix inequalities (LMIs) for the existence of a state estimator such that the error system is globally exponentially stable. Based on the delay partitioning method and the reciprocally convex approach, less conservative stability criteria with lower computational complexity are obtained. Finally, a numerical example is given to show the effectiveness of the derived results.
Keywords
- BAM neural networks
- discrete-time
- state estimation
- exponential stability
- LMI
1 Introduction
In the past few years, neural networks have been widely studied and applied in various fields such as load frequency control, pattern recognition, static image processing, associative memories, mechanics of structures and materials, optimization and other scientific areas (see [1–16]). The BAM neural network model is an extension of Hopfield's unidirectional auto-associator. A BAM neural network is composed of neurons arranged in two layers, the X-layer and the Y-layer. The neurons in one layer are fully interconnected with the neurons in the other layer, while there are no interconnections among neurons within the same layer [17]. BAM neural networks have potential applications in many fields such as signal processing and artificial intelligence. In addition, time delays, such as signal transmission, propagation and signal processing delays, are inevitable in many biological and artificial neural networks. It is well known that time delays are a source of oscillation and instability in neural networks and can change their dynamic characteristics dramatically. The stability of neural networks has therefore drawn particular research interest. The stability of BAM neural networks has also been widely studied, and many results have been reported (see [18, 19] and the references cited therein).
In relatively large-scale neural networks, the state components of the neural network model are often unknown or not available for direct measurement; normally, only partial information about the neuron states is available in the network outputs. In order to make use of neural networks in practice, it is important and necessary to estimate the neuron states through available measurements. Recently, the state estimation problem for neural networks has received increasing research interest (see [20–26]). The state estimation problem for delayed neural networks was proposed in [20]. Mou et al. [21] studied state estimation for discrete-time neural networks with time-varying delays, where a sufficient condition was derived for the design of a state estimator that guarantees the global exponential stability of the error system. The problem of state estimation for discrete-time delayed neural networks with fractional uncertainties and sensor saturations was studied in [22]. In [26], the state estimation problem for a new class of discrete-time neural networks with Markovian jumping parameters as well as mode-dependent mixed time-delays was studied; new techniques were developed to deal with the mixed time-delays in the discrete-time setting, a novel Lyapunov-Krasovskii functional was put forward to reflect the mode-dependent time-delays, and sufficient conditions guaranteeing the existence of the state estimators were established in terms of linear matrix inequalities (LMIs).
However, relatively little attention has been paid to state estimation for BAM neural networks with time-varying delay. The robust delay-dependent state estimation problem for a class of discrete-time BAM neural networks with time-varying delays was considered in [27], where, by using a Lyapunov-Krasovskii functional together with the linear matrix inequality (LMI) approach, a new set of sufficient conditions was derived for the existence of a state estimator such that the error system is asymptotically stable. The problem of state estimation for BAM neural networks with leakage delays was studied in [28], where a sufficient condition was established to ensure that the error system is globally asymptotically stable. It should be pointed out that only the global asymptotic stability of the error system was discussed in [27, 28]. To the best of our knowledge, the exponential stability of the error system for discrete-time BAM neural networks with time-varying delay has not been fully addressed yet. Motivated by this consideration, in this paper we aim to establish some new sufficient conditions, expressed in the form of LMIs, for the existence of state estimators which guarantee that the error system for discrete-time BAM neural networks with time-varying delays is globally exponentially stable.
The rest of this paper is organized as follows. In Section 2, some notations and lemmas used throughout the paper are given. In Section 3, using both the delay partitioning method and the reciprocally convex approach, we construct a novel Lyapunov-Krasovskii functional, and a new set of sufficient conditions is derived for the global exponential stability of the error system. In Section 4, an illustrative example is provided to demonstrate the effectiveness of the proposed result. Section 5 concludes the paper.
2 Problem formulation and preliminaries
For the sake of convenience, we first introduce some notations. The symbol ∗ denotes the symmetric term in a symmetric matrix. For an arbitrary matrix A, \(A^{T}\) stands for the transpose of A, and \(A^{-1}\) denotes the inverse of A. \(R^{n\times n}\) denotes the set of \(n\times n\) real matrices. If A is a symmetric matrix, \(A>0\) (\(A\geq0\)) means that A is positive definite (positive semidefinite); similarly, \(A<0\) (\(A\leq0\)) means that A is negative definite (negative semidefinite). \(\lambda_{m}(A)\) and \(\lambda_{M}(A)\) denote the minimum and maximum eigenvalues of a square matrix A, respectively, and \(\|A\|=\sqrt{\lambda_{M}(A^{T}A)}\).
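As a quick numerical illustration of the notation above, the snippet below (the matrix A is an arbitrary example, not from the paper) computes the extreme eigenvalues \(\lambda_{m}\), \(\lambda_{M}\) of \(A^{T}A\) and the induced norm \(\|A\|=\sqrt{\lambda_{M}(A^{T}A)}\), which coincides with the spectral norm:

```python
import numpy as np

# For a square matrix A, lambda_m and lambda_M are the extreme eigenvalues,
# and ||A|| = sqrt(lambda_M(A^T A)) is the spectral norm defined above.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # illustrative example matrix

# A^T A is symmetric, so eigvalsh applies and returns real eigenvalues
eigs = np.linalg.eigvalsh(A.T @ A)
lam_m, lam_M = eigs.min(), eigs.max()

spectral_norm = np.sqrt(lam_M)

# Sanity check: this equals the largest singular value of A
assert np.isclose(spectral_norm, np.linalg.norm(A, 2))
print(spectral_norm)
```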
Throughout this paper, we make the following assumptions.
Assumption 1
Remark 1
In Assumption 1, the constants \(F_{i}^{-}\), \(F_{i}^{+}\), \(G_{i}^{-}\) and \(G_{i}^{+}\) are allowed to be positive, negative, or zero. So the activation functions in this paper are more general than nonnegative sigmoidal functions used in [29, 30]. The stability condition developed in this paper is less restrictive than that in [29, 30].
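The display of Assumption 1 is not reproduced in this excerpt; in this literature it is usually a sector-type bound \(F_{i}^{-}\leq\frac{f_{i}(s_{1})-f_{i}(s_{2})}{s_{1}-s_{2}}\leq F_{i}^{+}\) on each activation function. Under that assumption (a sketch only), the snippet below numerically checks such bounds for \(\tanh\), for which the constants can be taken as \(F^{-}=0\), \(F^{+}=1\):

```python
import numpy as np

# Hedged sketch: checks a sector-type activation bound of the form
#   F_minus <= (f(s1) - f(s2)) / (s1 - s2) <= F_plus   for all s1 != s2,
# which is the usual shape of Assumption 1 in this literature (the exact
# statement is not reproduced here). For f = tanh, F_minus = 0, F_plus = 1.
def sector_quotients(f, samples):
    # All pairwise difference quotients of f over the sample grid
    s1, s2 = np.meshgrid(samples, samples)
    mask = s1 != s2
    return (f(s1[mask]) - f(s2[mask])) / (s1[mask] - s2[mask])

samples = np.linspace(-5.0, 5.0, 201)
q = sector_quotients(np.tanh, samples)

# tanh is monotone with derivative in (0, 1], so every quotient lies in [0, 1]
assert 0.0 <= q.min() and q.max() <= 1.0
```

Note that, per Remark 1, the constants \(F_{i}^{-}\), \(F_{i}^{+}\) may also be negative or zero, so non-monotone activations are admissible as well.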
Before presenting the main results of the paper, we need the following definition and lemmas.
Definition 1
Lemma 1
[31]
Lemma 2
[32]
3 Main results
In this section, we construct a novel Lyapunov-Krasovskii functional combined with the free-weighting matrix technique. A delay-dependent condition is derived to estimate the neuron states through available output measurements such that the resulting error system (5) is globally exponentially stable.
Theorem 1
Proof
Obviously, \(\zeta_{1}(k)=0\) if \(h(k)=h_{M}\), and \(\zeta_{2}(k)=0\) if \(h(k)=h_{m}\). Similarly, \(\zeta_{3}(k)=0\) if \(d(k)=d_{M}\), and \(\zeta_{4}(k)=0\) if \(d(k)=d_{m}\).
Remark 2
In this paper, a state estimator for the neuron states is designed through available output measurements, and a new sufficient condition is established to ensure the global exponential stability of the error system (5). The computational complexity of Theorem 1 in this paper is proportional to \(13n^{2}+11n\). By contrast, the condition in [27] ensures only the global asymptotic stability of the error system, and the computational complexity of Theorem 3.1 in [27] is proportional to \(29n^{2}+25n\). It is obvious that Theorem 1 in this paper has lower computational complexity.
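The complexity counts quoted in Remark 2 can be tabulated directly; the helper names below are illustrative, and only the two polynomials come from the text:

```python
# Decision-variable counts quoted in Remark 2: Theorem 1 of this paper
# scales as 13n^2 + 11n, versus 29n^2 + 25n for Theorem 3.1 of [27].
def complexity_thm1(n):
    return 13 * n**2 + 11 * n

def complexity_ref27(n):
    return 29 * n**2 + 25 * n

# The gap grows quadratically with the number of neurons n
for n in (2, 5, 10, 50):
    assert complexity_thm1(n) < complexity_ref27(n)
    print(n, complexity_thm1(n), complexity_ref27(n))
```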
Corollary 1
4 Illustrative example
In this section, we provide a numerical example to illustrate the effectiveness of our result.
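The example's system data are given as displays that are not reproduced in this excerpt. As a hedged sketch of what such a verification looks like, the snippet below simulates a generic discrete-time delayed system \(x(k+1)=Ax(k)+Bf(x(k-d(k)))\) with \(f=\tanh\) and hypothetical small-gain matrices (chosen for illustration only, not the paper's data), and observes the exponential decay of the trajectory:

```python
import numpy as np

# Hedged sketch with hypothetical data (NOT the paper's example matrices):
# simulate x(k+1) = A x(k) + B tanh(x(k - d(k))) with a time-varying delay
# d(k) in [d_min, d_max] and check that the state decays toward zero.
rng = np.random.default_rng(0)

A = np.diag([0.3, 0.4])                      # hypothetical state matrix
B = np.array([[0.10, -0.05],
              [0.05,  0.10]])                # hypothetical connection weights
d_min, d_max = 1, 3                          # hypothetical delay bounds

# Initial history segment x(-d_max), ..., x(0)
history = [rng.standard_normal(2) for _ in range(d_max + 1)]
for k in range(200):
    d_k = d_min + (k % (d_max - d_min + 1))  # time-varying delay in [d_min, d_max]
    x_next = A @ history[-1] + B @ np.tanh(history[-1 - d_k])
    history.append(x_next)

# The state norm shrinks far below its initial size, consistent with
# exponential stability of the simulated system
assert np.linalg.norm(history[-1]) < 1e-6
print(np.linalg.norm(history[-1]))
```

Since \(\|A\|\) and \(\|B\|\) are small here, decay is expected; the point of Theorem 1 is precisely to certify such behavior via an LMI feasibility test rather than simulation.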
5 Conclusions
In this paper, the problem of state estimation for discrete-time BAM neural networks with time-varying delays has been studied. Based on the delay partitioning method, the reciprocally convex approach and a new Lyapunov-Krasovskii functional, a new set of sufficient conditions guaranteeing the global exponential stability of the error system has been derived. A numerical example has been presented to demonstrate that the new criterion has lower computational complexity than previously reported criteria.
Declarations
Acknowledgements
The authors would like to thank the reviewers for their valuable comments and constructive suggestions. This project is partly supported by the National Natural Science Foundation of China under Grants 61271355 and 61375063 and Zhong Nan Da Xue Foundation under Grant 2015JGB21.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
- Zhu, QX, Cao, JD: Robust exponential stability of Markovian impulsive stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 21, 1314-1325 (2010)
- Li, Y, Shao, YF: Dynamic analysis of an impulsive differential equation with time-varying delays. Appl. Math. 59, 85-98 (2014)
- Guo, S, Tang, XH, Huang, LH: Stability and bifurcation in a discrete system of two neurons with delays. Nonlinear Anal., Real World Appl. 9, 1323-1335 (2008)
- Liu, XG, Wu, M, Martin, R, Tang, ML: Delay-dependent stability analysis for uncertain neutral systems with time-varying delays. Math. Comput. Simul. 75, 15-27 (2007)
- Liu, XG, Wu, M, Martin, R, Tang, ML: Stability analysis for neutral systems with mixed delays. J. Comput. Appl. Math. 202, 478-497 (2007)
- Chen, P, Tang, XH: Existence and multiplicity of solutions for second-order impulsive differential equations with Dirichlet problems. Appl. Math. Comput. 218, 11775-11789 (2012)
- Zang, YC, Li, JP: Stability in distribution of neutral stochastic partial differential delay equations driven by α-stable process. Adv. Differ. Equ. 2014, 13 (2014)
- Wu, YY, Li, T, Wu, YQ: Improved exponential stability criteria for recurrent neural networks with time-varying discrete and distributed delays. Int. J. Autom. Comput. 7, 199-204 (2010)
- Tang, XH, Shen, JH: New nonoscillation criteria for delay differential equations. J. Math. Anal. Appl. 290, 1-9 (2004)
- Zhao, HY, Cao, JD: New conditions for global exponential stability of cellular neural networks with delays. Neural Netw. 18, 1332-1340 (2005)
- Liu, ZG, Chen, A, Cao, JD, Huang, LH: Existence and global exponential stability of periodic solution for BAM neural networks with periodic coefficients and time-varying delays. IEEE Trans. Circuits Syst. I 50, 1162-1173 (2003)
- Liu, J, Zhang, J: Note on stability of discrete-time time-varying delay systems. IET Control Theory Appl. 6, 335-339 (2012)
- Li, XA, Zhou, J, Zhu, E: The pth moment exponential stability of stochastic cellular neural network with impulses. Adv. Differ. Equ. 2013, 6 (2013)
- Zhang, BY, Xu, SY, Zou, Y: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72, 321-330 (2008)
- Wu, ZG, Su, HY, Chu, J: New results on exponential passivity of neural networks with time-varying delays. Nonlinear Anal., Real World Appl. 13, 1593-1599 (2012)
- Pan, LJ, Cao, JD: Robust stability for uncertain stochastic neural networks with delays and impulses. Neurocomputing 94, 102-110 (2012)
- Kosko, B: Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 18, 49-60 (1988)
- Liu, XG, Tang, ML, Martin, R, Liu, X: Discrete-time BAM neural networks with variable delays. Phys. Lett. A 367, 322-330 (2007)
- Liu, XG, Martin, R, Wu, M, Tang, ML: Global exponential stability of bidirectional associative memory neural network with time delays. IEEE Trans. Neural Netw. 19, 397-407 (2008)
- Wang, Z, Ho, DWC, Liu, X: State estimation for delayed neural networks. IEEE Trans. Neural Netw. 16, 279-284 (2005)
- Mou, SH, Gao, HJ, Qiang, WY, Fei, ZY: State estimation for discrete-time neural networks with time-varying delays. Neurocomputing 72, 643-647 (2008)
- Kan, X, Wang, ZD, Shu, HS: State estimation for discrete-time delayed neural networks with fractional uncertainties and sensor saturations. Neurocomputing 17, 64-71 (2013)
- He, Y, Wu, QG, Lin, C: Delay-dependent state estimation for delayed neural networks. IEEE Trans. Neural Netw. 17, 1077-1081 (2006)
- Liang, JL, Lan, J: Robust state estimation for stochastic genetic regulatory networks. Int. J. Syst. Sci. 41, 47-63 (2010)
- Wang, Z, Liu, Y, Liu, X: State estimation for jumping recurrent neural networks with discrete and distributed delays. Neural Netw. 22, 41-48 (2009)
- Liu, YR, Wang, ZD, Liu, XH: State estimation for linear discrete-time Markovian jumping neural networks with mixed mode-dependent delays. Phys. Lett. A 372, 7147-7155 (2008)
- Arunkumar, A, Sakthivel, R, Mathiyalagan, K, Marshal Anthoni, S: Robust state estimation for discrete-time BAM neural networks with time-varying delay. Neurocomputing 131, 171-178 (2014)
- Sakthivel, R, Vadivel, P, Mathiyalagan, K, Arunkumar, A, Sivachitra, M: Design of state estimator bidirectional associative memory neural network with leakage delays. Inf. Sci. 296, 263-274 (2015)
- Lu, CY: A delay-range-dependent approach to design state estimators for discrete-time recurrent neural networks with interval time-varying delay. IEEE Trans. Circuits Syst. II, Express Briefs 55, 1163-1167 (2008)
- Lu, CY, Cheng, JC, Su, TJ: Design of delay-range-dependent state estimators for discrete-time recurrent neural networks with interval time-varying delay. In: Proceedings of the American Control Conference, Washington, 11-13 June 2008, pp. 4209-4231. IEEE Press (2008)
- Kwon, OM, Park, MJ, Park, JH, Lee, SM, Cha, EJ: New criteria on delay-dependent stability for discrete-time neural networks with time-varying delays. Neurocomputing 121, 185-194 (2013)
- Park, P, Ko, J, Jeong, C: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47, 235-238 (2011)