On the stability of the Cartesian product of a neural ring and an arbitrary neural network
Advances in Difference Equations volume 2014, Article number: 176 (2014)
Abstract
The stability of systems of neural networks connected in a ring has been studied extensively in recent years. Our main contribution is to show that the stability region in the parameter space of a discrete-time model can be extended by breaking such a ring, provided that the number of networks is sufficiently large. We also show that for a small ring there may appear paradoxical values in its parameter space for which the ring is stable while the corresponding linear configuration is unstable.
MSC: 37B25.
1 Introduction
Many neural networks of artificial or natural origin, including the brain net, have a ring structure [1]. The stability of a ring neural network with delayed interactions has been studied in recent works such as [2–5]. In particular, [5] showed that breaking a ring neural network into a linear neural network extends the stability region in the parameter space, provided that the number of neurons in the ring is sufficiently large. In this paper, we take the same approach to address the related question for a discrete-time model of a ring consisting of identical (possibly complicated) networks. We characterize precisely what happens to the stability of such rings after they are broken.
This paper is structured as follows. In Section 2, we give formal definitions of the Cartesian product of neural networks and of the ring and linear configurations of a network. In Section 3, we prove that a sufficiently large ring of neural networks does not lose stability when it is broken. We also give an example of a small torus neural network, i.e. a ring consisting of small neural rings, which after two consecutive cut transformations yields a grid configuration. We show that there is a small region within the parameter space for which breaking the ring neural network results in a loss of stability. Such parameter values will be called paradoxical.
2 The Cartesian product of neural networks
Neural networks have been described either by nonlinear equations [6, 7] or by linear nonhomogeneous equations, as is done in [8]. Nonetheless, local stability analysis of steady states offers an attractive approach, and it is the one we adopt in this work. When linearized discrete-time neural network models are considered, the state vector $x(s)$ of a network at time $s$ is governed by a linear homogeneous delay equation (see [2, 6, 9]):

$$x(s) = \gamma\, x(s-k) + A\, x(s-m), \qquad (1)$$
where $n$ is the number of neurons in the network, $\gamma$ is a damping factor of the neuron self-oscillations, $k$ is a delay in the damping process of the neuron self-oscillations, and $m$ is a delay in the neuron interactions. Entries of the $n \times n$ matrix $A$ represent interaction forces among the $n$ neurons, so every entry on the principal diagonal of $A$ is zero. For every $j$ ($1 \le j \le n$), the $j$th component of $x(s)$ is the state of the $j$th neuron at the moment $s$. The entry $a_{jv}$ of the matrix $A$ is the force of action from the $v$th neuron on the $j$th neuron. We proceed to give formal definitions of neural networks and of the Cartesian product of networks as follows.
Definition 1 A neural network is an ordered 5-tuple $\mathcal{N} = (A, n, \gamma, k, m)$, where $A$ is a real $n \times n$ matrix with zero principal diagonal, $\gamma \in \mathbb{R}$, and $k$, $m$ are nonnegative integers. We call (1) the defining equation of the network $\mathcal{N}$. We say that two neural networks are compatible if and only if they have the same $\gamma$, $k$, $m$. Given two compatible networks $\mathcal{N}_1 = (A, n, \gamma, k, m)$ and $\mathcal{N}_2 = (B, r, \gamma, k, m)$, we define their Cartesian product as the neural network $\mathcal{N}_1 \mathbin{\square} \mathcal{N}_2 = (A \oplus B, nr, \gamma, k, m)$, where the Kronecker sum operation ⊕ is defined as $A \oplus B = A \otimes I_r + I_n \otimes B$, having ⊗ as the Kronecker product operation, and $I_n$, $I_r$ stand for the unit matrices of orders $n$, $r$, respectively.
These definitions do not contradict those given in [10, 11]. We also notice that the square block matrix $A \oplus B$ of order $rn$ has the form

$$A \oplus B = \begin{pmatrix} B + a_{11} I_r & a_{12} I_r & \cdots & a_{1n} I_r \\ a_{21} I_r & B + a_{22} I_r & \cdots & a_{2n} I_r \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} I_r & a_{n2} I_r & \cdots & B + a_{nn} I_r \end{pmatrix},$$

where $a_{jv}$ ($1 \le j, v \le n$) are entries of $A$.
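For readers who wish to experiment, a minimal numerical sketch of this construction follows (Python with NumPy; the matrices A and B are hypothetical, chosen only for illustration). It builds $A \oplus B = A \otimes I_r + I_n \otimes B$ directly from the definition above and confirms the block structure just displayed.

```python
import numpy as np

def kronecker_sum(A, B):
    """Kronecker sum A (+) B = A x I_r + I_n x B for A of order n and B of order r."""
    n, r = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(r)) + np.kron(np.eye(n), B)

# Hypothetical interaction matrices with zero principal diagonals.
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0, 3.0],
              [4.0, 0.0]])

S = kronecker_sum(A, B)
assert S.shape == (4, 4)   # square block matrix of order rn
print(S)                   # diagonal blocks B + a_jj I_r, off-diagonal blocks a_jv I_r
```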
It is not hard to see that for any given neural network $\mathcal{N} = (A, n, \gamma, k, m)$, its matrix $A$ can be viewed as a weighted directed graph with the set of vertices $V = \{1, \dots, n\}$ and a set of directed edges $E$ defined as follows: if an entry $a_{jv}$ of $A$ is nonzero, then $(v, j) \in E$ and this edge is weighted by $a_{jv}$. Such a graph does not depend on $\gamma$, $k$, $m$.
For any given pair $\mathcal{N}_1$, $\mathcal{N}_2$ of compatible networks, their Cartesian products $\mathcal{N}_1 \mathbin{\square} \mathcal{N}_2$ and $\mathcal{N}_2 \mathbin{\square} \mathcal{N}_1$ are isomorphic in the sense that one defining equation can be obtained from the other by a straightforward permutation of components.
Now, let us consider the following example of ring and linear configurations of networks, both playing a crucial role in our main results.
Example 1 Let $R_p = R_p(a, b)$ be a $p \times p$ circulant matrix and let $L_p = L_p(a, b)$ be a $p \times p$ tridiagonal matrix, for $p \ge 2$:

$$R_p = \begin{pmatrix} 0 & b & 0 & \cdots & 0 & a \\ a & 0 & b & \cdots & 0 & 0 \\ 0 & a & 0 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 \\ 0 & 0 & & \ddots & 0 & b \\ b & 0 & \cdots & 0 & a & 0 \end{pmatrix}, \qquad L_p = \begin{pmatrix} 0 & b & 0 & \cdots & 0 & 0 \\ a & 0 & b & \cdots & 0 & 0 \\ 0 & a & 0 & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 \\ 0 & 0 & & \ddots & 0 & b \\ 0 & 0 & \cdots & 0 & a & 0 \end{pmatrix}.$$
We define $\mathcal{R}_p = (R_p, p, \gamma, k, m)$ and $\mathcal{L}_p = (L_p, p, \gamma, k, m)$ as the ring and linear neural networks, respectively, where $b$ is the strength of the connection from a neuron to its counterclockwise neighbor neuron and $a$ is the strength of the connection in the opposite direction (Figure 1). We point out that $\mathcal{R}_p$ has a connection between its first and last neurons, while $\mathcal{L}_p$ has no connection between them.
It follows that the Cartesian product $\mathcal{R}_p \mathbin{\square} \mathcal{N}$, with $\mathcal{N} = (A, n, \gamma, k, m)$, has the defining equation (1) with the matrix $R_p \oplus A$ in place of $A$.
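A short numerical sketch (Python with NumPy; the values of p, a, b are hypothetical) can be used to check the matrices $R_p(a, b)$ and $L_p(a, b)$ against the classical closed-form spectra of circulant and tridiagonal Toeplitz matrices, which enter the proof in the next section.

```python
import numpy as np

def ring_matrix(p, a, b):
    """Circulant R_p(a, b): b to the counterclockwise neighbor, a in the opposite direction."""
    R = np.zeros((p, p))
    for j in range(p):
        R[j, (j + 1) % p] = b
        R[j, (j - 1) % p] = a
    return R

def line_matrix(p, a, b):
    """Tridiagonal L_p(a, b): the ring with the link between the first and last neuron removed."""
    L = ring_matrix(p, a, b)
    L[0, p - 1] = L[p - 1, 0] = 0.0
    return L

p, a, b = 6, 0.4, 0.7   # hypothetical parameters
# Classical spectra: a e^{-2 pi i j / p} + b e^{2 pi i j / p} for the circulant,
# and 2 sqrt(ab) cos(pi j / (p + 1)) for the tridiagonal Toeplitz matrix.
ring_pred = [a * np.exp(-2j * np.pi * j / p) + b * np.exp(2j * np.pi * j / p) for j in range(p)]
line_pred = [2 * np.sqrt(a * b) * np.cos(np.pi * j / (p + 1)) for j in range(1, p + 1)]

ring_eigs = np.linalg.eigvals(ring_matrix(p, a, b))
line_eigs = np.linalg.eigvals(line_matrix(p, a, b))
for z in ring_pred:
    assert np.min(np.abs(ring_eigs - z)) < 1e-10
for z in line_pred:
    assert np.min(np.abs(line_eigs - z)) < 1e-10
```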
We state the following key property of the Kronecker sum.
Theorem 1 If $\lambda_1, \dots, \lambda_n$ is a full list of eigenvalues of an $n \times n$ matrix $A$ and $\mu_1, \dots, \mu_r$ is the corresponding list for an $r \times r$ matrix $B$, then the eigenvalues of $A \oplus B$ are given by $\lambda_j + \mu_l$ ($1 \le j \le n$, $1 \le l \le r$).
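This is the standard spectral property of the Kronecker sum (see [12–14]); a quick numerical check is sketched below (Python with NumPy, with random matrices used purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
S = np.kron(A, np.eye(4)) + np.kron(np.eye(3), B)   # A (+) B

eigs = np.linalg.eigvals(S)
# Every pairwise sum lambda_j + mu_l should appear among the eigenvalues of A (+) B.
for lam in np.linalg.eigvals(A):
    for mu in np.linalg.eigvals(B):
        assert np.min(np.abs(eigs - (lam + mu))) < 1e-8
```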
3 The stability of a ring of neural networks
Our main purpose is to study the stability of the ring and linear configurations of a neural network. Hence, we begin by stating stability definitions for the defining equation (1). We say that this equation is stable (asymptotically stable) if and only if every solution $x(s)$ has a bounded norm (tends to zero as $s \to \infty$). Since stability requirements of a system are quite often adjusted [15, 16], we state the following definitions along these lines. Given a positive real number $\rho$, we say that equation (1) is ρ-stable (ρ-asymptotically stable) if and only if for every solution the sequence $x(s)/\rho^{s}$ is bounded (tends to zero as $s \to \infty$). Equations that are not ρ-stable (asymptotically ρ-stable) will be called ρ-unstable (asymptotically ρ-unstable). We notice that when $\rho = 1$, (asymptotic) ρ-stability is equivalent to the usual Lyapunov notion of (asymptotic) stability. Furthermore, stability cones [17, 18] will be used extensively in our stability analysis of (1). Stability cones for the stability analysis of delayed matrix differential equations were introduced in [19].
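As a crude illustration of ρ-stability, the sketch below (Python with NumPy) simulates the defining equation in the form reconstructed above, $x(s) = \gamma x(s-k) + A x(s-m)$, from random initial data and probes whether $x(s)/\rho^s$ stays bounded. It is a heuristic experiment under that assumed form of (1), not a substitute for the stability cone criterion of [17, 18].

```python
import numpy as np

def looks_rho_stable(A, gamma, k, m, rho, steps=2000, bound=1e6):
    """Heuristic probe: simulate x(s) = gamma*x(s-k) + A @ x(s-m) (assumed form of (1))
    and report whether ||x(s)|| / rho**s stays below a fixed bound."""
    n = A.shape[0]
    d = max(k, m)
    rng = np.random.default_rng(1)
    x = [rng.standard_normal(n) for _ in range(d)]   # random initial history
    for s in range(d, d + steps):
        x.append(gamma * x[s - k] + A @ x[s - m])
        if np.linalg.norm(x[-1]) / rho ** s > bound:
            return False
    return True

A = 0.3 * np.array([[0.0, 1.0],
                    [1.0, 0.0]])   # hypothetical interaction matrix
print(looks_rho_stable(A, gamma=0.5, k=1, m=2, rho=1.0))   # expected: True
```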
It is a natural step to regard the compatible network $\mathcal{L}_p$ as the breaking of the network $\mathcal{R}_p$. Now, let $\mathcal{N}$ be an arbitrary compatible neural network; it then follows that the network $\mathcal{L}_p \mathbin{\square} \mathcal{N}$ is the compatible breaking of the ring $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ (Figure 2), in the sense that it is the neural network resulting from breaking all links between the first and the last copy of $\mathcal{N}$ in the latter network. The stability of the neural networks involved in this process is addressed in the following theorem.
Theorem 2 Let $\mathcal{R}_p$, $\mathcal{L}_p$ and $\mathcal{N}$ be compatible neural networks, obeying the condition $ab \neq 0$. Then for every $\rho > 0$ there exists a $p_0$ such that for all $p \ge p_0$, if $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ is ρ-stable, then $\mathcal{L}_p \mathbin{\square} \mathcal{N}$ is asymptotically ρ-stable.
Proof Let $\mathcal{N} = (A, n, \gamma, k, m)$ be a neural network and let $\lambda_1, \dots, \lambda_n$ be the list of eigenvalues of $A$. We assume the condition $ab \neq 0$ and that $\gamma$, $k$, $m$ are fixed. It was shown in [12–14] that the values $\lambda_l + a e^{-2\pi i j/p} + b e^{2\pi i j/p}$ ($1 \le l \le n$, $0 \le j \le p-1$) are the eigenvalues of $R_p \oplus A$; in a similar fashion, the values $\lambda_l + 2\sqrt{ab}\cos\frac{\pi j}{p+1}$ ($1 \le l \le n$, $1 \le j \le p$) are the eigenvalues of $L_p \oplus A$. By applying Theorem 1 and the related stability analysis results from [18] to the neural networks $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ and $\mathcal{L}_p \mathbin{\square} \mathcal{N}$, we construct two sets of points as follows. Firstly, the set of points $M_{jl}$ ($0 \le j \le p-1$, $1 \le l \le n$) obeying (3); secondly, the set of points $M'_{jl}$ ($1 \le j \le p$, $1 \le l \le n$) obeying (4).
We proceed with this construction by an exhaustive case analysis on the signs of $a$ and $b$.
CASE 1: $a > 0$, $b > 0$. For every $j$, let us construct points $M_j^{+}$ and $M_j^{-}$ so that (5) and (6) hold.
CASE 1.1: There exists a $j$ such that $M_j^{+}$ lies outside the ρ-stability cone for the given values of $k$, $m$. Then the corresponding point of (3) lies outside the ρ-stability cone; therefore the network $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ is ρ-unstable for every $p$, and we may take any $p_0$ in the conclusion of the theorem.
CASE 1.2: There exists a $j$ such that $M_j^{-}$ lies outside the ρ-stability cone. Let us use the fact that the points of (3) with index $\lfloor p/2 \rfloor$ approach $M_j^{-}$ as $p \to \infty$, $\lfloor z \rfloor$ being the integral part of $z$. We conclude from (3) that there exists a $p_0$ such that for every $p \ge p_0$, the corresponding point of (3) lies outside the ρ-stability cone. Therefore the network $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ is ρ-unstable for every $p \ge p_0$.
CASE 1.3: For all $j$, both $M_j^{+}$ and $M_j^{-}$ lie inside the ρ-stability cone or on its boundary. Since $2\sqrt{ab} \le a + b$ (recall that $a, b > 0$), all the points of (4) lie inside the line segment with the endpoints $M_j^{-}$, $M_j^{+}$ (see (5), (6)). But the section of the ρ-stability cone at any fixed level is convex; hence all the points of (4) ($1 \le j \le p$, $1 \le l \le n$) lie inside the ρ-stability cone. Therefore the neural network $\mathcal{L}_p \mathbin{\square} \mathcal{N}$ is asymptotically ρ-stable. This enables one to take any $p_0$ in the conclusion of the theorem.
CASE 2: $a < 0$, $b < 0$. This case is similar to CASE 1.
CASE 3: $a > 0$, $b < 0$. For every $j$, let us construct points $\widetilde{M}_j^{+}$ and $\widetilde{M}_j^{-}$ such that (7) and (8) hold.
CASE 3.1: There exists a $j$ such that $\widetilde{M}_j^{+}$ lies outside the ρ-stability cone. If $ab < 0$, then $2\sqrt{ab}$ is purely imaginary. Hence by (3) there exists a $p_0$ such that for every $p \ge p_0$, the corresponding point of (3) lies outside the ρ-stability cone. Therefore the network $\mathcal{R}_p \mathbin{\square} \mathcal{N}$ is ρ-unstable for every $p \ge p_0$.
CASE 3.2: There exists a $j$ such that $\widetilde{M}_j^{-}$ lies outside the ρ-stability cone. This case is similar to CASE 3.1, the only difference being in using $\widetilde{M}_j^{-}$ instead of $\widetilde{M}_j^{+}$.
CASE 3.3: For all $j$, both $\widetilde{M}_j^{+}$ and $\widetilde{M}_j^{-}$ lie inside the ρ-stability cone or on its boundary. This case is similar to CASE 1.3, the only difference being in using $\widetilde{M}_j^{\pm}$ instead of $M_j^{\pm}$.
CASE 4: $a < 0$, $b > 0$. This case is similar to CASE 3. This completes the proof. □
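The geometric picture behind CASE 1 can be illustrated numerically (Python with NumPy; parameter values hypothetical): for $a, b > 0$ the real shifts $2\sqrt{ab}\cos\frac{\pi j}{p+1}$ contributed by the line lie in the segment $[-2\sqrt{ab}, 2\sqrt{ab}]$, which, by the AM–GM inequality $2\sqrt{ab} \le a + b$, is contained in the real extent of the ellipse $a e^{-i\varphi} + b e^{i\varphi}$ traced by the ring's shifts.

```python
import numpy as np

p, a, b = 8, 0.4, 0.7   # hypothetical parameters with a, b > 0 (CASE 1)

ring_shifts = np.array([a * np.exp(-2j * np.pi * j / p) +
                        b * np.exp(2j * np.pi * j / p) for j in range(p)])
line_shifts = np.array([2 * np.sqrt(a * b) * np.cos(np.pi * j / (p + 1))
                        for j in range(1, p + 1)])

# The line's shifts are real and confined to [-2 sqrt(ab), 2 sqrt(ab)],
# while the ring's shifts reach the wider real extent [-(a + b), a + b].
assert np.all(np.abs(line_shifts) <= 2 * np.sqrt(a * b) + 1e-12)
assert 2 * np.sqrt(a * b) <= a + b   # AM-GM
print(line_shifts.min(), line_shifts.max(), ring_shifts.real.min(), ring_shifts.real.max())
```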
Considering the semigroup structure of the set of all neural networks with $\gamma$, $k$, $m$ fixed under the Cartesian product, it is not hard to see that the one-neuron network $\mathcal{E} = ((0), 1, \gamma, k, m)$ is its identity element, a fact which entails that this structure is in fact a commutative monoid. By replacing $\mathcal{N}$ by ℰ in Theorem 2, we obtain an interesting consequence.
Theorem 3 Let $\mathcal{R}_p$ and $\mathcal{L}_p$ be compatible neural networks obeying $ab \neq 0$. Then for every $\rho > 0$ there exists a $p_0$ such that for all $p \ge p_0$, if $\mathcal{R}_p$ is ρ-stable, then $\mathcal{L}_p$ is asymptotically ρ-stable.
A result similar to this corollary for a continuous-time neural network model was shown in [5]. We remark that our main Theorem 2 states that, in the case $ab \neq 0$, the breaking of a ring neural network extends the asymptotic stability domain in the parameter space provided the ring is of sufficiently large size. The size condition is crucial: the statement is no longer true when the number of networks in the ring is not large enough. We now state adequate definitions and give an example to illustrate this issue.
Definition 2 Let $\mathcal{N}_1$ and $\mathcal{N}_2$ be compatible neural networks and consider the transformation of $\mathcal{N}_1$ into $\mathcal{N}_2$. We call a point $(a, b)$ in the $ab$-plane paradoxical for this transformation if the network $\mathcal{N}_1$ is asymptotically stable and $\mathcal{N}_2$ is unstable.
Example 2 By considering the toroidal neural network $\mathcal{R}_p \mathbin{\square} \mathcal{R}_q$, i.e. a ring of rings, significant changes in the stability domains can be shown after its two rings are broken in succession according to the following diagram:

$$\begin{array}{ccc} \mathcal{R}_p \mathbin{\square} \mathcal{R}_q & \longrightarrow & \mathcal{L}_p \mathbin{\square} \mathcal{R}_q \\ \downarrow & & \downarrow \\ \mathcal{R}_p \mathbin{\square} \mathcal{L}_q & \longrightarrow & \mathcal{L}_p \mathbin{\square} \mathcal{L}_q \end{array} \qquad (9)$$
Now, by using the stability cone methods from [9, 17, 18], stability domains can be found for the networks involved in the previous diagram. It is not hard to see that in all four operations denoted by arrows in (9), the stability domains are significantly enlarged. Nevertheless, Figure 3 shows in detail how these operations create paradoxical points, for which the system loses stability after the ring has been broken.
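A small sketch (Python with NumPy; p, q, a, b hypothetical) assembles the interaction matrices of the four configurations in (9) via Kronecker sums and prints their spectral radii. This only exposes the spectra entering the analysis; the actual stability domains additionally depend on $\gamma$, $k$, $m$ through the stability cones of [17, 18], which we do not reproduce here.

```python
import numpy as np

def kron_sum(A, B):
    n, r = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(r)) + np.kron(np.eye(n), B)

def ring(p, a, b):
    R = np.zeros((p, p))
    for j in range(p):
        R[j, (j + 1) % p] = b
        R[j, (j - 1) % p] = a
    return R

def line(p, a, b):
    L = ring(p, a, b)
    L[0, p - 1] = L[p - 1, 0] = 0.0
    return L

p, q, a, b = 3, 3, 0.4, 0.7   # a small torus of rings, as in the text
configs = {
    "torus R_p x R_q": kron_sum(ring(p, a, b), ring(q, a, b)),
    "line of rings  ": kron_sum(line(p, a, b), ring(q, a, b)),
    "ring of lines  ": kron_sum(ring(p, a, b), line(q, a, b)),
    "grid  L_p x L_q": kron_sum(line(p, a, b), line(q, a, b)),
}
for name, M in configs.items():
    print(name, round(float(np.max(np.abs(np.linalg.eigvals(M)))), 4))
```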
4 Conclusion
In connection with the above investigations, some open problems arise. For example, in [5] a detailed analysis of the appearance and disappearance of paradoxical points in a continuous-time model of ring neural networks was performed. A natural direction for future research is therefore the analysis of these phenomena in our discrete-time model of neural networks. Moreover, we intend to examine the relevant issues in neural networks with distributed delays.
References
Vishwanathan A, Bi GQ, Zeringue HC: Ring-shaped neuronal networks: a platform to study persistent activity. Lab Chip 2011, 11(6):1081–1088. 10.1039/c0lc00450b
Kaslik E: Dynamics of a discrete-time bidirectional ring of neurons with delay. In Proceedings of Int. Joint Conf. on Neural Networks, Atlanta, Georgia, USA, 14–19 June 2009. IEEE Comput. Soc., Los Alamitos; 2009:1539–1546.
Kaslik E, Balint S: Complex and chaotic dynamics in a discrete-time delayed Hopfield neural network with ring architecture. Neural Netw. 2009, 22(10):1411–1418. 10.1016/j.neunet.2009.03.009
Yuan Y, Campbell SA: Stability and synchronization of a ring of identical cells with delayed coupling. J. Dyn. Differ. Equ. 2004, 16: 709–744. 10.1007/s10884-004-6114-y
Khokhlova TN, Kipnis MM: The breaking of a delayed ring neural network contributes to stability: the rule and exceptions. Neural Netw. 2013, 48: 148–152.
Kaslik E, Balint S: Bifurcation analysis for a two-dimensional delayed discrete-time Hopfield neural network. Chaos Solitons Fractals 2007, 34: 1245–1253. 10.1016/j.chaos.2006.03.107
Idels L, Kipnis M: Stability criteria for a nonlinear nonautonomous system with delays. Appl. Math. Model. 2009, 33(5):2293–2297. 10.1016/j.apm.2008.06.005
Arbib M (Ed): The Handbook of Brain Theory and Neural Networks. 2nd edition. MIT Press, Cambridge; 2002.
Ivanov SA, Kipnis MM: Stability analysis of discrete-time neural networks with delayed interactions: torus, ring, grid, line. Int. J. Pure Appl. Math. 2012, 78(5):691–709.
Imrich W, Klavžar S, Rall DF: Graphs and Their Cartesian Products. AK Peters, Wellesley; 2008.
Brualdi RA, Cvetković DM: A Combinatorial Approach to Matrix Theory and Its Applications. CRC Press, Boca Raton; 2008.
Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1994.
Brouwer AE, Haemers WH: Spectra of Graphs. Springer, Berlin; 2011.
Cvetković DM, Doob M, Sachs H: Spectra of Graphs - Theory and Applications. 3rd edition. Wiley, New York; 1998.
Chestnov VN: Synthesis of $H_\infty$-controllers for multidimensional systems with given accuracy and degree of stability. Autom. Remote Control 2011, 72(10):2161–2175. 10.1134/S0005117911100134
Gryazina EN, Polyak BT: Stability regions in the parameter space: D -decomposition revisited. Automatica 2006, 42(1):13–26. 10.1016/j.automatica.2005.08.010
Kipnis MM, Malygina VV: The stability cone for a matrix delay difference equation. Int. J. Math. Math. Sci. 2011., 2011: Article ID 860326
Ivanov SA, Kipnis MM, Malygina VV: The stability cone for a difference matrix equation with two delays. ISRN Appl. Math. 2011., 2011: Article ID 910936
Khokhlova TN, Kipnis MM, Malygina VV: Stability cone for linear delay differential matrix equation. Appl. Math. Lett. 2011, 24: 742–745. 10.1016/j.aml.2010.12.020
Acknowledgements
This work was supported by grants from the Ministry of Education of Russia, Chelyabinsk State Pedagogical University, Russia, and by the grant from Fondecyt No. 1130112, Chile.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the manuscript and typed, read, and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ivanov, S., Kipnis, M. & Medina, R. On the stability of the Cartesian product of a neural ring and an arbitrary neural network. Adv Differ Equ 2014, 176 (2014). https://doi.org/10.1186/1687-1847-2014-176
DOI: https://doi.org/10.1186/1687-1847-2014-176