
On the stability of the Cartesian product of a neural ring and an arbitrary neural network

Abstract

The stability of a system of neural networks connected in a ring has been studied extensively in recent years. Our main contribution states that the stability region in the parameter space of a discrete-time model can be extended by breaking such a ring, provided that the number of networks is sufficiently large. We also show that for a small ring there may exist paradoxical parameter values for which the ring is stable while the corresponding linear configuration is unstable.

MSC: 37B25.

1 Introduction

Many neural networks of artificial or natural origin, including the brain net, have a ring structure [1]. The stability of a ring neural network with delayed interactions has been studied in recent works such as [2–5]. In particular, [5] examined the breaking of a ring neural network into a linear neural network, which yields an extended stability region in the parameter space provided that the number of neurons in the ring is sufficiently large. In this paper, we take a similar approach to address the related question for a discrete-time model of a ring consisting of identical (possibly complicated) networks. We characterize closely what happens to the stability of such rings after they are broken.

This paper is structured as follows. In Section 2, formal definitions of the Cartesian product of neural networks and of the ring and linear configurations of a network are stated. In Section 3, it is proven that a sufficiently large ring of neural networks does not lose its stability when it is broken. An example of a small torus neural network, i.e. a ring consisting of small neural rings, is also given; after two consecutive cut transformations, it yields a grid configuration. We show that there is a small region within the parameter space for which the breaking of the ring neural network results in a loss of stability. Such parameter values will be called paradoxical.

2 The Cartesian product of neural networks

Neural networks have been described either by nonlinear equations [6, 7] or by linear nonhomogeneous equations, as is done in [8]. Nonetheless, local stability analysis of steady states offers an interesting approach, and it is the one we adopt in this work. When linearized discrete-time neural network models are considered, the state vector $x_s \in \mathbb{R}^n$ of a network at time $s \in \mathbb{Z}^+$ is governed by the following linear homogeneous equation (see [2, 6, 9]):

$$x_s = \gamma x_{s-m} + A x_{s-k}, \qquad s = 1, 2, \ldots,$$
(1)

where $n$ is the number of neurons in the network, $\gamma \in \mathbb{R}$ is a damping factor of neuron self-oscillations, $m \in \mathbb{Z}^+$ is a delay in the damping process of neuron self-oscillations, and $k \in \mathbb{Z}^+$ is a delay in the neuron interactions ($k \ge m$). The entries of the matrix $A \in \mathbb{R}^{n \times n}$ represent the interaction forces among the $n$ neurons, so every entry on the principal diagonal of $A$ is zero. For every $j$ ($1 \le j \le n$), the $j$th component of $x_s$ is the state of the $j$th neuron at the moment $s$. The entry $a_{jv}$ of the matrix $A$ is the force of action from the $v$th neuron on the $j$th neuron. We proceed to give formal definitions of neural networks and of the Cartesian product of networks.
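For readers who wish to experiment, equation (1) can be iterated directly once an initial history is prescribed. The following sketch is only an illustration (the function name, network size, delay values, and initial data are our own arbitrary choices, not taken from the paper):

```python
import numpy as np

def simulate(gamma, k, m, A, history, steps):
    """Iterate the defining equation x_s = gamma*x_{s-m} + A x_{s-k}.

    `history` must contain at least max(k, m) initial states
    x_{1-max(k,m)}, ..., x_0; the full trajectory is returned.
    """
    assert len(history) >= max(k, m)
    x = [np.asarray(v, dtype=float) for v in history]
    for _ in range(steps):
        x.append(gamma * x[-m] + A @ x[-k])
    return np.array(x)

# A toy 3-neuron network with damping delay m = 1 and interaction delay k = 2.
gamma, k, m = 0.4, 2, 1
A = np.array([[ 0.0, 0.3, -0.2],
              [ 0.1, 0.0,  0.3],
              [-0.2, 0.1,  0.0]])
traj = simulate(gamma, k, m, A, history=[np.ones(3), np.ones(3)], steps=200)
print("norm of the final state:", np.linalg.norm(traj[-1]))
```

Whether the printed norm decays or grows reflects the asymptotic stability or instability of the chosen parameter values.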

Definition 1 A neural network is an ordered 5-tuple $\mathcal{A} = (\gamma, k, m, n, A)$, where $\gamma \in \mathbb{R}$, $k, m \in \mathbb{Z}^+$ ($k \ge m$), and $A \in \mathbb{R}^{n \times n}$. We call (1) the defining equation of the network $\mathcal{A}$. We say that two neural networks are compatible if and only if they have the same $\gamma$, $k$, $m$. Given two compatible networks $\mathcal{A}_1 = (\gamma, k, m, r, A_1)$ and $\mathcal{A}_2 = (\gamma, k, m, n, A_2)$, we define their Cartesian product as the neural network $\mathcal{A}_1 \square \mathcal{A}_2 = (\gamma, k, m, rn, A_1 \oplus A_2)$, where the Kronecker sum operation $\oplus$ is defined by $A_1 \oplus A_2 = I_n \otimes A_1 + A_2 \otimes I_r$, with $\otimes$ denoting the Kronecker product and $I_n$, $I_r$ the identity matrices of orders $n$, $r$, respectively.

These definitions do not contradict those given in [10, 11]. We also notice that the square block matrix $A_1 \oplus A_2$ of order $rn$ has the form

$$A_1 \oplus A_2 = \begin{pmatrix}
A_1 & a_{12} I_r & \cdots & a_{1n} I_r \\
a_{21} I_r & A_1 & \cdots & a_{2n} I_r \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} I_r & a_{n2} I_r & \cdots & A_1
\end{pmatrix},$$

where $a_{jv}$ ($1 \le j, v \le n$) are the entries of $A_2$.
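A minimal numerical sketch of this construction (matrix sizes and entries are arbitrary, and the helper name `kronecker_sum` is ours): numpy's `kron` reproduces the block structure displayed above.

```python
import numpy as np

def kronecker_sum(A1, A2):
    """Kronecker sum of Definition 1: I_n (x) A1 + A2 (x) I_r,
    where A1 is r x r and A2 is n x n."""
    r, n = A1.shape[0], A2.shape[0]
    return np.kron(np.eye(n), A1) + np.kron(A2, np.eye(r))

A1 = np.array([[0.0, 1.0],
               [2.0, 0.0]])        # r = 2
A2 = np.array([[0.0, 3.0, 0.0],
               [0.0, 0.0, 4.0],
               [5.0, 0.0, 0.0]])   # n = 3
# The 6 x 6 result has copies of A1 in the diagonal blocks and
# a_jv * I_r in the off-diagonal blocks, as in the display above.
print(kronecker_sum(A1, A2))
```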

It is not hard to see that for any given neural network $\mathcal{A} = (\gamma, k, m, n, A)$, its matrix $A$ can be viewed as a weighted directed graph $(V; E)$ with vertex set $V = \{1, 2, \ldots, n\}$ and edge set $E$ defined as follows: if an entry $a_{jv}$ of $A$ is nonzero, then $(j, v) \in E$ with weight $a_{jv}$. Such a graph does not depend on $\gamma$, $k$, $m$.

For any given pair $\mathcal{A}$, $\mathcal{B}$ of compatible networks, their Cartesian products $\mathcal{A} \square \mathcal{B}$ and $\mathcal{B} \square \mathcal{A}$ are isomorphic in the sense that one defining equation can be obtained from the other by a straightforward permutation of the components of $x_s$.

Now, let us consider the following example of ring and linear configurations of networks, both playing a crucial role in our main results.

Example 1 Let $C_n(a,b)$ be an $n \times n$ circulant matrix for $n \ge 3$, and let $L_n(a,b)$ be an $n \times n$ tridiagonal matrix for $n \ge 2$:

$$C_n(a,b) = \begin{pmatrix}
0 & b & 0 & \cdots & 0 & a \\
a & 0 & b & \cdots & 0 & 0 \\
0 & a & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & b \\
b & 0 & 0 & \cdots & a & 0
\end{pmatrix}, \qquad
L_n(a,b) = \begin{pmatrix}
0 & b & 0 & \cdots & 0 & 0 \\
a & 0 & b & \cdots & 0 & 0 \\
0 & a & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & b \\
0 & 0 & 0 & \cdots & a & 0
\end{pmatrix}.$$
(2)

We define $\mathcal{C}_n(a,b) = (\gamma, k, m, n, C_n(a,b))$ and $\mathcal{L}_n(a,b) = (\gamma, k, m, n, L_n(a,b))$ as the ring and linear neural networks, respectively, where $b$ is the strength of the connection from a neuron to its counterclockwise neighbour and $a$ is the strength of the connection in the opposite direction (Figure 1). We point out that $\mathcal{C}_n(a,b)$ has a connection between its first and last neuron, whereas $\mathcal{L}_n(a,b)$ has no connection between them.
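The matrices of Example 1 are easy to generate programmatically. The following helpers are a small illustrative sketch (the function names `ring_matrix` and `line_matrix` are ours):

```python
import numpy as np

def ring_matrix(n, a, b):
    """C_n(a, b) for n >= 3: b on the superdiagonal, a on the
    subdiagonal, plus the wrap-around entries closing the ring."""
    M = np.zeros((n, n))
    for j in range(n):
        M[j, (j + 1) % n] = b   # counterclockwise neighbour
        M[j, (j - 1) % n] = a   # clockwise neighbour
    return M

def line_matrix(n, a, b):
    """L_n(a, b) for n >= 2: the same connections without the link
    between the first and the last neuron."""
    M = np.zeros((n, n))
    for j in range(n - 1):
        M[j, j + 1] = b
        M[j + 1, j] = a
    return M

print(ring_matrix(4, 1.0, 2.0))
print(line_matrix(4, 1.0, 2.0))
```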

Figure 1. Links in the networks $\mathcal{C}_3(a,b)$, $\mathcal{L}_2(c,d)$ and $\mathcal{C}_3(a,b) \square \mathcal{L}_2(c,d)$.

It follows that $\mathcal{C}_3(a,b) \square \mathcal{L}_2(c,d)$ has the defining equation (1) with

$$A = C_3(a,b) \oplus L_2(c,d) = \begin{pmatrix}
0 & b & a & d & 0 & 0 \\
a & 0 & b & 0 & d & 0 \\
b & a & 0 & 0 & 0 & d \\
c & 0 & 0 & 0 & b & a \\
0 & c & 0 & a & 0 & b \\
0 & 0 & c & b & a & 0
\end{pmatrix}.$$
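As a quick consistency check (with arbitrary numbers standing in for a, b, c, d), the matrix above can be reproduced with numpy:

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0                        # placeholder values
C3 = np.array([[0, b, a],
               [a, 0, b],
               [b, a, 0]], dtype=float)                # C_3(a, b)
L2 = np.array([[0, d],
               [c, 0]], dtype=float)                   # L_2(c, d)
A = np.kron(np.eye(2), C3) + np.kron(L2, np.eye(3))    # C_3(a,b) (+) L_2(c,d)
print(A)   # reproduces the 6 x 6 block matrix displayed above
```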

We state the following key property of the Kronecker sum.

Theorem 1 (see [12–14])

If $\lambda_j$ ($1 \le j \le r$) is a full list of eigenvalues of an $r \times r$ matrix $A_1$ and $\mu_v$ ($1 \le v \le n$) is the corresponding list for an $n \times n$ matrix $A_2$, then the eigenvalues of $A_1 \oplus A_2$ are given by $\lambda_j + \mu_v$ ($1 \le j \le r$, $1 \le v \le n$).
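Theorem 1 is easy to verify numerically. The sketch below (with randomly generated matrices, purely illustrative) compares the spectrum of the Kronecker sum with all pairwise sums of eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.standard_normal((3, 3))    # r = 3
A2 = rng.standard_normal((4, 4))    # n = 4
S = np.kron(np.eye(4), A1) + np.kron(A2, np.eye(3))    # A1 (+) A2

lam = np.linalg.eigvals(A1)
mu = np.linalg.eigvals(A2)
pairwise = np.array([l + m for l in lam for m in mu])

# The two spectra coincide up to ordering and numerical round-off.
print(np.allclose(np.sort_complex(pairwise),
                  np.sort_complex(np.linalg.eigvals(S))))
```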

3 The stability of a ring of neural networks

Our main purpose is to study the stability of the ring and linear configurations of a neural network. Hence, we state the stability definitions for the defining equation (1) directly. We say that this equation is stable (asymptotically stable) if and only if every solution $x_s$ has a bounded norm (the sequence $\|x_s\|$ tends to zero as $s \to \infty$). Stability requirements of a system are quite often adjusted [15, 16], and we state the following definitions along these lines. Given a positive real number $\rho$, we say that equation (1) is $\rho$-stable ($\rho$-asymptotically stable) if and only if the sequence $\|x_s\|/\rho^s$ is bounded (tends to zero as $s \to \infty$). Equations that are not $\rho$-stable (asymptotically $\rho$-stable) will be called $\rho$-unstable (asymptotically $\rho$-unstable). Notice that for $\rho = 1$, (asymptotic) $\rho$-stability is equivalent to the usual Lyapunov notion of (asymptotic) stability. Furthermore, the stability cones [17, 18] for the stability analysis of (1) will be used extensively in our analysis. Stability cones for the stability analysis of delayed matrix differential equations were introduced in [19].
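In practice, $\rho$-stability of (1) can also be checked by brute force through the characteristic roots, without the cone construction. Assuming $k \ge m$, substituting $x_s = \lambda^s v$ into (1) shows that the characteristic equation factors over the eigenvalues $\mu$ of $A$ into the scalar polynomials $\lambda^k - \gamma\lambda^{k-m} - \mu = 0$, and asymptotic $\rho$-stability amounts to all roots having modulus less than $\rho$. The sketch below is our own illustration of this standard root test (the function name and interface are ours), not the stability cone method of [17, 18]:

```python
import numpy as np

def is_asymptotically_rho_stable(gamma, k, m, A, rho=1.0):
    """Root test for x_s = gamma*x_{s-m} + A x_{s-k} (assumes k >= m).

    For each eigenvalue mu of A, all roots of
    lambda^k - gamma*lambda^(k-m) - mu must lie strictly inside
    the disk of radius rho.
    """
    for mu in np.linalg.eigvals(A):
        coeffs = np.zeros(k + 1, dtype=complex)   # highest degree first
        coeffs[0] = 1.0
        coeffs[m] += -gamma                       # coefficient of lambda^(k-m)
        coeffs[k] += -mu                          # constant term
        if np.max(np.abs(np.roots(coeffs))) >= rho:
            return False
    return True

# Example: ordinary asymptotic stability (rho = 1) of a small random network.
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((4, 4))
np.fill_diagonal(A, 0.0)
print(is_asymptotically_rho_stable(0.4, 2, 1, A, rho=1.0))
```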

It is a plausible step to regard the compatible network $\mathcal{L}_n(a,b)$ as the breaking of the network $\mathcal{C}_n(a,b)$. Now, let $\mathcal{A}$ be an arbitrary neural network; then the network $\mathcal{A} \square \mathcal{L}_n(a,b)$ is the compatible breaking of the ring $\mathcal{A} \square \mathcal{C}_n(a,b)$ (Figure 2) in the sense that it is the neural network resulting from the breaking of all links between the first and the last copy of $\mathcal{A}$ in the latter network. The stability of the neural networks involved in this process is addressed in the following theorem.

Figure 2. The ring of neural networks and a result of its breaking: the networks $\mathcal{A} \square \mathcal{C}_3(a,b)$ and $\mathcal{A} \square \mathcal{L}_3(a,b)$.

Theorem 2 Let $\mathcal{A} = (\gamma, k, m, r, A)$, $\mathcal{C}_n(a,b)$ and $\mathcal{L}_n(a,b)$ be compatible neural networks satisfying the condition $a^2 + b^2 \neq 0$. Then for every $\rho > 0$ there exists $n_0$ such that for all $n > n_0$, if $\mathcal{A} \square \mathcal{C}_n(a,b)$ is $\rho$-stable, then $\mathcal{A} \square \mathcal{L}_n(a,b)$ is asymptotically $\rho$-stable.

Proof Let $\mathcal{A} = (\gamma, k, m, r, A)$ be a neural network and let $\lambda_1, \ldots, \lambda_r$ be the list of eigenvalues of $A$. We assume the condition $a^2 + b^2 \neq 0$ and that $k$, $m$, $\gamma$ and $\rho > 0$ are fixed. It was shown in [12–14] that the values $(a+b)\cos\frac{2\pi v}{n} + i(a-b)\sin\frac{2\pi v}{n}$, $1 \le v \le n$, are the eigenvalues of $C_n(a,b)$; in a similar fashion, the values $2\sqrt{ab}\cos\frac{\pi v}{n+1}$, $1 \le v \le n$, are the eigenvalues of $L_n(a,b)$. By applying Theorem 1 and the related stability analysis results from [18] to the neural networks $\mathcal{A} \square \mathcal{C}_n(a,b)$ and $\mathcal{A} \square \mathcal{L}_n(a,b)$, we construct two sets of points as follows. Firstly, the set $M_{jv} = (u_1^{jv}, u_2^{jv}, u_3^{jv})$ ($1 \le j \le r$, $1 \le v \le n$) obeying

$$u_1^{jv} + i u_2^{jv} = \lambda_j + \Bigl((a+b)\cos\tfrac{2\pi v}{n} + i(a-b)\sin\tfrac{2\pi v}{n}\Bigr)\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{jv} = |\gamma|.$$
(3)

Secondly, the set of points $P_{jv} = (u_1^{jv}, u_2^{jv}, u_3^{jv})$ ($1 \le j \le r$, $1 \le v \le n$) obeying

$$u_1^{jv} + i u_2^{jv} = \lambda_j + 2\sqrt{ab}\cos\tfrac{\pi v}{n+1}\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{jv} = |\gamma|.$$
(4)

We proceed with such a construction by an exhaustive case analysis over a and b.

CASE 1: $a \ge 0$, $b \ge 0$. Let us construct for every $j$ ($1 \le j \le r$) points $M_{j1} = (u_1^{j1}, u_2^{j1}, u_3^{j1})$ and $M_{j2} = (u_1^{j2}, u_2^{j2}, u_3^{j2})$ so that

$$u_1^{j1} + i u_2^{j1} = \lambda_j + (a+b)\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{j1} = |\gamma|,$$
(5)
$$u_1^{j2} + i u_2^{j2} = \lambda_j - (a+b)\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{j2} = |\gamma|.$$
(6)

CASE 1.1: There exists $j$ ($1 \le j \le r$) such that $M_{j1}$ lies outside the $\rho$-stability cone for the given values of $k$, $m$. Then the point $M_{jn}$ (see (3)) lies outside the $\rho$-stability cone, therefore the network $\mathcal{A} \square \mathcal{C}_n(a,b)$ is $\rho$-unstable for every $n \ge 3$. So we can put $n_0 = 3$ in the conclusion of the theorem.

CASE 1.2: There exists $j$ ($1 \le j \le r$) such that $M_{j2}$ lies outside the $\rho$-stability cone. Let us use the fact that $[n/2]/n$ approaches $1/2$ as $n \to \infty$, $[z]$ being the integral part of $z$. We conclude from (3) that there exists an $n_0$ such that for every $n > n_0$, the point $M_{j[n/2]}$ lies outside the $\rho$-stability cone. Therefore the network $\mathcal{A} \square \mathcal{C}_n(a,b)$ is $\rho$-unstable for every $n > n_0$.

CASE 1.3: For all $j$ ($1 \le j \le r$), both $M_{j1}$ and $M_{j2}$ lie inside the $\rho$-stability cone or on its boundary. Since $2\sqrt{ab} < a+b$ (recall that $a^2 + b^2 \neq 0$), all the points $P_{jv}$ (see (4)) lie inside the line segment with the endpoints $M_{j1}$, $M_{j2}$ (see (5), (6)). But the cross-section of the $\rho$-stability cone at the level $u_3 = |\gamma|$ is convex, hence all the points $P_{jv}$ ($1 \le j \le r$, $1 \le v \le n$) lie inside the $\rho$-stability cone. Therefore the neural network $\mathcal{A} \square \mathcal{L}_n(a,b)$ is asymptotically $\rho$-stable. This enables one to put $n_0 = 3$ in the conclusion of the theorem.

CASE 2: $a \le 0$, $b \le 0$. This case is similar to CASE 1.

CASE 3: $a > 0$, $b < 0$. For every $j$ ($1 \le j \le r$), let us construct points $M_{j3} = (u_1^{j3}, u_2^{j3}, u_3^{j3})$ and $M_{j4} = (u_1^{j4}, u_2^{j4}, u_3^{j4})$ such that

$$u_1^{j3} + i u_2^{j3} = \lambda_j + i(a-b)\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{j3} = |\gamma|,$$
(7)
$$u_1^{j4} + i u_2^{j4} = \lambda_j - i(a-b)\exp\Bigl(i\tfrac{k}{m}\arg\gamma\Bigr), \qquad u_3^{j4} = |\gamma|.$$
(8)

CASE 3.1: There exists $j$ ($1 \le j \le r$) such that $M_{j3}$ lies outside the $\rho$-stability cone. As $n \to \infty$, $[n/4]/n \to 1/4$. Hence by (3) there exists $n_0$ such that for every $n > n_0$, the point $M_{j[n/4]}$ lies outside the $\rho$-stability cone. Therefore the network $\mathcal{A} \square \mathcal{C}_n(a,b)$ is $\rho$-unstable for every $n > n_0$.

CASE 3.2: There exists $j$ ($1 \le j \le r$) such that $M_{j4}$ lies outside the $\rho$-stability cone. This case is similar to CASE 3.1, the only difference being in using $[3n/4]/n \to 3/4$ instead of $[n/4]/n \to 1/4$.

CASE 3.3: For all $j$ ($1 \le j \le r$), both $M_{j3}$ and $M_{j4}$ lie inside the $\rho$-stability cone or on its boundary. This case is similar to CASE 1.3, the only difference being in using $2\sqrt{|ab|} < a-b$ instead of $2\sqrt{ab} < a+b$.

CASE 4: a<0, b>0. This case is similar to CASE 3. Hence, our proof is completed. □

Considering the semigroup structure of all neural networks with $\gamma$, $k$, $m$ fixed, it is not hard to see that the neural network $\mathcal{E} = (\gamma, k, m, 1, 0)$ is its identity element, which entails that this structure is in fact a commutative monoid. By replacing $\mathcal{A}$ by $\mathcal{E}$ in Theorem 2, we obtain an interesting consequence.

Theorem 3 Let $\mathcal{C}_n(a,b)$ and $\mathcal{L}_n(a,b)$ be compatible neural networks satisfying $a^2 + b^2 \neq 0$. Then for every $\rho > 0$ there exists $n_0$ such that for all $n > n_0$, if $\mathcal{C}_n(a,b)$ is $\rho$-stable, then $\mathcal{L}_n(a,b)$ is asymptotically $\rho$-stable.

A result similar to this corollary for a continuous-time neural network model was shown in [5]. We remark that our main Theorem 2 states that, in the case $a^2 + b^2 \neq 0$, the breaking of a ring of neural networks extends the asymptotic stability domain in the parameter space provided that the ring is sufficiently large. The latter requirement is crucial: the statement is no longer true when the number of networks in the ring is not large enough. We now give a definition and an example illustrating this point.

Definition 2 Let $\mathcal{A}$, $\mathcal{C}_n(a,b)$ and $\mathcal{L}_n(a,b)$ be pairwise compatible neural networks. Consider a point $(a,b)$ in the $ab$-plane; we call it paradoxical for both transformations $\mathcal{A} \square \mathcal{C}_n(a,b) \to \mathcal{A} \square \mathcal{L}_n(a,b)$ and $\mathcal{C}_n(a,b) \square \mathcal{A} \to \mathcal{L}_n(a,b) \square \mathcal{A}$ if the network $\mathcal{A} \square \mathcal{C}_n(a,b)$ is asymptotically stable and $\mathcal{A} \square \mathcal{L}_n(a,b)$ is unstable.

Example 2 Consider the toroidal neural network $\mathcal{C}_3(a,b) \square \mathcal{C}_5(a,b)$. Significant changes in the stability domains can be observed after $\mathcal{C}_3(a,b)$ and $\mathcal{C}_5(a,b)$ are broken according to the following diagram:

$$\begin{array}{ccc}
\mathcal{C}_3(a,b)\,\square\,\mathcal{C}_5(a,b) & \longrightarrow & \mathcal{L}_3(a,b)\,\square\,\mathcal{C}_5(a,b) \\
\big\downarrow & & \big\downarrow \\
\mathcal{C}_3(a,b)\,\square\,\mathcal{L}_5(a,b) & \longrightarrow & \mathcal{L}_3(a,b)\,\square\,\mathcal{L}_5(a,b)
\end{array}$$
(9)

Now, by using the stability cone methods from [9, 17, 18], the stability domains can be found for the networks involved in the diagram above. It is not hard to see that in all four operations denoted by arrows in (9) the stability domains increase significantly. Nevertheless, Figure 3 shows in detail how these operations create paradoxical points, for which the system loses stability after the ring has been broken.
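The stability domains shown in Figure 3 can be approximated, without the cone construction, by a brute-force scan of the ab-plane in which each Kronecker sum is tested through the roots of its characteristic polynomials. The sketch below is a rough illustration under our own choices (grid range and resolution are arbitrary); the marker `!` flags points where the torus is asymptotically stable while the fully broken grid is not, a two-step analogue of the paradoxical points, and the output is qualitative rather than an exact reproduction of the boundaries in Figure 3:

```python
import numpy as np

def ring_matrix(n, a, b):
    M = np.zeros((n, n))
    for j in range(n):
        M[j, (j + 1) % n] = b
        M[j, (j - 1) % n] = a
    return M

def line_matrix(n, a, b):
    M = np.zeros((n, n))
    for j in range(n - 1):
        M[j, j + 1] = b
        M[j + 1, j] = a
    return M

def ksum(A1, A2):
    r, n = A1.shape[0], A2.shape[0]
    return np.kron(np.eye(n), A1) + np.kron(A2, np.eye(r))

def stable(gamma, k, m, A, rho=1.0):
    # Root test: lambda^k - gamma*lambda^(k-m) - mu for each eigenvalue mu of A.
    for mu in np.linalg.eigvals(A):
        coeffs = np.zeros(k + 1, dtype=complex)
        coeffs[0] = 1.0
        coeffs[m] += -gamma
        coeffs[k] += -mu
        if np.max(np.abs(np.roots(coeffs))) >= rho:
            return False
    return True

gamma, k, m, rho = 0.4, 2, 1, 1.0    # the parameter values used in Figure 3
for a in np.linspace(-1.0, 1.0, 21):
    row = ""
    for b in np.linspace(-1.0, 1.0, 21):
        torus = stable(gamma, k, m, ksum(ring_matrix(3, a, b), ring_matrix(5, a, b)), rho)
        grid_ = stable(gamma, k, m, ksum(line_matrix(3, a, b), line_matrix(5, a, b)), rho)
        row += "!" if torus and not grid_ else ("#" if torus else ".")
    print(row)
```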

Figure 3. The boundaries of the stability domains in the $ab$-plane of the networks $\mathcal{C}_3(a,b) \square \mathcal{C}_5(a,b)$ (black), $\mathcal{L}_3(a,b) \square \mathcal{C}_5(a,b)$ (blue), $\mathcal{C}_3(a,b) \square \mathcal{L}_5(a,b)$ (green), $\mathcal{L}_3(a,b) \square \mathcal{L}_5(a,b)$ (violet). The parameters are $(\gamma, k, m, \rho) = (0.4, 2, 1, 1)$. The origin lies inside all the stability domains. The regions of paradoxical points for transformations 1, 2, 3, 4 (see (9)) are painted red.

4 Conclusion

In connection with the above investigations, some open problems arise. For example, in [5] a detailed analysis of the appearance and disappearance of paradoxical points in a continuous-time model of ring neural networks was performed. A natural direction for future research is the analysis of these phenomena in our discrete-time model. We also intend to examine the corresponding questions for neural networks with distributed delays.

References

  1. Vishwanathan A, Bi GQ, Zeringue HC: Ring-shaped neuronal networks: a platform to study persistent activity. Lab Chip 2011, 11(6):1081–1088. 10.1039/c0lc00450b

  2. Kaslik E: Dynamics of a discrete-time bidirectional ring of neurons with delay. In Proceedings of Int. Joint Conf. on Neural Networks. IEEE Comput. Soc., Los Alamitos; 2009:1539–1546. Atlanta, Georgia, USA, 14–19 June 2009

  3. Kaslik E, Balint S: Complex and chaotic dynamics in a discrete-time delayed Hopfield neural network with ring architecture. Neural Netw. 2009, 22(10):1411–1418. 10.1016/j.neunet.2009.03.009

  4. Yuan Y, Campbell SA: Stability and synchronization of a ring of identical cells with delayed coupling. J. Dyn. Differ. Equ. 2004, 16: 709–744. 10.1007/s10884-004-6114-y

  5. Khokhlova TN, Kipnis MM: The breaking of a delayed ring neural network contributes to stability: the rule and exceptions. Neural Netw. 2013, 48: 148–152.

  6. Kaslik E, Balint S: Bifurcation analysis for a two-dimensional delayed discrete-time Hopfield neural network. Chaos Solitons Fractals 2007, 34: 1245–1253. 10.1016/j.chaos.2006.03.107

  7. Idels L, Kipnis M: Stability criteria for a nonlinear nonautonomous system with delays. Appl. Math. Model. 2009, 33(5):2293–2297. 10.1016/j.apm.2008.06.005

  8. Arbib M (Ed): The Handbook of Brain Theory and Neural Networks. 2nd edition. MIT Press, Cambridge; 2002.

  9. Ivanov SA, Kipnis MM: Stability analysis of discrete-time neural networks with delayed interactions: torus, ring, grid, line. Int. J. Pure Appl. Math. 2012, 78(5):691–709.

  10. Imrich W, Klavžar S, Rall DF: Graphs and Their Cartesian Products. AK Peters, Wellesley; 2008.

  11. Brualdi RA, Cvetković DM: A Combinatorial Approach to Matrix Theory and Its Applications. CRC Press, Boca Raton; 2008.

  12. Horn RA, Johnson CR: Topics in Matrix Analysis. Cambridge University Press, Cambridge; 1994.

  13. Brouwer AE, Haemers WH: Spectra of Graphs. Springer, Berlin; 2011.

  14. Cvetković DM, Doob M, Sachs H: Spectra of Graphs - Theory and Applications. 3rd edition. Wiley, New York; 1998.

  15. Chestnov VN: Synthesis of $H_\infty$-controllers for multidimensional systems with given accuracy and degree of stability. Autom. Remote Control 2011, 72(10):2161–2175. 10.1134/S0005117911100134

  16. Gryazina EN, Polyak BT: Stability regions in the parameter space: D-decomposition revisited. Automatica 2006, 42(1):13–26. 10.1016/j.automatica.2005.08.010

  17. Kipnis MM, Malygina VV: The stability cone for a matrix delay difference equation. Int. J. Math. Math. Sci. 2011., 2011: Article ID 860326

  18. Ivanov SA, Kipnis MM, Malygina VV: The stability cone for a difference matrix equation with two delays. ISRN Appl. Math. 2011., 2011: Article ID 910936

  19. Khokhlova TN, Kipnis MM, Malygina VV: Stability cone for linear delay differential matrix equation. Appl. Math. Lett. 2011, 24: 742–745. 10.1016/j.aml.2010.12.020

Acknowledgements

This work was supported by grants from the Ministry of Education of Russia, Chelyabinsk State Pedagogical University, Russia, and by the grant from Fondecyt No. 1130112, Chile.

Author information

Corresponding author

Correspondence to Mikhail Kipnis.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the manuscript and typed, read, and approved the final manuscript.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ivanov, S., Kipnis, M. & Medina, R. On the stability of the Cartesian product of a neural ring and an arbitrary neural network. Adv Differ Equ 2014, 176 (2014). https://doi.org/10.1186/1687-1847-2014-176
