 Research
 Open Access
Analysis of the equilibrium points of background neural networks with uniform firing rate
Advances in Difference Equations volume 2017, Article number: 314 (2017)
Abstract
In this paper, we give a complete analysis of the equilibrium points of background neural networks with uniform firing rates. By using continuity, monotonicity of some functions and Rolle’s theorem, the number of equilibrium points and their locations are obtained. Moreover, some novel sufficient conditions are given to guarantee the stability of the equilibrium points for the network model by utilizing Taylor’s theorem. A simulation example is conducted to illustrate the theories developed in this paper.
Introduction
Dynamical analysis is one of the most important issues of recurrent neural networks, and many results on this topic have been reported in the literature; see Attneave [1]; Cohen and Grossberg [2]; Forti [3]; Hahnloser [4]; Zeng et al. [5]; Zeng and Wang [6]; Cao and Wang [7]; Chen et al. [8]; Chen [9]; Tang et al. [10]; Zhang et al. [11]; Zuo et al. [12]; Zhang et al. [13]; Li and Cao [14]; Samidurai and Manivannan [15] and the references therein. It is also an essential step towards successful applications such as signal processing and optimization. Analysis of the equilibrium points of the concerned recurrent neural networks is a very important part of dynamical analysis (Cheng et al. [16]; Qu et al. [17]). In particular, existence and stability problems of the equilibrium points for various types of recurrent neural networks have attracted significant attention (Manivannan et al. [18]; Nie and Cao [19]; Manivannan et al. [20]). Generally, two approaches are used to prove the existence of equilibria for a neural network model. One is to show that a mapping derived from the neural network is a homeomorphism (Forti and Tesi [21]; Chen [9]; Lu [22]; Zhao and Zhu [23]). The other uses Brouwer's fixed-point theorem (Forti [3]; Forti and Tesi [21]; Guo and Huang [24]; Wang [25]; Miller and Michel [26]).
In order to interpret the phenomena and exhibit how the dynamical states of recurrent neural networks are affected by a given external background input, the background neural network model was proposed in Salinas [27]. By utilizing theoretical models and computer simulations, it has been shown that small changes in this model may shift a network from a relatively quiet state to some other state with highly complex dynamics.
To the best of our knowledge, few references have studied the dynamical properties of the background neural network model; see Zhang and Yi [28]; Wan et al. [29]; Xu and Yi [30]. However, since the network equations (2.1) are nonlinear and coupled, neither the well-known homeomorphism method nor Brouwer's fixed-point theorem can easily be used to investigate the equilibrium points of (2.1). The only known theoretical results in the literature on local stability conditions of the equilibria for background neural networks are obtained by computing eigenvalues at the equilibria (Salinas [27]). Unfortunately, so far, the equilibrium analysis problem for background neural networks with uniform firing rate (meaning that the firing rate is the same for all neurons) remains far from complete. The major difficulty stems from the fact that the network model consists of highly nonlinear coupled equations. The lack of basic information on the equilibria creates difficulties in discussing the dynamical properties and bifurcations of background neural networks.
In this paper, we give a complete analysis of the equilibria of background neural networks with uniform firing rate. For the first time, we transform the equilibrium problem of the background neural network with uniform firing rate into a root-finding problem for a cubic equation. Instead of following the common idea of computing the roots of the cubic equation directly, we analyze the equilibria through a geometrical formulation of the parameter conditions of the background neural networks. Correspondingly, the number and coordinates of the equilibria are determined by using continuity and monotonicity, together with Rolle's theorem. Furthermore, novel sufficient stability conditions for the equilibria are given. The studies based on the background neural network with uniform firing rate provide an insightful understanding of the computational performance of system (2.1).
The rest of this paper is organized as follows. In Section 2, preliminaries are given. In Section 3, we establish conditions for the exact number of equilibria for the background networks. Locations of these equilibria are obtained. Moreover, we formulate novel sufficient conditions for stability of such equilibria. In Section 4, a simulation example is presented to illustrate the theoretical results. In Section 5, conclusions are drawn.
Preliminaries
The background neural network model is described by the following system of nonlinear differential equations:
for \(t\geq 0\), where \(x_{i}\) denotes the activity of neuron i, \(h_{i}\) represents its external input, \(w_{ij}\) represents the excitatory synaptic connection from neuron j to neuron i, v is the inhibitory synaptic connection by which a neuron decreases another neuron’s gain, τ is a time constant, and s is a saturation constant. All these quantities are always positive or zero. If the firing rate is the same for all neurons, then the network equations (2.1) are reduced to a nonlinear equation as follows:
for all \(t \geq 0\). x denotes the uniform firing rate, \(w_{tot}\) denotes the excitatory synaptic connection, and N is the total number of neurons.
For simplicity, we consider system (2.2) in the following equivalent form:
Herein, let \(\tau =1\), \(a=\frac{w_{\mathrm{tot}}}{\sqrt{s}} > 0\), \(b=\frac{h}{ \sqrt{s}} > 0\), \(c=\frac{vN}{s}> 0\).
An equilibrium of network (2.3) is described as follows:
Let
i.e.,
We suppose \(x(0)>0\) and
This means that the trajectory of system (2.2) stays positive.
From reference Zhang and Yi [28], we have
Denote
Because \(1+cx^{2}(t)>0\), the equilibria of system (2.3) are determined by zeros of \(P(x)\) in the interval \(I:=(0,\Pi )\).
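The reduction above can be made concrete numerically. The following is a minimal sketch (not from the paper), assuming the scaled system (2.3) takes the form \(\dot{x}=-x+\frac{(ax+b)^{2}}{1+cx^{2}}\), so that \(P(x)=(ax+b)^{2}-x(1+cx^{2})\) is the cubic whose zeros in \(I=(0,\Pi)\) are the equilibria; the function names are illustrative:

```python
import numpy as np

def P(x, a, b, c):
    # Assumed cubic whose positive zeros in (0, Pi) are the equilibria:
    # P(x) = (a*x + b)^2 - x*(1 + c*x^2)
    #      = -c*x^3 + a^2*x^2 + (2ab - 1)*x + b^2
    return (a * x + b) ** 2 - x * (1 + c * x ** 2)

def F(x, a, b, c):
    # Right-hand side of (2.3); it shares its sign (and zeros) with P
    # because 1 + c*x^2 > 0.
    return P(x, a, b, c) / (1 + c * x ** 2)

def equilibria(a, b, c, Pi):
    # Real roots of P lying in the open interval (0, Pi).
    roots = np.roots([-c, a ** 2, 2 * a * b - 1, b ** 2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return sorted(r for r in real if 0 < r < Pi)
```

With the Case 1 parameters of Section 4 this sketch returns three equilibria, matching Theorem 1.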
Equilibria and qualitative properties
In this section, we present novel sufficient conditions which guarantee the existence and the number of equilibria for network (2.3). Our approach is based on a geometrical observation. We also establish stability criteria of these equilibria through the Taylor expansion at some equilibrium.
Theorem 1
System (2.3) has at most three equilibria in \((0,\Pi )\). The number of equilibria and their locations are described in Table 1.
Proof
System (2.3) has at most three equilibria in \((0,\Pi )\) because \(\operatorname{deg}(P)=3\).
Next, we will discuss the zeros of
According to the definition of \(P(x)\), we have
In order to state our results easily, we partition the parameter conditions ensuring the number and locations of equilibria into the following subregions:
Setting the derivative \(P'(x)=-3cx^{2}+2a^{2}x+2ab-1\) equal to zero yields the two candidate zeros:
We have \(P(\zeta_{-})=\frac{2a^{6}-2a^{4}\sqrt{a^{4}+6abc-3c}-12abc\sqrt{a^{4}+6abc-3c}+6c\sqrt{a^{4}+6abc-3c}+18a^{3}bc-9a^{2}c+27b^{2}c^{2}}{27c^{2}}\), \(P(\zeta_{+})=\frac{2a^{6}+2a^{4}\sqrt{a^{4}+6abc-3c}+12abc\sqrt{a^{4}+6abc-3c}-6c\sqrt{a^{4}+6abc-3c}+18a^{3}bc-9a^{2}c+27b^{2}c^{2}}{27c^{2}}\). Consider the discriminant \(\Delta_{P'(x)}\) of the degree-2 polynomial \(P'\).
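These closed-form values can be cross-checked numerically. A sketch under the same assumption that \(P(x)=-cx^{3}+a^{2}x^{2}+(2ab-1)x+b^{2}\); the helper names are illustrative:

```python
import math

def critical_points(a, b, c):
    # Zeros of P'(x) = -3c x^2 + 2a^2 x + (2ab - 1); valid when the
    # discriminant Delta = a^4 + 3c(2ab - 1) is nonnegative.
    disc = a ** 4 + 3 * c * (2 * a * b - 1)
    root = math.sqrt(disc)
    return (a ** 2 - root) / (3 * c), (a ** 2 + root) / (3 * c)

def P(x, a, b, c):
    # Assumed cubic whose zeros in (0, Pi) are the equilibria of (2.3).
    return -c * x ** 3 + a ** 2 * x ** 2 + (2 * a * b - 1) * x + b ** 2
```

With the Case 1 parameters of Section 4 this reproduces \(\zeta_{-}\approx 5.74\), \(\zeta_{+}\approx 20.90\) and the sign pattern \(P(\zeta_{-})<0<P(\zeta_{+})\).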
We separately discuss the three cases \(\Delta_{P'(x)}>0\), \(\Delta_{P'(x)}=0\) and \(\Delta_{P'(x)}<0\).
Case 1: \(\Delta_{P'(x)}=a^{4}+3c(2ab-1)>0\), i.e., \(w_{\mathrm{tot}}^{4}+6vNw_{\mathrm{tot}}h-3vNs>0\). However, \(2ab-1\) may be positive, negative or zero. Thus, according to the sign of \(2ab-1\), we need to discuss the following three subcases.
Subcase 1.1: \(ab<\frac{1}{2}\), i.e., \(\frac{w_{\mathrm{tot}}h}{s}<\frac{1}{2}\).
Equation \(P'(x)=0\) has two distinct roots \(\zeta_{-},\zeta_{+}\ (0<\zeta_{-}<\zeta_{+})\) in this case.

(a)
\(\zeta_{-}\), \(\zeta_{+}\) are the minimum point and the maximum point of \(P(x)\), respectively. By simple computation, we obtain \(\zeta_{+}<\Pi \). According to Rolle's theorem and the continuity of \(P(x)\), \(P(x)\) has a unique positive real zero \(x_{1}^{*}\in (\zeta_{+},\Pi )\) when \(P(\zeta_{-})>0\). Obviously, system (2.3) has a unique equilibrium \(E_{1}^{(1)}:x_{1}^{*}\in (\zeta_{+},\Pi )\), as shown in Figure 1(a).

(b)
\(P(x)\) has two positive real zeros \(\zeta_{-}\) and \(x_{2}^{*}\in (\zeta_{+},\Pi )\) when \(P(\zeta_{-})=0\). Then it follows that system (2.3) has two equilibrium points \(E_{2}^{(1)}:\zeta_{-}\) and \(E_{3}^{(1)}:x_{2}^{*}\in (\zeta_{+},\Pi )\), as shown in Figure 1(b).

(c.1)
\(P(x)\) has three distinct positive real zeros \(x_{3}^{*}\in (0,\zeta_{-})\), \(x_{4}^{*}\in (\zeta_{-},\zeta_{+})\), \(x_{5}^{*}\in (\zeta_{+},\Pi )\) when \(P(\zeta_{-})<0\) and \(P(\zeta_{+})>0\). It is also clear that system (2.3) has three equilibrium points \(E_{4}^{(1)}:x_{3}^{*}\in (0,\zeta_{-})\), \(E_{5}^{(1)}:x_{4}^{*}\in (\zeta_{-},\zeta_{+})\) and \(E_{6}^{(1)}:x_{5}^{*}\in (\zeta_{+},\Pi )\), as shown in Figure 1(c). \(P(\zeta_{-})<0\), \(P(\zeta_{+})>0\) on the solid curve.

(c.2)
\(P(x)\) has two positive real zeros \(x_{6}^{*}\in (0,\zeta_{-})\) and \(\zeta_{+}\) when \(P(\zeta_{-})<0\) and \(P(\zeta_{+})=0\). Then system (2.3) has two equilibrium points \(E_{7}^{(1)}:x_{6}^{*}\in (0,\zeta_{-})\) and \(E_{8}^{(1)}:\zeta_{+}\), as shown in Figure 1(c). \(P(\zeta_{-})<0\), \(P(\zeta_{+})=0\) on the dotted curve.

(c.3)
\(P(x)\) has one positive real zero \(x_{7}^{*}\in (0,\zeta_{-})\) when \(P(\zeta_{-})<0\) and \(P(\zeta_{+})<0\). Then system (2.3) has a unique equilibrium \(E_{9}^{(1)}:x_{7}^{*}\in (0,\zeta_{-})\), as shown in Figure 1(c). \(P(\zeta_{-})<0\) and \(P(\zeta_{+})<0\) on the star-shaped curve.
Subcase 1.2: \(ab>\frac{1}{2}\), i.e., \(\frac{w_{\mathrm{tot}}h}{s}>\frac{1}{2}\).
\(P'(x)=0\) has two distinct zeros \(\zeta_{-},\zeta_{+}\ (\zeta_{-}<0<\zeta_{+})\). \(P(x)\) has a unique positive real zero \(x_{8}^{*}\in (\zeta_{+},\Pi )\). Then we see that system (2.3) has a unique equilibrium \(E_{10}^{(1)}:x_{8}^{*}\in (\zeta_{+},\Pi )\), as shown in Figure 2(d).
Subcase 1.3: \(ab=\frac{1}{2}\), i.e., \(\frac{w_{\mathrm{tot}}h}{s}=\frac{1}{2}\).
\(P'(x)=0\) has the zeros \(\zeta_{-}=0\) and \(\zeta_{+}=\frac{2a^{2}}{3c}>0\), of which only \(\zeta_{+}\) is positive. \(P(x)\) has a unique positive real zero \(x_{9}^{*}\in (\zeta_{+},\Pi )\), implying that system (2.3) has a unique equilibrium \(E_{11}^{(1)}:x_{9}^{*}\in (\zeta_{+},\Pi )\), as shown in Figure 2(e).
Case 2: \(\Delta_{P'(x)}=a^{4}+3c(2ab-1)=0\), i.e., \(w_{\mathrm{tot}}^{4}+6vNw_{\mathrm{tot}}h-3vNs=0\).
\(P'(x)\) has a repeated zero \(\frac{a^{2}}{3c}\) of multiplicity 2. However, \(\frac{a^{2}}{3c}\) is not an extreme point of \(P(x)\) but an inflection point. Combining this with (3.1), \(P(x)\) has a positive zero \(x_{10}^{*}\in (0,\Pi )\), which implies that system (2.3) has a unique equilibrium \(E_{1}^{(2)}:x_{10}^{*}\in (0,\Pi )\).
Case 3: \(\Delta_{P'(x)}=a^{4}+3c(2ab-1)<0\), i.e., \(w_{\mathrm{tot}}^{4}+6vNw_{\mathrm{tot}}h-3vNs<0\).
\(P'(x)\) has no real zero, so \(P'(x)<0\) for all \(x\) and \(P(x)\) has no extreme point. By (3.1), \(P(x)\) is a strictly monotonically decreasing function. Thus, \(P(x)\) has a positive zero \(x_{11}^{*}\in (0,\Pi )\). Accordingly, system (2.3) has a unique equilibrium \(E_{1}^{(3)}:x_{11}^{*}\in (0,\Pi )\).
The proof is completed. □
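The three cases of the proof can be dispatched programmatically from the sign of the discriminant. A sketch (the function name and tolerance are illustrative, not from the paper):

```python
def classify_case(a, b, c, tol=1e-12):
    # Sign of Delta_{P'} = a^4 + 3c(2ab - 1) selects the case of Theorem 1:
    #   > 0: Case 1 (P has two critical points zeta_-, zeta_+),
    #   = 0: Case 2 (one repeated critical point, an inflection of P),
    #   < 0: Case 3 (P' < 0 everywhere, so P is strictly decreasing).
    disc = a ** 4 + 3 * c * (2 * a * b - 1)
    if disc > tol:
        return "Case 1"
    if disc < -tol:
        return "Case 3"
    return "Case 2"
```

The tolerance guards the exact-equality test \(\Delta_{P'(x)}=0\) against floating-point rounding; the three parameter sets of the simulation example in Section 4 land in Cases 1, 2 and 3, respectively.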
Remark 1
It should be noted that, as shown in the proof of Theorem 1, we separately discuss the existence and the number of the equilibria in three cases based on the sign of the discriminant \(\Delta_{P'(x)}\). We have employed continuity and monotonicity of the function \(P(x)\), combined with Rolle's theorem, to estimate the coordinates and the number of the equilibria. These techniques provide new analysis approaches, rather than simply relying on the well-known homeomorphism method or Brouwer's fixed-point theorem. Moreover, these results may provide important theoretical foundations for further analysis of the limit cycles and bifurcations of network (2.3).
Next, we will discuss stability of the equilibrium point of system (2.3) by utilizing the Taylor expansion.
Theorem 2
The stability of the equilibrium points of system (2.3) is described in Table 2.
Proof
Let \(x^{*}\) be a general equilibrium of system (2.3).
The Taylor expansion of \(F(x)\) at equilibrium \(x^{*}\) is described by
Since \(F(x)=\frac{P(x)}{1+cx^{2}}\), we have
Denote \(Q(x)=P'(x)(1+cx^{2})-2cxP(x)\); thus
Then \(F'(x^{\ast })=\frac{Q(x^{\ast })}{(1+c(x^{\ast })^{2})^{2}}=\frac{P'(x^{\ast })(1+c(x^{\ast })^{2})}{(1+c(x^{\ast })^{2})^{2}}\) (since \(P(x^{\ast })=0\)). The sign of \(F'(x^{\ast })\) is the same as that of \(P'(x^{\ast })\), since \(1+c(x^{\ast })^{2}>0\).
Because \(F(x^{*})=0\), in a neighborhood of \(x^{*}\) we have \(\dot{x}(t)\approx F'(x^{*})(x(t)-x^{*})\). Therefore, the stability of the equilibrium is determined by the sign of \(P'(x^{*})\). Concretely, \(P(x)\) is monotonically decreasing in \((\zeta_{+},\Pi )\) and \(P'(x_{1}^{*})<0\). Thus, the equilibrium point \(E_{1}^{(1)}\) is asymptotically stable. Similarly, the equilibria \(E_{3}^{(1)}, E_{4}^{(1)}, E_{6}^{(1)}, E_{7}^{(1)}, E_{9}^{(1)}, E_{10}^{(1)}, E_{11}^{(1)}, E_{1}^{(2)}\) and \(E_{1}^{(3)}\) are asymptotically stable, because \(P'(x_{i}^{*})<0\) for \(i=2,3,5,6,7,8,9,10,11\).
\(P(x)\) is monotonically increasing in \((\zeta_{-},\zeta_{+})\) and \(P'(x_{4}^{*})>0\), which means that the equilibrium point \(E_{5}^{(1)}\) is unstable.
If \(P'(x^{\ast })=0\), then the Taylor expansion of \(F(x)\) at the equilibrium \(x^{\ast }\) is
where \(Q'(x)=P''(x)(1+cx^{2})+P'(x)\cdot 2cx-P'(x)\cdot 2cx-P(x)\cdot 2c\). Thus, we obtain
Since \(P(x^{\ast })=0\) and \(P'(x^{\ast })=0\), we have \(Q'(x^{\ast })=P''(x^{\ast })(1+c(x^{\ast })^{2})\) and \(Q(x^{\ast })=P'(x^{\ast })(1+c(x^{\ast })^{2})-2cx^{\ast }P(x^{\ast })=0\).
Therefore, we get
This means that the sign of \(F''(x^{\ast })\) is the same as that of \(P''(x^{\ast })\) because \(1+c(x^{\ast })^{2}>0\). We have \(P'(\zeta_{-})=0\) and \(P'(\zeta_{+})=0\) at the equilibria \(E_{2}^{(1)}:\zeta_{-}\) and \(E_{8}^{(1)}:\zeta_{+}\), respectively. Thus, \(F'(\zeta_{-})=0\) and \(F'(\zeta_{+})=0\).
\(P''(\zeta_{-})>0\) because \(\zeta_{-}\) is a local minimum of \(P(x)\), whose graph is concave up in a neighborhood of \(E_{2}^{(1)}\). Since \(P'(\zeta_{-})=0\) and \(P''(\zeta_{-})>0\), \(E_{2}^{(1)}\) is unstable. Similarly, \(P''(\zeta_{+})<0\) because \(\zeta_{+}\) is a local maximum of \(P(x)\), whose graph is concave down in a neighborhood of \(E_{8}^{(1)}\). Since \(P'(\zeta_{+})=0\) and \(P''(\zeta_{+})<0\), \(E_{8}^{(1)}\) is asymptotically stable.
The proof is completed. □
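The first-derivative test used in the proof is easy to automate: find the zeros of \(P\) in \((0,\Pi)\) and read off the sign of \(P'\) there. A sketch under the same assumed cubic form of \(P\) (degenerate zeros with \(P'(x^{*})=0\) would need the second-order test of Theorem 2 and are not handled here; names are illustrative):

```python
import numpy as np

def stable_equilibria(a, b, c, Pi):
    # Equilibria of (2.3) are the real zeros of the assumed cubic
    # P(x) = -c x^3 + a^2 x^2 + (2ab - 1) x + b^2 in (0, Pi);
    # P'(x*) < 0 gives asymptotic stability, P'(x*) > 0 instability.
    roots = np.roots([-c, a ** 2, 2 * a * b - 1, b ** 2])
    eq = sorted(r.real for r in roots
                if abs(r.imag) < 1e-9 and 0 < r.real < Pi)
    dP = lambda x: -3 * c * x ** 2 + 2 * a ** 2 * x + (2 * a * b - 1)
    return [(x, "stable" if dP(x) < 0 else "unstable") for x in eq]
```

On the Case 1 parameters of Section 4 this reproduces the stable-unstable-stable pattern of \(E_{4}^{(1)}\), \(E_{5}^{(1)}\), \(E_{6}^{(1)}\).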
Remark 2
It should be mentioned that we have derived the conditions for stability of the equilibria by using the Taylor expansion. In Figures 1 and 2, we see that an equilibrium is stable or unstable according to the sign of \(P'(x^{*})\) at the equilibrium \(x^{*}\). This method is easier and more convenient than constructing suitable Lyapunov-Krasovskii functionals or energy functions.
Simulations
In this section, one example will be provided to illustrate and verify the theoretical results obtained in the above sections.
Example
Case 1: Consider a class of background neural networks (2.2) with \(w_{\mathrm{tot}}=1.8965\), \(h=4.6457\), \(vN=0.0900\) and \(s=50\). Thus, we obtain \(\Pi =\frac{w^{2}_{\mathrm{tot}}}{vN}+\frac{h^{2}}{s}+1=41.3951\). We randomly select the initial point \(x(0)\in (0,80)\).
Here, the parameters satisfy the conditions of Theorem 1: \(\Delta_{P'(x)}=0.0017>0\) and \(ab=0.1762<\frac{1}{2}\).
By simple computation, the equation \(P'(x)=-3cx^{2}+2a^{2}x+2ab-1=0\) is found to have two distinct roots \(\zeta_{-}=5.7367\), \(\zeta_{+}=20.9044\), with \(P(\zeta_{-})=-1.2559<0\), \(P(\zeta_{+})=1.8846>0\).
Since \((a,b,c)\in \mathcal{T}_{113}\), system (2.3) has three equilibria \(E_{4}^{(1)}:x_{3}^{*}\in (0,5.7367)\), \(E_{5}^{(1)}:x_{4}^{*}\in (5.7367,20.9044)\) and \(E_{6}^{(1)}:x_{5}^{*}\in (20.9044,41.3951)\). According to Theorem 2, \(E_{4}^{(1)}\) and \(E_{6}^{(1)}\) are asymptotically stable, and \(E_{5}^{(1)}\) is unstable. Figure 3 demonstrates the theoretical results.
In this investigation, system (2.3) has multiple stable equilibrium points; thus, the network belongs to the class of multistable neural networks. Mathematically, multistability allows the network to have multiple stable fixed points, periodic orbits or limit cycles. Multiple stable equilibrium points or periodic orbits are accompanied by the existence of continuous attractors. Continuous attractors have been found in many applications, including applications related to visual perception, visual images, eye memory, etc. The existence of unstable equilibria is essential in winner-take-all problems (Yi et al. [31]). Therefore, the work proposed in this manuscript can be applied in the aforementioned applications.
Case 2: If \(w_{\mathrm{tot}}=1.2\), \(h=12\), \(vN=0.02\) and \(s=63.36\), then \(\Pi =\frac{w^{2}_{\mathrm{tot}}}{vN}+\frac{h^{2}}{s}+1=75.2727\). We randomly select the initial point \(x(0)\in (0,80)\).
In this case \(\Delta_{P'(x)}=0\), so it follows from Theorems 1 and 2 that system (2.3) has a unique equilibrium point \(E_{1}^{(2)}:x _{10}^{*}\in (0,75.2727)\), and it is asymptotically stable. The location and stability of the equilibrium points of this system are illustrated in Figure 4.
Case 3: If \(w_{\mathrm{tot}}=1.12\), \(h=10\), \(vN=0.03\) and \(s=65\), then \(\Pi =\frac{w^{2}_{\mathrm{tot}}}{vN}+\frac{h^{2}}{s}+1=44.3518\). We randomly select the initial point \(x(0)\in (0,35)\).
In this case \(\Delta_{P'(x)}<0\), so system (2.3) has a unique equilibrium point \(E_{1}^{(3)}:x_{11}^{*}\in (0,44.3518)\) and it is asymptotically stable. The results are shown in Figure 5.
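All three example cases can be reproduced directly from the raw network parameters. A sketch that applies the scaling of Section 2 and counts the zeros of the assumed cubic \(P(x)=-cx^{3}+a^{2}x^{2}+(2ab-1)x+b^{2}\) in \((0,\Pi)\), with \(\Pi =\frac{w_{\mathrm{tot}}^{2}}{vN}+\frac{h^{2}}{s}+1\) as in the example; the function name is illustrative:

```python
import math
import numpy as np

def num_equilibria(w_tot, h, vN, s):
    # Scale to (a, b, c) as in Section 2, then count the real zeros of
    # the assumed cubic P inside the interval (0, Pi).
    a, b, c = w_tot / math.sqrt(s), h / math.sqrt(s), vN / s
    Pi = w_tot ** 2 / vN + h ** 2 / s + 1
    roots = np.roots([-c, a ** 2, 2 * a * b - 1, b ** 2])
    return sum(abs(r.imag) < 1e-9 and 0 < r.real < Pi for r in roots)
```

Case 1 yields three equilibria, while Cases 2 and 3 each yield a unique one, in agreement with Theorems 1 and 2.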
In Cases 2 and 3, network (2.3) has a unique equilibrium point, which is stable. Such convergent behavior is called 'monostability'. Monostable networks can be used to solve optimization problems. Under some conditions, network (2.3) in this paper is a monostable neural network. Therefore, the proposed work provides a novel model for optimization problems.
Conclusions
In this paper, the dynamical properties of the equilibria of the background neural network (2.2) (i.e., (2.3)) have been analyzed, such as the number of equilibria and their locations. Firstly, the conditions for the number and locations of the equilibria of the network were investigated. Secondly, the conditions for stability of the equilibria were derived. The parametric relations between the dynamical properties of the equilibria and the network parameters were revealed. These results are primarily based on an observation of the geometric structure of the equation \(P(x)=0\). These studies enrich the analytical results on equilibrium points in related work.
The studies based on background neural networks with uniform firing rate (since the firing rate is the same for all neurons, system (2.3) can be regarded as one-dimensional) may be further developed for general higher-dimensional systems; e.g., the mathematical methods in this paper can be applied to analyze the equilibria of background neural networks with two subnetworks which exhibit rival states (i.e., 2D background neural networks). The rivaling steady states have significant meaning in the development of practical applications of 2D neural networks. The switch problem (Terman and Rubin [32]; Toth et al. [33]) and binocular rivalry (Shpiro et al. [34]) are interesting topics for further research. Many practical problems, such as mechanical design and electrical networks, can be formulated as switch problems. In recent years, dynamical analysis of switched systems has attracted considerable research interest (Cao et al. [35]; Syed Ali et al. [36]). Our results also provide insight into deriving parameter conditions for switched systems.
References
 1.
Attneave, F: Multistability in perception. Sci. Am. 225, 63-71 (1971)
 2.
Cohen, MA, Grossberg, S: Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man Cybern. 13, 815-826 (1983)
 3.
Forti, M: On global asymptotic stability of a class of nonlinear systems arising in neural network theory. J. Differ. Equ. 113, 246-264 (1994)
 4.
Hahnloser, RLT: On the piecewise analysis of networks of linear threshold neurons. Neural Netw. 11, 691-697 (1998)
 5.
Zeng, Z, Wang, J, Liao, X: Global exponential stability of a general class of recurrent neural networks with time-varying delays. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 50(10), 1353-1358 (2003)
 6.
Zeng, Z, Wang, J: Multiperiodicity and exponential attractivity evoked by periodic external inputs in delayed cellular neural networks. Neural Comput. 18(4), 848-870 (2006)
 7.
Cao, J, Wang, J: Global asymptotic stability of a general class of recurrent neural networks with time-varying delays. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 50(1), 34-44 (2003)
 8.
Chen, T, Lu, W, Chen, G: Dynamical behaviors of a large class of general delayed neural networks. Neural Comput. 17(4), 949-968 (2005)
 9.
Chen, Y: Global asymptotic stability of delayed Cohen-Grossberg neural networks. IEEE Trans. Circuits Syst. I, Regul. Pap. 53(2), 351-357 (2006)
 10.
Tang, HJ, Tan, KC, Zhang, W: Cyclic dynamics analysis for networks of linear threshold neurons. Neural Comput. 17(1), 97-114 (2005)
 11.
Zhang, L, Yi, Z, Yu, J: Multiperiodicity and attractivity of delayed recurrent neural networks with unsaturating piecewise linear transfer functions. IEEE Trans. Neural Netw. 19(1), 158-167 (2008)
 12.
Zuo, Z, Yang, C, Wang, Y: A new method for stability analysis of recurrent neural networks with interval time-varying delay. IEEE Trans. Neural Netw. 21(2), 339-344 (2010)
 13.
Zhang, H, Wang, Z, Liu, D: A comprehensive review of stability analysis of continuous-time recurrent neural networks. IEEE Trans. Neural Netw. Learn. Syst. 25(7), 1229-1262 (2014)
 14.
Li, R, Cao, J: Dissipativity analysis of memristive neural networks with time-varying delays and randomly occurring uncertainties. Math. Methods Appl. Sci. 39(11), 2896-2915 (2016)
 15.
Samidurai, R, Manivannan, R: Delay-range-dependent passivity analysis for uncertain stochastic neural networks with discrete and distributed time-varying delays. Neurocomputing 185(12), 191-201 (2016)
 16.
Cheng, C, Lin, K, Shin, C: Multistability in recurrent neural networks. SIAM J. Appl. Math. 66(4), 1301-1320 (2006)
 17.
Qu, H, Yi, Z, Wang, X: Switching analysis of 2D neural networks with nonsaturating linear threshold transfer functions. Neurocomputing 72, 413-419 (2008)
 18.
Manivannan, R, Mahendrakumar, G, Samidurai, R, Cao, J, Alsaedi, A: Exponential stability and extended dissipativity criteria for generalized neural networks with interval time-varying delay signals. J. Franklin Inst. 354(11), 4353-4376 (2017)
 19.
Nie, X, Cao, J: Existence and global stability of equilibrium point for delayed competitive neural networks with discontinuous activation functions. Int. J. Syst. Sci. 43(3), 459-474 (2012)
 20.
Manivannan, R, Samidurai, R, Cao, J, Alsaedi, A, Alsaadi, FE: Global exponential stability and dissipativity of generalized neural networks with time-varying delay signals. Neural Netw. 87, 149-159 (2017)
 21.
Forti, M, Tesi, A: New conditions for global stability of neural networks with application to linear and quadratic programming problems. IEEE Trans. Circuits Syst. I, Fundam. Theory Appl. 42(7), 354-366 (1995)
 22.
Lu, H: On stability of nonlinear continuous-time neural networks with delays. Neural Netw. 13(10), 1135-1143 (2000)
 23.
Zhao, W, Zhu, Q: New results of global robust exponential stability of neural networks with delays. Nonlinear Anal., Real World Appl. 11(2), 1190-1197 (2010)
 24.
Guo, S, Huang, L: Stability analysis of Cohen-Grossberg neural networks. IEEE Trans. Neural Netw. 17(1), 106-117 (2006)
 25.
Wang, L: Stability of Cohen-Grossberg neural networks with distributed delays. Appl. Math. Comput. 160(1), 93-110 (2005)
 26.
Miller, RK, Michel, AN: Ordinary Differential Equations. Academic Press, New York (1982)
 27.
Salinas, E: Background synaptic activity as a switch between dynamical states in a network. Neural Comput. 15, 1439-1475 (2003)
 28.
Zhang, L, Yi, Z: Dynamical properties of background neural networks with uniform firing rate and background input. Chaos Solitons Fractals 33(3), 979-985 (2007)
 29.
Wan, M, Gou, J, Wang, D, Wang, X: Dynamical properties of discrete-time background neural networks with uniform firing rate. Math. Probl. Eng. 2013, Article ID 289325 (2013)
 30.
Xu, F, Yi, Z: Convergence analysis of a class of simplified background neural networks with two subnetworks. Neurocomputing 74(18), 3877-3883 (2011)
 31.
Yi, Z, Heng, PA, Fung, PF: Winner-take-all discrete recurrent neural networks. IEEE Trans. Circuits Syst. II 47, 1584-1589 (2000)
 32.
Terman, D, Rubin, JE, Yew, AC, Wilson, CJ: Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J. Neurosci. 22(7), 2963-2976 (2002)
 33.
Toth, LJ, Assad, JA: Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature 415, 165-168 (2002)
 34.
Shpiro, A, Moreno-Bote, R, Rubin, N, Rinzel, J: Balance between noise and adaptation in competition models of perceptual bistability. J. Comput. Neurosci. 27(1), 37-54 (2009)
 35.
Cao, J, Rakkiyappan, R, Maheswari, K, Chandrasekar, A: Exponential \(H_{\infty }\) filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci. China, Technol. Sci. 59(3), 387-402 (2016)
 36.
Syed Ali, M, Saravanan, S, Cao, J: Finite-time boundedness, \(L_{2}\)-gain analysis and control of Markovian jump switched neural networks with additive time-varying delays. Nonlinear Anal. Hybrid Syst. 23, 27-43 (2017)
Acknowledgements
This work was supported in part by the National Science Foundation of China under Grant 61202045, Grant 11501475, and in part by the Program of Science and Technology of Sichuan Province of China under Grant No. 2016JY0067.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the manuscript.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Xu, F., Liu, L. & Xiao, J. Analysis of the equilibrium points of background neural networks with uniform firing rate. Adv Differ Equ 2017, 314 (2017). https://doi.org/10.1186/s13662-017-1322-z
DOI: https://doi.org/10.1186/s13662-017-1322-z
Keywords
 equilibrium point
 Rolle’s theorem
 background neural networks