Open Access

Dynamic evolution evoked by external inputs in memristor-based wavelet neural networks with different memductance functions

Advances in Difference Equations20132013:258

https://doi.org/10.1186/1687-1847-2013-258

Received: 7 June 2013

Accepted: 1 August 2013

Published: 22 August 2013

Abstract

In this paper, we present a preliminary study of the dynamic flows in memristor-based wavelet neural networks with continuous and discontinuous feedback functions in the presence of different memductance functions. Theoretical analysis and computer simulations confirm the claims. The analysis characterizes fundamental electrical properties of memristor devices and facilitates their applications.

Keywords

memristor; wavelet neural networks; dynamics

1 Introduction

In recent years, numerous studies have focused on the use of the memristor as a discrete circuit element to model phenomena or to implement novel functions. Recent advances in memristor technology have led to the realization of large-scale artificial neural systems subserving perception, cognition, and learning [1-9]. A memristor acts as a modulating synaptic interconnection between neurons; plasticity is accomplished by adjusting the memristance via current spikes, based on the relative timings of pre-synaptic and post-synaptic neuron spikes. By using memristors as synapses in artificial neural systems, the variable memristance offers vast potential for neuromorphic computing hardware.

As is well known, memristor-based neural networks may be a real breakthrough in electronics and circuit design [2-5]. The dynamic evolution of electronic circuits and systems is extremely important in systems analysis and integration. For this reason, it is important to study what dynamics arise in memristive systems and how they could be used for meaningful tasks. One issue is that neural networks with memristor bridge synapses exhibit a plethora of complex nonlinear behaviors [5-9]. It is hard to predict when the dynamic flows of a specific memristor-based neural network might become detrimental to performance, so a detailed analytical study of the dynamic evolution is necessary.

Consider the memristive neurodynamic system governed by the following equations
$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1,\, j \neq i}^{n} w_{ij}(x_i(t))\, f_j(x_j(t)) + u_i, \quad i = 1, 2, \ldots, n,$$
(1)
where $x_i(t)$ is the voltage of the capacitor $C_i$, $u_i$ denotes the external input, $f_j(\cdot)$ is a wavelet feedback function, $w_{ij}(x_i(t))$ represents the memristor-based connection weight, and
$$w_{ij}(x_i(t)) = \frac{W_{ij}}{C_i} \times \operatorname{sgn}_{ij}, \qquad \operatorname{sgn}_{ij} = \begin{cases} 1, & i \neq j, \\ -1, & i = j, \end{cases}$$

in which $W_{ij}$ denotes the memductance of memristor $R_{ij}$, and $R_{ij}$ represents the memristor between the feedback function $f_i(x_i(t))$ and $x_i(t)$.

Combining this with the physical structure of a memristor device, one can see that
$$W_{ij} = \frac{dq_{ij}}{d\sigma_{ij}},$$
(2)

where $q_{ij}$ and $\sigma_{ij}$ denote the charge and the magnetic flux corresponding to memristor $R_{ij}$, respectively.

Research shows that pinched hysteresis loops are the fingerprint of memristive devices [6, 9]. Under different pinched hysteresis loops, the evolutionary process of a memristive system takes different forms. It is generally known that the pinched hysteresis loop is due to the nonlinearity of the memductance function. In this paper, we discuss the following two typical memductance functions.

Case 1: The memductance function $W_{ij}$ is given by
$$W_{ij} = \begin{cases} a_{ij}, & |\sigma_{ij}| < \ell_{ij}, \\ b_{ij}, & |\sigma_{ij}| > \ell_{ij}, \end{cases}$$
(3)

where $a_{ij}$, $b_{ij}$, and $\ell_{ij} > 0$ are constants, $i, j = 1, 2, \ldots, n$.

Case 2: The memductance function $W_{ij}$ is given by
$$W_{ij} = c_{ij} + 3 d_{ij} \sigma_{ij}^{2},$$
(4)

where $c_{ij}$ and $d_{ij}$ are constants, $i, j = 1, 2, \ldots, n$.

According to the features of the memristor given in case 1 and case 2, the following two cases can occur.

Case 1′: In case 1,
$$w_{ij}(x_i(t)) = \begin{cases} \hat{w}_{ij}, & \operatorname{sgn}_{ij} \dfrac{df_j(x_j(t))}{dt} - \dfrac{dx_i(t)}{dt} \le 0, \\[2mm] \check{w}_{ij}, & \operatorname{sgn}_{ij} \dfrac{df_j(x_j(t))}{dt} - \dfrac{dx_i(t)}{dt} > 0, \end{cases}$$
(5)

for $i, j = 1, 2, \ldots, n$, where $\hat{w}_{ij}$ and $\check{w}_{ij}$ are constants.

Case 2′: In case 2,
$$w_{ij}(x_i(t)) \ \text{is a continuous function, and} \ \underline{\Lambda}_{ij} \le w_{ij}(x_i(t)) \le \overline{\Lambda}_{ij}$$
(6)

for $i, j = 1, 2, \ldots, n$, where $\underline{\Lambda}_{ij}$ and $\overline{\Lambda}_{ij}$ are constants.

Clearly, depending on the memductance function, the memristive neural network (1) is either a state-dependent switched system or a state-dependent continuous system, which generalizes conventional neural networks.
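To make the two regimes concrete, the following minimal Python sketch evaluates the memductance functions (3) and (4) and the resulting memristor-based weight $w_{ij} = (W_{ij}/C_i)\operatorname{sgn}_{ij}$. All numerical values ($a$, $b$, $\ell$, $c$, $d$, $C$) are hypothetical illustrations, not parameters from the paper.

```python
import numpy as np

def memductance_case1(sigma, a=0.8, b=-0.5, ell=1.0):
    """Piecewise constant memductance (3): a inside the flux window, b outside."""
    return a if abs(sigma) < ell else b

def memductance_case2(sigma, c=0.1, d=0.02):
    """Smooth memductance (4): W = c + 3*d*sigma^2."""
    return c + 3.0 * d * sigma ** 2

def weight(W, C=1.0, i=0, j=1):
    """Memristor-based weight w_ij = (W_ij / C_i) * sgn_ij, with sgn_ij = 1 for i != j and -1 for i = j."""
    sgn = 1.0 if i != j else -1.0
    return (W / C) * sgn

for sigma in np.linspace(-2.0, 2.0, 5):
    print(f"sigma = {sigma:+.1f}:  case 1 w = {weight(memductance_case1(sigma)):+.2f},"
          f"  case 2 w = {weight(memductance_case2(sigma)):+.3f}")
```

Under rule (5), case 1 yields the two-valued switched weight, while case 2 keeps $w_{ij}$ within the continuous band $[\underline{\Lambda}_{ij}, \overline{\Lambda}_{ij}]$ of (6).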

Several novel research results on conventional nonlinear neural networks have been reported; see [10-25]. In memristor-based neural networks, however, the classical approach of nonlinear system theory is invalid for studying the dynamic flows: such a network consists of too many subsystems for a direct analysis. It is therefore important to develop effective methods for these issues concurrently with the development of applications, so that memristor-based neural networks can readily be used as alternatives to traditional techniques or as components of integrated systems.

The main purpose of this paper is to study the dynamic flows of a class of memristor-based wavelet neural networks with continuous and discontinuous feedback functions in the presence of different memductance functions. The theoretical investigation should help in designing efficient memristor-based neuromorphic circuits and in studying other memristor-based complex systems. Note that the structure of wavelet neural networks is quite different from that of many traditional neural networks; hence, existing results cannot be directly applied. In addition, we give some sufficient conditions on the dynamic evolution, all of which are very easy to verify.

Throughout this paper, solutions of all the systems considered below are intended in Filippov's sense. $[\cdot, \cdot]$ represents an interval. $\operatorname{co}\{\tilde{\xi}, \hat{\xi}\}$ denotes the closure of the convex hull of $\mathbb{R}$ generated by real numbers $\tilde{\xi}$ and $\hat{\xi}$. Let $\overline{w}_{ij} = \max\{\hat{w}_{ij}, \check{w}_{ij}\}$, $\underline{w}_{ij} = \min\{\hat{w}_{ij}, \check{w}_{ij}\}$, $\tilde{w}_{ij} = \max\{|\hat{w}_{ij}|, |\check{w}_{ij}|\}$, and $\tilde{\Lambda}_{ij} = \max\{|\underline{\Lambda}_{ij}|, |\overline{\Lambda}_{ij}|\}$, for $i, j = 1, 2, \ldots, n$.

The remaining part of this paper is organized as follows. The main results are stated in Sections 2 and 3. In Section 4, two illustrative examples are provided with simulation results. Finally, concluding remarks are given in Section 5.

2 Memristor-based wavelet neural networks (1) in case 1′

In this section, we discuss the memristor-based wavelet neural networks (1) with continuous feedback functions and discontinuous feedback functions in case 1′.

Obviously, the memristor-based wavelet neural network (1) in case 1′ is a state-dependent switched system, which has nonsmooth dynamics.

2.1 Mexican-hat-type feedback functions

As the most typical representative of continuous feedback functions, Mexican-hat-type feedback functions possess a unique wavelet structure [18].

A solution $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^{T}$ (in the sense of Filippov) of system (1) with initial condition $x(0) = x_0$ is absolutely continuous on any compact interval of $[0, +\infty)$, and
$$\dot{x}_i(t) \in -x_i(t) + \sum_{j=1,\, j \neq i}^{n} \operatorname{co}\{\hat{w}_{ij}, \check{w}_{ij}\}\, f_j(x_j(t)) + u_i, \quad i = 1, 2, \ldots, n.$$
(7)
In fact, it is easy to find that in case 1′, for $i = 1, 2, \ldots, n$,
$$\operatorname{co}\Bigl\{-x_i(t) + \sum_{j=1,\, j \neq i}^{n} w_{ij}(x_i(t)) f_j(x_j(t)) + u_i\Bigr\} = -x_i(t) + \sum_{j=1,\, j \neq i}^{n} \operatorname{co}\{\hat{w}_{ij}, \check{w}_{ij}\}\, f_j(x_j(t)) + u_i.$$
Obviously, for $i, j = 1, 2, \ldots, n$,
$$\operatorname{co}\{\hat{w}_{ij}, \check{w}_{ij}\} = [\underline{w}_{ij}, \overline{w}_{ij}].$$
Consider system (1) with a class of Mexican-hat-type feedback functions defined as
$$f_i(r) = \begin{cases} -1, & -\infty < r < -1, \\ r, & -1 \le r \le 1, \\ -r + 2, & 1 < r \le 3, \\ -1, & 3 < r < +\infty. \end{cases}$$
(8)
Figure 1 shows the configuration of (8).
Figure 1

Mexican-hat-type feedback function (8).
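As a quick numerical counterpart to Figure 1, here is a minimal vectorized sketch of the Mexican-hat-type function (8); the NumPy formulation is ours, but the breakpoints are exactly those of (8).

```python
import numpy as np

def mexican_hat(r):
    """Mexican-hat-type feedback function (8), vectorized over r."""
    r = np.asarray(r, dtype=float)
    return np.where(r < -1, -1.0,
           np.where(r <= 1, r,
           np.where(r <= 3, -r + 2.0, -1.0)))

# Spot-check the breakpoints: f(-1) = -1, f(1) = 1, f(3) = -1.
print(mexican_hat([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0]))
# -> [-1. -1.  0.  1.  0. -1. -1.]
```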

Define three index subsets as follows:
$$N_1 = \Bigl\{ i : u_i < -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} \Bigr\}, \qquad N_2 = \Bigl\{ i : 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} < u_i < 3 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} \Bigr\}, \qquad N_3 = \Bigl\{ i : u_i > 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} \Bigr\}.$$
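The membership tests defining $N_1$, $N_2$, $N_3$ are easy to automate. The sketch below (our own helper, with hypothetical numbers) classifies each neuron from the input vector $u$ and the bound matrix $\tilde{w}$; the subsets $\tilde{N}_1$-$\tilde{N}_3$ of Section 3 follow the same recipe with $\tilde{\Lambda}_{ij}$ in place of $\tilde{w}_{ij}$.

```python
import numpy as np

def index_subsets(u, w_tilde):
    """Classify each neuron i into N1, N2, N3 (0-based indices).

    u       : external inputs, shape (n,)
    w_tilde : bounds max(|w^_ij|, |w_check_ij|), shape (n, n); the diagonal is ignored.
    """
    n = len(u)
    W = w_tilde - np.diag(np.diag(w_tilde))   # drop the j = i terms
    s = W.sum(axis=1)                          # row sums over j != i
    N1 = [i for i in range(n) if u[i] < -1 - s[i]]
    N2 = [i for i in range(n) if 1 + s[i] < u[i] < 3 - s[i]]
    N3 = [i for i in range(n) if u[i] > 3 + s[i]]
    return N1, N2, N3

# Hypothetical two-neuron example with w~_12 = w~_21 = 0.8:
u = np.array([-2.0, 4.0])
w_tilde = np.array([[0.0, 0.8], [0.8, 0.0]])
print(index_subsets(u, w_tilde))   # -> ([0], [], [1])
```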

Theorem 1 All the state components $x_i(t)$, $i \in N_1$, of system (1) with Mexican-hat-type feedback function (8) in case 1′ will flow to the interval $(-\infty, -1]$ as $t \to +\infty$.

Proof We treat the following two cases according to the location of $x_i(0)$.

Case A: $x_i(0) \in (-\infty, -1]$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = -1$ while $x_i(t) < -1$ for $t < \tilde{t}$, then from (7),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \le 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Thus, $x_i(t)$ would never leave $(-\infty, -1]$. Similarly, once $x_i(T) \in (-\infty, -1]$ for some $T \ge 0$, then $x_i(t)$ stays in $(-\infty, -1]$ for all $t \ge T$.

Case B: $x_i(0) \in (-1, +\infty)$.

In this case, we claim that $x_i(t)$ decreases monotonically until it reaches the interval $(-\infty, -1]$ at some finite time $\breve{t} > 0$, i.e., $x_i(\breve{t}) \le -1$.

In fact, when $x_i(t) \in (3, +\infty)$, from (7),
$$\dot{x}_i(t) \le -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0;$$
when $x_i(t) \in (1, 3]$, from (7),
$$\dot{x}_i(t) \le -1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0;$$
when $x_i(t) \in (-1, 1]$, from (7),
$$\dot{x}_i(t) \le 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

To sum up, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $(-\infty, -1]$. Combining this with Case A, $x_i(t)$ eventually stays in $(-\infty, -1]$. □

Theorem 2 All the state components $x_i(t)$, $i \in N_2$, of system (1) with Mexican-hat-type feedback function (8) in case 1′ will flow to the interval $[1, 3]$ as $t \to +\infty$.

Proof According to the location of $x_i(0)$, we treat three cases.

Case A: $x_i(0) \in [1, 3]$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = 1$ while $1 < x_i(t) \le 3$ for $t < \tilde{t}$, then from (7),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$
Analogously, if there exists some $\breve{t} \ge 0$ such that $x_i(\breve{t}) = 3$ while $1 \le x_i(t) < 3$ for $t < \breve{t}$, then from (7),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\breve{t}} \le -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Thus, $x_i(t)$ would never leave $[1, 3]$. Similarly, once $x_i(T) \in [1, 3]$ for some $T \ge 0$, then $x_i(t)$ stays in $[1, 3]$ for all $t \ge T$.

Case B: $x_i(0) \in (-\infty, 1)$.

When $x_i(t) \in (-\infty, -1]$, from (7),
$$\dot{x}_i(t) \ge 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} = 2 > 0;$$
when $x_i(t) \in (-1, 1)$, from (7),
$$\dot{x}_i(t) \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

Thus, in this case, $x_i(t)$ increases monotonically until it reaches $[1, 3]$.

Case C: $x_i(0) \in (3, +\infty)$.

When $x_i(t) \in (3, +\infty)$, from (7),
$$\dot{x}_i(t) < -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i < 0.$$

Therefore, $x_i(t)$ decreases monotonically until it enters the interval $[1, 3]$.

To sum up, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $[1, 3]$. □

Theorem 3 All the state components $x_i(t)$, $i \in N_3$, of system (1) with Mexican-hat-type feedback function (8) in case 1′ will flow to the interval $[3, +\infty)$ as $t \to +\infty$.

Proof We treat the following two cases.

Case A: $x_i(0) \in [3, +\infty)$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = 3$ while $x_i(t) > 3$ for $t < \tilde{t}$, then from (7),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \ge -3 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

So $x_i(t)$ would never leave $[3, +\infty)$. Similarly, once $x_i(T) \in [3, +\infty)$ for some $T \ge 0$, then $x_i(t)$ stays in $[3, +\infty)$ for all $t \ge T$.

Case B: $x_i(0) \in (-\infty, 3)$.

In this case, we claim that $x_i(t)$ increases monotonically until it reaches the interval $[3, +\infty)$.

In fact, when $x_i(t) \in (-\infty, -1]$, from (7),
$$\dot{x}_i(t) \ge 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} = 4 > 0;$$
when $x_i(t) \in (-1, 1]$, from (7),
$$\dot{x}_i(t) \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} = 2 > 0;$$
when $x_i(t) \in (1, 3)$, from (7),
$$\dot{x}_i(t) \ge -3 - \sum_{j=1,\, j \neq i}^{n} \tilde{w}_{ij} + u_i > 0.$$

Therefore, in this case, $x_i(t)$ increases monotonically until it reaches $[3, +\infty)$.

In summary, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $[3, +\infty)$. □

Remark 1 In Theorems 1-3, the core idea is to employ nonsmooth analysis within the mathematical framework of Filippov solutions. Generally speaking, nonsmooth analysis is suitable for analyzing the nonsmooth dynamics of hybrid systems. Meanwhile, it is worth observing that the memristor-based wavelet neural network model in case 1′ is a state-dependent nonlinear switching dynamical system, which extends many existing neural network models. Therefore, the obtained results apply in a wider scope.

2.2 Piecewise constant feedback functions

As a representative of discontinuous feedback functions, piecewise constant feedback functions occupy an important position among typical wavelet neural networks [24, 25].

Consider system (1) with a class of piecewise constant feedback functions defined as
$$f_i(r) = \begin{cases} -1, & -\infty < r < -1, \\ 0, & -1 \le r \le 1, \\ 1, & 1 < r \le 3, \\ -1, & 3 < r < +\infty. \end{cases}$$
(9)
Figure 2 shows the configuration of (9).
Figure 2

Piecewise constant feedback function (9).
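For completeness, here is the discontinuous counterpart (9) in the same vectorized style (again our own formulation of the stated breakpoints):

```python
import numpy as np

def piecewise_constant(r):
    """Piecewise constant feedback function (9), vectorized over r."""
    r = np.asarray(r, dtype=float)
    return np.where(r < -1, -1.0,
           np.where(r <= 1, 0.0,
           np.where(r <= 3, 1.0, -1.0)))

print(piecewise_constant([-2.0, 0.0, 2.0, 4.0]))   # -> [-1.  0.  1. -1.]
```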

Corollary 1 All the state components $x_i(t)$, $i \in N_1$, of system (1) with piecewise constant feedback function (9) in case 1′ will flow to the interval $(-\infty, -1]$ as $t \to +\infty$.

Corollary 2 All the state components $x_i(t)$, $i \in N_2$, of system (1) with piecewise constant feedback function (9) in case 1′ will flow to the interval $[1, 3]$ as $t \to +\infty$.

Corollary 3 All the state components $x_i(t)$, $i \in N_3$, of system (1) with piecewise constant feedback function (9) in case 1′ will flow to the interval $[3, +\infty)$ as $t \to +\infty$.

Corollaries 1-3 can be proved by the same arguments as Theorems 1-3.

Remark 2 Theorems 1-3 and Corollaries 1-3 are obtained for Mexican-hat-type feedback function (8) and piecewise constant feedback function (9). In fact, even if memristive neurodynamic system (1) is equipped with other types of Mexican-hat-type or piecewise constant feedback functions, the main results of this paper admit parallel extensions.

3 Memristor-based wavelet neural networks (1) in case 2′

In this section, we investigate the memristor-based wavelet neural networks (1) with continuous feedback functions and discontinuous feedback functions in case 2′.

Obviously, the memristor-based wavelet neural network (1) in case 2′ is a state-dependent continuous system.

By (1) and (6), it is easy to see that for $i = 1, 2, \ldots, n$,
$$-x_i(t) - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} |f_j(x_j(t))| + u_i \le \dot{x}_i(t) \le -x_i(t) + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} |f_j(x_j(t))| + u_i.$$
(10)
Define three index subsets as follows:
$$\tilde{N}_1 = \Bigl\{ i : u_i < -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} \Bigr\}, \qquad \tilde{N}_2 = \Bigl\{ i : 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} < u_i < 3 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} \Bigr\}, \qquad \tilde{N}_3 = \Bigl\{ i : u_i > 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} \Bigr\}.$$

Theorem 4 All the state components $x_i(t)$, $i \in \tilde{N}_1$, of system (1) with Mexican-hat-type feedback function (8) in case 2′ will flow to the interval $(-\infty, -1]$ as $t \to +\infty$.

Proof We treat the following two cases according to the location of $x_i(0)$.

Case A: $x_i(0) \in (-\infty, -1]$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = -1$ while $x_i(t) < -1$ for $t < \tilde{t}$, then from (10),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \le 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0.$$

Thus, $x_i(t)$ would never leave $(-\infty, -1]$. Similarly, once $x_i(T) \in (-\infty, -1]$ for some $T \ge 0$, then $x_i(t)$ stays in $(-\infty, -1]$ for all $t \ge T$.

Case B: $x_i(0) \in (-1, +\infty)$.

In this case, we claim that $x_i(t)$ decreases monotonically until it reaches the interval $(-\infty, -1]$ at some finite time $\breve{t} > 0$, i.e., $x_i(\breve{t}) \le -1$.

In fact, when $x_i(t) \in (3, +\infty)$, from (10),
$$\dot{x}_i(t) \le -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0;$$
when $x_i(t) \in (1, 3]$, from (10),
$$\dot{x}_i(t) \le -1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0;$$
when $x_i(t) \in (-1, 1]$, from (10),
$$\dot{x}_i(t) \le 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0.$$

To sum up, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $(-\infty, -1]$. Combining this with Case A, $x_i(t)$ eventually stays in $(-\infty, -1]$. □

Theorem 5 All the state components $x_i(t)$, $i \in \tilde{N}_2$, of system (1) with Mexican-hat-type feedback function (8) in case 2′ will flow to the interval $[1, 3]$ as $t \to +\infty$.

Proof According to the location of $x_i(0)$, we treat three cases.

Case A: $x_i(0) \in [1, 3]$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = 1$ while $1 < x_i(t) \le 3$ for $t < \tilde{t}$, then from (10),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 0.$$
Analogously, if there exists some $\breve{t} \ge 0$ such that $x_i(\breve{t}) = 3$ while $1 \le x_i(t) < 3$ for $t < \breve{t}$, then from (10),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\breve{t}} \le -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0.$$

Thus, $x_i(t)$ would never leave $[1, 3]$. Similarly, once $x_i(T) \in [1, 3]$ for some $T \ge 0$, then $x_i(t)$ stays in $[1, 3]$ for all $t \ge T$.

Case B: $x_i(0) \in (-\infty, 1)$.

When $x_i(t) \in (-\infty, -1]$, from (10),
$$\dot{x}_i(t) \ge 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + 1 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} = 2 > 0;$$
when $x_i(t) \in (-1, 1)$, from (10),
$$\dot{x}_i(t) \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 0.$$

Thus, in this case, $x_i(t)$ increases monotonically until it reaches $[1, 3]$.

Case C: $x_i(0) \in (3, +\infty)$.

When $x_i(t) \in (3, +\infty)$, from (10),
$$\dot{x}_i(t) < -3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i < 0.$$

Therefore, $x_i(t)$ decreases monotonically until it enters the interval $[1, 3]$.

To sum up, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $[1, 3]$. □

Theorem 6 All the state components $x_i(t)$, $i \in \tilde{N}_3$, of system (1) with Mexican-hat-type feedback function (8) in case 2′ will flow to the interval $[3, +\infty)$ as $t \to +\infty$.

Proof We treat the following two cases.

Case A: $x_i(0) \in [3, +\infty)$.

In this case, if there exists some $\tilde{t} \ge 0$ such that $x_i(\tilde{t}) = 3$ while $x_i(t) > 3$ for $t < \tilde{t}$, then from (10),
$$\frac{dx_i(t)}{dt}\bigg|_{t=\tilde{t}} \ge -3 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 0.$$

So $x_i(t)$ would never leave $[3, +\infty)$. Similarly, once $x_i(T) \in [3, +\infty)$ for some $T \ge 0$, then $x_i(t)$ stays in $[3, +\infty)$ for all $t \ge T$.

Case B: $x_i(0) \in (-\infty, 3)$.

In this case, we claim that $x_i(t)$ increases monotonically until it reaches the interval $[3, +\infty)$.

In fact, when $x_i(t) \in (-\infty, -1]$, from (10),
$$\dot{x}_i(t) \ge 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} = 4 > 0;$$
when $x_i(t) \in (-1, 1]$, from (10),
$$\dot{x}_i(t) \ge -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > -1 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + 3 + \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} = 2 > 0;$$
when $x_i(t) \in (1, 3)$, from (10),
$$\dot{x}_i(t) \ge -3 - \sum_{j=1,\, j \neq i}^{n} \tilde{\Lambda}_{ij} + u_i > 0.$$

Therefore, in this case, $x_i(t)$ increases monotonically until it reaches $[3, +\infty)$.

In summary, wherever the initial state $x_i(0)$ is located, $x_i(t)$ flows into the interval $[3, +\infty)$. □

Corollary 4 All the state components $x_i(t)$, $i \in \tilde{N}_1$, of system (1) with piecewise constant feedback function (9) in case 2′ will flow to the interval $(-\infty, -1]$ as $t \to +\infty$.

Corollary 5 All the state components $x_i(t)$, $i \in \tilde{N}_2$, of system (1) with piecewise constant feedback function (9) in case 2′ will flow to the interval $[1, 3]$ as $t \to +\infty$.

Corollary 6 All the state components $x_i(t)$, $i \in \tilde{N}_3$, of system (1) with piecewise constant feedback function (9) in case 2′ will flow to the interval $[3, +\infty)$ as $t \to +\infty$.

Corollaries 4-6 can be proved by the same arguments as Theorems 4-6.

Remark 3 It is worth noting that memristive neural networks may display different types of dynamic features in the presence of different memductance functions, i.e., they may be state-dependent switched systems or state-dependent continuous systems. Although the analysis rests on two different theoretical architectures, the proposed criteria are very similar. This unified form makes the criteria easy to apply in different situations.

4 Illustrative examples

In this section, two examples are given to illustrate our results. Simulation results show that the obtained conclusions are valid.

Example 1 Consider the two-dimensional memristive neurodynamic system as follows:
$$\begin{cases} \dot{x}_1(t) = -x_1(t) + w_{12}(x_1(t)) f_2(x_2(t)) + u_1, \\ \dot{x}_2(t) = -x_2(t) + w_{21}(x_2(t)) f_1(x_1(t)) + u_2, \end{cases}$$
(11)
where
$$w_{12}(x_1(t)) = \begin{cases} 0.8, & \dfrac{df_2(x_2(t))}{dt} - \dfrac{dx_1(t)}{dt} \le 0, \\[2mm] 0.5, & \dfrac{df_2(x_2(t))}{dt} - \dfrac{dx_1(t)}{dt} > 0, \end{cases} \qquad w_{21}(x_2(t)) = \begin{cases} 0.8, & \dfrac{df_1(x_1(t))}{dt} - \dfrac{dx_2(t)}{dt} \le 0, \\[2mm] 0.5, & \dfrac{df_1(x_1(t))}{dt} - \dfrac{dx_2(t)}{dt} > 0. \end{cases}$$
Simulation results are described in Figures 3 and 4 for $u_1 = -2$, $u_2 = 4$, where the trajectories of system (11) with Mexican-hat-type feedback function (8) and piecewise constant feedback function (9) under different initial values are depicted. According to Theorems 1 and 3 and Corollaries 1 and 3, the theoretical predictions are consistent with the experiments.
Figure 3

Transient behaviors of system (11) with Mexican-hat-type feedback function (8) under different initial values.

Figure 4

Transient behaviors of system (11) with piecewise constant feedback function (9) under different initial values.
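This experiment can be reproduced with a simple forward-Euler integration. The sketch below is ours: the step size, horizon, initial values, and the finite-difference evaluation of the switching condition in (5) are all implementation choices, not specifications from the paper.

```python
import numpy as np

def mexican_hat(r):
    """Mexican-hat-type feedback function (8), vectorized over r."""
    r = np.asarray(r, dtype=float)
    return np.where(r < -1, -1.0,
           np.where(r <= 1, r,
           np.where(r <= 3, -r + 2.0, -1.0)))

def simulate11(x0, u=(-2.0, 4.0), T=10.0, h=1e-3):
    """Forward-Euler simulation of system (11) with the switched weights.

    The switching test df_j/dt - dx_i/dt <= 0 from (5) is approximated with
    finite differences from the previous step (sgn_12 = sgn_21 = 1 here).
    """
    x = np.array(x0, dtype=float)
    f_prev = mexican_hat(x)
    dx_prev = np.zeros(2)
    for _ in range(int(T / h)):
        f = mexican_hat(x)
        df = (f - f_prev) / h
        w12 = 0.8 if df[1] - dx_prev[0] <= 0 else 0.5
        w21 = 0.8 if df[0] - dx_prev[1] <= 0 else 0.5
        dx = np.array([-x[0] + w12 * f[1] + u[0],
                       -x[1] + w21 * f[0] + u[1]])
        x = x + h * dx
        f_prev, dx_prev = f, dx
    return x

print(simulate11([0.5, -0.5]))  # x1 settles below -1 and x2 above 3, as Theorems 1 and 3 predict
```

Replacing mexican_hat with the piecewise constant function (9) reproduces the Figure 4 setting (Corollaries 1 and 3).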

Example 2 Consider a two-dimensional memristive neurodynamic system described by
$$\begin{cases} \dot{x}_1(t) = -x_1(t) + 0.6 \sin(x_1(t)) f_2(x_2(t)) + u_1, \\ \dot{x}_2(t) = -x_2(t) + 0.8 \cos(x_2(t)) f_1(x_1(t)) + u_2. \end{cases}$$
(12)
Simulation results are described in Figures 5 and 6 for $u_1 = 2$, $u_2 = 4$, where the trajectories of system (12) with Mexican-hat-type feedback function (8) and piecewise constant feedback function (9) under different initial values are shown. According to Theorems 5 and 6 and Corollaries 5 and 6, the experiments match the theoretical results.
Figure 5

Transient behaviors of system (12) with Mexican-hat-type feedback function (8) under different initial values.

Figure 6

Transient behaviors of system (12) with piecewise constant feedback function (9) under different initial values.
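A corresponding sketch for the smooth system (12), again with our own Euler parameters; since the weights $0.6\sin(x_1)$ and $0.8\cos(x_2)$ are continuous (case 2′), no switching logic is needed.

```python
import numpy as np

def mexican_hat(r):
    """Mexican-hat-type feedback function (8), vectorized over r."""
    r = np.asarray(r, dtype=float)
    return np.where(r < -1, -1.0,
           np.where(r <= 1, r,
           np.where(r <= 3, -r + 2.0, -1.0)))

def simulate12(x0, u=(2.0, 4.0), T=10.0, h=1e-3):
    """Forward-Euler simulation of the continuous system (12)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / h)):
        dx = np.array([-x[0] + 0.6 * np.sin(x[0]) * float(mexican_hat(x[1])) + u[0],
                       -x[1] + 0.8 * np.cos(x[1]) * float(mexican_hat(x[0])) + u[1]])
        x = x + h * dx
    return x

print(simulate12([0.0, 0.0]))  # x1 ends in [1, 3] and x2 in [3, +inf), per Theorems 5 and 6
```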

5 Concluding remarks

Rhythmicity represents one of the most striking manifestations of dynamic behavior in biological systems. Memristor-based neural networks have been shown to be useful for understanding neural processes using memory devices. In this article, we have given conditions under which the dynamic orbits of memristor-based wavelet neural networks are located in designated regions. The theoretical results are supplemented by the simulation results of two illustrative examples.

Declarations

Acknowledgements

The work is supported by the Natural Science Foundation of China under Grant 61304057 and the 973 Program of China under Grant 2011CB710606. The work of AW was done at the School of Automation, Huazhong University of Science and Technology, Wuhan, China.

Authors’ Affiliations

(1)
College of Mathematics and Statistics, Hubei Normal University, Huangshi, China
(2)
Institute for Information and System Science, Xi’an Jiaotong University, Xi’an, China
(3)
School of Automation, Huazhong University of Science and Technology, Wuhan, China

References

1. Cantley KD, Subramaniam A, Stiegler HJ, Chapman RA, Vogel EM: Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses. IEEE Trans. Nanotechnol. 2011, 10(5):1066–1073.
2. Cantley KD, Subramaniam A, Stiegler HJ, Chapman RA, Vogel EM: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23(4):565–573.
3. Itoh M, Chua LO: Memristor cellular automata and memristor discrete-time cellular neural networks. Int. J. Bifurc. Chaos 2009, 19(11):3605–3656.
4. Kim H, Sah MP, Yang CJ, Roska T, Chua LO: Neural synaptic weighting with a pulse-based memristor circuit. IEEE Trans. Circuits Syst. I, Regul. Pap. 2012, 59(1):148–158.
5. Pershin YV, Di Ventra M: Experimental demonstration of associative memory with memristive neural networks. Neural Netw. 2010, 23(7):881–886.
6. Wen SP, Zeng ZG: Dynamics analysis of a class of memristor-based recurrent networks with time-varying delays in the presence of strong external stimuli. Neural Process. Lett. 2012, 35(1):47–59.
7. Wu AL, Zeng ZG: Exponential stabilization of memristive neural networks with time delays. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23(12):1919–1929.
8. Wu AL, Zeng ZG: Dynamic behaviors of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 2012, 36:1–10.
9. Wu AL, Zeng ZG: Anti-synchronization control of a class of memristive recurrent neural networks. Commun. Nonlinear Sci. Numer. Simul. 2013, 18(2):373–385.
10. Huang TW: Robust stability of delayed fuzzy Cohen-Grossberg neural networks. Comput. Math. Appl. 2011, 61(8):2247–2250.
11. Huang TW, Li CD, Duan SK, Starzyk JA: Robust exponential stability of uncertain delayed neural networks with stochastic perturbation and impulse effects. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23(6):866–875.
12. Jiang F, Shen Y: Stability in the numerical simulation of stochastic delayed Hopfield neural networks. Neural Comput. Appl. 2013, 22(7–8):1493–1498.
13. Jiang F, Yang H, Shen Y: On the robustness of global exponential stability for hybrid neural networks with noise and delay perturbations. Neural Comput. Appl. 2013. doi:10.1007/s00521-013-1374-2.
14. Shen Y, Wang J: Almost sure exponential stability of recurrent neural networks with Markovian switching. IEEE Trans. Neural Netw. 2009, 20(5):840–855.
15. Shen Y, Wang J: Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23(1):87–96.
16. Song XG, Gao HB, Ding L, Liu DY, Hao MH: The globally asymptotic stability analysis for a class of recurrent neural networks with delays. Neural Comput. Appl. 2013, 22(3–4):587–595.
17. Gao HB, Song XG, Ding L, Liu DY, Hao MH: New conditions for global exponential stability of continuous-time neural networks with delays. Neural Comput. Appl. 2013, 22(1):41–48.
18. Wang LL, Chen TP: Multistability of neural networks with Mexican-hat-type activation functions. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23(11):1816–1826.
19. Yu W, Francisco PC, Li XO: Two-stage neural sliding mode control of magnetic levitation in minimal invasive surgery. Neural Comput. Appl. 2011, 20(8):1141–1147.
20. Yu W, Li XO: Automated nonlinear system modeling with multiple fuzzy neural networks and kernel smoothing. Int. J. Neural Syst. 2010, 20(5):429–435.
21. Zhang HG, Liu JH, Ma DZ, Wang ZS: Data-core-based fuzzy min-max neural network for pattern classification. IEEE Trans. Neural Netw. 2011, 22(12):2339–2352.
22. Zhang HG, Ma TD, Huang GB, Wang ZL: Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control. IEEE Trans. Syst. Man Cybern., Part B, Cybern. 2010, 40(3):831–844.
23. Zhang HG, Wang YC: Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with mixed time delays. IEEE Trans. Neural Netw. 2008, 19(2):366–370.
24. Bao G, Zeng ZG: Analysis and design of associative memories based on recurrent neural network with discontinuous activation functions. Neurocomputing 2012, 77(1):101–107.
25. Huang YJ, Zhang HG, Wang ZS: Multistability and multiperiodicity of delayed bidirectional associative memory neural networks with discontinuous activation functions. Appl. Math. Comput. 2012, 219(3):899–910.

Copyright

© Wu et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.