Open Access

On nonergodicity for nonparametric autoregressive models

Advances in Difference Equations 2013, 2013:200

https://doi.org/10.1186/1687-1847-2013-200

Received: 25 April 2013

Accepted: 19 June 2013

Published: 5 July 2013

Abstract

In this paper, we introduce a class of nonlinear time series models with random time delay under a random environment, and we develop sufficient conditions for nonergodicity of these models. We use the so-called Markovnization method: proper supplementary variables are appended to a non-Markov process so that a new Markov process is obtained.

MSC: 60J05, 60J10, 60K37.

Keywords

Markov chains; random delay; random environment; nonergodicity

1 Introduction

By virtue of their desirable properties, stable (ergodic or recurrent) stochastic processes are very popular among researchers, and a large literature is devoted to stable, or even stationary, stochastic processes. For instance, Jeantheau [1] and Tjøstheim [2] established consistency of their proposed estimators under stationarity and ergodicity conditions (see also [3-5]). Fernandes and Grammig [6] established conditions for the existence of higher-order moments, strict stationarity, geometric ergodicity and a β-mixing property with exponential decay. This popularity owes a great deal to the attractive properties of stable processes: an ergodic Markov chain has a finite invariant probability measure, and a recurrent stochastic process revisits an arbitrary point in its state space infinitely often. For this reason, many researchers take ergodicity or recurrence as standing assumptions in their papers or books.

However, many real phenomena exhibit unstable behavior. For example, David [7] argued that an important lesson from economic history is that economies exhibit nonergodic behavior along many dimensions. Margolin and Barkai [8] indicated that the time series of many systems exhibit intermittency; that is, at random times the system switches from state on (or up) to state off (or down) and vice versa. One way to characterize such time series is to use time-average correlation functions, which exhibit nonergodic behavior.

Hence more and more researchers have become interested in these unstable processes. Recently, problems concerning nonergodic stochastic processes have been studied by many authors. Basawa and Koul [9], Basawa and Brockwell [10], Basawa and Scott [11] and Feigin [12] studied asymptotic inference for parameters of nonergodic stochastic processes. Budhiraja and Ocone [13] proved an asymptotic stability result for discrete-time systems in which the signal is allowed to be nonergodic. Durlauf [14] considered nonergodic economic growth. Goodman and Massey [15] generalized Jackson's theorem so that the large-time behavior can be described for any nonergodic N-node Jackson network. Griffeath [16] developed limit theorems for nonergodic set-valued Markov processes. Jacod [17] constructed estimators for the drift and diffusion coefficients of a multidimensional diffusion process and obtained consistency results without any ergodicity, or even recurrence, assumption on the diffusion process.

Ergodicity criteria based on drift functions for Markov processes have been studied by many authors; see, for instance, Cline [18], Tweedie [19-21] and the references therein. As for nonergodicity criteria for Markov processes, the reader is referred to [22-24]. Sheng et al. [25] also developed some sufficient conditions for nonergodicity of certain time series models.

However, the processes considered by many researchers reflect neither interference in a system nor the influence of sudden environmental changes on the system itself. Moreover, the time delay in the models studied is usually a fixed constant. In this paper, we generalize nonparametric autoregressive models by introducing a random environment and, at the same time, replacing the fixed time delay with a random one.

The remainder of the paper is organized as follows. Section 2 introduces the nonparametric autoregressive model with random time delay under random environment. Section 3 develops some useful lemmas and gives some sufficient conditions for nonergodicity of the proposed model as our main results. All the proofs are collected in Section 4.

2 The nonparametric AR model with random time delay under random environment

In this section, we first give some notation which will be used throughout the paper. In what follows, we always work on a probability space $(\Omega, \mathcal{F}, P)$ and with a finite set $E = \{1, 2, \ldots, r\}$ ($r$ a positive integer), with $\mathcal{H}$ the $\sigma$-algebra generated by all subsets of $E$. We also let $R^m$ be the $m$-dimensional real space and $\mathcal{B}^m$ the $\sigma$-algebra generated by all Borel subsets of $R^m$.

The nonparametric autoregressive model with random time delay under random environment is defined as follows:
$$X_{t+1} = F(X_t, X_{t-1}, \ldots, X_{t-Z_{t+1}}) + \varepsilon_{t+1}(Z_{t+1}),$$
(2.1)

where $\{Z_t, t \ge 1\}$ is an irreducible and aperiodic Markov chain defined on $(\Omega, \mathcal{F}, P)$, taking values in $E$; for each $j \in E$, $F: R^{j+1} \to R$ is a Borel measurable mapping; $\varepsilon_t(Z_t) = \sum_{i=1}^{r} \varepsilon_t(i) I_{\{i\}}(Z_t)$, where $I_{\{i\}}(Z_t)$ denotes the indicator function of the singleton $\{i\}$; and $\{\varepsilon_t(1)\}, \{\varepsilon_t(2)\}, \ldots, \{\varepsilon_t(r)\}$ are sequences of i.i.d. random variables defined on $(\Omega, \mathcal{F}, P)$, taking values in $(R, \mathcal{B})$. In what follows, the density function of $\{\varepsilon_t(i)\}$, $i \in E$, is denoted by $\Phi_i(\cdot)$.
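As an illustration (not part of the paper), model (2.1) can be simulated once concrete choices are fixed. In the sketch below the function `F`, the environment transition matrix `P_ENV`, and the standard normal noise are all hypothetical assumptions made only for the example:

```python
import random

random.seed(0)

r = 2                                     # environment states E = {1, ..., r}
P_ENV = {1: [0.7, 0.3], 2: [0.4, 0.6]}    # p_ij = P(Z_{t+1} = j | Z_t = i)

def F(lags):
    # One Borel-measurable choice of F on R^{j+1}: a damped average of the lags.
    return 0.5 * sum(lags) / len(lags)

def step_env(i):
    # Draw Z_{t+1} = j with probability p_ij.
    u, acc = random.random(), 0.0
    for j, p in enumerate(P_ENV[i], start=1):
        acc += p
        if u <= acc:
            return j
    return r

def simulate(n, x0=(0.0, 0.0, 0.0), z0=1):
    # X_{t+1} = F(X_t, ..., X_{t-Z_{t+1}}) + eps_{t+1}(Z_{t+1}): at each step the
    # realized delay Z_{t+1} decides how many past values enter F.
    xs, z = list(x0), z0
    for _ in range(n):
        z = step_env(z)
        eps = random.gauss(0.0, 1.0)      # eps_t(i): i.i.d. N(0, 1) for every i
        xs.append(F(xs[-(z + 1):]) + eps)
    return xs, z

path, z_final = simulate(100)
```

The point of the sketch is only the mechanism: the delay is itself a Markov chain, so `{X_t}` alone is not Markov, which motivates the Markovnization in the next paragraphs.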

In this paper, we aim to obtain some sufficient conditions for nonergodicity of the proposed new model. Since the nonparametric AR model with random time delay under random environment defined by (2.1) does not itself have the Markov property, we consider the sequence
$$(X_t, X_{t-1}, \ldots, X_{t-r}, Z_t) = \bigl(F(X_{t-1}, X_{t-2}, \ldots, X_{t-1-Z_t}) + \varepsilon_t(Z_t), X_{t-1}, \ldots, X_{t-r}, Z_t\bigr), \quad t \ge 1.$$
For simplicity, let
$$Y_t = (X_t, X_{t-1}, \ldots, X_{t-r}, Z_t), \quad t \ge 1.$$
From (2.1) and under mild conditions (see Lemma 1 below), it is easy to show that the sequence $\{Y_t\}$ is a Markov chain on $R^{r+1} \times E$ with transition probability
$$P\bigl(Y_{t+1} \in A \times \{j\} \mid Y_t = (\hat{x}, i)\bigr) = p_{ij} \prod_{q=0}^{r-1} I_{A_{q+1}}(x_q) \int_{A_0} \Phi_j\bigl(y_0 - F(x_0, x_1, \ldots, x_j)\bigr) \, dy_0, \qquad (2.2)$$

where $A = A_0 \times A_1 \times \cdots \times A_r \subset R^{r+1}$ with $A_i \in \mathcal{B}$; $I_A(\cdot)$ denotes the indicator function of $A$; and $p_{ij} = P(Z_{t+1} = j \mid Z_t = i)$ is the transition probability of the Markov chain $\{Z_t, t \ge 1\}$.

Let $P^{(t)}((\hat{x}, i), A \times \{j\}) = P(Y_{s+t} \in A \times \{j\} \mid Y_s = (\hat{x}, i))$ be the $t$-step transition probability of $\{Y_t\}$. By the properties of conditional probability and induction we have: when $2 \le t \le r+1$,
$$P^{(t)}\bigl((\hat{x}, i), A \times \{j\}\bigr) = \prod_{q=0}^{r-t} I_{A_{q+t}}(x_q) \sum_{k_1, k_2, \ldots, k_{t-1} \in E} p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} \int_{A_{t-1}} \Phi_{k_1}\bigl(y_{t-1} - F(U_t)\bigr) \, dy_{t-1} \int_{A_{t-2}} \Phi_{k_2}\bigl(y_{t-2} - F(U_{t-1})\bigr) \, dy_{t-2} \cdots \int_{A_1} \Phi_{k_{t-1}}\bigl(y_1 - F(U_2)\bigr) \, dy_1 \int_{A_0} \Phi_j\bigl(y_0 - F(U_1)\bigr) \, dy_0;$$
when $t \ge r+2$,
$$P^{(t)}\bigl((\hat{x}, i), A \times \{j\}\bigr) = \sum_{k_1, k_2, \ldots, k_{t-1} \in E} p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} \int_{R} \Phi_{k_1}\bigl(y_{t-1} - F(U_t)\bigr) \, dy_{t-1} \int_{R} \Phi_{k_2}\bigl(y_{t-2} - F(U_{t-1})\bigr) \, dy_{t-2} \cdots \int_{R} \Phi_{k_{t-(r+1)}}\bigl(y_{r+1} - F(U_{r+2})\bigr) \, dy_{r+1} \int_{A_r} \Phi_{k_{t-r}}\bigl(y_r - F(U_{r+1})\bigr) \, dy_r \cdots \int_{A_0} \Phi_j\bigl(y_0 - F(U_1)\bigr) \, dy_0,$$
where
$$U_t = (x_0, x_1, \ldots, x_{k_1}), \qquad U_{t-s} = (y_{t-s}, y_{t-s+1}, \ldots, y_{t-1}, x_0, \ldots, x_{k_{s+1}-s}), \quad 1 \le s \le k_{s+1},$$
$$U_{t-s} = (y_{t-s}, y_{t-s+1}, \ldots, y_{t-s+k_{s+1}}), \quad k_{s+1} < s \le t-1.$$
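As a numerical sanity check on the structure of the one-step kernel (2.2) (not from the paper), the sketch below compares a Monte Carlo estimate of $P(Y_{t+1} \in A \times \{j\} \mid Y_t)$ with the closed-form expression, for a hypothetical specification: $r = 2$, $F$ the sum of its arguments, and all noise densities standard normal.

```python
import math
import random

random.seed(1)
P_ENV = {1: [0.7, 0.3], 2: [0.4, 0.6]}    # hypothetical p_ij

def F(lags):                               # hypothetical F: sum of the lags
    return sum(lags)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Current state Y_t = (x_hat, i) with r = 2.
x_hat, i = (1.0, 2.0, 3.0), 1
j, a, b = 1, 2.0, 4.0                      # target: X_{t+1} in [a, b] and Z_{t+1} = j

# Monte Carlo estimate of P(Y_{t+1} in ([a,b] x R x R) x {j} | Y_t = (x_hat, i)).
n, hits = 200_000, 0
for _ in range(n):
    z_next = 1 if random.random() <= P_ENV[i][0] else 2
    x_next = F(x_hat[: z_next + 1]) + random.gauss(0.0, 1.0)
    if z_next == j and a <= x_next <= b:
        hits += 1
empirical = hits / n

# Formula (2.2): p_ij times the integral over A_0 of Phi_j(y_0 - F(x_0,...,x_j));
# the indicator factors are 1 here because A_1 = A_2 = R.
m = F(x_hat[: j + 1])
exact = P_ENV[i][j - 1] * (normal_cdf(b - m) - normal_cdf(a - m))
```

The remaining coordinates of $Y_{t+1}$ are the deterministic shift $(x_0, x_1)$, which is why only the first coordinate and the environment state carry randomness in this check.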

3 Main results

This section gives the main results for the new model described in Section 2. We need the following conditions.

Assumption 1 $\{\varepsilon_t(1)\}, \ldots, \{\varepsilon_t(r)\}$ are mutually independent, and for each $i \in E$, $\varepsilon_{t+1}(i)$ is independent of $\{X_s, s \le t\}$. Moreover, for each $i$, $E(\varepsilon_t(i))$ is a constant independent of $t$ and $E[\varepsilon_t(i)]^2 < \infty$.

Assumption 2 $\{Z_t, t \ge 1\}$ is independent of the initial random variable $X_0$.

Assumption 3 For each $i \in E$, $\{Z_t, t \ge 1\}$ is independent of $\varepsilon_{t+1}(i)$.

Assumption 1 ensures the stationarity of $\{\varepsilon_t(i)\}$, $i \in E$. Assumptions 2 and 3 guarantee the Markov property of $\{Y_t\}$. These basic conditions ensure that the following lemmas can be applied properly throughout the paper.

Lemma 1 Suppose that Assumptions 1-3 hold. Then the sequence $\{Y_t\}$ is a time-homogeneous Markov chain defined on $(\Omega, \mathcal{F}, P)$ with state space $(R^{r+1} \times E, \mathcal{B}^{r+1} \times \mathcal{H})$.

The notions of irreducibility and aperiodicity in Lemma 2 are standard and can be found in Meyn and Tweedie [26] and Tong [27]. The two concepts are very useful for deriving the nonergodicity of the sequence $\{Y_t\}$. Before stating the result on the irreducibility and aperiodicity of $\{Y_t\}$, we need the following condition on the density function of $\varepsilon_t(i)$, $i \in E$.

Assumption 4 The density function $\Phi_i(\cdot)$ of $\varepsilon_t(i)$ is strictly positive everywhere, i.e., for each $i \in E$, $\Phi_i(x) > 0$ for all $x \in R$.

Lemma 2 Under Assumptions 1-4, the Markov chain $\{Y_t\}$ is $\mu_{r+1} \times \varphi$-irreducible and aperiodic, where $\varphi$ is a measure on $(E, \mathcal{H})$ and $\mu_{r+1}$ is the Lebesgue measure on $(R^{r+1}, \mathcal{B}^{r+1})$, satisfying $\mu_{r+1} \times \varphi(A \times B) > 0$ whenever $\mu_{r+1}(A) > 0$, for $A \in \mathcal{B}^{r+1}$, $B \in \mathcal{H}$.

Remark 1 Obviously, if the Markov chain $\{Y_t\}$ is $\varphi_1$-irreducible, then it is also $\varphi_2$-irreducible for any nontrivial, $\sigma$-finite measure $\varphi_2$ that is absolutely continuous with respect to $\varphi_1$. So we need a canonical irreducibility measure, one that describes the range of the chain more completely than the more or less arbitrary irreducibility measures one may construct initially. Fortunately, Sheng et al. [25] and Meyn and Tweedie [26] proved that if $\{Y_t\}$ is a $\mu_{r+1} \times \varphi$-irreducible Markov chain, then there exists a maximal irreducibility measure $Q$. In this paper, we work with the subsets whose maximal irreducibility measure is positive, and we denote $(\mathcal{B}^{r+1} \times \mathcal{H})^+ = \{A \in \mathcal{B}^{r+1} \times \mathcal{H} : Q(A) > 0\}$.

Our main results are as follows.

Theorem 1 Suppose that Assumptions 1-4 hold, and that there exist a non-negative measurable function $g$ on $(R^{r+1} \times E, \mathcal{B}^{r+1} \times \mathcal{H})$, a set $A \times M \in \mathcal{B}^{r+1} \times \mathcal{H}$, and a non-negative measurable function $h(\cdot)$ on $[0, \infty)$ satisfying
$$\max_{j \in E} \Bigl( \int_R h(|u|) \Phi_j(u) \, du \Bigr) < \infty,$$
such that:

(1) For all $x_0, x_1, \ldots, x_r, y_0 \in R$ and $i, j \in E$,
$$\bigl| g\bigl((x_0, x_1, \ldots, x_r), i\bigr) - g\bigl((y_0, x_1, \ldots, x_r), j\bigr) \bigr| \le h(|x_0 - y_0|);$$

(2) For all $(\hat{x}, i) \in (A \times M)^c \in (\mathcal{B}^{r+1} \times \mathcal{H})^+$,
$$g(\hat{x}, i) \ge \sup_{(\hat{y}, j) \in A \times M} g(\hat{y}, j); \qquad (3.1)$$

(3) For all $(\hat{x}, j) \in (A \times M)^c$,
$$g\bigl((F(x_0, x_1, \ldots, x_j), y_1, \ldots, y_r), j\bigr) \ge g(\hat{x}, j) + \max_{j \in E} \Bigl( \int_R h(|u|) \Phi_j(u) \, du \Bigr), \qquad (3.2)$$

and there exists $B \times I \subset (A \times M)^c$ with $B \times I \in (\mathcal{B}^{r+1} \times \mathcal{H})^+$ such that, for all $(\hat{x}, j) \in B \times I$,
$$g\bigl((F(x_0, x_1, \ldots, x_j), y_1, \ldots, y_r), j\bigr) > g(\hat{x}, j) + \max_{j \in E} \Bigl( \int_R h(|u|) \Phi_j(u) \, du \Bigr). \qquad (3.3)$$

Then the Markov chain $\{Y_t\}$ is nonergodic. Moreover, whatever the initial distribution of $\{X_t\}$ is, the distribution of $X_t$ never converges to any probability distribution.
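To illustrate the kind of behavior the theorem targets, here is a hypothetical unstable specification (the explosive choice of $F$, the noise, and all numerical values below are illustrative assumptions, not taken from the paper): with $F(\text{lags}) = 2 \cdot \text{lags}[0]$ the recursion is $X_{t+1} = 2 X_t + \varepsilon_{t+1}$ regardless of the realized delay, so $|X_t|$ blows up and the distribution of $X_t$ cannot settle down to any limit.

```python
import random

random.seed(2)

def run_path(steps=50, x0=1.0):
    # Explosive recursion X_{t+1} = 2 * X_t + N(0,1) noise: a hypothetical F
    # for which no limiting distribution can exist.
    x = x0
    for _ in range(steps):
        x = 2.0 * x + random.gauss(0.0, 1.0)
    return x

finals = [abs(run_path()) for _ in range(10)]
```

After 50 steps every path has magnitude on the order of $2^{50}$, which is the sample-path face of the nonconvergence statement in the theorem.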

Remark 2 Conditions (2) and (3) in Theorem 1 can be replaced by (4) or (5) below:

(4) For $A \times M, (A \times M)^c \in (\mathcal{B}^{r+1} \times \mathcal{H})^+$, (3.1) holds, and for all $(\hat{x}, i) \in (A \times M)^c$, (3.2) holds.

(5) For $A \times M \in (\mathcal{B}^{r+1} \times \mathcal{H})^+$ and all $(\hat{x}, i) \in R^{r+1} \times E$, (3.2) holds, and when $(\hat{x}, i) \in A \times M$, (3.3) holds.

That is, under conditions (1) and (4), or (1) and (5), we can also show that $\{Y_t\}$ is nonergodic; the proof is similar to that under conditions (1)-(3).

4 Proofs

Proof of Lemma 1 For all $\hat{x} = (x_0, x_1, \ldots, x_r) \in R^{r+1}$, $\hat{x}_s \in R^{r+1}$, and $i, i_s \in E$, where $s$ is an integer satisfying $0 \le s < t$, we have
$$\begin{aligned} & P\{Y_{t+1} \in A \times \{j\} \mid Y_t = (\hat{x}, i), Y_s = (\hat{x}_s, i_s), 0 \le s < t\} \\ &\quad = P\{(X_{t+1}, X_t, \ldots, X_{t+1-r}) \in A, Z_{t+1} = j \mid Y_t = (\hat{x}, i), Y_s = (\hat{x}_s, i_s), 0 \le s < t\} \\ &\quad = P\{F(X_t, X_{t-1}, \ldots, X_{t-Z_{t+1}}) + \varepsilon_{t+1}(Z_{t+1}) \in A_0, X_t \in A_1, X_{t-1} \in A_2, \ldots, X_{t+1-r} \in A_r, Z_{t+1} = j \\ &\qquad\qquad \mid (X_t, X_{t-1}, \ldots, X_{t-r}) = (x_0, x_1, \ldots, x_r), Z_t = i, Y_s = (\hat{x}_s, i_s), 0 \le s < t\} \\ &\quad = P\{F(x_0, x_1, \ldots, x_j) + \varepsilon_{t+1}(j) \in A_0, x_0 \in A_1, x_1 \in A_2, \ldots, x_{r-1} \in A_r, Z_{t+1} = j \\ &\qquad\qquad \mid (X_t, X_{t-1}, \ldots, X_{t-r}) = (x_0, x_1, \ldots, x_r), Z_t = i\} \\ &\quad = P\{F(x_0, x_1, \ldots, x_j) + \varepsilon_{t+1}(j) \in A_0, x_0 \in A_1, x_1 \in A_2, \ldots, x_{r-1} \in A_r\} \, P\{Z_{t+1} = j \mid Z_t = i\} \\ &\quad = p_{ij} \, P\{F(x_0, x_1, \ldots, x_j) + \varepsilon_{t+1}(j) \in A_0, x_0 \in A_1, x_1 \in A_2, \ldots, x_{r-1} \in A_r\}, \end{aligned}$$

where the last equation follows from the definition of model (2.1), Assumption 1 and the notation $p_{ij} = P\{Z_{t+1} = j \mid Z_t = i\}$.

On the other hand,
$$\begin{aligned} & P\{Y_{t+1} \in A \times \{j\} \mid Y_t = (\hat{x}, i)\} \\ &\quad = P\{(X_{t+1}, X_t, \ldots, X_{t+1-r}) \in A, Z_{t+1} = j \mid (X_t, X_{t-1}, \ldots, X_{t-r}) = (x_0, x_1, \ldots, x_r), Z_t = i\} \\ &\quad = P\{F(X_t, X_{t-1}, \ldots, X_{t-Z_{t+1}}) + \varepsilon_{t+1}(Z_{t+1}) \in A_0, X_t \in A_1, X_{t-1} \in A_2, \ldots, X_{t+1-r} \in A_r, Z_{t+1} = j \\ &\qquad\qquad \mid (X_t, X_{t-1}, \ldots, X_{t-r}) = (x_0, x_1, \ldots, x_r), Z_t = i\} \\ &\quad = P\{F(x_0, x_1, \ldots, x_j) + \varepsilon_{t+1}(j) \in A_0, x_0 \in A_1, x_1 \in A_2, \ldots, x_{r-1} \in A_r\} \, P\{Z_{t+1} = j \mid Z_t = i\} \\ &\quad = p_{ij} \, P\{F(x_0, x_1, \ldots, x_j) + \varepsilon_{t+1}(j) \in A_0, x_0 \in A_1, x_1 \in A_2, \ldots, x_{r-1} \in A_r\}. \end{aligned}$$

Hence the sequence $\{Y_t\}$ is a Markov chain, and its time-homogeneity follows from the stationarity of $\varepsilon_{t+1}(j)$, $j \in E$. This completes the proof. □

Proof of Lemma 2 Suppose that $A \times B \in \mathcal{B}^{r+1} \times \mathcal{H}$ and $\mu_{r+1} \times \varphi(A \times B) > 0$. Since $\{Z_t\}$ is irreducible and aperiodic, for all $i, j \in E$ there exists $s > 0$ such that
$$p_{ij}^{(t)} = P(Z_{t+s} = j \mid Z_s = i) > 0, \quad \forall t \ge s,$$
that is, there exist $k_1, k_2, \ldots, k_{t-1} \in E$ such that
$$p_{ik_1} p_{k_1 k_2} \cdots p_{k_{t-1} j} > 0.$$
Then, from the $t$-step transition probability of $\{Y_t\}$, for all $(\hat{x}, i) \in R^{r+1} \times E$ we have
$$P^{(t)}\bigl((\hat{x}, i), A \times \{j\}\bigr) > 0,$$

so $\{Y_t\}$ is $\mu_{r+1} \times \varphi$-irreducible, and the aperiodicity of $\{Y_t\}$ follows from Tong [27]. This completes the proof. □

For $x \in \Omega$ and $z \in [0, 1)$, let
$$\psi_g(x, z) = \frac{1}{1-z} \Bigl( z^{g(x)} - \int_\Omega P(x, dy) \, z^{g(y)} \Bigr).$$

To prepare for the proof of Theorem 1, we need the following propositions.

Lemma 3 [25]

Suppose $\{X_t\}$ is a $\varphi$-irreducible Markov chain on the state space $(\Omega, \mathcal{F})$. If there exist constants $N > 0$, $0 < C < 1$, a set $A$ and a nonnegative measurable function $g(x)$ satisfying $\int_\Omega P(x, dy) \, g(y) < +\infty$ for all $x \in \Omega$, such that

(1) $\psi_g(x, z) \ge -N$, $\forall x \in \Omega$, $\forall z \in [C, 1)$;

(2) one of the following (i), (ii), (iii) holds:

(i) $A^c \in \mathcal{F}^+$ and, for all $x \in A^c$,
$$g(x) \ge \sup_{y \in A} g(y), \qquad \int_\Omega P(x, dy) \, [g(y) - g(x)] \ge 0;$$
furthermore, there exists $B \subset A^c$, $B \in \mathcal{F}^+$, such that
$$\int_\Omega P(x, dy) \, [g(y) - g(x)] > 0, \quad \forall x \in B;$$

(ii) $A \in \mathcal{F}^+$, $A^c \in \mathcal{F}^+$ and, for all $x \in A^c$,
$$g(x) \ge \sup_{y \in A} g(y), \qquad \int_\Omega P(x, dy) \, [g(y) - g(x)] > 0;$$

(iii) $A \in \mathcal{F}^+$ and
$$\int_\Omega P(x, dy) \, [g(y) - g(x)] \ge 0, \quad \forall x \in \Omega, \qquad \int_\Omega P(x, dy) \, [g(y) - g(x)] > 0, \quad \forall x \in A.$$

Then $\{X_t\}$ is nonergodic.

Remark 3 Condition (1) in Lemma 3 is usually called the Kaplan condition; readers may consult Kaplan [24] for details. Sheng et al. [25] found a class of functions satisfying this condition: if there exists a constant $N > 0$ such that
$$\int_{\{y : g(y) < g(x)\}} P(x, dy) \, [g(y) - g(x)] \ge -N, \quad \forall x \in \Omega,$$

then $g(x)$ is a function of the desired kind.

Remark 4 The quantity $\int_\Omega P(x, dy) \, [g(y) - g(x)]$, which is used frequently in Lemma 3, is called the $g$-drift of the point $x$ and is often denoted by $\gamma_g(x)$. In addition, if
$$\int_\Omega P(x, dy) \, g(y) < \infty, \quad \forall x \in \Omega,$$
we have
$$\lim_{z \to 1^-} \psi_g(x, z) = -g(x) + \int_\Omega P(x, dy) \, g(y) = \gamma_g(x).$$

In fact, $\frac{d z^{g(y)}}{dz} = g(y) z^{g(y)-1}$, so when $z \in [\frac{1}{2}, 1)$ we have $\frac{d z^{g(y)}}{dz} \le 2 g(y)$, which justifies passing the limit inside the integral.
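The limit in Remark 4 can be checked numerically on a toy finite chain (the three-state transition matrix and the values of $g$ below are hypothetical choices for illustration): as $z \to 1^-$, $\psi_g(x, z)$ approaches the $g$-drift $\gamma_g(x) = \sum_y P(x, y) g(y) - g(x)$.

```python
# Hypothetical 3-state chain and test function g (not from the paper).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
g = [0.0, 1.0, 3.0]

def psi(x, z):
    # psi_g(x, z) = (z^g(x) - sum_y P(x, y) z^g(y)) / (1 - z)
    return (z ** g[x] - sum(P[x][y] * z ** g[y] for y in range(3))) / (1.0 - z)

def gamma(x):
    # g-drift: gamma_g(x) = sum_y P(x, y) g(y) - g(x)
    return sum(P[x][y] * g[y] for y in range(3)) - g[x]

approx = [psi(x, 0.999999) for x in range(3)]   # psi_g near z = 1
exact = [gamma(x) for x in range(3)]
```

For this chain the drifts are $0.9$, $0.5$ and $-1.0$, and $\psi_g(x, 0.999999)$ matches each to several decimal places.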

Proof of Theorem 1 By Lemma 1 and Lemma 2, $\{Y_t\}$ is a $\mu_{r+1} \times \varphi$-irreducible, aperiodic and time-homogeneous Markov chain with state space $(R^{r+1} \times E, \mathcal{B}^{r+1} \times \mathcal{H})$. So we only need to check the conditions in Lemma 3. For simplicity, let
$$C = \max_{j \in E} \Bigl( \int_R h(|u|) \Phi_j(u) \, du \Bigr), \qquad \hat{v}_u = \bigl(F(x_0, x_1, \ldots, x_j) + u, x_0, \ldots, x_{r-1}\bigr), \qquad \hat{v} = \bigl(F(x_0, x_1, \ldots, x_j), x_0, \ldots, x_{r-1}\bigr).$$
Step 1: We show that $\int_{R^{r+1} \times E} P((\hat{x}, i), d(\hat{y}, j)) \, g(\hat{y}, j) < \infty$. For $\hat{x} = (x_0, x_1, \ldots, x_r) \in R^{r+1}$, $\hat{y} = (y_0, y_1, \ldots, y_r) \in R^{r+1}$ and $i, j \in E$, by (2.2) we have
$$P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) = p_{ij} \prod_{q=0}^{r-1} \delta(y_{q+1} - x_q) \, \Phi_j\bigl(y_0 - F(x_0, \ldots, x_j)\bigr) \, dy_0,$$

where $\delta(\cdot)$ is the $\delta$-function, that is, $\delta(x - y) = 1$ if $x = y$ and zero otherwise.

So, by the integral transformation theorem, we get
$$\begin{aligned} \int_{R^{r+1} \times E} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \, g(\hat{y}, j) &= \sum_{j \in E} \int_{R^{r+1}} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \, g(\hat{y}, j) \\ &= \sum_{j \in E} p_{ij} \int_R \Phi_j\bigl(y_0 - F(x_0, x_1, \ldots, x_j)\bigr) \, g(y_0, x_0, \ldots, x_{r-1}, j) \, dy_0 \\ &= \sum_{j \in E} p_{ij} \int_R g(\hat{v}_u, j) \Phi_j(u) \, du \\ &\le \sum_{j \in E} p_{ij} \int_R h(|u|) \Phi_j(u) \, du + \sum_{j \in E} p_{ij} \, g(\hat{v}, i) \\ &\le C + g(\hat{v}, i) < \infty, \end{aligned}$$

where the second-to-last line follows from condition (1) of the theorem.

Step 2: We show that $\gamma_g(\hat{x}, i) \ge 0$ when $(\hat{x}, i) \in (A \times M)^c$, and $\gamma_g(\hat{x}, i) > 0$ when $(\hat{x}, i) \in B \times I$. In fact,
$$\begin{aligned} \gamma_g(\hat{x}, i) &= \int_{R^{r+1} \times E} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \, [g(\hat{y}, j) - g(\hat{x}, i)] \\ &= \sum_{j \in E} \int_{R^{r+1}} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \, [g(\hat{y}, j) - g(\hat{x}, i)] \\ &= \sum_{j \in E} p_{ij} \int_R \Phi_j\bigl(y_0 - F(x_0, x_1, \ldots, x_j)\bigr) \, [g(y_0, x_0, \ldots, x_{r-1}, j) - g(\hat{x}, i)] \, dy_0 \\ &= \sum_{j \in E} p_{ij} \Bigl( \int_R g(\hat{v}_u, j) \Phi_j(u) \, du - g(\hat{x}, i) \Bigr) \\ &\ge \sum_{j \in E} p_{ij} \Bigl( -\int_R h(|u|) \Phi_j(u) \, du + g(\hat{v}, i) - g(\hat{x}, i) \Bigr) \\ &\ge g(\hat{v}, i) - g(\hat{x}, i) - C, \end{aligned}$$

so by condition (3) we get the desired results.

Step 3: We show that $\psi_g((\hat{x}, i), z)$ has a uniform lower bound. Note that here $\hat{v}_u = (F(x_0, x_1, \ldots, x_j) + u, y_1, \ldots, y_r)$ and $\hat{v} = (F(x_0, x_1, \ldots, x_j), y_1, \ldots, y_r)$. We have
$$\begin{aligned} -\psi_g\bigl((\hat{x}, i), z\bigr) &= \frac{1}{1-z} \int_{R^{r+1} \times E} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \bigl[ z^{g(\hat{y}, j)} - z^{g(\hat{x}, i)} \bigr] \\ &= \frac{1}{1-z} \sum_{j \in E} \int_{R^{r+1}} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \bigl[ z^{g(\hat{y}, j)} - z^{g(\hat{x}, i)} \bigr] \\ &= \frac{1}{1-z} \sum_{j \in E} p_{ij} \Bigl( \int_R z^{g(\hat{v}_u, j)} \Phi_j(u) \, du - z^{g(\hat{x}, i)} \Bigr) \\ &= \frac{1}{1-z} \sum_{j \in E} p_{ij} \Bigl( \int_R z^{g(\hat{v}_u, j)} \bigl[ 1 - z^{g(\hat{v}, i) - g(\hat{v}_u, j)} \bigr] \Phi_j(u) \, du + z^{g(\hat{v}, i)} - z^{g(\hat{x}, i)} \Bigr), \end{aligned}$$
and
$$\frac{1}{1-z} \int_R z^{g(\hat{v}_u, j)} \bigl[ 1 - z^{g(\hat{v}, i) - g(\hat{v}_u, j)} \bigr] \Phi_j(u) \, du \le \frac{1}{1-z} \int_R \bigl[ 1 - z^{h(|u|)} \bigr] \Phi_j(u) \, du \le \int_R \bigl( 1 + h(|u|) \bigr) \Phi_j(u) \, du \le 1 + C < \infty,$$

where the last chain of inequalities uses condition (1) and the bound $\frac{1 - z^x}{1-z} \le 1 + x$, valid for $z \in [0, 1)$ and $x \ge 0$.

From conditions (2) and (3) of the theorem, when $(\hat{x}, i) \in (A \times M)^c$ we have $z^{g(\hat{v}, i)} - z^{g(\hat{x}, i)} \le 0$; when $(\hat{x}, i) \in A \times M$, if $g(\hat{v}, i) > g(\hat{x}, i)$, then $z^{g(\hat{v}, i)} - z^{g(\hat{x}, i)} \le 0$ for all $z \in [0, 1)$, while if $g(\hat{v}, i) \le g(\hat{x}, i)$, we have
$$\frac{z^{g(\hat{v}, i)} - z^{g(\hat{x}, i)}}{1-z} = \frac{z^{g(\hat{v}, i)} \bigl[ 1 - z^{g(\hat{x}, i) - g(\hat{v}, i)} \bigr]}{1-z} \le \frac{1 - z^{g(\hat{x}, i) - g(\hat{v}, i)}}{1-z} \le 1 + g(\hat{x}, i) - g(\hat{v}, i) \le 1 + 2 \inf_{(\hat{y}, j) \in (A \times M)^c} g(\hat{y}, j),$$

where the last inequality uses the fact that when $(\hat{x}, i) \in A \times M$, $g(\hat{x}, i) \le \inf_{(\hat{y}, j) \in (A \times M)^c} g(\hat{y}, j) < \infty$.

Therefore, for all $(\hat{x}, i) \in R^{r+1} \times E$ and $z \in [0, 1)$,
$$\frac{1}{1-z} \int_{R^{r+1} \times E} P\bigl((\hat{x}, i), d(\hat{y}, j)\bigr) \bigl[ z^{g(\hat{y}, j)} - z^{g(\hat{x}, i)} \bigr] \le \sum_{j \in E} p_{ij} \bigl( 2 + C + 2C' \bigr) \le r \bigl( 2 + C + 2C' \bigr),$$
where $C' = \inf_{(\hat{y}, j) \in (A \times M)^c} g(\hat{y}, j)$. In other words, for all $(\hat{x}, i) \in R^{r+1} \times E$ and $z \in [0, 1)$,
$$\psi_g\bigl((\hat{x}, i), z\bigr) \ge -r \bigl( 2 + C + 2C' \bigr),$$

so by Lemma 3 we conclude that $\{Y_t\}$ is nonergodic.

Next we prove that, whatever the initial distribution of $\{X_t\}$ is, the distribution of $X_t$ never converges to any probability distribution. We argue by contradiction. Suppose that there exists a probability distribution $\pi$ such that, for $A \in \mathcal{B}^{r+1}$,
$$\lim_{t \to \infty} \bigl\| P(X_t \in A \mid X_0 = \hat{x}) - \pi(A) \bigr\|_\tau = 0, \qquad (4.1)$$
where $\| \cdot \|_\tau$ is the total variation norm. As a matter of fact, owing to the equivalence of norms, any norm can be used here. Since
$$\begin{aligned} P(X_t \in A \mid X_0 = \hat{x}) &= \sum_{j \in E} P(X_t \in A, Z_t = j \mid X_0 = \hat{x}) \\ &= \sum_{j \in E} \sum_{i \in E} P(X_t \in A, Z_t = j \mid X_0 = \hat{x}, Z_0 = i) \, P(Z_0 = i \mid X_0 = \hat{x}) \\ &= \sum_{i \in E} P\bigl(Y_t \in A \times E \mid Y_0 = (\hat{x}, i)\bigr) \, P(Z_0 = i \mid X_0 = \hat{x}) \\ &= \sum_{i \in E} P^{(t)}\bigl((\hat{x}, i), A \times E\bigr) \, P(Z_0 = i \mid X_0 = \hat{x}), \end{aligned}$$
we define $\pi(A \times E) = \pi(A)$; then
$$\pi(A \times E) = \sum_{j \in E} \sum_{i \in E} \pi(A \times \{j\}) \, P(Z_0 = i \mid X_0 = \hat{x}),$$
and therefore
$$\lim_{t \to \infty} \bigl\| P^{(t)}\bigl((\hat{x}, i), A \times E\bigr) - \pi(A \times E) \bigr\|_\tau = 0,$$

but this contradicts the nonergodicity of the Markov chain $\{Y_t\}$. So there is no probability distribution $\pi$ such that (4.1) holds. This completes the proof. □

Authors’ information

School of Science, Jiangxi University of Science and Technology, No. 86, Hongqi Ave., Ganzhou, 341000, Jiangxi, P.R. China.

Declarations

Acknowledgements

The authors would like to thank the editor and anonymous referees for their valuable suggestions, which greatly improved our paper. This research is supported by the NSF of Jiangxi Province (No. 20132BAB211005), the SF of Jiangxi Provincial Education Department (No. GJJ12356), Key Scientific and Technological Research Project of Department of Education of Henan Province (No. 12B110006), and Foundation of Jiangxi University of Science and Technology (No. jxxj12064).


References

  1. Jeantheau T: Strong consistency of estimators for multivariate ARCH models. Econom. Theory 1998, 14: 70-86.
  2. Tjøstheim D: Estimation in nonlinear time series models. Stoch. Process. Appl. 1986, 21: 251-273.
  3. Fan J, Yao Q: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer Series in Statistics. Springer, New York; 2003.
  4. Hafner CM, Preminger A: On asymptotic theory for multivariate GARCH models. J. Multivar. Anal. 2009, 100: 2044-2054.
  5. Linton O, Sancetta A: Consistent estimation of a general nonparametric regression function in time series. J. Econom. 2009, 152: 70-78.
  6. Fernandes M, Grammig J: A family of autoregressive conditional duration models. J. Econom. 2006, 130: 1-23.
  7. David PA: Path-dependence: putting the past into the future of economics. Institute for Mathematical Studies in the Social Sciences, Stanford University; 1988.
  8. Margolin G, Barkai E: Nonergodicity of blinking nanocrystals and other Lévy-walk processes. Phys. Rev. Lett. 2005, 4: 1-4.
  9. Basawa IV, Koul HL: Asymptotically minimax tests of composite hypotheses for nonergodic type processes. Stoch. Process. Appl. 1983, 14: 41-54.
  10. Basawa IV, Brockwell PJ: Asymptotic conditional inference for regular nonergodic models with an application to autoregressive processes. Ann. Stat. 1984, 12: 161-171.
  11. Basawa IV, Scott DJ: Asymptotic Optimal Inference for Nonergodic Models. Lecture Notes in Statistics 17. Springer, New York; 1983.
  12. Feigin PD: Conditional exponential families and a representation theorem for asymptotic inference. Ann. Stat. 1981, 9: 597-603.
  13. Budhiraja A, Ocone D: Exponential stability in discrete-time filtering for non-ergodic signals. Stoch. Process. Appl. 1999, 82: 245-257.
  14. Durlauf ST: Nonergodic economic growth. Rev. Econ. Stud. 1993, 60: 349-366.
  15. Goodman JB, Massey WA: The nonergodic Jackson network. J. Appl. Probab. 1984, 21: 860-869.
  16. Griffeath D: Limit theorems for nonergodic set-valued Markov processes. Ann. Probab. 1978, 6: 379-387.
  17. Jacod J: Parametric inference for discretely observed nonergodic diffusions. Bernoulli 2006, 12: 383-401.
  18. Cline DBH: Regular variation of order 1 nonlinear AR-ARCH models. Stoch. Process. Appl. 2008, 117: 840-861.
  19. Tweedie RL: Sufficient conditions for regularity, recurrence and ergodicity of Markov processes. Math. Proc. Camb. Philos. Soc. 1975, 78: 125-136.
  20. Tweedie RL: Criteria for ergodicity, exponential ergodicity and strong ergodicity of Markov processes. J. Appl. Probab. 1981, 18: 122-130.
  21. Tweedie RL: Drift conditions and invariant measures for Markov chains. Stoch. Process. Appl. 2001, 92: 345-354.
  22. Choi BD, Kim B: Non-ergodicity criteria for denumerable continuous time Markov processes. Oper. Res. Lett. 2004, 32: 574-580.
  23. Kim B, Lee I: Tests for nonergodicity of denumerable continuous time Markov processes. Comput. Math. Appl. 2008, 55: 1310-1321.
  24. Kaplan M: A sufficient condition for nonergodicity of a Markov chain. IEEE Trans. Inf. Theory 1979, 25: 470-471.
  25. Sheng ZH, Wang T, Liu DL: Stability Analysis of Nonlinear Time Series Models - Ergodic Theory and Applications. Science Press, Beijing; 1993.
  26. Meyn SP, Tweedie RL: Markov Chains and Stochastic Stability. Springer, New York; 1993.
  27. Tong H: Nonlinear Time Series: A Dynamical System Approach. Oxford University Press, Oxford; 1990.

Copyright

© Tang and Wang; licensee Springer 2013

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
