
Theory and Modern Applications

On the oscillation of solutions for a class of second-order nonlinear stochastic difference equations

Abstract

In this paper, we investigate the asymptotic behavior of a partial-sum sequence of independent random variables and derive a result of law of the iterated logarithm type. It is worth pointing out that the partial-sum sequence need not have the independent increment property. As an application of the theory established, we also give a sufficient criterion for the almost sure oscillation of solutions of a class of second-order stochastic difference equations of neutral type.

1 Introduction

To date, the asymptotic behavior of solutions of deterministic difference equations has been discussed in many papers, among them many devoted to oscillation. In a related direction, the asymptotic behavior of solutions of stochastic difference equations has also been studied extensively, with fruitful results. However, little is known about the oscillation of solutions of stochastic difference equations. Recently, Appleby and Rodkina [1] and Appleby et al. [2] first investigated the oscillation of solutions of first-order nonlinear stochastic difference equations. In [1], the authors considered the following equation:

\[ X(n+1) = X(n) f(X(n)) + \sigma(n)\xi(n+1), \quad n = 0, 1, \ldots \tag{1.1} \]

The solution of (1.1) can be expressed as

\[ X(n,\omega) = X_0 \prod_{i=0}^{n-1} f(X(i,\omega)) + \sum_{i=0}^{n-1} \sigma(i)\xi(i+1,\omega), \tag{1.2} \]

where $(\xi(n))_{n \geq 0}$ is a sequence of independent and identically distributed random variables. Note that the sequence $S_n := \sum_{i=0}^{n} \sigma(i)\xi(i+1)$ ($n = 0, 1, \ldots$) has the independent increment property; as a result, the authors could analyze the limit behavior of system (1.1) via the law of the iterated logarithm, obtaining an elegant result: under suitable sufficient conditions, the solution of (1.1) oscillates almost surely. Motivated by [1], in this paper we investigate the oscillation of solutions of the following second-order nonlinear stochastic difference equation:

\[ \Delta\bigl(r(k)\Delta X(k)\bigr) + f(k)F(X(k)) = \xi(k+2), \quad k = 0, 1, \ldots \tag{1.3} \]

Here $\Delta X(k) = X(k+1) - X(k)$ is the forward difference operator. This equation can be viewed as a stochastic analog of the following classical deterministic difference equations:

\[ \Delta\bigl(r(k)\Delta X(k)\bigr) + f(k)F(X(k)) = 0 \tag{1.4} \]

or

\[ \Delta\bigl(r(k)\Delta X(k)\bigr) + f(k)F(X(k)) = g(k). \tag{1.5} \]

The solution of (1.3) can be expressed as

\[ X(n+1) = X(1) + \sum_{k=1}^{n} \frac{V(k)}{r(k)} + \sum_{k=1}^{n} \Bigl\{ \sum_{i=k}^{n} \frac{1}{r(i)} \Bigr\} \xi(k+1), \tag{1.6} \]

where $V(k)$ is determined by $\Delta V(k) = -f(k)F(X(k))$ and $V(0) = r(0)(X(1) - X(0))$. The proof of (1.6) is given in Section 4. Writing $S_n = \sum_{k=1}^{n} \{\sum_{i=k}^{n} \frac{1}{r(i)}\}\xi(k+1)$ for any $n \in \mathbb{N}_1$, one finds $S_n - S_{n-1} = \frac{1}{r(n)}\sum_{i=1}^{n}\xi(i+1)$. It is obvious that $S_n - S_{n-1}$ is not independent of $S_{n-1}$, even though $(\xi(k))$ is a sequence of independent random variables. That is to say, $(S_n)_{n \geq 1}$ does not have the independent increment property, so we cannot directly apply the law of the iterated logarithm to $S_n$ to obtain its limit behavior. However, under some restrictions we can analyze the limit behavior of $(S_n)_{n \geq 1}$ in a roundabout way via the law of the iterated logarithm, and we then give sufficient conditions for the almost sure oscillation of (1.3). These results and their proofs are deferred to the following sections.
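Representation (1.6) can be checked numerically against direct iteration of (1.3). The sketch below is illustrative only: the choices of `r`, `f`, `F` and the Gaussian noise are hypothetical, picked merely to satisfy the standing assumptions of Section 2.

```python
import math
import random

# Hypothetical data: r > 0, f >= 0, u*F(u) > 0 for u != 0, F(0) = 0.
r = lambda k: k + 1.0
f = lambda k: 0.5
F = math.tanh

random.seed(1)
n = 20
xi = [random.gauss(0.0, 1.0) for _ in range(n + 2)]   # xi[2], ..., xi[n+1] are used

# Direct iteration of (1.3): r(k+1)*DX(k+1) = r(k)*DX(k) - f(k)*F(X(k)) + xi(k+2).
X = [0.3, 0.7]                                        # initial values X(0), X(1)
for k in range(n):
    rdx = r(k) * (X[k + 1] - X[k])                    # r(k) * Delta X(k)
    rdx_next = rdx - f(k) * F(X[k]) + xi[k + 2]
    X.append(X[k + 1] + rdx_next / r(k + 1))

# Representation (1.6): V(0) = r(0)*(X(1)-X(0)), Delta V(k) = -f(k)*F(X(k)).
V = [r(0) * (X[1] - X[0])]
for k in range(n):
    V.append(V[k] - f(k) * F(X[k]))
rhs = (X[1]
       + sum(V[k] / r(k) for k in range(1, n + 1))
       + sum(sum(1.0 / r(i) for i in range(k, n + 1)) * xi[k + 1]
             for k in range(1, n + 1)))
print(abs(X[n + 1] - rhs))     # agrees up to floating-point rounding
```

Note that (1.6) is a representation rather than an explicit formula: $V(k)$ itself depends on the solution path through $F(X(k))$.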

2 Definitions and assumptions

Throughout this paper, the following notation, definitions, and assumptions are needed. $\mathbb{N}$ and $\mathbb{R}$ denote, respectively, the positive integers and the real numbers. Let $\mathbb{N}_a = \{a, a+1, \ldots\}$ for every $a \in \mathbb{N} \cup \{0\}$. $(\Omega, \mathcal{F}, P)$ denotes a complete probability space. $\{\xi(n)\}_{n \in \mathbb{N}_1}$ is a sequence of random variables defined on $(\Omega, \mathcal{F}, P)$. We suppose that the filtration $(\mathcal{F}_n)_{n \in \mathbb{N}}$ is naturally generated, namely $\mathcal{F}_n = \sigma\{\xi(1), \xi(2), \ldots, \xi(n)\}$. We use the standard abbreviations 'a.s.' and 'i.i.d.' for 'almost surely' and 'independent and identically distributed', respectively. For simplicity, we write $\log_2 := \log\log$ throughout this paper.

For (1.3), the following elementary assumptions are needed.

  • (A2.1) $r(n) > 0$ for every $n \in \mathbb{N}_0$;

  • (A2.2) $f(n) \geq 0$ for every $n \in \mathbb{N}_0$;

  • (A2.3) $F$ is assumed to be Borel measurable and to obey $uF(u) > 0$ for $u \neq 0$, and $F(0) = 0$;

  • (A2.4) $\{\xi(n)\}_{n \in \mathbb{N}_2}$ is assumed to be an i.i.d. sequence of random variables defined on $(\Omega, \mathcal{F}, P)$ with, moreover, $E\xi(n) = 0$ and $E\xi^2(n) = 1$.

Definition 2.1 $\{X(n)\}_{n \in \mathbb{N}_0}$ is called a solution of (1.3) with initial values $X(0)$, $X(1)$ if it consists of $X(0)$, $X(1)$ and the values $X(n,\omega)$ obtained from the initial values $X(0)$, $X(1)$ by $n-1$ iteration steps of (1.3).

Definition 2.2 The solution $\{X(n)\}_{n \in \mathbb{N}_0}$ of (1.3) is said to be a.s. oscillatory if

\[ P\{X(n) < 0 \text{ i.o.}\} = 1, \qquad P\{X(n) > 0 \text{ i.o.}\} = 1, \]

where ‘i.o.’ stands for infinitely often.

Definition 2.3 Equation (1.3) is said to be a.s. oscillatory if every solution of it is a.s. oscillatory.

3 Law of the iterated logarithm

The classical Kolmogorov law of the iterated logarithm is an effective tool for studying the limit behavior of partial sums of a sequence of independent random variables (see [3]). In 1973, Chow and Teicher [4] generalized the classical result and obtained the following law of the iterated logarithm for weighted averages.

Theorem 3.1 (Iterated logarithm laws of weighted averages)

If $\{X_n, n \geq 1\}$ are i.i.d. random variables with $EX_n = 0$, $EX_n^2 = 1$, and $\{a_n, n \geq 1\}$ are real constants satisfying

  (i) $a_n^2 \big/ \sum_{1}^{n} a_j^2 \leq C/n$, $n \geq 1$,

  (ii) $\sum_{1}^{n} a_j^2 \to \infty$

for some $C$ in $(0, \infty)$, then

\[ P\Bigl\{ \varlimsup_{n\to\infty} \frac{\sum_{j=1}^{n} a_j X_j}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} = 1 \Bigr\} = 1 \]

and

\[ P\Bigl\{ \varliminf_{n\to\infty} \frac{\sum_{j=1}^{n} a_j X_j}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} = -1 \Bigr\} = 1. \]

Regarding the above result, notice that $\sum_{1}^{n} a_j X_j - \sum_{1}^{n-1} a_j X_j = a_n X_n$ is independent of $\sum_{1}^{n-1} a_j X_j$.

Now we establish a new result of law of the iterated logarithm type. Suppose that $r: \mathbb{N}_1 \to \mathbb{R}$ satisfies $r(n) > 0$ for every $n \in \mathbb{N}_1$, and $\{\xi(n)\}_{n \in \mathbb{N}_1}$ is an i.i.d. sequence of random variables defined on $(\Omega, \mathcal{F}, P)$ with $E\xi(n) = 0$, $E\xi^2(n) = 1$. For $n \in \mathbb{N}_1$, set

\[ a_n = \sum_{j=1}^{n} \frac{1}{r(j)}, \qquad S_n = \sum_{k=1}^{n} \Bigl\{ \sum_{j=k}^{n} \frac{1}{r(j)} \Bigr\} \xi(k+1), \qquad S_n^{(1)} = a_n \sum_{j=1}^{n} \xi(j+1), \qquad S_n^{(2)} = \sum_{j=1}^{n} a_j \xi(j+2). \tag{3.1} \]

Here we adopt the convention $\sum_{j=k_2}^{k_1}(\cdot) = 0$ if $k_1 < k_2$ (in particular $a_0 = 0$). It is obvious that $S_n - S_{n-1}$ is not independent of $S_{n-1}$, but we have

\[ \begin{aligned} S_n &= \sum_{k=1}^{n} \Bigl\{ \sum_{j=k}^{n} \frac{1}{r(j)} \Bigr\} \xi(k+1) = \sum_{k=1}^{n} (a_n - a_{k-1})\xi(k+1) \\ &= a_n \sum_{k=1}^{n} \xi(k+1) - \sum_{k=2}^{n} a_{k-1}\xi(k+1) = a_n \sum_{k=1}^{n} \xi(k+1) - \sum_{k=1}^{n-1} a_k \xi(k+2) \\ &= S_n^{(1)} - S_{n-1}^{(2)}. \end{aligned} \tag{3.2} \]
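Identity (3.2) is easy to confirm numerically. The following sketch uses an arbitrary (hypothetical) positive sequence $r$ and standard normal noise, and checks $S_n = S_n^{(1)} - S_{n-1}^{(2)}$:

```python
import random

random.seed(2)
n = 40
inv_r = [0.0] + [1.0 / (j + 0.5) for j in range(1, n + 1)]   # 1/r(j), j = 1..n
xi = [random.gauss(0.0, 1.0) for _ in range(n + 3)]          # xi[2..n+1] are used

# a_j = sum_{i<=j} 1/r(i), with a_0 = 0 by the empty-sum convention
a = [0.0] * (n + 1)
for j in range(1, n + 1):
    a[j] = a[j - 1] + inv_r[j]

# S_n from its definition in (3.1)
S_n = sum(sum(inv_r[j] for j in range(k, n + 1)) * xi[k + 1]
          for k in range(1, n + 1))

S1_n = a[n] * sum(xi[j + 1] for j in range(1, n + 1))        # S_n^{(1)}
S2_nm1 = sum(a[j] * xi[j + 2] for j in range(1, n))          # S_{n-1}^{(2)}

print(abs(S_n - (S1_n - S2_nm1)))                            # zero up to rounding
```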

Note that $\{a_n\}$ is a monotonically increasing sequence; hence we make the following hypothesis:

(C.1) There exist constants $\alpha \geq 0$ and $d > 0$ such that $\lim_{n\to\infty} a_n / n^{\alpha} = d$.

Lemma 3.2 If (C.1) holds, then

\[ \lim_{n\to\infty} \frac{n a_n^2}{\sum_{j=1}^{n} a_j^2} = 1 + 2\alpha, \tag{3.3} \]
\[ \lim_{n\to\infty} \frac{\log_2 \sum_{1}^{n} a_j^2}{\log_2 n} = 1, \tag{3.4} \]
\[ \lim_{n\to\infty} \frac{\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2}{\sum_{1}^{n-1} a_j^2 \log_2 \sum_{1}^{n-1} a_j^2} = 1. \tag{3.5} \]

Proof Set $O_n = a_n / n^{\alpha} - d$; for every $n \geq 1$ we have

\[ a_n = (O_n + d) n^{\alpha}. \tag{3.6} \]

In view of (C.1), for any fixed positive integer $m$ there exists $N > 0$ such that $|O_n| \leq d/m$ for every $n > N$. For sufficiently large $n$ we have

\[ \sum_{j=1}^{N} a_j^2 + \Bigl(\frac{m-1}{m}\Bigr)^2 d^2 \sum_{j=N+1}^{n} j^{2\alpha} \leq \sum_{j=1}^{n} a_j^2 \leq \sum_{j=1}^{N} a_j^2 + \Bigl(\frac{m+1}{m}\Bigr)^2 d^2 \sum_{j=N+1}^{n} j^{2\alpha}. \tag{3.7} \]

It is clear that

\[ \sum_{j=1}^{n} j^{2\alpha} \Big/ n^{2\alpha+1} \to \frac{1}{2\alpha+1}, \quad n \to \infty. \]

Hence

\[ \sum_{j=N+1}^{n} j^{2\alpha} \Big/ n^{2\alpha+1} \to \frac{1}{2\alpha+1}, \quad n \to \infty. \tag{3.8} \]

In view of (3.6) and (3.7), for sufficiently large $n$ we get

\[ \frac{n^{2\alpha+1} (O_n + d)^2}{\sum_{j=1}^{N} a_j^2 + (\frac{m+1}{m})^2 d^2 \sum_{j=N+1}^{n} j^{2\alpha}} \leq \frac{n a_n^2}{\sum_{j=1}^{n} a_j^2} \leq \frac{n^{2\alpha+1} (O_n + d)^2}{\sum_{j=1}^{N} a_j^2 + (\frac{m-1}{m})^2 d^2 \sum_{j=N+1}^{n} j^{2\alpha}}. \tag{3.9} \]

Letting $n \to \infty$ in formula (3.9) above, and combining (3.8) and (C.1), we have

\[ \frac{2\alpha+1}{(\frac{m+1}{m})^2} \leq \varliminf_{n\to\infty} \frac{n a_n^2}{\sum_{j=1}^{n} a_j^2} \leq \varlimsup_{n\to\infty} \frac{n a_n^2}{\sum_{j=1}^{n} a_j^2} \leq \frac{2\alpha+1}{(\frac{m-1}{m})^2}. \tag{3.10} \]

Letting $m \to \infty$ in the above inequalities, we obtain (3.3).

Let $A = (1 + 2\alpha)^{-1}$ and $P_n = \frac{\sum_{j=1}^{n} a_j^2}{n a_n^2} - A$, $n \in \mathbb{N}_1$. According to (3.3), we have $P_n \to 0$ ($n \to \infty$) and $\sum_{j=1}^{n} a_j^2 = (A + P_n) n a_n^2$ for every $n \in \mathbb{N}_1$. Therefore

\[ \begin{aligned} \frac{\log_2 \sum_{j=1}^{n} a_j^2}{\log_2 n} &= \frac{\log_2 \bigl[(A + P_n) n a_n^2\bigr]}{\log_2 n} = \frac{\log\bigl(\log n + \log a_n^2 + \log(A + P_n)\bigr)}{\log_2 n} \\ &= \frac{\log\Bigl\{\Bigl(1 + 2\alpha + \frac{\log(d + O_n)^2 + \log(A + P_n)}{\log n}\Bigr)\log n\Bigr\}}{\log_2 n} \\ &= \frac{\log_2 n + \log\Bigl(1 + 2\alpha + \frac{\log(d + O_n)^2 + \log(A + P_n)}{\log n}\Bigr)}{\log_2 n} \\ &= 1 + \frac{\log\Bigl(1 + 2\alpha + \frac{\log(d + O_n)^2 + \log(A + P_n)}{\log n}\Bigr)}{\log_2 n}. \end{aligned} \]

Taking the limit on both sides of the above equation, result (3.4) follows.

Equation (3.5) can be proved similarly. □
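Limit (3.3) can be sanity-checked numerically for concrete weight sequences: $r(j) \equiv 1$ gives $a_n = n$ ($\alpha = 1$, $d = 1$, limit $3$), while $1/r(j) = j^{-1/2}$ gives $a_n \sim 2\sqrt{n}$ ($\alpha = 1/2$, $d = 2$, limit $2$). A rough convergence check (illustrative only; convergence is slow in the second case):

```python
def ratio(inv_r, n):
    """Compute n * a_n^2 / sum_{j<=n} a_j^2 for a_n = sum_{j<=n} inv_r(j)."""
    a, sum_a2 = 0.0, 0.0
    for j in range(1, n + 1):
        a += inv_r(j)
        sum_a2 += a * a
    return n * a * a / sum_a2

# a_n = n: alpha = 1, so the limit in (3.3) is 1 + 2*1 = 3
print(ratio(lambda j: 1.0, 100000))          # close to 3
# a_n ~ 2*sqrt(n): alpha = 1/2, so the limit is 1 + 2*(1/2) = 2
print(ratio(lambda j: j ** -0.5, 100000))    # close to 2
```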

Lemma 3.3 (Law of the iterated logarithm on S n defined by (3.1))

If (C.1) holds, then

\[ P\Bigl\{ (1 + 2\alpha)^{1/2} - 1 \leq \varlimsup_{n\to\infty} \frac{S_n}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} \leq (1 + 2\alpha)^{1/2} + 1 \Bigr\} = 1 \tag{3.11} \]

and

\[ P\Bigl\{ -(1 + 2\alpha)^{1/2} - 1 \leq \varliminf_{n\to\infty} \frac{S_n}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} \leq -(1 + 2\alpha)^{1/2} + 1 \Bigr\} = 1. \tag{3.12} \]

Here $\alpha$ is as in (C.1).

Proof According to (3.1), $\xi(2), \ldots, \xi(j+1), \ldots$ is an i.i.d. sequence of random variables with $E\xi(j+1) = 0$, $E\xi^2(j+1) = 1$. Setting $a_n = 1$, $X_n = \xi(n+1)$ for any $n \in \mathbb{N}_1$, it is obvious that $\sum_{j=1}^{n} \xi(j+1) = \sum_{j=1}^{n} a_j X_j$ satisfies the conditions of Theorem 3.1. In $S_n^{(2)}$, letting $X_j = \xi(j+2)$, $j \in \mathbb{N}_1$, it is clear that $\sum_{j=1}^{n} a_j^2 \to \infty$ ($n \to \infty$) by (C.1), and by (3.3) there is $c > 0$ such that $a_n^2 \leq \frac{c}{n}\sum_{j=1}^{n} a_j^2$. Hence $S_n^{(2)} = \sum_{j=1}^{n} a_j \xi(j+2) = \sum_{j=1}^{n} a_j X_j$ also satisfies the conditions of Theorem 3.1. Therefore

\[ P\Bigl\{ \varlimsup_{n\to\infty} \frac{\sum_{j=1}^{n} \xi(j+1)}{(2n\log_2 n)^{1/2}} = 1 \Bigr\} = 1, \tag{3.13} \]
\[ P\Bigl\{ \varliminf_{n\to\infty} \frac{\sum_{j=1}^{n} \xi(j+1)}{(2n\log_2 n)^{1/2}} = -1 \Bigr\} = 1, \tag{3.14} \]
\[ P\Bigl\{ \varlimsup_{n\to\infty} \frac{S_n^{(2)}}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} = 1 \Bigr\} = 1, \tag{3.15} \]
\[ P\Bigl\{ \varliminf_{n\to\infty} \frac{S_n^{(2)}}{(2\sum_{1}^{n} a_j^2 \log_2 \sum_{1}^{n} a_j^2)^{1/2}} = -1 \Bigr\} = 1. \tag{3.16} \]

So there is $\Omega_0 \subset \Omega$ with $P(\Omega_0^c) = 0$ such that all the equalities inside the braces on the left-hand sides of (3.13)-(3.16) hold on $\Omega_0$. Therefore, by (3.13), for $\omega \in \Omega_0$ there exists a subsequence $\{n_k(\omega)\} \subset \{n\}$ such that

\[ \lim_{k\to\infty} \frac{\sum_{j=1}^{n_k(\omega)} \xi(j+1,\omega)}{(2 n_k(\omega) \log_2 n_k(\omega))^{1/2}} = 1. \tag{3.17} \]

By (3.15) and (3.16), it is clear that

\[ \Bigl\{ \frac{S_{n_k(\omega)-1}^{(2)}(\omega)}{(2\sum_{1}^{n_k(\omega)-1} a_j^2 \log_2 \sum_{1}^{n_k(\omega)-1} a_j^2)^{1/2}} \Bigr\} \]

is a bounded sequence. Therefore there are a subsequence $\{n_{k_l}(\omega)\} \subset \{n_k(\omega)\}$ and $\beta \in [-1, 1]$ such that

\[ \lim_{l\to\infty} \frac{S_{n_{k_l}-1}^{(2)}(\omega)}{(2\sum_{1}^{n_{k_l}-1} a_j^2 \log_2 \sum_{1}^{n_{k_l}-1} a_j^2)^{1/2}} = \beta. \tag{3.18} \]

By (3.2)-(3.5) and (3.17)-(3.18), we get

\[ \begin{aligned} \lim_{l\to\infty} \frac{S_{n_{k_l}}(\omega)}{(2\sum_{1}^{n_{k_l}} a_j^2 \log_2 \sum_{1}^{n_{k_l}} a_j^2)^{1/2}} &= \lim_{l\to\infty} \Bigl\{ \frac{a_{n_{k_l}} \sum_{1}^{n_{k_l}} \xi(j+1)}{(2\sum_{1}^{n_{k_l}} a_j^2 \log_2 \sum_{1}^{n_{k_l}} a_j^2)^{1/2}} - \frac{\sum_{1}^{n_{k_l}-1} a_j \xi(j+2)}{(2\sum_{1}^{n_{k_l}} a_j^2 \log_2 \sum_{1}^{n_{k_l}} a_j^2)^{1/2}} \Bigr\} \\ &= \lim_{l\to\infty} \Bigl\{ (1 + 2\alpha)^{1/2} \frac{\sum_{1}^{n_{k_l}} \xi(j+1)}{(2 n_{k_l} \log_2 n_{k_l})^{1/2}} - \frac{\sum_{1}^{n_{k_l}-1} a_j \xi(j+2)}{(2\sum_{1}^{n_{k_l}-1} a_j^2 \log_2 \sum_{1}^{n_{k_l}-1} a_j^2)^{1/2}} \Bigr\} \\ &= (1 + 2\alpha)^{1/2} - \beta. \end{aligned} \]

Hence (3.11) holds.

Equation (3.12) can be proved similarly. □

To proceed with the study, we give another assumption:

(C.2) There exist $\alpha, d \in (0, \infty)$ such that $\lim_{n\to\infty} a_n / n^{\alpha} = d$.

Note that, unlike (C.1), condition (C.2) requires $\alpha \neq 0$. It is obvious that the conclusions of Lemma 3.2 and Lemma 3.3 remain valid when (C.2) replaces (C.1).

4 The main results

In this section, we give the main results on the oscillation of the solution of (1.3).

Let $\{X(k)\}_{k \in \mathbb{N}_0}$ be any solution of (1.3) with arbitrary initial values $X(0), X(1) \in \mathbb{R}$. Set

\[ V(0) = r(0)\bigl(X(1) - X(0)\bigr), \qquad \Delta V(k) = -f(k)F(X(k)), \quad k \in \mathbb{N}_0, \]

and

\[ D(k) = \sum_{j=0}^{k} \xi(j+2), \quad k \in \mathbb{N}_0. \]

By (1.3), one obtains

\[ \Delta\bigl(r(k)\Delta X(k)\bigr) = \Delta V(k) + \Delta D(k-1) = \Delta\bigl(V(k) + D(k-1)\bigr), \quad k \in \mathbb{N}_0. \]

Hence

\[ r(k)\Delta X(k) = V(k) + D(k-1) + c. \]

Let $k = 0$ in the above equation (with $D(-1) = 0$ by the empty-sum convention), and one has

\[ c = r(0)\bigl(X(1) - X(0)\bigr) - V(0) = 0. \]

Therefore

\[ \Delta X(k) = \frac{V(k)}{r(k)} + \frac{1}{r(k)} D(k-1). \]

So for any $n \in \mathbb{N}_1$, one has

\[ X(n+1) - \Bigl\{ X(1) + \sum_{k=1}^{n} \frac{V(k)}{r(k)} \Bigr\} = \sum_{k=1}^{n} \frac{1}{r(k)} D(k-1) = \sum_{k=1}^{n} \frac{1}{r(k)} \sum_{j=0}^{k-1} \xi(j+2) = \sum_{k=1}^{n} \Bigl\{ \sum_{j=k}^{n} \frac{1}{r(j)} \Bigr\} \xi(k+1) = S_n. \tag{4.1} \]

Theorem 4.1 Suppose that (1.3) satisfies (A2.1)-(A2.4). Then, under condition (C.2), equation (1.3) is almost surely oscillatory.

Proof Suppose the result is false. Then (1.3) has a solution, denoted $\{X(n)\}_{n \in \mathbb{N}_0}$, that is not almost surely oscillatory; that is, at least one of $P\{X(n) < 0 \text{ i.o.}\} = 1$ and $P\{X(n) > 0 \text{ i.o.}\} = 1$ fails.

1. First, assume that $P\{X(n) < 0 \text{ i.o.}\} < 1$. In this case, there is $\Omega_1 \subset \Omega$ with $P(\Omega_1) > 0$ such that

\[ X(n,\omega) \geq 0, \quad \omega \in \Omega_1, \tag{4.2} \]

for all $n \geq N(\omega)$, where $N(\omega) \in \mathbb{N}_1$. By virtue of (3.3) of Lemma 3.2, we have

\[ \lim_{n\to\infty} \frac{a_n^2}{\sum_{j=1}^{n} a_j^2} = 0. \]

By Lemma 3.3, we obtain

\[ \varlimsup_{n\to\infty} \frac{S_n(\omega)}{(\sum_{1}^{n} a_j^2)^{1/2}} = \infty \quad \text{a.s.}, \qquad \varliminf_{n\to\infty} \frac{S_n(\omega)}{(\sum_{1}^{n} a_j^2)^{1/2}} = -\infty \quad \text{a.s.} \]

Therefore there exists $\Omega_2 \subset \Omega$ with $P(\Omega_2^c) = 0$ such that, for any $\omega \in \Omega_2$,

\[ \varlimsup_{n\to\infty} \frac{S_n(\omega)}{a_n} = \infty, \qquad \varliminf_{n\to\infty} \frac{S_n(\omega)}{a_n} = -\infty. \tag{4.3} \]

Setting $\Omega_3 = \Omega_1 \cap \Omega_2$, it is obvious that $P(\Omega_3) > 0$ and that (4.2) and (4.3) hold for any $\omega \in \Omega_3$. For any $k \in \mathbb{N}_1$, we have

\[ V(k) = V(0) - \sum_{j=0}^{k-1} f(j) F(X(j,\omega)). \]

So for any ω Ω 3 , we have

\[ \begin{aligned} X(n+1,\omega) &= S_n(\omega) + X(1) + \sum_{k=1}^{n} \frac{V(k)}{r(k)} \\ &= S_n(\omega) + X(1) + \sum_{k=1}^{N(\omega)} \frac{V(k)}{r(k)} + \sum_{k=N(\omega)+1}^{n} \frac{V(0) - \sum_{j=0}^{k-1} f(j)F(X(j,\omega))}{r(k)} \\ &= S_n(\omega) + X(1) + \sum_{1}^{N(\omega)} \frac{V(k)}{r(k)} + V(0)\sum_{k=N(\omega)+1}^{n} \frac{1}{r(k)} \\ &\quad - \sum_{k=N(\omega)+1}^{n} \frac{\sum_{j=0}^{N(\omega)-1} f(j)F(X(j,\omega)) + \sum_{j=N(\omega)}^{k-1} f(j)F(X(j,\omega))}{r(k)} \end{aligned} \]

as n>N(ω). Hence

\[ \begin{aligned} & X(n+1,\omega) + \sum_{k=N(\omega)+1}^{n} \frac{\sum_{j=N(\omega)}^{k-1} f(j)F(X(j,\omega))}{r(k)} \\ &\quad = S_n(\omega) + X(1) + \sum_{1}^{N(\omega)} \frac{V(k)}{r(k)} + V(0)\bigl(a_n - a_{N(\omega)}\bigr) - \sum_{k=N(\omega)+1}^{n} \frac{\sum_{j=0}^{N(\omega)-1} f(j)F(X(j,\omega))}{r(k)} \\ &\quad = S_n(\omega) + X(1) + \sum_{1}^{N(\omega)} \frac{V(k)}{r(k)} + \Bigl( V(0) - \sum_{j=0}^{N(\omega)-1} f(j)F(X(j,\omega)) \Bigr)\bigl(a_n - a_{N(\omega)}\bigr) \\ &\quad = a_n \Bigl\{ \frac{S_n(\omega)}{a_n} + \frac{X(1) + \sum_{1}^{N(\omega)} \frac{V(k)}{r(k)}}{a_n} + \Bigl( V(0) - \sum_{j=0}^{N(\omega)-1} f(j)F(X(j,\omega)) \Bigr)\Bigl( 1 - \frac{a_{N(\omega)}}{a_n} \Bigr) \Bigr\}. \end{aligned} \tag{4.4} \]

Therefore the left-hand side of (4.4) is nonnegative, since $X(n+1,\omega) \geq 0$ and $f(j)F(X(j,\omega)) \geq 0$ for $j \geq N(\omega)$ by (4.2) and (A2.2)-(A2.3).

On the right-hand side of (4.4), we have

\[ \frac{X(1) + \sum_{1}^{N(\omega)} \frac{V(k)}{r(k)}}{a_n} \to 0, \qquad \frac{a_{N(\omega)}}{a_n} \to 0 \quad (n \to \infty) \]

due to $a_n \to \infty$ ($n \to \infty$). Hence, by (4.3), the right-hand side of (4.4) takes negative values infinitely often, which contradicts the nonnegativity of the left-hand side.

2. Second, assume that $P\{X(n) > 0 \text{ i.o.}\} < 1$. A contradiction is obtained for case 2 exactly as in case 1. This finishes the proof of Theorem 4.1. □

Remark 1 If condition (C.1) replaces condition (C.2) in Theorem 4.1 and the other conditions are unchanged, then the conclusion of Theorem 4.1 is no longer guaranteed when $\alpha = 0$, as the following example shows.

Example 1 Take $r(k) = 2^k$, $f(k) = 1$, $k \in \mathbb{N}_0$, and

\[ F(u) = \begin{cases} 1 & \text{if } u > 0, \\ 0 & \text{if } u = 0, \\ -1 & \text{if } u < 0 \end{cases} \tag{4.5} \]

in (1.3), then (1.3) becomes the following special equation:

\[ \Delta\bigl(2^k \Delta X(k)\bigr) + F(X(k)) = \xi(k+2), \quad k \in \mathbb{N}_0, \tag{4.6} \]

where $\{\xi(k+2)\}_{k \in \mathbb{N}_0}$ is assumed to satisfy (A2.4) and to be locally bounded, i.e., there are $h > 0$ and $\Omega^* \subset \Omega$ with $P(\Omega^*) > 0$ such that $|\xi(n,\omega)| \leq h$, $\omega \in \Omega^*$, $n \in \mathbb{N}_2$.

It is clear that $r$, $f$ satisfy (A2.1) and (A2.2), respectively, that $F$ satisfies (A2.3), and that $\sum_{j=1}^{n} \frac{1}{r(j)} \to 1$ ($n \to \infty$); i.e., $a_n := \sum_{j=1}^{n} \frac{1}{r(j)}$ satisfies (C.1) (with $\alpha = 0$, $d = 1$) but does not satisfy (C.2).

Now we show that (4.6) is not a.s. oscillatory. Let $\{X(n)\}_{n \in \mathbb{N}_0}$ be a solution of (4.6) with initial values $X(0)$, $X(1)$; then we have

\[ \begin{aligned} X(n+1,\omega) &= X(1) + \sum_{k=1}^{n} \frac{V(k,\omega)}{2^k} + \sum_{k=1}^{n} \Bigl( \sum_{i=k}^{n} \frac{1}{2^i} \Bigr) \xi(k+1,\omega) \\ &= X(1) + \sum_{k=1}^{n} \frac{V(k,\omega)}{2^k} + \sum_{k=1}^{n} \frac{\xi(k+1,\omega)}{2^{k-1}} - \frac{1}{2^n} \sum_{k=1}^{n} \xi(k+1,\omega) \end{aligned} \tag{4.7} \]

for every $\omega \in \Omega$. Here $V(k)$ is determined by $\Delta V(k) = -F(X(k))$, $k \in \mathbb{N}_0$, and $V(0) = X(1) - X(0)$.

Regarding the terms of (4.7), the following assertions hold.

  (i) $\frac{1}{2^n}\sum_{k=1}^{n}\xi(k+1) \to 0$ a.s. ($n \to \infty$).

  (ii) There is a finite-valued measurable function $h(\omega)$ defined on $(\Omega, \mathcal{F}, P)$ such that $\sum_{k=1}^{n}\frac{\xi(k+1)}{2^{k-1}} \to h(\omega)$ a.s. ($n \to \infty$).

  (iii) $V(1) - 1 \leq \sum_{k=1}^{\infty}\frac{V(k)}{2^k} \leq V(1) + 1$.

Proof of the assertions (i) Set $a_n = 1$, $X_n = \xi(n+1)$, $n \in \mathbb{N}_1$. Since $\{\xi(k)\}$ is an i.i.d. sequence with $E\xi(k) = 0$, $E\xi^2(k) = 1$, the sequence $\{X_k\}_{k \geq 1}$ has the same properties. It is obvious that

\[ \frac{a_n^2}{\sum_{1}^{n} a_j^2} = \frac{1}{n}, \qquad \sum_{1}^{n} a_j^2 = n. \]

By Theorem 3.1, we have

\[ \varlimsup_{n\to\infty} \frac{\sum_{1}^{n} \xi(k+1)}{(2n\log_2 n)^{1/2}} = \varlimsup_{n\to\infty} \frac{\sum_{1}^{n} a_k X_k}{(2\sum_{1}^{n} a_k^2 \log_2 \sum_{1}^{n} a_k^2)^{1/2}} = 1 \quad \text{a.s.}, \qquad \varliminf_{n\to\infty} \frac{\sum_{1}^{n} \xi(k+1)}{(2n\log_2 n)^{1/2}} = -1 \quad \text{a.s.} \]

So we have

\[ \frac{\sum_{1}^{n} \xi(k+1)}{2^n} = \frac{(2n\log_2 n)^{1/2}}{2^n} \cdot \frac{\sum_{1}^{n} \xi(k+1)}{(2n\log_2 n)^{1/2}} \to 0 \quad (n \to \infty). \]
(ii) Set $\eta(k) = \frac{\xi(k+1)}{2^{k-1}}$, $k \in \mathbb{N}_1$. We obviously have $E\eta(k) = 0$ and $\operatorname{Var}(\eta(k)) = \frac{1}{4^{k-1}}$, $k \in \mathbb{N}_1$; therefore $\sum_{k=1}^{\infty} \operatorname{Var}(\eta(k)) < \infty$. Since $\{\eta(k)\}$ is a sequence of independent random variables, the conclusion follows by [[5], Lemma 1, p.444] or [[3], §17.3, p.248].

(iii) By (4.5) and $\Delta V(k) = -F(X(k))$, we have

\[ V(1) - k + 1 \leq V(k) \leq V(1) + k - 1, \quad k \in \mathbb{N}_1. \]

Hence

\[ (V(1) + 1)\sum_{k=1}^{n} \frac{1}{2^k} - \sum_{k=1}^{n} \frac{k}{2^k} \leq \sum_{k=1}^{n} \frac{V(k)}{2^k} \leq (V(1) - 1)\sum_{k=1}^{n} \frac{1}{2^k} + \sum_{k=1}^{n} \frac{k}{2^k}, \quad n \in \mathbb{N}_1. \tag{4.8} \]

It is obvious that $\sum_{k=1}^{\infty} \frac{1}{2^k} = 1$ and $\sum_{k=1}^{\infty} \frac{k}{2^k} = 2$. So we get

\[ V(1) - 1 \leq \sum_{k=1}^{\infty} \frac{V(k)}{2^k} \leq V(1) + 1. \]
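Assertion (iii) rests on the standard sums $\sum_{k=1}^{\infty} 2^{-k} = 1$ and $\sum_{k=1}^{\infty} k\,2^{-k} = 2$; a quick numerical confirmation (truncating at $k = 60$, where both tails are far below double precision):

```python
s1 = sum(1.0 / 2 ** k for k in range(1, 61))   # geometric series, sums to 1
s2 = sum(k / 2.0 ** k for k in range(1, 61))   # arithmetico-geometric series, sums to 2
print(s1, s2)
```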

By (4.7) and assertions (i)-(iii) above, we obtain

\[ \begin{aligned} \lim_{n\to\infty} X(n+1,\omega) &= X(1) + \sum_{k=1}^{\infty} \frac{V(k)}{2^k} + h(\omega) \geq X(1) + V(1) - 1 + h(\omega) \\ &= X(1) + \bigl(X(1) - X(0) - F(X(0))\bigr) - 1 + h(\omega) \\ &\geq 2X(1) - X(0) - 2 + h(\omega) \end{aligned} \tag{4.9} \]

for every $\omega \in \Omega$. Here $h(\omega)$ and $(X(0), X(1))$ are mutually independent.

We choose $X(0)$, $X(1)$ satisfying $2X(1) > X(0) + 2 + 2h$. Since $|\xi(n,\omega)| \leq h$ for any $\omega \in \Omega^*$, $n \in \mathbb{N}_2$, and

\[ \sum_{k=1}^{n} \frac{\xi(k+1,\omega)}{2^{k-1}} \to h(\omega) \quad \text{a.s.} \]

as n, one obtains

\[ |h(\omega)| = \Bigl| \sum_{k=1}^{\infty} \frac{\xi(k+1,\omega)}{2^{k-1}} \Bigr| \leq h \sum_{k=1}^{\infty} \frac{1}{2^{k-1}} = 2h. \]

Thus $\lim_{n\to\infty} X(n+1) > 0$ on $\Omega^*$ by (4.9). Therefore $\{X(n)\}$ is not almost surely oscillatory, and consequently (4.6) is not almost surely oscillatory by Definition 2.3. □
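The conclusion of Example 1 can be observed in simulation. In the sketch below the noise distribution is a hypothetical choice: uniform on $[-\sqrt{3}, \sqrt{3}]$, so that $E\xi = 0$, $E\xi^2 = 1$, and $|\xi| \leq h = \sqrt{3}$ on all of $\Omega$; the initial values satisfy $2X(1) > X(0) + 2 + 2h$.

```python
import random

def F(u):
    """The sign function of (4.5)."""
    return (u > 0) - (u < 0)

random.seed(0)
h = 3 ** 0.5                 # uniform(-h, h) has mean 0 and variance h**2 / 3 = 1
n = 60
X = [0.0, 4.0]               # X(0) = 0, X(1) = 4: 2*X(1) = 8 > X(0) + 2 + 2h ~ 5.46
for k in range(n):
    xi = random.uniform(-h, h)                  # plays the role of xi(k+2)
    rdx = 2.0 ** k * (X[k + 1] - X[k])          # r(k) * Delta X(k) with r(k) = 2^k
    # (4.6) rearranged: 2^(k+1) * DX(k+1) = 2^k * DX(k) - F(X(k)) + xi(k+2)
    X.append(X[k + 1] + (rdx - F(X[k]) + xi) / 2.0 ** (k + 1))

# The path settles toward a positive limit instead of oscillating.
print(min(X[1:]), X[-1])
```

Because the increments are damped by $2^{-(k+1)}$, the trajectory freezes near a positive value, matching the non-oscillation argument above.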

References

  1. Appleby J, Rodkina A: On the oscillation of solutions of stochastic difference equations with state-independent perturbations. Int. J. Differ. Equ. 2007, 2(2):139–164.


  2. Appleby J, Rodkina A, Schurz H: On the oscillations of stochastic difference equations. Mat. Enseñ. Univ. 2009, 17(2):1–10.


  3. Loève M: Probability Theory. 4th edition. Springer, New York; 1978.


  4. Chow Y, Teicher H: Iterated logarithm laws for weighted averages. Z. Wahrscheinlichkeitstheor. Verw. Geb. 1973, 26(2):87–94. 10.1007/BF00533478


  5. Yan S, Wang J, Liu X: Fundamentals of Probability. Science Press, Beijing; 1982.



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant no. 11101054, the Hunan Provincial Natural Science Foundation of China under Grant no. 12JJ4005, and the Scientific Research Funds of the Hunan Provincial Science and Technology Department of China under Grant no. 2010FJ6036.

Author information


Corresponding author

Correspondence to Enwen Zhu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors contributed equally in this paper. They read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Yu, Z., Zhu, E. & Zeng, J. On the oscillation of solutions for a class of second-order nonlinear stochastic difference equations. Adv Differ Equ 2014, 91 (2014). https://doi.org/10.1186/1687-1847-2014-91
