
On almost sure asymptotic periodicity for scalar stochastic difference equations

Abstract

We consider a perturbed linear stochastic difference equation

$$ X(n+1)=a(n)X(n)+g(n)+\sigma(n)\xi(n+1), \quad n=0, 1, \dots, \qquad X(0)=X_{0}\in\mathbb{R}, $$
(1)

with real coefficients \(a(n)\), \(g(n)\), \(\sigma(n)\), and independent identically distributed random variables \(\xi(n)\) having zero mean and unit variance. The sequence \((a(n))_{n\in\mathbf {N}}\) is K-periodic, where K is a positive integer, \(\lim_{n\to\infty}g(n)=\hat{g}\in\mathbb {R}\), and \(\lim_{n\to\infty}\sigma(n) \xi(n+1)=0\) almost surely. We establish conditions providing almost sure asymptotic periodicity of the solution \(X(n)\) for \(|L|=1\) and \(|L|<1\), where \(L:=\prod_{i=0}^{K-1}a(i)\). A sharp result on the asymptotic periodicity of \(X(n)\) is also proved. The results are illustrated by computer simulations.

1 Introduction

There is a vast literature on periodic solutions of difference equations, and we mention here only a few works. Reference [1], which can be considered a textbook on difference equations, discusses periodicity. Reference [2] gives an overview of the results on the existence of periodic solutions of difference equations obtained over the last two decades; it covers both ordinary and Volterra difference systems. Reference [3] is devoted to periodicity for nonlinear difference equations. In [4, 5] the authors study linear difference equations perturbed by Volterra terms. Reference [6] discusses the asymptotic stability of a perturbed continuous-time difference equation with a small parameter. All these works consider only deterministic difference equations.

Stochastic difference equations have been studied intensively during the last ten years. For results on the asymptotic behaviour of solutions to stochastic difference equations and on stabilisation see, e.g., [7–11]. References [8] and [10] deal with nonlinear stochastic difference equations perturbed by a vanishing noise. The main equation of the present note has a similar structure, but it is linear, contains periodic coefficients, and, in addition to stochastic perturbations, also has deterministic perturbations. In other words, we consider a linear difference equation perturbed by the deterministic term g and the stochastic term σξ:

$$ X(m+1)=a(m)X(m)+g(m)+\sigma(m)\xi(m+1), \quad m\in\mathbf {N}_{0}, \qquad X(0)=X_{0}\in\mathbb{R}. $$
(2)

Here \(\mathbf {N}_{0}=\mathbf {N} \cup\{0\}\); \((a(m) )_{m\in\mathbf {N}}\), \((g(m) )_{m\in \mathbf {N}}\) and \((\sigma(m) )_{m\in\mathbf {N}}\) are nonrandom sequences of real numbers; the sequence \((a(m) )_{m\in\mathbf {N}}\) is periodic with a period \(K\in\mathbf {N}\); \(\lim_{m\to\infty }g(m)=\hat{g}\in\mathbb {R}\); and \((\xi(m) )_{m\in\mathbf {N}}\) is a sequence of independent and identically distributed random variables with zero mean, variance 1, and distribution function F. The term \(\sigma(m)\xi(m+1)\) is the random perturbation added at step \(m+1\).

There are several publications about periodic, asymptotically periodic, and almost periodic solutions of stochastic differential equations; see, e.g., [12–14]. However, to the best of our knowledge, periodicity for stochastic difference equations of type (2) was discussed only in [15], where sufficient conditions for periodicity were derived for \(g\equiv0\). This note can be viewed as an extension and generalisation of [15].

Let

$$ J(m):=\prod_{i=0}^{m-1}a(i), \quad m\in\mathbf {N}, \qquad L:=J(K). $$
(3)

The unperturbed counterpart of (2), i.e. the equation

$$ Z(m+1)=a(m)Z(m), \quad m\in\mathbf {N}_{0}, \qquad Z(0)=X_{0}\in\mathbb{R}, $$
(4)

has a periodic solution only when \(|L|=1\). This case, along with all other possible types of asymptotic behaviour of the solution \(Z(m)\), is discussed in Lemma 1, Section 3.

We assume that

$$ \lim_{m\to\infty}\sigma(m)\xi(m+1)=0, \quad\text{almost surely.} $$
(5)

A detailed analysis of condition (5) can be found in [10] (see also [9, 16]). In particular, it was shown there that when \(\xi(m)\) are independent \(\mathcal {N}(0,1)\)-random variables, the following rate of decay of σ:

$$\sigma(m) \sqrt{\log m} \to0, \quad\text{as } m\to\infty, $$

is the critical one which guarantees (5). It was also shown in [10] that when tails of ξ decay polynomially, i.e. \([1-F(y)]y^{M}\to\mbox{constant}\), as \(y\to\infty\), for some \(M\ge2\), then (5) holds if and only if \(\sum_{i=1}^{\infty} [\sigma(i) ]^{M}<\infty\).

In several statements of the paper we impose more restrictions on the decay of σ, assuming that \(\sigma\in\boldsymbol {l}_{2}\), that is, \(\sum_{i=1}^{\infty}\sigma^{2}(i)<\infty\). This assumption implies that the series \(\sum_{i=1}^{\infty}\sigma(i)\xi(i+1)\) converges almost surely (see, e.g., [17], page 384, or Lemma 4 below). This, in turn, implies condition (5).

We cannot expect the solution \(X(m)\) of (2) to be periodic when the perturbations \(g(m)\) and \(\sigma(m) \xi(m+1)\) are not periodic. Instead, we look for asymptotic periodicity: \(X(n)\) approaching a periodic stochastic process almost surely. The main result of the paper about the asymptotic periodicity of \(X(n)\) is given in Theorem 1, Section 6.2. For \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\) and some additional assumptions on g, it states that there exists an almost surely finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}:=\{0, 1, \dots, K-1\}\), such that, almost surely,

$$ \lim_{n\to\infty}\big|X(nK+s)-\mathcal {R}(s)\big|=0, \quad \text{for each $s\in \mathcal {S}$.} $$
(6)

Equation (6) also holds when \(|L|<1\) and condition (5) is fulfilled. In case \(L=-1\), \(\sigma\in \boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), instead of limit (6) we get, almost surely,

$$ \lim_{n\to\infty}\big|X(2nK+e)-\mathcal {R}(e)\big|=0,\quad \text{for each $e\in \mathcal {E}:=\{0, 1, \dots, 2K-1\}$}. $$
(7)

In Proposition 2, Section 6.2, we present a sharp result, which proves that, under some additional assumptions on g, condition \(\sigma\in\boldsymbol {l}_{2}\) is necessary and sufficient for the asymptotic periodicity of \(X(n)\) in form (6), when \(L=1\), and in form (7), when \(L=-1\).

The proofs of Theorem 1 and Proposition 2 are based on Lemma 10, Section 6.1, which establishes the asymptotic behaviour of the auxiliary process \(Y^{s}\), defined by

$$Y^{s}(n)=X(nK+s), \quad s\in\mathcal {S}, n\in\mathbf {N}. $$

In Section 3.1 we show that \(Y^{s}\) satisfies the equation

$$ Y^{s}(n+1)=LY^{s}(n)+G^{s}(n) +H^{s}(n+1), \qquad Y^{s}(0)=X(s), $$
(8)

where \(G^{s}(n)\) and \(H^{s}(n+1)\) behave similarly to \(g(n)\) and \(\sigma(n) \xi(n+1)\) from equation (2). In particular, for each \(s\in \mathcal {S}\), \(G^{s}(n)\) is nonrandom and converges to a finite limit, and \((H^{s}(n))_{n\in\mathbf {N}}\) is a sequence of independent random perturbations having mean zero and uniformly bounded second moments. Properties of \(G^{s}\) and \(H^{s}\) are discussed in Sections 4.1 and 5.2.

By solving the linear equation (8) we get the following representation of \(Y^{s}\):

$$ Y^{s}(n)=L^{n}Y^{s}(0)+\mathcal {V}^{s}(n-1)+\mathcal {H}^{s}(n), \quad n\in\mathbf {N}, $$
(9)

which allows us to draw conclusions about the asymptotic behaviour of \(Y^{s}(n)\) from the limits of each term on the right-hand side of (9). The convergence of the sequences \((Y^{s}(n))_{n\in\mathbf {N}}\) for each \(s\in\mathcal {S}\) implies convergence of \(X(n)\). So we reduce the K-periodic case with any \(K\in\mathbf {N}\) to \(K=1\).

Deterministic sequences \((\mathcal {V}^{s}(n))_{n\in\mathbf {N}}\) are analysed in Lemma 3, Section 4.2. Stochastic sequences \((\mathcal {H}^{s}(n))_{n\in\mathbf {N}}\) are analysed in Lemma 9, Section 5.3. The proof of Lemma 9 is based on results about limits of martingales, which are given in Section 5.1. In Section 2 we give necessary definitions and formulate our main assumptions. Two auxiliary lemmata are deferred to the Appendix.

2 Main notations and assumptions

In this section we give a number of necessary definitions and lemmata which we use in the proofs of our results. A detailed exposition of the definitions and facts of the theory of random processes can be found in, for example, [17].

Let \((\Omega, {\mathcal{F}}, {\mathbb {P}} )\) be a given probability space.

Assumption 1

Let \((\xi(n))_{n\in\mathbf {N}}\) be a sequence of independent and identically distributed random variables with zero mean and variance 1, \(\mathbf {E}\xi=0\), \(\mathbf {E}\xi^{2}=1\), and with a distribution function F.

The sequence of random variables \((\xi(n))_{n\in\mathbf {N}}\) satisfying Assumption 1 generates a filtration \(\{{\mathcal{F}}_{n}\}_{n \in\mathbf {N}}\), where

$$ \mathcal{F}_{n} = \sigma \bigl\{ \xi(i) : i=1, 2, \dots,n \bigr\} , \quad n\in\mathbf {N}. $$
(10)

We use the standard abbreviation ‘a.s.’ for the wordings ‘almost sure’ or ‘almost surely’ throughout the text.

A stochastic process \((M(n))_{n \in\mathbf {N}}\) is said to be an \(\mathcal{F}_{n}\) -martingale if \(M(n)\) is \({\mathcal{F}}_{n}\)-measurable, \({\mathbf {E}}|M(n)|<\infty\) and \(\mathbf {E} [M(n) |\mathcal {F}_{n-1} ]=M(n-1)\) for all \(n\in\mathbf {N}\) a.s.

A martingale \((M(n))_{n\in\mathbf {N}}\) is called square integrable, if \(\mathbf {E} M^{2}(n)<\infty\) for all \(n \in\mathbf {N}\).

Let \((\rho(n))_{n \in\mathbf {N}}\) be a sequence of independent random variables with \(\mathbf {E}\rho(n)=0\) and \(\mathbf {E}[\rho(n)]^{2}<\infty\), for all \(n \in\mathbf {N}\). Then the stochastic process \((M(n))_{n\in \mathbf {N}}\), where \(M(0)=0\) and \(M(n)=\sum_{i=0}^{n-1} \rho(i)\), is a square integrable martingale with the quadratic variation \(\langle M(n)\rangle\) defined by

$$\bigl\langle M(n)\bigr\rangle =\mathbf {E} \bigl[M(n)\bigr]^{2}=\sum _{i=0}^{n-1}\mathbf{E}\bigl[\rho(i) \bigr]^{2}. $$

In this situation the quadratic variation \(\langle M(n)\rangle\) is not random and \(\langle M(n)\rangle=\operatorname{Var} (M(n))\), for all \(n \in\mathbf {N}\).

Assumption 2

Let \((\sigma(n))_{n\in\mathbf {N}}\) be a bounded sequence of real numbers: for some \(H_{\sigma}>0\) and all \(n\in\mathbf {N}\)

$$ \big|\sigma(n)\big|\le H_{\sigma}. $$
(11)

To avoid the trivial case we also assume that there are infinitely many \(i\in\mathbf {N}\) such that \(\sigma(i)\neq0\).

Remark 1

If Assumptions 1 and 2 hold, then \((M(n))_{n\in \mathbf {N}}\), where \(M(0)=0\) and \(M(n)=\sum_{i=0}^{n-1} \sigma(i)\xi(i+1)\), is a square integrable martingale with

$$\bigl\langle M(n)\bigr\rangle =\sum_{i=0}^{n-1} \sigma^{2}(i). $$

Also, \(\langle M(n)\rangle\neq0\) for all sufficiently large \(n\in\mathbf {N}\).
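As a quick numerical illustration (ours, not part of the original text), the identity \(\langle M(n)\rangle=\operatorname{Var}(M(n))\) can be checked by Monte Carlo for an arbitrary sample choice of σ, say \(\sigma(i)=1/(i+1)\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, runs = 50, 20000
sigma = 1.0 / (np.arange(n) + 1.0)        # sample choice sigma(i) = 1/(i+1)

# M(n) = sum_{i=0}^{n-1} sigma(i) * xi(i+1), with xi(i) i.i.d. N(0,1)
xi = rng.standard_normal((runs, n))
M_n = (sigma * xi).sum(axis=1)

print(M_n.var())            # Monte Carlo estimate of Var(M(n))
print((sigma ** 2).sum())   # quadratic variation <M(n)> = sum_{i=0}^{n-1} sigma^2(i)
```

Both printed values should agree to within Monte Carlo error.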

Assumption 3

Let \((a(n))_{n\in\mathbf {N}}\) be a periodic sequence of nonzero real numbers with a period \(K\in\mathbf {N}\), i.e. \(a(n+K)=a(n)\), \(a(n)\neq0\) for each \(n\in\mathbf {N}\).

Assumption 4

Let \((g(n))_{n\in\mathbf {N}}\) be a sequence of real numbers such that, for some \(\hat{g}\in\mathbb {R}\),

$$ \lim_{n\to\infty}g(n)=\hat{g}. $$
(12)

Denote by \(\boldsymbol {l}_{2}\) the Banach space of sequences \(\sigma=(\sigma (n))_{n\in\mathbf {N}}\) of real numbers such that

$$\|\sigma\|^{2}_{\boldsymbol {l}_{2}}=\sum_{n=0}^{\infty}\sigma^{2}(n)< \infty. $$

Denote by \(\boldsymbol {L}_{2}= \boldsymbol {L}_{2}(\Omega, \mathcal {F}, \mathbf {P})\) a Banach space of random variables ς with \(\mathbf {E}|\varsigma|^{2}<\infty\).

Since the random variables \(\xi(n)\), \(n\in\mathbf {N}\), satisfying Assumption 1 are identically distributed, we sometimes omit the index n and write, for example, \(\mathbf {E}|\xi|\).

3 Presentation of the solution

Consider the perturbed stochastic linear difference equation

$$ X(m+1)=a(m)X(m)+g(m)+\sigma(m) \xi(m+1), \quad m\in\mathbf {N}_{0}, \qquad X(0)=X_{0}\in\mathbb {R}, $$
(13)

where sequences \((\xi(m))_{m\in\mathbf {N}}\), \((\sigma(m))_{m\in\mathbf {N}}\), \((a(m))_{m\in\mathbf {N}}\), and \((g(m))_{m\in\mathbf {N}}\) satisfy Assumptions 1, 2, 3, and 4, respectively.

Define

$$ J(m):=\prod_{i=0}^{m-1}a(i), \quad m\in\mathbf {N}, $$
(14)

and

$$ \mathcal {S}:=\{0, 1, \dots, K-1\}. $$
(15)

Since \(a(m)\neq0\) for each \(m\in\mathbf {N}\), the function J maps \(\mathbf {N}\) into \(\mathbb {R}\setminus\{0\}\), so \([J(m)]^{-1}\) is well defined and \([J(m)]^{-1}\neq0\) for all \(m\in\mathbf {N}\). By the periodicity of \(a(i)\), we have, for all \(m\in\mathbf {N}\),

$$ J(m+K)=\prod _{i=0}^{K-1}a(i)\times\prod _{i=K}^{m+K-1}a(i)=J(K)\prod _{j=0}^{m-1}a(j)=J(K)J(m). $$
(16)

Denote

$$ L:=J(K)=\prod_{i=0}^{K-1}a(i). $$
(17)

Lemma 1

Let Assumption 3 hold. Let J be defined as in (14) and \(\mathcal {S}\) be defined as in (15). Then

  1. (i)

    \(L=\prod_{i=0}^{K-1}a(i+l)=\prod_{i=l}^{K+l-1}a(i)\) for each \(l\in\mathbf {N}\).

  2. (ii)

    For each \(\tau\in\mathbf {N}\), \(s\in\mathcal {S}\), we have

    $$ J(\tau K+s)=L^{\tau}J(s). $$
  3. (iii)

    If \(L=1\), then J is K-periodic.

  4. (iv)

    If \(L=-1\), then J is 2K-periodic.

  5. (v)

    If \(|L|<1\), then \(\lim_{m\to\infty}|J(m)|=0\).

  6. (vi)

    If \(|L|>1\), then \(\lim_{m\to\infty}|J(m)|=\infty\).

The proof of Lemma 1 is straightforward and we omit it. Note that Lemma 1 gives a full description of the limiting behaviour of the solution of the unperturbed equation (4).
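Although the proof is omitted, the dichotomy of Lemma 1 is easy to confirm numerically. A minimal sketch (our illustration, with arbitrary 2-periodic coefficients):

```python
import numpy as np

def J(a, m):
    """J(m) = prod_{i=0}^{m-1} a(i) for a K-periodic coefficient sequence a."""
    K = len(a)
    return float(np.prod([a[i % K] for i in range(m)]))

a1 = [0.5, 2.0]   # L = J(2) = 1: J is 2-periodic, Lemma 1(iii)
a2 = [0.5, 1.0]   # L = 0.5, |L| < 1: |J(m)| -> 0, Lemma 1(v)

print([J(a1, m) for m in range(1, 9)])  # 0.5, 1.0, 0.5, 1.0, ...
print([J(a2, m) for m in range(1, 9)])  # 0.5, 0.5, 0.25, 0.25, 0.125, ...
```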

Since equation (13) is linear, solution \(X(n)\) can be presented in the following form (see e.g. [1], page 131):

$$ X(m)=J(m) \Biggl[X_{0}+ \sum _{i=0}^{m-1}J^{-1}(i+1) \bigl(g(i)+\sigma(i) \xi (i+1) \bigr) \Biggr], \quad m\in\mathbf {N}. $$
(18)

Denoting, for \(m\in\mathbf {N}\),

$$ V(m):=J(m)\sum_{i=0}^{m-1}J^{-1}(i+1)g(i), \qquad S(m):=J(m)\sum_{i=0}^{m-1}J^{-1}(i+1) \sigma(i) \xi(i+1), $$
(19)

we write (18) in the form

$$ X(m)=J(m)X_{0}+V(m)+ S(m), \quad m\in\mathbf {N}. $$
(20)

Remark 2

Notice that we have adopted the notation

$$\prod_{i=k+1}^{k} a(i)=1 \quad\text{and} \quad\sum_{i=k+1}^{k} a(i)=0. $$

3.1 Reduction to \(K=1\)

Let X be a solution to equation (13). Since each \(m\in \mathbf {N}\) is presented in the form

$$m=nK+s, \quad n\in\mathbf {N}, s\in\mathcal {S}, $$

for each \(s\in\mathcal {S}\) we can introduce a new process

$$ Y^{s}(n):=X(nK+s), \quad n\in\mathbf {N}. $$
(21)

From (21) we conclude that the convergence of the sequences \((Y^{s}(n))_{n\in\mathbf {N}}\), \(s\in\mathcal {S}\), determines the asymptotic behaviour of X. Equation (25), which we derive for \(Y^{s}\) in this section, will also show that the introduction of \(Y^{s}\) reduces the K-periodic case with any \(K\in\mathbf {N}\) to \(K=1\).

From (20) we get the following expression for \(Y^{s}\):

$$Y^{s}(n)=J(nK+s)X_{0}+V(nK+s)+ S(nK+s), \quad n\in\mathbf {N}, \qquad Y^{s}(0)=X(s), $$

where, for each \(s\in\mathcal {S}\),

$$ \begin{aligned} X(s)&= J(s)X_{0}+ V(s)+S(s) \\ &=J(s)X_{0}+ \sum_{i=0}^{s-1}J(s)J^{-1}(i+1)g(i)+ \sum_{i=0}^{s-1}J(s)J^{-1}(i+1) \sigma(i)\xi(i+1). \end{aligned} $$

Note that the initial value \(Y^{s}(0)\) is random and \(\mathcal {F}_{s}\)-measurable.

Applying Lemma 1(i)-(ii) and (19), we get

$$Y^{s}(n+1)=LJ(nK+s)X_{0}+V\bigl((n+1)K+s\bigr)+ S \bigl((n+1)K+s\bigr), $$

where

$$\begin{aligned}& \begin{aligned} V \bigl((n+1)K+s \bigr)&=J\bigl((n+1)K+s \bigr)\sum_{i=0}^{(n+1)K+s-1}J^{-1}(i+1)g(i) \\ &=LJ(nK+s)\sum_{i=0}^{nK+s-1}J^{-1}(i+1)g(i)+J\bigl((n+1)K+s \bigr)\sum_{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1)g(i) \\ &=LV(nK+s)+J\bigl((n+1)K+s\bigr)\sum_{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1)g(i), \end{aligned} \\& \begin{aligned} S\bigl((n+1)K+s\bigr)={}&LS(nK+s) \\ &+J\bigl((n+1)K+s\bigr)\sum_{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1) \sigma(i)\xi(i+1). \end{aligned} \end{aligned}$$

Since

$$LY^{s}(n)=LJ(nK+s)X_{0}+LV(nK+s)+LS(nK+s), $$

we arrive at

$$ \begin{aligned}[b] Y^{s}(n+1)-LY^{s}(n)={}&J \bigl((n+1)K+s\bigr)\sum_{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1)g(i) \\ &+J\bigl((n+1)K+s\bigr)\sum_{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1) \sigma(i)\xi(i+1). \end{aligned} $$
(22)

Denote

$$\begin{aligned}& G^{s}(n):=\sum_{j=1}^{K} \Biggl[\prod_{\tau=j}^{K-1}a(\tau+s) \Biggr]g(nK+s+j-1), \end{aligned}$$
(23)
$$\begin{aligned}& H^{s}(n+1):=\sum _{j=1}^{K} \Biggl[\prod_{\tau=j}^{K-1}a( \tau+s) \Biggr]\sigma(nK+s+j-1)\xi(nK+s+j). \end{aligned}$$
(24)

Remark 3

Note that the reason for considering \(H^{s}\) in (24) as a function of \(n+1\) is that the ξ with the maximum index is

$$\xi(nK+s+K)=\xi\bigl((n+1)K+s\bigr). $$

Since, for \(j\in\{1, \dots, K\}\),

$$J\bigl((n+1)K+s\bigr)J^{-1}(nK+s+j)=J(s+K)J^{-1}(s+j)= \prod_{\tau =j}^{K-1}a(\tau+s), $$

by substituting \(j=i+1-nK-s\) we obtain

$$ \begin{aligned} &J\bigl((n+1)K+s\bigr)\sum _{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1)g(i) \\ &\quad= \sum_{j=1}^{K}J\bigl((n+1)K+s \bigr)J^{-1}(nK+s+j)g(nK+s+j-1)=G^{s}(n) \end{aligned} $$

and

$$ J\bigl((n+1)K+s\bigr)\sum _{i=nK+s}^{(n+1)K+s-1}J^{-1}(i+1)\sigma(i) \xi(i+1)=H^{s}(n+1). $$

Now equation (22) for \(Y^{s}\) can be written as

$$ Y^{s}(n+1)=LY^{s}(n)+G^{s}(n) +H^{s}(n+1), \qquad Y^{s}(0)=X(s). $$
(25)

From equation (25) we derive, for each \(n\in\mathbf {N}\),

$$ \begin{aligned}[b] Y^{s}(n+1)&=L^{n+1}Y^{s}(0)+ \sum_{i=0}^{n}L^{i}G^{s}(n-i)+ \sum_{i=0}^{n}L^{i}H^{s}(n+1-i) \\ &=L^{n+1}Y^{s}(0)+\sum_{j=0}^{n}L^{n-j}G^{s}(j)+ \sum_{j=0}^{n}L^{n-j}H^{s}(j+1). \end{aligned} $$
(26)

Denoting

$$ \mathcal {V}^{s}(n):=\sum_{j=0}^{n}L^{n-j}G^{s}(j), \qquad\mathcal {H}^{s}(n):=\sum_{j=0}^{n-1}L^{n-1-j}H^{s}(j+1), $$
(27)

we arrive at the following presentation of solution \(Y^{s}(n)\) to equation (25):

$$ Y^{s}(n)=L^{n}Y^{s}(0)+\mathcal {V}^{s}(n-1)+\mathcal {H}^{s}(n), \quad n\in\mathbf {N}. $$
(28)

Presentation (28) shows that, in order to determine the limiting behaviour of \(Y^{s}\), it is enough to determine that of \(\mathcal {V}^{s}\) and \(\mathcal {H}^{s}\). In Lemma 3, Section 4.2, we analyse the asymptotic behaviour of \(\mathcal {V}^{s}\). Lemma 9, Section 5.3, deals with \(\mathcal {H}^{s}\).
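The bookkeeping behind (23)-(28) can be verified numerically: simulate X directly from (13), build \(G^{s}\) and \(H^{s}\) from (23)-(24) with the same noise path, and check that \(Y^{s}(n)=X(nK+s)\) satisfies (25). A minimal sketch (our illustration; the coefficients and perturbations are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, s = 2, 200, 1
a = [0.5, 2.0]                            # 2-periodic, L = a(0)*a(1) = 1
g = lambda m: 1.0 / (m + 1) ** 2
sig = lambda m: 1.0 / (m + 1)
xi = rng.standard_normal(K * N + K + 2)   # xi(1), xi(2), ... (index 0 unused)

# simulate X from (13)
X = np.empty(K * N + K + 1)
X[0] = 2.0
for m in range(K * N + K):
    X[m + 1] = a[m % K] * X[m] + g(m) + sig(m) * xi[m + 1]

L = a[0] * a[1]
prod = lambda j: float(np.prod([a[(t + s) % K] for t in range(j, K)]))  # empty product = 1
G = lambda n: sum(prod(j) * g(n * K + s + j - 1) for j in range(1, K + 1))          # (23)
H = lambda n1: sum(prod(j) * sig((n1 - 1) * K + s + j - 1) * xi[(n1 - 1) * K + s + j]
                   for j in range(1, K + 1))                                         # (41)

Y = lambda n: X[n * K + s]
err = max(abs(Y(n + 1) - (L * Y(n) + G(n) + H(n + 1))) for n in range(N - 1))
print(err)   # of the order of machine precision: (25) holds along the path
```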

4 Limiting behaviour of \(\mathcal {V}^{s}\)

Denote, for each \(s\in\mathcal {S}\),

$$ \mathcal {A}_{s}:=\sum _{j=1}^{K} \Biggl[\prod_{\tau=j}^{K-1}a( \tau+s) \Biggr], \qquad\overline{\mathcal {A}_{s}}:=\sum _{j=1}^{K} \Biggl[\prod_{\tau =j}^{K-1}\big|a( \tau+s)\big| \Biggr], $$
(29)

and

$$ C_{s}:=\max_{j=1, \dots, K} \Biggl\{ \prod _{\tau=j}^{K-1}\big|a(\tau+s)\big| \Biggr\} , \qquad c_{s}:=\min_{j=1, \dots, K} \Biggl\{ \prod _{\tau=j}^{K-1}\big|a(\tau +s)\big| \Biggr\} . $$
(30)

Here, in accordance with Remark 2, the product corresponding to \(j=K\) is empty and equals 1, so \(C_{s}\ge1\) and \(0< c_{s}\le1\).

Remark 4

Note that \(\overline{\mathcal {A}_{s}}>0\) always, while \(\mathcal {A}_{s}>0\) whenever \(a(i)>0\) for all \(i\in\mathcal {S}\). Also, for each \(s\in\mathcal {S}\),

$$ Kc_{s}\le\overline{\mathcal {A}_{s}}\le KC_{s}. $$

In Example 1, Section 7, we consider coefficients \(a(i)\), \(i\in\{0, 1, 2\}\), such that \(\mathcal {A}_{2}=0\) while \(\mathcal {A}_{0}, \mathcal {A}_{1}\neq0\).

4.1 Limiting behaviour of \(G^{s}(n)\)

Lemma 2 below shows that the limiting behaviour of \(G^{s}\) is similar to that of g.

Lemma 2

Let Assumptions 3 and 4 hold. Let \(G^{s}\) be defined as in (23). Then, for each \(s\in\mathcal {S}\),

$$ \lim_{n\to\infty}G^{s}(n)= \hat{g}\mathcal {A}_{s}. $$

Proof

Let \(\overline{\mathcal {A}_{s}}\) be defined in (29). Fix some \(\varepsilon>0\) and let \(l_{\varepsilon}\in\mathbf {N}\) be such that, for \(l\ge l_{\varepsilon}\),

$$\big|g(l)-\hat{g}\big|< \frac{\varepsilon}{\overline{\mathcal {A}_{s}}}. $$

Then, for \(n\ge\frac{l_{\varepsilon}+1}{K}\), for all \(s\in\mathcal {S}\) and \(j\in\{1, \dots, K\}\), we have

$$\big|g(nK+s+j-1 )-\hat{g}\big|< \frac{\varepsilon}{\overline{\mathcal {A}_{s}}}, $$

which implies that

$$ \bigl\vert G^{s}(n)-\hat{g}\mathcal {A}_{s} \bigr\vert \le \sum_{j=1}^{K} \Biggl[\prod_{\tau=j}^{K-1}\big|a(\tau+s)\big| \Biggr]\big|g(nK+s+j-1)-\hat{g}\big|\le \frac{\varepsilon}{\overline{\mathcal {A}_{s}}}\overline{\mathcal {A}_{s}} =\varepsilon. $$

 □

4.2 Limiting behaviour of \(\mathcal {V}^{s}\)

The next lemma describes some important cases of the limiting behaviour of \(\mathcal {V}^{s}\).

Lemma 3

Let Assumptions 3 and 4 hold.

  1. (i)

    Let \(L=1\).

    1. (a)

      If \(\hat{g}\neq0\), \(\mathcal {A}_{s}\neq0\), then \(|\mathcal {V}^{s}(n)| \to\infty\).

    2. (b)

      If either \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\) or \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\), then there exists a number \(\overline{\mathcal {V}^{s}}\in\mathbb {R}\) such that

      $$\lim_{n\to\infty}\mathcal {V}^{s}(n)=\overline{\mathcal {V}^{s}}. $$
    3. (c)

      If \(g(n)\equiv0\), then \(\mathcal {V}^{s}(n)\equiv0\).

  2. (ii)

    Let \(L=-1\) and \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exist a number \(\overline{\mathcal {V}^{s}}\in\mathbb {R}\) and a 2-periodic function \(\mathcal {V}_{1}^{s}(n)\) such that

    $$\lim_{n\to\infty} \bigl\vert \mathcal {V}^{s}(n)- \overline{\mathcal {V}^{s}}-\mathcal {V}_{1}^{s}(n) \bigr\vert =0. $$
  3. (iii)

    Let \(|L|<1\). Then

    $$\lim_{n\to\infty}\mathcal {V}^{s}(n) = \frac{ \hat{g}\mathcal {A}_{s}}{1-L}. $$
  4. (iv)

    If \(|L|>1\) and \(\sum_{j=0}^{\infty}L^{-j}G^{s}(j) \neq0\) then \(|\mathcal {V}^{s}(n)| \to\infty\).

Proof

(i) In case (a), \(\lim_{n\to\infty}G^{s}(n)=\hat{g}\mathcal {A}_{s}\neq0\), so the series which defines \(\mathcal {V}^{s}(n)\) diverges. When \(\hat{g}\mathcal {A}_{s}>0\), we can find N such that \(G^{s}(n)>0\) for \(n\ge N\). So, for \(n\ge N\),

$$\mathcal {V}^{s}(n)=\sum_{j=0}^{N}G^{s}(j)+ \sum_{j=N+1}^{n}G^{s}(j) $$

and \(\lim_{n\to\infty}\sum_{j=N+1}^{n}G^{s}(j)=\infty\). Similarly, \(\lim_{n\to\infty}\sum_{j=N+1}^{n}G^{s}(j)=-\infty\) when \(\hat{g}\mathcal {A}_{s}<0\).

In case (b), when \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), we have

$$ \mathcal {V}^{s}(n)=\sum _{m=0}^{n} \Biggl(\sum _{j=1}^{K} \Biggl[\prod_{k=j}^{K-1}a(k+s) \Biggr] \bigl[g(mK+s+j-1)-\hat{g}\bigr] \Biggr) $$
(31)

and

$$ \big|\mathcal {V}^{s}(n)\big|\le C_{s}\sum_{m=0}^{n} \sum _{j=1}^{K} \bigl\vert g(mK+s+j-1)-\hat{g} \bigr\vert \le C_{s}\sum_{i=0}^{(n+1)K+s}\big|g(i)- \hat{g}\big|, $$
(32)

so \(\mathcal {V}^{s}(n)\) converges absolutely to the number

$$ \overline{\mathcal {V}^{s}}=\sum _{m=0}^{\infty} \Biggl(\sum_{j=1}^{K} \Biggl[\prod_{k=j}^{K-1}a(k+s) \Biggr] \bigl[g(mK+s+j-1)-\hat{g}\bigr] \Biggr). $$
(33)

When \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\), we replace ĝ by 0 in (31)-(33) and obtain the result.

Case (c) is straightforward.

(ii) We have

$$ \begin{aligned}[b] \mathcal {V}^{s}(n)={}&\sum _{m=0}^{n}(-1)^{n-m} \Biggl[\sum _{j=1}^{K} \Biggl[\prod _{k=j}^{K-1}a(k+s) \Biggr] \bigl[g(mK+s+j-1)-\hat{g} \bigr] \Biggr] \\ &+ \hat{g}\sum_{m=0}^{n}(-1)^{n-m} \sum_{j=1}^{K} \Biggl[\prod _{k=j}^{K-1}a(k+s) \Biggr]= \mathcal {V}_{0}^{s}(n)+\mathcal {V}_{1}^{s}(n). \end{aligned} $$
(34)

The term \(\mathcal {V}_{0}^{s}(n)\) converges absolutely to the number \(\overline{\mathcal {V}^{s}}\) defined by (33). Noting that

$$ \mathcal {V}_{1}^{s}(n)=\hat{g} \mathcal {A}_{s} \sum_{m=0}^{n}(-1)^{n-m}= \hat{g} \mathcal {A}_{s} \frac{1+(-1)^{n}}{2}, $$
(35)

we conclude that \(\mathcal {V}_{1}^{s}(n)\) is a 2-periodic nonrandom function.

(iii) The result follows from Lemma 2 and Lemma 11 (see Appendix).

(iv) The result follows from the representation

$$\mathcal {V}^{s}(n)=L^{n}\sum_{j=0}^{n}L^{-j}G^{s}(j). $$

 □

Remark 5

Note that if \(g(n)\equiv\hat{g}\), where ĝ is any real number, and \(\mathcal {A}_{s}=0\), then \(\overline{\mathcal {V}^{s}}=0\).

5 On limits of random series

In Section 5.1 we present several auxiliary statements about limits of martingales. In Section 5.2 we introduce a new sequence of σ-algebras and discuss properties of the random variables \(H^{s}(n)\).

Lemma 9 in Section 5.3 describes the asymptotic behaviour of \(\mathcal {H}^{s}(n)\), as \(n\to\infty\).

5.1 Limits of martingales

In this section we deal with limits at infinity of martingales \((M(n))_{n\in\mathbf {N}}\) of the following form:

$$ M(0)=0, \qquad M(n)=\sum_{i=0}^{n-1} \beta(i)\eta(i+1), \quad n\in \mathbf {N}. $$
(36)

Here \(\beta(i)\) and \(\eta(i)\) satisfy the following assumptions.

Assumption 5

Let \((\eta(n))_{n\in\mathbf {N}}\) be a sequence of independent random variables with zero mean, \(\mathbf {E}\eta(n)=0\), and with distribution functions \(F_{n}\). Let also \(\mathbf {E}|\eta(n)|^{2}\le H_{\eta}\) for some constant \(H_{\eta}>0\) and all \(n\in\mathbf {N}\).

Assumption 6

Let Assumption 5 hold, and assume also that there exist constants \(h_{\eta}, \bar{H}_{\eta}>0\) such that, for all \(n\in\mathbf {N}\),

$$ \mathbf {E}\big|\eta(n)\big|^{3}\le\bar{H}_{\eta}, \qquad \mathbf {E}\big|\eta(n)\big|^{2}\ge h_{\eta}. $$
(37)

Assumption 7

Let \(\beta=(\beta(n))_{n\in\mathbf {N}}\) be a bounded sequence of real numbers: \(|\beta(n)|\le H_{\beta}\) for some \(H_{\beta}>0\) and all \(n\in\mathbf {N}\).

Lemma 4 below is a variant of the martingale convergence theorem (see e.g. [17]).

Lemma 4

Let Assumption 5 hold. Let \(M=(M(n))_{n\in\mathbf {N}}\) be a martingale defined by (36). Let \(\beta\in\boldsymbol {l}_{2}\). Then \(\lim_{n\to\infty}M(n)=\bar{M}\), where \(\bar{M}\) is an a.s. finite random variable.

Remark 6

Assumptions of Lemma 4 imply that M is a Cauchy sequence in \(\boldsymbol {L}_{2}(\Omega, \mathcal {F}, \mathbf {P})\), so

$$\bar{M}=\sum_{i=0}^{\infty} \beta(i)\eta(i+1)\in \boldsymbol {L}_{2}(\Omega, \mathcal {F}, \mathbf {P}). $$

Also, \(\mathbf {E}\bar{M}=0\) and \(\mathbf {E}[\bar{M}]^{2}=\sum_{i=0}^{\infty} \beta^{2}(i)\mathbf {E} |\eta(i+1)|^{2}\le H_{\eta}\|\beta\|^{2}_{\boldsymbol {l}_{2}}\).

Lemma 4 provides conditions under which \(M(n)\) has an a.s. finite limit. In the proofs of our results in Section 5.3 we also need Lemma 6 about \(\limsup_{n\to\infty} \frac{M(n)}{\sqrt{\langle M(n)\rangle}}\) and \(\liminf_{n\to\infty} \frac{M(n)}{\sqrt{\langle M(n)\rangle}}\). To prove Lemma 6 we apply a variant of the central limit theorem based on Theorem 1 of [17], page 329, for sums of independent but not identically distributed random variables. To apply that theorem to the martingale M we need to show that the Lindeberg condition is satisfied. We do this by proving that the Lyapunov condition with \(\delta=1\) holds, which implies the Lindeberg condition for M (see Lemma 5, Corollary 1 and Corollary 2 below). For more details regarding the Lyapunov and Lindeberg conditions see [17], page 332.

Lemma 5

Let Assumption 7 hold and \(\beta\notin\boldsymbol {l}_{2}\). Then

$$ \frac{\sum_{i=0}^{n} |\beta(i)|^{3}}{ (\sum_{i=0}^{n} |\beta(i)|^{2} )^{1.5}}\to0, \quad\textit{as } n\to\infty. $$
(38)

Proof

The proof follows from the estimates

$$ \frac{\sum_{i=0}^{n} |\beta(i)|^{3}}{ (\sum_{i=0}^{n} |\beta(i)|^{2} )^{1.5}}\le\frac{H_{\beta}\sum_{i=0}^{n} |\beta(i)|^{2}}{ (\sum_{i=0}^{n} |\beta(i)|^{2} )^{1.5}}=H_{\beta}\Biggl(\sum _{i=0}^{n} \big|\beta(i)\big|^{2} \Biggr)^{-0.5}\to0, \quad\text{as } n\to\infty. $$

 □

Corollary 1

Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then the Lyapunov condition with \(\delta =1\) holds:

$$ \frac{\sum_{i=0}^{n} |\beta(i)|^{3}\mathbf {E} |\eta(i+1)|^{3}}{ [\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E} |\eta(i+1)|^{2} ]^{1.5}}\to0, \quad \textit{as } n\to\infty. $$

Proof

The result follows from Lemma 5 and the estimate

$$\frac{\sum_{i=0}^{n} |\beta(i)|^{3}\mathbf {E} |\eta(i+1)|^{3}}{ [\sum_{i=0}^{n}|\beta(i)|^{2}\mathbf {E} |\eta(i+1)|^{2} ]^{1.5}}\le\frac{\bar{H}_{\eta}}{h^{1.5}_{\eta}}\frac{\sum_{i=0}^{n} |\beta(i)|^{3}}{ [\sum_{i=0}^{n} |\beta(i)|^{2} ]^{1.5}}. $$

 □

Corollary 2

Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then the Lindeberg condition holds:

$$ \frac{\sum_{i=0}^{n} \int_{y:|y|\ge\varepsilon D_{n}} y^{2}\,d\tilde{F}_{i}(y)}{D^{2}_{n}}\to0, \quad\textit{as } n \to\infty, $$

where \(\tilde{F}_{i}\) is the distribution function of \(\beta(i)\eta(i+1)\) and \(D_{n}^{2}=\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i+1)|^{2}\).

Proof

By Corollary 1, the Lyapunov condition with \(\delta =1\) holds, which, by [17], page 332, implies the Lindeberg condition. □

Corollary 3

Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Let Φ be the standard normal cumulative distribution function. Then the central limit theorem holds:

$$ \lim_{n\to\infty}\mathbf {P} \biggl[ \frac{\sum_{i=0}^{n} \beta(i)\eta(i+1)}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i+1)|^{2}}}>y \biggr]=1-\Phi(y), \quad\forall y\in\mathbb {R}. $$
(39)

The proof of the following result is an adaptation of the argument presented on pages 379-382 of [17] (see also [9]) and is deferred to the Appendix.

Lemma 6

Let Assumptions 5, 6 and 7 hold. Let \(\beta\notin\boldsymbol {l}_{2}\). Then

$$ \begin{gathered} \limsup_{n\to\infty} \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i+1)|^{2}}}\sum_{i=0}^{n} \beta(i) \eta(i+1) = \infty, \quad\textit {a.s.}, \\ \liminf_{n\to\infty} \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i+1)|^{2}}}\sum _{i=0}^{n} \beta(i)\eta(i+1)=-\infty, \quad\textit{a.s.} \end{gathered} $$
(40)

5.2 Properties of \(H^{s}(n)\)

By (24) we have

$$ H^{s}(n):=\sum _{j=1}^{K} \Biggl[\prod_{\tau=j}^{K-1}a( \tau+s) \Biggr]\sigma \bigl((n-1)K+s+j-1\bigr)\xi\bigl((n-1)K+s+j\bigr). $$
(41)

Before discussing properties of random variables \(H^{s}(n)\) we need to introduce a new sequence of σ-algebras: for all \(n\in\mathbf {N}\), we define

$$ \mathcal {G}^{s}_{n}:=\mathcal {F}_{n K+s}, \quad\text{where } \mathcal {F}_{n} \text{ is defined by (10)}. $$
(42)

Lemma 7

Let Assumptions 1, 2 and 3 hold. Let \(H_{\sigma}\) be defined by (11), \(H^{s}(n)\) be defined by (41), \(C_{s}\) be defined by (30).

Then, for any \(s\in\mathcal {S}\),

  1. (i)

    \(\mathbf {E}(H^{s}(n))=0\) for each \(n\in\mathbf {N}\);

  2. (ii)

\(\mathbf {E}(H^{s}(n))^{2}\le H^{2}_{\sigma} K C_{s}^{2}\) for each \(n\in\mathbf {N}\);

  3. (iii)

    \(H^{s}(n)\) and \(H^{s}(k)\) are independent for each \(n, k\in \mathbf {N}\), \(n\neq k\);

  4. (iv)

    the family \(\{\mathcal {G}^{s}_{n}\}_{n \in\mathbf {N}}\) is a filtration;

  5. (v)

    \(H^{s}(n)\) is \(\mathcal {G}^{s}_{n}\) measurable for each \(n\in \mathbf {N}\);

  6. (vi)

\(H^{s}(n)\to0\) a.s., as \(n\to\infty\), provided \(\sigma(n)\xi(n+1)\to0\) a.s.

Proof

Proof of (i) is straightforward. To prove (ii) we apply the inequality

$$ \mathbf {E} \bigl[H^{s}(n) \bigr]^{2}=\sum _{j=1}^{K} \Biggl[\prod _{k=j}^{K-1}a(k+s) \Biggr]^{2} \sigma^{2}\bigl((n-1)K+s+j-1\bigr)\le H^{2}_{\sigma}K C_{s}^{2}. $$
(43)

To prove (iii) we note that, for any \(k>n\), \(k, n\in\mathbf {N}\), the random variable \(H^{s}(n)\) defined as in (41) is a weighted sum of random variables from the set

$$\mathcal {T}_{n}:= \bigl\{ \xi\bigl((n-1)K+s+1\bigr), \xi \bigl((n-1)K+s+2\bigr), \dots, \xi (n K+s) \bigr\} , $$

and the random variable \(H^{s}(k)\) is a weighted sum of random variables from the set

$$\mathcal {T}_{k}:=\bigl\{ \xi\bigl((k-1)K+s+1\bigr), \xi \bigl((k-1)K+s+2\bigr), \dots, \xi(k K+s)\bigr\} . $$

Since \(k\ge n+1\), the minimum index of ξ in set \(\mathcal {T}_{k}\) is greater than the maximum index of ξ in set \(\mathcal {T}_{n}\),

$$(k-1)K+s+1\ge n K+s+1> n K+s. $$

So \(\mathcal {T}_{n}\cap\mathcal {T}_{k}=\emptyset\), which, due to the independence of the \(\xi(i)\), implies the independence of \(H^{s}(n)\) and \(H^{s}(k)\).

To prove (iv) we notice that, for each \(n_{1}\le n_{2}\), we have

$$\mathcal {G}^{s}_{n_{1}}=\mathcal {F}_{n_{1} K+s}\subseteq \mathcal {F}_{n_{2} K+s}=\mathcal {G}^{s}_{n_{2}}. $$

Item (v) follows from the proof of (iii) and the definition (42) of \(\mathcal {G}^{s}_{n}\).

To prove (vi) we apply the estimate

$$ \begin{aligned} \bigl\vert H^{s}(n) \bigr\vert &\le \sum _{j=1}^{K} \prod _{k=j}^{K-1}\big|a(k+s)\big|\big|\sigma \bigl((n-1)K+s+j-1\bigr)\xi \bigl((n-1)K+s+j\bigr)\big| \\ &\le C_{s}K \max_{j=1, \dots, K}\big|\sigma\bigl((n-1)K+s+j-1 \bigr)\xi\bigl((n-1)K+s+j\bigr)\big|. \end{aligned} $$

 □

5.3 On limits of \(\mathcal {H}^{s}(n)\)

Fix \(s\in\mathcal {S}\) and \(L\in\mathbb {R}\). Let \(H^{s}(j)\) be defined by (24). Denote

$$ M^{s}(n):=\sum_{j=0}^{n-1}L^{-j}H^{s}(j+1), \quad n\in\mathbf {N}. $$
(44)

The next lemma describes the properties of \(M^{s}(n)\) for \(|L|=1\) and \(|L|>1\). It is the main tool for proving Lemma 9 about the asymptotic behaviour of \(\mathcal {H}^{s}(n)\).

Lemma 8

Let Assumptions 1, 2 and 3 hold. Let \((\mathcal {G}^{s}_{n})_{n \in\mathbf {N}}\) be defined as in (42) and let \(M^{s}(n)\) be defined as in (44). Then

  1. (i)

    \(M^{s}:= (M^{s}(n))_{n \in\mathbf {N}}\) is a \(\mathcal {G}^{s}_{n}\)-martingale.

  2. (ii)

    Let \(|L|=1\).

    1. (a)

      If \(\sigma\in \boldsymbol{ l_{2}}\), then, for some a.s. finite random variable \(\bar{M}^{s}\),

$$ \lim_{n\to\infty}M^{s}(n)=\bar{M}^{s}, \quad\textit{a.s.} $$
      (45)
    2. (b)

      If \(\sigma\notin \boldsymbol{l_{2}}\) and \(\mathbf {E} |\xi |^{3}<\infty\), then, a.s.,

      $$\limsup_{n\to\infty}M^{s}(n)=\infty\quad\textit{and} \quad \liminf_{n\to \infty}M^{s}(n)=-\infty. $$
  3. (iii)

    Let \(|L|>1\), then (45) holds.

Proof

From Lemma 7 we conclude that \((\mathcal {G}^{s}_{n})_{n \in \mathbf {N}}\) is a filtration, the random variable \(M^{s}(n)\) is \(\mathcal {G}^{s}_{n}\)-measurable, \(\mathbf {E}H^{s}(j)=0\), and \(\mathbf {E} \vert H^{s}(j) \vert \le H_{\sigma}K C_{s}\), for each \(j\in\mathbf {N}\). This implies that, for each \(n\in\mathbf {N}\),

$$ \begin{gathered} \mathbf {E}\big|M^{s}(n)\big|\le\sum _{j=0}^{n-1}|L|^{-j}\mathbf {E}\big|H^{s}(j+1)\big|< \infty \quad\text{and} \\ \mathbf {E} \bigl(M^{s}(n) |\mathcal {G}^{s}_{n-1} \bigr)=\sum_{j=0}^{n-2}L^{-j}H^{s}(j+1)=M^{s}(n-1), \end{gathered} $$

which proves (i).

(ii) Applying (43) we get

$$ \begin{aligned}[b] \bigl\langle M^{s}(n) \bigr\rangle &=\sum_{i=0}^{n-1}\mathbf {E} \bigl(|L|^{-i}H^{s}(i+1) \bigr)^{2}=\sum _{i=0}^{n-1}|L|^{-2i}\mathbf {E} \bigl(H^{s}(i+1) \bigr)^{2} \\ &=\sum_{i=0}^{n-1}|L|^{-2i} \sum _{j=1}^{K} \Biggl[\prod _{k=j}^{K-1}a(k+s) \Biggr]^{2} \sigma^{2}(iK+s+j-1). \end{aligned} $$
(46)

For \(|L|=1\), we have

$$\bigl\langle M^{s}(n) \bigr\rangle =\sum_{i=0}^{n-1} \sum_{j=1}^{K} \Biggl[\prod _{k=j}^{K-1}a(k+s) \Biggr]^{2} \sigma^{2}(iK+s+j-1), $$

and then, for \(C_{s}\) and \(c_{s}\) defined as in (30), we obtain

$$ c_{s}^{2} \sum_{q=s}^{nK+s-1} \sigma^{2} (q)\le\bigl\langle M^{s}(n) \bigr\rangle \le C_{s}^{2}\sum_{q=s}^{nK+s-1} \sigma^{2} (q). $$
(47)

Now part (a) follows from Lemma 4.

To prove part (b) we present \(M^{s}(n)\) in the following form:

$$ M^{s}(n)=\sum_{j=0}^{n-1}L^{-j} \sum_{i=1}^{K} \Biggl(\prod_{\tau =i}^{K-1}a(\tau+s) \Biggr) \sigma(jK+s+i-1) \xi(jK+s+i). $$
(48)

The substitution

$$q:=jK+s+i-1, \qquad s\le q\le nK+s-1\quad\text{for } 0\le j\le n-1, 1\le i\le K, $$

maps the index pairs \((j, i)\) one-to-one onto \(\{s, s+1, \dots, nK+s-1\}\); for a given q the indices are recovered as \(j(q)=\lfloor(q-s)/K\rfloor\) and \(i(q)=q-s-Kj(q)+1\). Denoting

$$ \Theta(q):=L^{-j(q)} \Biggl(\prod_{\tau=i(q)}^{K-1}a(\tau+s) \Biggr)\sigma(q), $$
(49)

we transform (48) into

$$ M^{s}(n)=\sum_{q=s}^{nK+s-1} \Theta(q)\xi(q+1). $$
(50)

Since \(|L|=1\) and, by (30), \(\prod_{\tau=i(q)}^{K-1}|a(\tau+s)|\ge c_{s}>0\), we have \(\Theta^{2}(q)\ge c_{s}^{2}\sigma^{2}(q)\). Since \(\sigma\notin \boldsymbol{l_{2}}\), this yields

$$ \sum_{q=s}^{l} \Theta^{2}(q)\ge c_{s}^{2}\sum_{q=s}^{l}\sigma^{2}(q)\to \infty, $$
(51)

as \(l\to\infty\), and, in particular, for each \(s\in\mathcal {S}\),

$$ \sum_{q=s}^{nK+s-1} \Theta^{2}(q)\ge c_{s}^{2}\sum_{q=s}^{nK+s-1} \sigma^{2}(q)\to \infty, $$
(52)

as \(n\to\infty\).

In addition, \(\mathbf {E} |\xi|^{2}=1\), \(\mathbf {E} |\xi|^{3}<\infty\), and the sequence \((\Theta(q))\) is bounded, so after application of Lemma 6 (with \(\beta(i)=\Theta(s+i)\) and \(\eta(i)=\xi(s+i)\)) we obtain, a.s.,

$$ \limsup_{l\to\infty} \biggl[\frac{\sum_{q=s}^{l}\Theta(q)\xi(q+1)}{\sqrt {\sum_{q=s}^{l}\Theta^{2}(q)}} \biggr]=\infty, \qquad\liminf_{l\to\infty } \biggl[\frac{\sum_{q=s}^{l}\Theta(q)\xi(q+1)}{\sqrt{\sum_{q=s}^{l}\Theta^{2}(q)}} \biggr]=-\infty. $$
(53)

The limits in (53) imply that, for each \(s\in\mathcal {S}\),

$$\limsup_{n\to\infty} \biggl[\frac{\sum_{q=s}^{nK+s-1}\Theta(q)\xi (q+1)}{\sqrt{\sum_{q=s}^{nK+s-1 }\Theta^{2}(q)}} \biggr]=\infty,\qquad \liminf_{n\to\infty} \biggl[\frac{\sum_{q=s}^{nK+s-1}\Theta(q)\xi (q+1)}{\sqrt{\sum_{q=s}^{nK+s-1 }\Theta^{2}(q)}} \biggr]=-\infty. $$

Then, applying (50) and (52), we conclude that

$$\limsup_{n\to\infty}M^{s}(n)=\infty, \qquad\liminf _{n\to\infty }M^{s}(n)=-\infty. $$

(iii) For \(|L|>1\) we obtain from (46) that, for all \(n\in\mathbf {N}\),

$$\bigl\langle M^{s}(n) \bigr\rangle \le K C_{s}^{2}H_{\sigma}^{2} \sum_{i=0}^{n-1}|L|^{-2i}< K C_{s}^{2}H_{\sigma}^{2} \frac{1}{1-|L|^{-2}}, $$

which along with Lemma 4 implies (45). □

Lemma 9

Let Assumptions 1, 2, 3, and 4 hold. Let \(\mathcal {H}^{s}(n)\) be defined as in (27) and \(\bar{M}^{s}\) be defined as in (45).

  1. (i)

    Let \(|L|=1\).

    1. (a)

If \(\sigma\in\boldsymbol{l_{2}}\), then \(\lim_{n\to\infty}\mathcal {H}^{s}(n)=\bar{M}^{s}\) a.s.

    2. (b)

      If \(\sigma\notin \boldsymbol{ l_{2}}\) and \(\mathbf {E} |\xi |^{3}<\infty\), then

      $$\limsup_{n\to\infty}\mathcal {H}^{s}(n)=\infty\quad\textit{and} \quad \liminf_{n\to\infty}\mathcal {H}^{s}(n)=-\infty. $$
  2. (ii)

    Let \(|L|<1\) and

    $$ \lim_{n\to\infty}\sigma(n)\xi(n+1)=0, \quad \textit{a.s.}, $$
    (54)

    then \(\lim_{n\to\infty}\mathcal {H}^{s}(n)=0\) a.s.

  3. (iii)

Let \(|L|>1\), then \(\bar{M}^{s}\) is a.s. finite and \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\) a.s. on the set \(\{\omega: \bar{M}^{s}(\omega)\neq0\}\).

Proof

Parts (i)(a)-(b) follow from Lemma 8(ii)(a)-(b).

To prove (ii) we first apply Lemma 7(vi) and then apply Lemma 11 (see the Appendix) pathwise, almost surely.

When \(|L|>1\), Lemma 8(iii), implies that \(M^{s}(n)\to\bar{M}^{s}\), where \(\bar{M}^{s}\) is a.s. finite. Since \(|L|^{n}\to\infty\), part (iii) follows on the set \(\{\omega: \bar{M}^{s}(\omega)\neq0\}\). □

Remark 7

Note that condition (54) holds when \(\xi(n)\) are normally distributed random variables and \(\sigma(n)\) decays as \([\log n]^{-1/2-\varepsilon}\), for some \(\varepsilon>0\), or more quickly as \(n\to\infty\).

When tails of ξ decay polynomially, i.e. \([1-F(n)]n^{M}\to \mbox{constant}\) as \(n\to\infty\), where F is the distribution function of the ξ and \(M\ge2\), then (54) holds if and only if \(\sum_{i=1}^{\infty} [\sigma(i) ]^{M}< \infty\).

Note also that assumption \(\sigma\in\boldsymbol {l}_{2}\) implies a.s. convergence of \(\sum_{i=1}^{\infty}\sigma(i)\xi(i+1)\), and, therefore, condition (54).

A detailed analysis of condition (54) can be found in [10] (see also [9, 16]).

6 Almost sure asymptotic periodicity of \(X(n)\)

In Section 6.1 we deal with the a.s. convergence of the solution \(Y^{s}(n)\), and then, in Section 6.2, with the a.s. asymptotic periodicity of the solution \(X(m)\) of the original equation (13).

In Section 6.2 we also discuss the possibility of a partial a.s. periodicity; see Remark 9.

6.1 On limits of \(Y^{s}(n)\)

In this section we prove Lemma 10 about the asymptotic behaviour of \(Y^{s}(n)\), applying Lemmata 3 and 9 and equation (20). Proposition 1, which is a corollary of Lemma 10, contains several sharp results about convergence of \(Y^{s}(n)\).

Lemma 10

Let Assumptions 1, 2, 3, and 4 hold. Let ĝ be defined as in (12) and \(\mathcal {S}\) be defined as in (15).

Let \(s\in\mathcal {S}\). Let \(\mathcal {A}_{s}\) be defined as in (29) and \(Y^{s}\) be defined as in (28).

  1. (i)

    Let \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that

    $$ \lim_{n\to\infty}\big|Y^{s}(n)-\mathcal {Q}^{s}\big|=0, \quad \textit{a.s.} $$
    (55)
  2. (ii)

    Let \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\mathcal {A}_{s}=0\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that (55) holds.

  3. (iii)

Let \(L=-1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exist an a.s. finite random variable \(\mathcal {Q}^{s}\) and a 2-periodic random function \(\hat{V}^{s}(n)\) such that

$$ \lim_{n\to\infty}\big| Y^{s}(n)-\mathcal {Q}^{s} -\hat{V}^{s}(n)\big|=0, \quad \textit{a.s.} $$
    (56)
  4. (iv)

    Let \(|L|<1\) and let condition (54) hold. Then

    $$\lim_{n\to\infty}Y^{s}(n)=\frac{\hat{g} \mathcal {A}_{s}}{1-L}. $$

Proof

In cases (i)-(ii) we have \(Y^{s}(n)=Y^{s}(0)+\mathcal {V}^{s}(n-1)+\mathcal {H}^{s}(n)\), and Lemma 3(i)(b) and Lemma 9(i)(a) hold, so that

$$\begin{aligned}& \lim_{n\to\infty}\mathcal {H}^{s}(n)=\bar{M}^{s},\quad\text{where } \bar{M}^{s}\text{ is a.s. finite random variable}, \end{aligned}$$
(57)
$$\begin{aligned}& \lim_{n\to\infty}\mathcal {V}^{s}(n)=\overline{\mathcal {V}^{s}}. \end{aligned}$$
(58)

The result holds for \(\mathcal {Q}^{s}=Y^{s}(0)+\overline{\mathcal {V}^{s}}+\bar{M}^{s}\), where \(\overline{\mathcal {V}^{s}}\) is defined by (33) (with \(\hat{g}=0\) in case (i)). Note that \(\overline{\mathcal {V}^{s}}=0\) if \(g(n)\equiv0\).

In case (iii) we have \(Y^{s}(n)=(-1)^{n}Y^{s}(0)+\mathcal {V}^{s}(n-1)+\mathcal {H}^{s}(n)\), and Lemma 3(ii) and Lemma 9(i)(a) hold. So, in addition to (57), we have

$$\lim_{n\to\infty} \bigl\vert \mathcal {V}^{s}(n)- \overline{\mathcal {V}^{s}}-\mathcal {V}_{1}^{s}(n) \bigr\vert =0, $$

where \(\overline{\mathcal {V}^{s}}\) is the number defined by (33) and \(\mathcal {V}_{1}^{s}(n)\) is a 2-periodic nonrandom function. Then the result holds for \(\mathcal {Q}^{s}=\overline{\mathcal {V}^{s}}+\bar{M}^{s}\) and \(\hat{V}^{s}(n)= (-1)^{n}Y^{s}(0)+\mathcal {V}_{1}^{s}(n)\).

In case (iv) we have \(Y^{s}(n)= L^{n}Y^{s}(0)+\mathcal {V}^{s}(n-1)+\mathcal {H}^{s}(n)\), and Lemma 3(iii) and Lemma 9(ii) hold, and

$$\lim_{n\to\infty} L^{n}Y^{s}(0)=0, \qquad\lim _{n\to\infty}\mathcal {V}^{s}(n) = \frac{ \hat{g}\mathcal {A}_{s}}{1-L}, \quad \text{and} \quad\lim_{n\to \infty}\mathcal {H}^{s}(n) = 0, $$

which implies the result. □

Remark 8

Recalling that \(\operatorname{Var} (\bar{M}^{s})\neq0\), we can conclude that, under the assumptions of Lemma 10,

  1. (a)

\(Y^{s}(n)\) converges to an a.s. finite random variable in (i)-(ii) and (iv), and approaches a 2-periodic function in (iii);

  2. (b)

    The limit in (iv) is nonrandom, while the limits in (i)-(iii) are random.

  3. (c)

    In all cases the limit of \(Y^{s}(n)\) may depend on \(s\in \mathcal {S}\); see Example 1, Section 7.

  4. (d)

The only case when the limit of \(Y^{s}(n)\) can be zero is given in (iv), when either \(\hat{g}=0\) or \(\mathcal {A}_{s}=0\).

In the next proposition, which is a corollary of Lemmata 10 and 9, we highlight the cases when condition \(\sigma\in \boldsymbol {l}_{2}\) is necessary and sufficient for the convergence of \(Y^{s}(n)\) to an a.s. finite random variable (or to 2-periodic nonrandom function).

Proposition 1

Let Assumptions 1, 2, 3, and 4 hold and let \(\mathbf {E}|\xi|^{3}<\infty\). Let ĝ be defined as in (12), \(\mathcal {S}\) be defined as in (15), L be defined as in (17), \(\mathcal {A}_{s}\) be defined as in (29).

  1. (i)

    Let \(L=1\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\). Then, for each \(s\in\mathcal {S}\), there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that

$$ \lim_{n\to\infty}\big|Y^{s}(n)-\mathcal {Q}^{s}\big|=0, \quad \textit{a.s.},\quad \textit{if and only if}\quad \sigma\in\boldsymbol {l}_{2}. $$
    (59)
  2. (ii)

    Let \(L=1\), \(\hat{g}\neq0\) and \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Let \(\mathcal {A}_{s}=0\) for some \(s\in\mathcal {S}\). Then there exists an a.s. finite random variable \(\mathcal {Q}^{s}\) such that (59) holds.

  3. (iii)

Let \(L=-1\). Then, for each \(s\in\mathcal {S}\), there exist an a.s. finite random variable \(\mathcal {Q}^{s}\) and a 2-periodic random function \(\hat{V}^{s}(n)\) such that

$$\lim_{n\to\infty}\big| Y^{s}(n)-\mathcal {Q}^{s} -\hat{V}^{s}(n)\big|=0, \quad\textit{a.s.},\quad \textit{if and only if}\quad \sigma\in\boldsymbol {l}_{2}. $$

Proof

Lemma 10(i)-(ii) implies the sufficiency for parts (i)-(ii), respectively. To prove the necessity, assume that \(\sigma \notin\boldsymbol {l}_{2}\). By Lemma 9(i)(b), \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\), a.s. The first term, \(L^{n}Y^{s}(0)=Y^{s}(0)\), on the right-hand side of (28) is a.s. bounded, and, by Lemma 3(i)(b), the second term, \(\mathcal {V}^{s}(n)\), is nonrandom and converges. This implies that \(\limsup_{n\to\infty}|Y^{s}(n)| = \infty\), a.s.

Lemma 10(iii) implies the sufficiency for part (iii). To prove the necessity, we reason as in the proof of parts (i)-(ii). By Lemma 9(i)(b), \(\limsup_{n\to\infty}|\mathcal {H}^{s}(n)|=\infty\), a.s., if \(\sigma\notin\boldsymbol {l}_{2}\). Since the first term, \((-1)^{n}Y^{s}(0)\), on the right-hand side of (28) is a.s. bounded, and, by Lemma 3(ii), the second term, \(\mathcal {V}^{s}(n)\), is nonrandom and bounded, we have \(\limsup_{n\to\infty}|Y^{s}(n)| = \infty\), a.s. □

6.2 Almost sure asymptotic periodicity of \(X(n)\)

In this section we return to the solution X of the original problem (13). Armed with Lemma 10 we formulate the main result of the paper, Theorem 1, which establishes conditions of a.s. asymptotic periodicity of \(X(n)\).

Define a set

$$ \mathcal {E}:=\{0, 1, \dots, K-1, K, \dots, 2K-1 \}. $$
(60)

Theorem 1

Let Assumptions 1, 2, 3, and 4 hold. Let \(\mathcal {S}\) be defined as in (15), ĝ be defined as in (12), \(\mathcal {A}_{s}\) be defined as in (29), \(\mathcal {E}\) be defined as in (60).

If X is a solution to equation (13), then:

  1. (i)

    There exists an a.s. finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}\), such that

    $$ \lim_{n\to\infty}\big|X(nK+s)-\mathcal {R}(s)\big|=0, \quad \textit{a.s. for each }s\in\mathcal {S}, $$
    (61)

    if one of the following conditions holds:

    1. (a)

      \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\).

    2. (b)

      \(L=1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), and \(\mathcal {A}_{s}=0\) for each \(s\in\mathcal {S}\).

    3. (c)

      \(|L|<1\) and condition (54) holds.

  2. (ii)

    There exists an a.s. finite random function \(\mathcal {R}(e)\), defined on \(\mathcal {E}\), such that

    $$ \lim_{n\to\infty}\big|X(2nK+e)-\mathcal {R}(e)\big|=0, \quad \textit{a.s. for each }e\in\mathcal {E}, $$
    (62)

    if \(L=-1\), \(\sigma\in\boldsymbol {l}_{2}\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\).

Proof

Since \(X(nK+s)=Y^{s}(n)\), the results for (i)(a)-(i)(b) follow from Lemma 10(i)-(ii), respectively, with

$$\mathcal {R}(s)=Y^{s}(0)+\overline{\mathcal {V}^{s}}+\bar{M}^{s}. $$

The result for (i)(c) follows from Lemma 10(iv), with

$$\mathcal {R}(s)=\frac{\hat{g} \mathcal {A}_{s}}{1-L}. $$

Now we prove part (ii). Let \(\mathcal {Q}^{s}\) and \(\hat{V}^{s}(n)\) be, respectively, the a.s. finite random variable and the 2-periodic function defined as in Lemma 10(iii) (see also Lemma 3(ii)):

$$\mathcal {Q}^{s}=\overline{\mathcal {V}^{s}}+\bar{M}^{s}, \qquad\hat{V}^{s}(n)=(-1)^{n}Y^{s}(0)+ \mathcal {V}_{1}^{s}(n), \quad\text{for each } s\in \mathcal {S}, n\in\mathbf {N}. $$

Define a random function \(\mathcal {R}(e)\) on \(\mathcal {E}\) by the following:

$$ \begin{gathered} \mathcal {R}(0)=\mathcal {Q}^{0}+Y^{0}(0)+\mathcal {V}_{1}^{0}(0), \\ \mathcal {R}(1)=\mathcal {Q}^{1}+Y^{1}(0)+\mathcal {V}_{1}^{1}(0), \\ \dots \\ \mathcal {R}(K-1)=\mathcal {Q}^{K-1}+Y^{K-1}(0)+\mathcal {V}_{1}^{K-1}(0), \\ \mathcal {R}(K)=\mathcal {Q}^{0}-Y^{0}(0)+\mathcal {V}_{1}^{0}(1), \\ \mathcal {R}(K+1)=\mathcal {Q}^{1}-Y^{1}(0)+\mathcal {V}_{1}^{1}(1), \\ \dots \\ \mathcal {R}(2K-1)=\mathcal {Q}^{K-1}-Y^{K-1}(0)+\mathcal {V}_{1}^{K-1}(1). \end{gathered} $$
(63)

Recall that \(X(2kK+e)=Y^{e}(2k)\) for \(e\in\mathcal {S}\) and \(X(2kK+e)=X((2k+1)K+e-K)= Y^{e-K}(2k+1)\) for \(e-K\in\mathcal {S}\). So the result follows from equation (56) in Lemma 10(iii). □

Remark 9

Note that only in case (i)(c) of Theorem 1 does the solution X tend to a nonrandom periodic function, which is identically zero if either \(\hat{g}=0\) or \(\mathcal {A}_{s}=0\) for all \(s\in\mathcal {S}\). In all other cases X tends to a periodic stochastic process, which has nonzero variance.

It can be proved that, when \(K=3\), the case \(\mathcal {A}_{s}\equiv A\neq 0\), \(s=0, 1, 2\), is possible only when \(a(i)=a\neq0\), \(i=0, 1, 2\). However, a constant coefficient sequence cannot be considered genuinely 3-periodic. So, for \(|L|<1\) and \(K=3\), X cannot converge to a nonzero constant limit.

Now we formulate the sharp statement about the asymptotic behaviour of X.

Proposition 2

Let Assumptions 1, 2, 3, and 4 hold and let \(\mathbf {E}|\xi|^{3}<\infty\). Let \(\mathcal {S}\) be defined as in (15), ĝ be defined as in (12), \(\mathcal {A}_{s}\) be defined as in (29), \(\mathcal {E}\) be defined as in (60). Let X be a solution to equation (13).

  1. (i)

    Let one of the following conditions hold:

    1. (a)

      \(L=1\), \(\hat{g}=0\), \(\sum_{i=1}^{\infty}|g(i)|<\infty\).

    2. (b)

      \(L=1\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\), and \(\mathcal {A}_{s}=0\) for each \(s\in\mathcal {S}\).

    Then there exists an a.s. finite random function \(\mathcal {R}(s)\), defined on \(\mathcal {S}\), such that

    $$\lim_{n\to\infty}\big|X(nK+s)-\mathcal {R}(s)\big|=0, \quad \textit{for each }s\in \mathcal {S}, \textit{a.s.}, $$

    if and only if \(\sigma\in\boldsymbol {l}_{2}\).

  2. (ii)

    Let \(L=-1\), \(\sum_{i=1}^{\infty}|g(i)-\hat{g}|<\infty\). Then there exists an a.s. finite random function \(\mathcal {R}(e)\), defined on \(\mathcal {E}\), such that

    $$ \lim_{n\to\infty}\big|X(2nK+e)-\mathcal {R}(e)\big|=0, \quad\textit{for each }e\in\mathcal {E}, \textit{ a.s.}, $$

    if and only if \(\sigma\in\boldsymbol {l}_{2}\).

Proof

Since \(X(nK+s)=Y^{s}(n)\), the results for parts (i)(a)-(i)(b) follow from Proposition 1(i)-(ii), and the result for part (ii) follows from Proposition 1(iii). □

7 Examples and simulations

7.1 Calculations of \(\mathcal {A}_{s} \)

In Example 1 we present \(a(i)\) such that either \(\mathcal {A}_{s_{1}}=0\) while \(\mathcal {A}_{s_{2}}\neq0\) for some \(s_{1}, s_{2}\in\mathcal {S}\), or \(\mathcal {A}_{s}=0\) for all \(s\in\mathcal {S}\). However, the first case can happen only if \(L\neq1\), as will be shown in Example 2.

Example 1

Let \(\mathcal {A}_{s}\) be defined as in (29). We consider \(K=2\) and \(K=3\).

  1. (i)

    \(K=2\), \(\mathcal {A}_{s}=\sum_{j=1}^{2}\prod_{\tau=j}^{1}a(\tau +s)\), so

    $$ \begin{gathered}\mathcal {A}_{0}=\sum _{j=1}^{2}\prod_{\tau=j}^{1}a( \tau)= a(1)+1, \\ \mathcal {A}_{1}=\sum_{j=1}^{2} \prod_{\tau=j}^{1}a(\tau+1)=a(2)+1=a(0)+1. \end{gathered} $$

    For \(a(0)=-1\), \(a(1)=1\), we have \(\mathcal {A}_{0}=2\), \(\mathcal {A}_{1}=0\).

  2. (ii)

    \(K=3\) and \(\mathcal {A}_{s}=\sum_{j=1}^{3}\prod_{\tau=j}^{2}a(\tau +s)\), so

$$ \begin{gathered} \mathcal {A}_{0}=\sum _{j=1}^{3}\prod_{\tau=j}^{2}a( \tau)=a(1)a(2)+a(2)+1, \\ \mathcal {A}_{1}=\sum_{j=1}^{3} \prod_{\tau=j}^{2}a(\tau+1)=a(2)a(0)+a(0)+1, \\ \mathcal {A}_{2}=\sum_{j=1}^{3} \prod_{\tau=j}^{2}a(\tau+2)=a(0)a(1)+a(1)+1. \end{gathered} $$
    (64)
    1. (a)

      For \(a(1)=1\), \(a(0)=-2\), \(a(2)=2\) we have \(\mathcal {A}_{0}=5\), \(\mathcal {A}_{1}=-5\), \(\mathcal {A}_{2}=0\).

    2. (b)

      For \(a(1)=1\), \(a(2)=-\frac{1}{a(1)+1}=-\frac{1}{2}\), \(a(0)=-\frac {a(1)+1}{a(1)}=-2\), we have \(\mathcal {A}_{0}=0\), \(\mathcal {A}_{1}=0\), \(\mathcal {A}_{2}=0\).
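These values can be checked programmatically; a small sketch (our illustration) computing \(\mathcal {A}_{s}\) directly from (29):

```python
import numpy as np

def A(a, s):
    """A_s = sum_{j=1}^{K} prod_{tau=j}^{K-1} a(tau+s), indices mod K; empty product = 1."""
    K = len(a)
    return sum(float(np.prod([a[(t + s) % K] for t in range(j, K)]))
               for j in range(1, K + 1))

print([A([-1, 1], s) for s in range(2)])         # (i):     [2.0, 0.0]
print([A([-2, 1, 2], s) for s in range(3)])      # (ii)(a): [5.0, -5.0, 0.0]
print([A([-2, 1, -0.5], s) for s in range(3)])   # (ii)(b): [0.0, 0.0, 0.0]
```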

Example 2

Suppose that

$$ L=\prod_{i=0}^{K-1}a(i)=1. $$
(65)

We show that if \(\mathcal {A}_{0}=0\) then \(\mathcal {A}_{s}=0\) for all \(s=1, 2, \dots, K-1\), so partial periodicity is not possible.

We have

$$\mathcal {A}_{0}=\prod_{i=1}^{K-1}a(i)+ \prod_{i=2}^{K-1}a(i)+ \cdots+a(K-1)+1 =\prod_{i=1}^{K-1}a(i) \biggl[1+\frac{1}{a(1)} +\frac{1}{a(1)a(2)}+\cdots+\frac{1}{\prod_{i=1}^{K-1}a(i)} \biggr]=0, $$

so

$$\frac{1}{a(1)} +\frac{1}{a(1)a(2)}+\cdots+\frac{1}{\prod_{i=1}^{K-1}a(i)}=-1. $$

But then

$$\mathcal {A}_{1}=a(0)\prod_{i=2}^{K-1}a(i)+a(0) \prod_{i=3}^{K-1}a(i)+ \cdots+a(0)+1 =L \biggl[\frac{1}{a(1)}+ \frac{1}{a(1)a(2)}+\cdots+ \frac{1}{\prod_{i=1}^{K-1}a(i)}+\frac{1}{L} \biggr] =0. $$

Similar calculations can be done for each \(\mathcal {A}_{s}\).

7.2 Simulations

In this section we illustrate our results with computer simulations. We consider equation (2) for different types of \(a(m)\). In all the examples the random variables \(\xi(m)\) are independent and \(\mathcal {N}(0, 1)\)-distributed, \(X(0)=2\), and

$$ \begin{gathered} \sigma(m)=\frac{\sigma}{m+1}, \\ g(m)=\hat{g}+\frac{1}{(m+1)^{2}}, \quad \text{with either $\hat{g}=1$ or $\hat{g}=0$}, \quad m\in\mathbf {N}_{0}. \end{gathered} $$
(66)

So the equation which we are simulating is

$$ \begin{gathered} X(m+1)=a(m)X(m)+ \biggl[\hat{g}+ \frac{1}{(m+1)^{2}} \biggr]+\frac{\sigma}{m+1}\xi(m+1), \quad m\in\mathbf {N}_{0}, \\ X(0)=2. \end{gathered} $$
(67)
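The figures in this section were produced by simulating (67) directly; a minimal Python sketch of the scheme (our reconstruction — the original software is not specified) is:

```python
import numpy as np

def simulate(a, g_hat, sigma_c, N=4000, x0=2.0, seed=None):
    """Simulate X(m+1) = a(m)X(m) + [g_hat + 1/(m+1)^2] + sigma_c/(m+1) * xi(m+1)."""
    rng = np.random.default_rng(seed)
    K, X = len(a), np.empty(N + 1)
    X[0] = x0
    for m in range(N):
        X[m + 1] = (a[m % K] * X[m] + g_hat + 1.0 / (m + 1) ** 2
                    + sigma_c / (m + 1) * rng.standard_normal())
    return X

# Example 3 below: K=2, a(0)=-1, a(1)=1, g_hat=1, sigma=0.1; L=-1, so the
# limits are taken along the subsequences m = 2nK + e = 4n + e, e = 0, 1, 2, 3
X = simulate([-1.0, 1.0], g_hat=1.0, sigma_c=0.1, seed=42)
for e in range(4):
    print(e, X[e::4][-3:])   # the tail of each subsequence settles near R(e)
```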

Example 3

Let

$$K=2, \qquad a(0)=-1, \qquad a(1)=1, \qquad\hat{g}=1, $$

so

$$L=a(0) a(1)=-1, \qquad \mathcal {A}_{0}=2, \qquad\mathcal {A}_{1}=0. $$

Since the assumptions of Theorem 1(ii) hold, we can expect four random limits of the solution. More exactly, the limits \(\mathcal {R}(e)\), \(e\in\{0, 1, 2, 3\}\), are given by (63). The following simulations illustrate the results; in all of them we used \(\sigma=0.1\).

Figure 1 demonstrates one run, with four coloured lines indicating the random limits for \(e=0, 1, 2, 3\).

Figure 1. \(K=2\), \(a(0)=-1\), \(a(1)=1\), \(\hat{g}=1\). One sample trajectory, showing the dynamics of the solution (black line) and the four limits for \(e=0\) (red), \(e=1\) (yellow), \(e=2\) (blue) and \(e=3\) (green).

Figures 2 and 3 demonstrate two different samples of ten runs, showing the random limits for \(e=0, 1, 2, 3\).

Figure 2. \(K=2\), \(a(0)=-1\), \(a(1)=1\), \(\hat{g}=1\). A set of ten sample trajectories, showing the four random limits for \(e=0\) (red), \(e=1\) (yellow), \(e=2\) (blue) and \(e=3\) (green).

Figure 3. \(K=2\), \(a(0)=-1\), \(a(1)=1\), \(\hat{g}=1\). A set of ten sample trajectories, showing the four random limits for \(e=0\) (red), \(e=1\) (yellow), \(e=2\) (blue) and \(e=3\) (green).

Figures 4 and 5 demonstrate the random limits for \(e=0\) and \(e=2\), respectively, for 80 different runs.

Figure 4. \(K=2\), \(a(0)=-1\), \(a(1)=1\), \(\hat{g}=1\). The random limit for \(e=0\) for 80 different runs.

Figure 5. \(K=2\), \(a(0)=-1\), \(a(1)=1\), \(\hat{g}=1\). The random limit for \(e=2\) for 80 different runs.

Example 4

Let

$$K=2, \qquad a(0)=1, \qquad a(1)=\frac{1}{2}, \qquad\hat{g}=1, $$

so

$$L=a(0) a(1)=\frac{1}{2}< 1, \qquad \mathcal {A}_{0}=a(1)+1= \frac{3}{2}, \qquad\mathcal {A}_{1}=a(2)+1=a(0)+1=2. $$

Now we are under the assumptions of Theorem 1(i)(c), so we can expect two nonrandom limits of the solution. More exactly, they are \(\mathcal {R}(s)\), \(s\in\{0, 1\}\), where

$$\mathcal {R}(s)= \frac{\hat{g} \mathcal {A}_{s}}{1-L}=2\mathcal {A}_{s}, \quad \text{so } \mathcal {R}(0)= 3, \mathcal {R}(1)= 4. $$
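These limits are easy to confirm with the simulation sketch above (an illustration under the same assumptions):

```python
# continuing the sketch from the beginning of Section 7.2:
# K=2, a(0)=1, a(1)=1/2, g_hat=1, sigma=0.5
X = simulate([1.0, 0.5], g_hat=1.0, sigma_c=0.5, seed=7)
print(X[0::2][-3:])   # subsequence m = 2n:   approaches R(0) = 3
print(X[1::2][-3:])   # subsequence m = 2n+1: approaches R(1) = 4
```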

Figure 6 demonstrates one run, also showing the two limits for \(s=0, 1\), while Figure 7 demonstrates ten different runs, showing the nonrandom limits for \(s=0\) and \(s=1\). Both simulations used \(\sigma=0.5\).

Figure 6. \(K=2\), \(a(0)=1\), \(a(1)=1/2\), \(\hat{g}=1\). Sample trajectory, showing the dynamics of the solution (black line) and the two limits for \(s=0\) (red) and \(s=1\) (blue).

Figure 7. \(K=2\), \(a(0)=1\), \(a(1)=1/2\), \(\hat{g}=1\). Ten trajectories, showing the two nonrandom limits for \(s=0\) (red) and \(s=1\) (blue).

Example 5

Let

$$K=2, \qquad a(0)=\frac{1}{2}, \qquad a(1)=2,\qquad\hat{g}=1, $$

so

$$L=a(0) a(1)=1, \qquad \mathcal {A}_{0}=a(1)+1=3, \qquad\mathcal {A}_{1}=a(0)+1=\frac{3}{2}. $$

Recall that, for \(L=1\) and \(\mathcal {S}=\{0, 1\}\), we have

$$ X(nK+s)= X(s)+\mathcal {V}^{s}(n-1)+ \mathcal {H}^{s}(n), \quad n\in\mathbf {N}, s\in\{0, 1\}, $$

where \(\mathcal {V}^{s}\) and \(\mathcal {H}^{s}\) are defined by (27). Since both \(\mathcal {A}_{0}\neq0\) and \(\mathcal {A}_{1}\neq0\), and also \(\hat{g}\neq0\), we are under the assumptions of Lemma 3(i)(a), which implies that \(|\mathcal {V}^{s}(n)|\to\infty\). Since \(\bar{M}^{s}\) is a.s. finite and \(L=1\), this implies that \(\lim_{n\to\infty}|X(2n+s)|=\infty\) for each \(s=0, 1\). This is illustrated in Figure 8.

Figure 8

\(\pmb{K=2}\), \(\pmb{a(0)=1/2}\), \(\pmb{a(1)=2}\), \(\pmb{\hat{g}=1}\). Sample trajectory, demonstrating the divergence of the solution along \(s=0\) (red) and \(s=1\) (blue).

If, however, we assume \(\hat{g}=0\), i.e. we simulate the solution of the equation

$$ X(m+1)=a(m)X(m)+\frac{1}{(m+1)^{2}}+\frac{1}{m+1}\xi(m+1), \quad m\in \mathbf {N}_{0},\qquad X(0)=\varsigma, $$

with \(a(m)\) as above, we obtain two random, a.s. finite limits. Figure 9 shows these two random limits, \(s=0, 1\), for 20 different trajectories.
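Both regimes are easy to reproduce with the simulate helper sketched earlier; note that for \(\hat{g}=0\) its assumed perturbations \(g(m)=(m+1)^{-2}\) and \(\sigma(m)=1/(m+1)\) (taking \(\sigma=1\)) coincide with the equation displayed above:

```python
# Example 5: K = 2, a(0) = 1/2, a(1) = 2, so L = 1.
# With g_hat = 1 the subsequences X(2n+s) diverge; with g_hat = 0 the recursion
# matches the displayed equation and X(2n+s) settles at a random, a.s. finite limit.
x_div  = simulate(a=[0.5, 2.0], g_hat=1.0, sigma=1.0, n_steps=200)
x_conv = simulate(a=[0.5, 2.0], g_hat=0.0, sigma=1.0, n_steps=2000)
print("g_hat = 1:", x_div[0::2][-3:])   # magnitudes keep growing
print("g_hat = 0:", x_conv[0::2][-3:])  # hovers near a random limit
```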

Figure 9

\(\pmb{K=2}\), \(\pmb{a(0)=1/2}\), \(\pmb{a(1)=2}\), \(\pmb{\hat{g}=0}\). Twenty trajectories, showing the two random limits for \(s=0\) (red) and \(s=1\) (blue).

References

  1. Elaydi, SN: An Introduction to Difference Equations, 2nd edn. Springer, Berlin (1999)

  2. Dannan, F, Elaydi, S, Liu, P: Periodic solutions of difference equations. J. Differ. Equ. Appl. 6(2), 203-232 (2000)

  3. Grove, EA, Ladas, G: Periodicities in Nonlinear Difference Equations. Chapman & Hall/CRC, Boca Raton (2005)

  4. Diblik, J, Ruzickova, M, Schmeidel, E, Zbaszyniak, M: Weighted asymptotically periodic solutions of linear Volterra difference equations. Abstr. Appl. Anal. (2011). doi:10.1155/2011/370982

  5. Diblik, J, Ruzickova, M, Schmeidel, E: Existence of asymptotically periodic solutions of system of Volterra difference equations. J. Differ. Equ. Appl. 15(11-12), 1165-1177 (2009)

  6. Agarwal, RP, Romanenko, EY: Stable periodic solutions of difference equations. Appl. Math. Lett. 11(4), 81-84 (1998)

  7. Appleby, JAD, Mao, X, Rodkina, A: On stochastic stabilization of difference equations. Discrete Contin. Dyn. Syst., Ser. A 15(3), 843-857 (2006)

  8. Appleby, JAD, Kelly, C, Mao, X, Rodkina, A: On the local dynamics of polynomial difference equations with fading stochastic perturbations. Dyn. Contin. Discrete Impuls. Syst., Ser. A Math. Anal. 17(3), 401-430 (2010)

  9. Appleby, JAD, Berkolaiko, G, Rodkina, A: Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noise. Stoch. Int. J. Probab. Stoch. Process. 81(2), 99-127 (2009)

  10. Appleby, JAD, Berkolaiko, G, Rodkina, A: On local stability for a nonlinear difference equation with a non-hyperbolic equilibrium and fading stochastic perturbations. J. Differ. Equ. Appl. 14(9), 923-951 (2008)

  11. Berkolaiko, G, Rodkina, A: Almost sure convergence of solutions to non-homogeneous stochastic difference equation. J. Differ. Equ. Appl. 12(6), 535-553 (2006)

  12. Feng, C, Zhao, H, Zhou, B: Pathwise random periodic solutions of stochastic differential equations. J. Differ. Equ. 251, 119-149 (2011)

  13. Cao, J, Yang, Q, Huang, Z, Liu, Q: Asymptotically almost periodic solutions of stochastic functional differential equations. Appl. Math. Comput. 218, 1499-1511 (2011)

  14. Yang, L, Li, Y: Periodic solutions for impulsive BAM neural networks with time-varying delays in leakage term. Int. J. Differ. Equ. (2013). doi:10.1155/2013/543947

  15. Dokuchaev, N, Rodkina, A: On limit periodicity of discrete time stochastic processes. Stoch. Dyn. 14(4), 1450011 (2014). doi:10.1142/S0219493714500117

  16. Chan, T, Williams, D: An excursions approach to an annealing problem. Math. Proc. Camb. Philos. Soc. 105, 169-176 (1989)

  17. Shiryaev, AN: Probability, 2nd edn. Springer, Berlin (1996)


Acknowledgements

This work was done while the first author was a visiting professor at the Department of Statistics, Science Campus, the University of South Africa, Johannesburg, South Africa. The authors are grateful to the anonymous referees for their comments and suggestions.

Author information

Corresponding author

Correspondence to Alexandra Rodkina.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors contributed equally to this paper. They read and approved the final manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

First we formulate and prove an auxiliary lemma, which is used in the proofs of Lemma 3 in Section 3 and Lemma 9 in Section 5. After that we present a proof of Lemma 6.

Lemma 11

Let \((\alpha_{n})_{n\in\mathbf {N}}\) be a sequence of real numbers such that \(\lim_{n\to\infty}\alpha_{n}=\bar{\alpha}\) and let \(|l|<1\). Then

$$\lim_{n\to\infty} l^{n}\sum_{i=0}^{n}l^{-i} \alpha_{i}=\frac{\bar{\alpha}}{1-l}. $$

Proof

Let \(A>0\) be such that, for each \(i\in\mathbf {N}_{0}\),

$$|\alpha_{i}-\bar{\alpha}|\le A $$

(such an \(A\) exists since the convergent sequence \((\alpha_{n})_{n\in\mathbf {N}}\) is bounded). Fix some \(\varepsilon>0\) and find \(N_{1}\in\mathbf {N}\) such that, for \(i>N_{1}\),

$$|\alpha_{i}-\bar{\alpha}|< \frac{\varepsilon(1-|l|)}{4}. $$

Let

$$N_{2}>\max \biggl\{ N_{1}, \log_{|l|} \frac{\varepsilon (1-|l|)}{2A(|l|^{-N_{1}}-|l|)} \biggr\} . $$

Then, for \(n\ge N_{2}\),

$$ \begin{aligned}\Biggl\vert l^{n}\sum_{i=0}^{n}l^{-i} \alpha_{i}-\bar{\alpha}l^{n}\sum_{i=0}^{n}l^{-i} \Biggr\vert &\le |l|^{n}\sum_{i=0}^{N_{1}}|l|^{-i} |\alpha_{i}-\bar{\alpha}|+|l|^{n}\sum_{i=N_{1}+1}^{n}|l|^{-i} |\alpha_{i}- \bar{\alpha}| \\ &\le A|l|^{n}\frac{|l|^{-N_{1}}-|l|}{1-|l|}+\frac{\varepsilon(1-|l|)}{4}\frac {1-|l|^{n-N_{1}}}{1-|l|}\le \varepsilon, \end{aligned}$$

and since \(l^{n}\sum_{i=0}^{n}l^{-i}=\sum_{j=0}^{n}l^{j}\to\frac{1}{1-l}\) as \(n\to\infty\), this concludes the proof. □
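As a quick numerical sanity check of Lemma 11, one can evaluate the weighted sum directly; the data below (\(\alpha_{i}=2+(i+1)^{-1}\), so \(\bar{\alpha}=2\), and \(l=1/2\)) are an arbitrary illustrative choice:

```python
# Lemma 11: l^n * sum_{i=0}^{n} l^{-i} alpha_i -> alpha_bar / (1 - l).
# Illustrative data: alpha_i = 2 + 1/(i+1), alpha_bar = 2, l = 0.5, limit = 4.
l, n = 0.5, 60
alpha = lambda i: 2.0 + 1.0 / (i + 1)
# Summing l**(n - i) directly avoids the huge intermediate factors l**(-i).
val = sum(l ** (n - i) * alpha(i) for i in range(n + 1))
print(val)  # approximately 4.03, tending to 4 as n grows
```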

Proof of Lemma 6

For \(c>0\) define the events

$$ \begin{gathered} A_{c}=\Biggl\{ \omega: \limsup _{n\to\infty} \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta (i)|^{2}\mathbf {E}|\eta(i)|^{2}}} \sum_{i=0}^{n} \beta(i)\eta(i)>c\Biggr\} , \\ A=\Biggl\{ \omega: \limsup_{n\to\infty} \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i)|^{2}}}\sum _{i=0}^{n} \beta(i)\eta(i)=\infty\Biggr\} . \end{gathered} $$

Then \(A_{c}\downarrow A\) as \(c\to\infty\). Each \(A_{c}\) is a tail event, so, by the independence of the sequence \((\eta(n))_{n\in\mathbb{N}}\) and Kolmogorov's zero-one law, \(\mathbb{P}[A_{c}]\in\{0, 1\}\). Therefore

$$ \mathbb{P}[A_{c}]>0 \quad\text{for every $c>0$} $$
(68)

implies \(\mathbb{P}[A_{c}]=1\), and so \(\mathbb{P}[A]=\lim_{c\to\infty} \mathbb{P}[A_{c}]=1\). Therefore it suffices to prove (68) to establish the first part of (40).

Using the fact that, for any sequence of random variables \(\{\chi(n)\}_{n\in\mathbb{N}}\) and any \(x\in\mathbb{R}\),

$$\Bigl\{ \omega:\limsup_{n\to\infty} \chi(n) (\omega)>x\Bigr\} \supseteq \bigl\{ \omega:\chi(n) (\omega)>x+1\text{ i.o.}\bigr\} , $$

and the fact that \(\mathbb{P}[B_{n} \text{ i.o.}]\geq\limsup_{n\to\infty} \mathbb{P}[B_{n}]\) for any sequence of events \(\{B_{n}\}_{n\in\mathbb {N}}\), and then Corollary 3 in turn, we get

$$\begin{aligned} \mathbb{P}[A_{c}]&=\mathbb{P} \Biggl[\limsup_{n\to\infty} \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i)|^{2}}}\sum_{i=0}^{n} \beta(i) \eta(i)>c \Biggr] \\ &\geq \mathbb{P} \Biggl[ \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta (i)|^{2}}}\sum_{i=0}^{n} \beta(i)\eta(i)>c+1 \text{ i.o.} \Biggr] \\ &\geq\limsup_{n\to\infty} \mathbb{P} \Biggl[ \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta(i)|^{2}}}\sum _{i=0}^{n} \beta(i)\eta(i)>c+1 \Biggr] \\ &= \lim_{n\to\infty} \mathbb{P} \Biggl[ \frac{1}{\sqrt{\sum_{i=0}^{n} |\beta(i)|^{2}\mathbf {E}|\eta (i)|^{2}}}\sum _{i=0}^{n} \beta(i)\eta(i)>c+1 \Biggr] =1-\Phi(c+1), \end{aligned}$$

and since \(1-\Phi(c+1)>0\) for every \(c>0\), this establishes (68), proving (40). □
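The mechanism behind (68) can also be seen in a small Monte Carlo experiment: for each fixed \(n\) the normalised sum is approximately standard normal, so its running maximum keeps creeping upwards. The choices \(\beta\equiv1\) and standard normal \(\eta\) below are hypothetical, made so that \(\sum_{i}|\beta(i)|^{2}\mathbf {E}|\eta(i)|^{2}=\infty\):

```python
import numpy as np

rng = np.random.default_rng(1)
eta = rng.standard_normal(10**6)            # i.i.d. N(0,1), so E|eta(i)|^2 = 1
num = np.cumsum(eta)                        # sum_{i<=n} beta(i) eta(i), beta == 1
den = np.sqrt(np.arange(1, eta.size + 1))   # sqrt(sum_{i<=n} beta(i)^2 E|eta(i)|^2)
z = num / den
# Each z[n] is N(0,1), yet along the sequence the running maximum is unbounded
# a.s., in line with the limsup = infinity statement proved above.
print("final z:", z[-1], " running max:", z.max())
```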

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Rodkina, A., Rapoo, E.: On almost sure asymptotic periodicity for scalar stochastic difference equations. Adv. Differ. Equ. 2017, 220 (2017). https://doi.org/10.1186/s13662-017-1269-0