Investigation of linear difference equations with random effects

*Correspondence: mmerdan@gumushane.edu.tr, Department of Mathematical Engineering, Gümüşhane University, Gümüşhane, 29100, Turkey

Abstract

In this study, random linear difference equations obtained by transforming the components of deterministic difference equations into random variables are investigated. The uniform, Bernoulli, binomial, negative binomial (Pascal), geometric, hypergeometric and Poisson distributions are used as random effects to obtain the random behavior of linear difference equations. The random version of the Z-transform, the RZ-transform, is used to obtain an approximation for the random linear difference equation. Approximate expected values and variances are calculated by using the RZ-transform. The results have been obtained with Maple and are shown in graphs. It is shown that the random Z-transform is an effective tool for the investigation of random linear difference equations.


Introduction
Difference equations are among the first theories to emerge with the systematic development of mathematics. Having emerged as the discrete analogues of differential equations, difference equations are a field of mathematics with a rich application area. Economics, biology, signal processing, computer engineering, genetics, medicine, ecology and digital control are some of their application fields [1][2][3][4][5][6][7][8][9].
Today, difference equations are used for system analysis and design through the Z-transform. The Z-transform was introduced in the mid-20th century [10]. Its precursor, together with probability theory, was first introduced by de Moivre in 1730 and is known among mathematicians as the "generating function method". The Z-transform of a sequence is the generating function of this sequence with the independent variable z replaced by its reciprocal 1/z. It has been used within probability theory [11] and to treat sampled-data control systems [12]. In 1952, it was named the Z-transform by Ragazzini and Zadeh [13].
In this study, the random version of the Z-transform, the RZ-transform, is applied to obtain an approximation to the solution of random linear difference equations. Difference equations are transformed into random difference equations through the use of several probability distributions. The approximate solutions obtained by the Z-transform are used to obtain the approximate expected values and variances of random linear difference equations, which are shown in graphs. Sections 2 and 3 contain introductory information on linear difference equations and the Z-transform method. Section 4 contains numerical examples with the uniform, geometric, Poisson and Bernoulli distributions.
Definition 2 Let $x$ be a continuous variable, let $h_k(x)$, $k = 0, 1, \dots, n$, and $g(x)$ be real-valued functions, and assume $h_0(x) \neq 0$ and $h_n(x) \neq 0$. Then

$$h_0(x)\,y(x+n) + h_1(x)\,y(x+n-1) + \cdots + h_n(x)\,y(x) = g(x) \qquad (1)$$

is called a linear difference equation of order $n$.
Equation (1) is called a linear homogeneous difference equation of order $n$ for $g(x) = 0$ and a non-homogeneous linear difference equation of order $n$ for $g(x) \neq 0$ [13].

Z-transform
Definition 3 ([10-13, 24-27]; Z-transform) Let the sequence $x(n)$ be given, defined for $n = 0, 1, 2, \dots$ and vanishing for the negative integers $n = -1, -2, \dots$. The Z-transform of the sequence $x(n)$ is

$$\tilde{x}(z) = Z[x(n)] = \sum_{n=0}^{\infty} x(n)\,z^{-n}. \qquad (2)$$

Theorem 1 The set of numbers $z$ in the complex plane for which the series (2) converges is called the region of convergence of $\tilde{x}(z)$. The ratio test is the most commonly used method for finding this region. Assume that

$$R = \lim_{n \to \infty} \left| \frac{x(n+1)}{x(n)} \right|.$$

Applying the ratio test to (2) gives the convergence condition

$$\lim_{n \to \infty} \left| \frac{x(n+1)\,z^{-(n+1)}}{x(n)\,z^{-n}} \right| = \frac{R}{|z|} < 1.$$

Hence, the series (2) converges for $|z| > R$ and diverges for $|z| < R$. The number $R$ is called the radius of convergence. If $R = 0$, then $\tilde{x}(z)$ converges everywhere; on the other hand, if $R = \infty$, the Z-transform diverges everywhere.
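As a concrete illustration (our own example, not from the paper), the Z-transform of $x(n) = a^{n}$ is the geometric series $\sum_{n \ge 0} (a/z)^{n} = z/(z-a)$, convergent for $|z| > |a|$, so its radius of convergence is $R = |a|$. A short numerical check with the assumed values $a = 1/2$, $z = 2$:

```python
# Truncated series (2) for x(n) = a^n versus the closed form z/(z-a).
a, z = 0.5, 2.0
partial = sum(a**n / z**n for n in range(200))  # partial sum of (2)
closed = z / (z - a)                            # closed-form Z-transform
print(partial, closed)  # both ≈ 1.3333
```

Since $|z| = 2 > R = 1/2$, the partial sums converge rapidly to the closed form.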
Properties of the Z-transform
1. Linearity: Let $\tilde{x}(z)$ be the Z-transform of $x(n)$ with radius of convergence $R_1$ and $\tilde{y}(z)$ be the Z-transform of $y(n)$ with radius of convergence $R_2$. For complex numbers $\lambda, \beta$,

$$Z[\lambda x(n) + \beta y(n)] = \lambda \tilde{x}(z) + \beta \tilde{y}(z), \quad |z| > \max(R_1, R_2). \qquad (3)$$

2. Shifting: Let $R$ be the radius of convergence of $\tilde{x}(z)$.
a. Right shifting: If $x(-j) = 0$, $j = 1, 2, \dots, k$, then

$$Z[x(n-k)] = z^{-k}\,\tilde{x}(z), \quad |z| > R. \qquad (4)$$

b. Left shifting:

$$Z[x(n+k)] = z^{k}\,\tilde{x}(z) - \sum_{j=0}^{k-1} x(j)\,z^{k-j}, \quad |z| > R. \qquad (5)$$

The most used versions of (5) are

$$Z[x(n+1)] = z\,\tilde{x}(z) - z\,x(0), \qquad Z[x(n+2)] = z^{2}\,\tilde{x}(z) - z^{2}\,x(0) - z\,x(1).$$

3. Initial and final values:
a. Initial value theorem:

$$x(0) = \lim_{z \to \infty} \tilde{x}(z). \qquad (6)$$

b. Final value theorem:

$$\lim_{n \to \infty} x(n) = \lim_{z \to 1} (z-1)\,\tilde{x}(z). \qquad (7)$$

Proof a. The proof of (6) follows from the definition of $\tilde{x}(z)$. b. To prove (7), consider

$$Z[x(n+1) - x(n)] = \lim_{k \to \infty} \sum_{n=0}^{k} \bigl[x(n+1) - x(n)\bigr] z^{-n}.$$

If (5) is applied to the left-hand side of this equation, $z\tilde{x}(z) - z\,x(0) - \tilde{x}(z)$ is obtained; letting $z \to 1$, the right-hand side telescopes to $\lim_{k \to \infty} x(k+1) - x(0)$, which gives (7).
4. Multiplication by $a^{n}$: Let $\tilde{x}(z)$ be the Z-transform of $x(n)$ with radius of convergence $R$. Then

$$Z[a^{n} x(n)] = \tilde{x}\!\left(\frac{z}{a}\right), \quad |z| > |a| R. \qquad (8)$$
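The initial and final value theorems above can be illustrated numerically (our own sketch) with the sequence $x(n) = 1 - (1/2)^{n}$, whose Z-transform is $\tilde{x}(z) = \frac{z}{z-1} - \frac{z}{z-1/2}$:

```python
# x(n) = 1 - (1/2)^n, so x(0) = 0 and lim x(n) = 1.
def xt(z):
    return z / (z - 1) - z / (z - 0.5)

# Initial value theorem: x(0) = lim_{z->inf} x~(z)
initial = xt(1e9)          # ≈ 0 = x(0)
# Final value theorem: lim_{n->inf} x(n) = lim_{z->1} (z-1) x~(z)
z = 1 + 1e-8
final = (z - 1) * xt(z)    # ≈ 1 = lim x(n)
print(initial, final)
```

Evaluating $\tilde{x}(z)$ at a very large $z$ and just to the right of $z = 1$ approximates the two limits.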
The proof of (8) follows from the definition.
5. Multiplication by $n^{k}$:

$$Z[n\,x(n)] = -z \frac{d}{dz}\,\tilde{x}(z).$$

Similarly, if the order is increased,

$$Z[n^{2} a^{n}] = -z\frac{d}{dz}\left(-z\frac{d}{dz}\,Z[a^{n}]\right), \quad \text{and hence} \quad Z[n^{2} a^{n}] = \left(-z\frac{d}{dz}\right)^{2} Z[a^{n}].$$
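Property 5 can be checked on a concrete sequence (our own example, assumed values $a = 1/2$, $z = 2$): for $x(n) = a^{n}$ we have $\tilde{x}(z) = z/(z-a)$, so $Z[n\,a^{n}] = -z \frac{d}{dz}\frac{z}{z-a} = \frac{az}{(z-a)^{2}}$.

```python
# Truncated series for Z[n a^n] versus the closed form a z/(z-a)^2.
a, z = 0.5, 2.0
series = sum(n * a**n / z**n for n in range(300))
closed = a * z / (z - a) ** 2
print(series, closed)  # both ≈ 0.4444
```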
Its generalization gives

$$Z[n^{k} x(n)] = \left(-z\frac{d}{dz}\right)^{k} \tilde{x}(z).$$

Definition (Inverse Z-transform) The Z-transform converts a difference equation in an unknown sequence $x(n)$ into an algebraic equation in $\tilde{x}(z)$. Afterwards, the sequence $x(n)$ is recovered from $\tilde{x}(z)$ through an operation known as the inverse Z-transform. Symbolically, this operation can be shown as $x(n) = Z^{-1}[\tilde{x}(z)]$. The inverse Z-transform is unique: if two sequences $x(n), y(n)$ have the same Z-transform, then $x(n) = y(n)$.

Uniform distribution

Definition If a discrete random variable $X$ takes each of its $N$ possible values with equal probability $1/N$, then this random variable is called a (discrete) uniformly distributed random variable [28,29].
Theorem If $X$ has a discrete uniform distribution on $\{1, 2, \dots, N\}$, then

$$E(X) = \frac{N+1}{2}, \qquad \operatorname{Var}(X) = \frac{N^{2}-1}{12}.$$
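These moments can be verified exactly by direct enumeration (our own sketch, with the assumed value $N = 10$):

```python
from fractions import Fraction

# Exact moments of the discrete uniform distribution on {1, ..., N}:
# E(X) = (N+1)/2 and Var(X) = (N^2-1)/12.
N = 10
p = Fraction(1, N)                                   # equal probability 1/N
mean = sum(p * x for x in range(1, N + 1))           # E(X)
var = sum(p * x**2 for x in range(1, N + 1)) - mean**2  # E(X^2) - E(X)^2
print(mean, var)  # 11/2 and 33/4
```

Using exact rational arithmetic avoids any floating-point tolerance in the comparison.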

Bernoulli distribution
Definition If the random variable $X$ takes the value 1 with probability of success $p$ and the value 0 with probability of failure $q = 1 - p$, then this random variable is called a Bernoulli random variable. The Bernoulli probability mass function is given as [28,29]

$$P(X = x) = p^{x} q^{1-x}, \quad x = 0, 1.$$

Binomial distribution
Definition Let $n$ be the total number of independent Bernoulli trials and let $X$ be the number of successes among them. If the probability of success in a single trial is $p$ and the probability of failure is $q = 1 - p$, then $X$ is called a binomial random variable and the probability function of $X$ is

$$P(X = x) = \binom{n}{x} p^{x} q^{n-x}, \quad x = 0, 1, \dots, n.$$

Calculation of consecutive binomial probabilities:

$$P(X = x + 1) = \frac{n - x}{x + 1} \cdot \frac{p}{q}\, P(X = x).$$

Theorem If $X$ has a binomial distribution, then $E(X) = np$ and $\operatorname{Var}(X) = npq$.
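The consecutive-probability recursion can be checked exactly against the binomial formula (our own sketch, with the assumed values $n = 6$, $p = 1/3$):

```python
from fractions import Fraction
from math import comb

# Verify P(X=x+1) = ((n-x)/(x+1)) * (p/q) * P(X=x) for X ~ Binomial(n, p).
n, p = 6, Fraction(1, 3)
q = 1 - p

def pmf(x):
    return comb(n, x) * p**x * q**(n - x)

for x in range(n):
    assert pmf(x + 1) == Fraction(n - x, x + 1) * (p / q) * pmf(x)
print(pmf(2))  # Fraction(80, 243)
```

The recursion is convenient when many consecutive probabilities are needed, since each step costs one multiplication instead of a full binomial coefficient.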

Negative binomial (Pascal) distribution
Definition Let $X$ be the number of trials required to achieve the $k$th success, $k \ge 1$, in independent Bernoulli trials with probability of success $p$ in each trial. In this case, $X$ is called a negative binomial random variable and its probability function is given as

$$P(X = x) = \binom{x-1}{k-1} p^{k} q^{x-k}, \quad x = k, k+1, \dots,$$

with moment generating function $M_X(t) = \left(\dfrac{p e^{t}}{1 - q e^{t}}\right)^{k}$.
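The well-known mean $E(X) = k/p$ of this distribution can be checked numerically from the probability function (our own sketch, with the assumed values $k = 3$, $p = 0.4$; the infinite sum is truncated where the geometric tail is negligible):

```python
from math import comb

# Truncated mean of X ~ NegBinomial(k, p): sum of x * P(X=x) over x >= k.
k, p = 3, 0.4
q = 1 - p
mean = sum(x * comb(x - 1, k - 1) * p**k * q**(x - k) for x in range(k, 400))
print(mean, k / p)  # both ≈ 7.5
```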

Geometric distribution
Definition Consider independent Bernoulli trials, each with probability of success $p$. If $X$ is the number of trials required to achieve the first success, then $X$ has a geometric distribution and its probability function is [28,29]

$$P(X = x) = p\,q^{x-1}, \quad x = 1, 2, \dots.$$

Theorem If $X$ has a geometric distribution, then $E(X) = \dfrac{1}{p}$ and $\operatorname{Var}(X) = \dfrac{q}{p^{2}}$.
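A quick numerical check of these moments from the probability function (our own sketch, with the assumed value $p = 0.25$; the series is truncated where the tail is negligible):

```python
# Truncated first two moments of X ~ Geometric(p), P(X=x) = p q^(x-1).
p = 0.25
q = 1 - p
m1 = sum(x * p * q ** (x - 1) for x in range(1, 500))      # ≈ E(X) = 1/p
m2 = sum(x**2 * p * q ** (x - 1) for x in range(1, 500))   # ≈ E(X^2)
print(m1, m2 - m1**2)  # ≈ 4.0 and 12.0
```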

Hypergeometric distribution
Definition Let $a$ be the number of elements of a given type A in a population consisting of a finite number $N$ of elements. Let $X$ be the number of elements of type A in a sample of $n$ units drawn randomly without replacement. Then $X$ is a hypergeometric random variable and the hypergeometric probability mass function is given as [28,29]

$$P(X = x) = \frac{\binom{a}{x}\binom{N-a}{n-x}}{\binom{N}{n}}.$$

Theorem If $X$ has a hypergeometric distribution, then $E(X) = \dfrac{na}{N}$ and $\operatorname{Var}(X) = \dfrac{na}{N}\left(1 - \dfrac{a}{N}\right)\dfrac{N-n}{N-1}$.
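The mean formula $E(X) = na/N$ can be verified exactly by summing over the pmf (our own sketch, with the assumed values $N = 20$, $a = 8$, $n = 5$):

```python
from fractions import Fraction
from math import comb

# Exact mean of X ~ Hypergeometric(N, a, n) by direct enumeration.
N, a, n = 20, 8, 5
mean = sum(
    Fraction(comb(a, x) * comb(N - a, n - x), comb(N, n)) * x
    for x in range(0, min(a, n) + 1)
)
print(mean, Fraction(n * a, N))  # both 2
```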

Numerical examples
Some numerical examples are given for random linear difference equations through the use of various probability distributions.
Example 1 Let A, B be uniformly distributed random variables. We investigate the behavior of the solution of (11) with the Z-transform method.
Solution. Taking the Z-transform of both sides of (11) and applying partial fractions to $\tilde{x}(z)/z$, it can be found that $a_1 = \frac{4A+B}{3}$, $a_2 = \frac{-A-B}{3}$. Hence the inverse Z-transform $Z^{-1}[\tilde{x}(z)] = x(n)$ gives the solution. Higher moments of the random variables are needed to obtain the approximate expected values and variances. The moment generating function of a uniformly distributed random variable $X \sim U(\alpha, \beta)$ is given as [29]

$$M_X(t) = E\bigl[e^{tX}\bigr] = \frac{e^{\beta t} - e^{\alpha t}}{(\beta - \alpha)\,t}.$$
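As a side check (our own sketch, with assumed endpoints $\alpha = 1$, $\beta = 3$), the first two moments of $U(\alpha, \beta)$ can be recovered numerically from this moment generating function by finite differences at $t = 0$; the singularity there is removable, with $M_X(0) = 1$:

```python
from math import exp

# Moments of U(alpha, beta) from M(t) = (e^{beta t} - e^{alpha t})/((beta-alpha) t).
alpha, beta = 1.0, 3.0

def M(t):
    return (exp(beta * t) - exp(alpha * t)) / ((beta - alpha) * t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)       # ≈ E(X) = (alpha+beta)/2 = 2
m2 = (M(h) - 2 * 1 + M(-h)) / h**2  # ≈ E(X^2); uses M(0) = 1
print(m1, m2 - m1**2)               # variance ≈ (beta-alpha)^2/12 = 1/3
```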

The moment generating function of a geometrically distributed random variable $X \sim G(p)$ is given as [29]

$$M_X(t) = E\bigl[e^{tX}\bigr] = \frac{p e^{t}}{1 - q e^{t}}.$$

Example 3 Let A, B be random variables with a Poisson distribution, and consider the difference equation

$$x(n+3) - 2x(n+2) - x(n+1) + 2x(n) = 0. \qquad (25)$$

We investigate the behavior of the solution of (25) with the Z-transform method.
Solution. Taking the Z-transform of both sides and applying the inverse Z-transform $Z^{-1}[\tilde{x}(z)] = x(n)$ gives the solution. The moment generating function of a Poisson distributed random variable $X \sim P(\lambda)$ is given as [29]

$$M_X(t) = e^{\lambda(e^{t}-1)}.$$
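The Poisson moments used below can be read off this MGF numerically (our own sketch, with the assumed parameter $\lambda = 2.5$), again by finite differences at $t = 0$:

```python
from math import exp

# E(X) and Var(X) for X ~ Poisson(lam) from M(t) = exp(lam (e^t - 1)).
lam = 2.5

def M(t):
    return exp(lam * (exp(t) - 1.0))

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)          # ≈ E(X) = lam
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # ≈ E(X^2) = lam + lam^2
print(m1, m2 - m1**2)                  # ≈ 2.5 and 2.5
```

This illustrates the defining property of the Poisson distribution that the mean and variance coincide.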
Hence, the expected value and variance of the random variable $X$ are $E(X) = \lambda$ and $\operatorname{Var}(X) = \lambda$. To find the numerical characteristics of (28), we start with the expectation, which gives

$$2^{n} + \frac{n+1}{6}\,2^{n} + \frac{1}{2}.$$
Solution. Taking the Z-transform of both sides and applying partial fractions to $\tilde{x}(z)/z$ gives the coefficients $a_1$ and $a_2$; here, $a_2 = a_1$.

Figure 3 Expected value and variance obtained from the Z-transform of (25)

Hence, the inverse Z-transform $Z^{-1}[\tilde{x}(z)] = x(n)$ gives the solution. The moment generating function of a Bernoulli distributed random variable $X \sim B(p)$ is given as [29]

$$M_X(t) = p e^{t} + (1 - p).$$
Hence, the expected value and variance of the random variable $X$ are $E(X) = p$ and $\operatorname{Var}(X) = p(1-p) = pq$. To find the numerical characteristics of (35), we start with the expectation. The variance is then obtained from

$$\operatorname{Var}\bigl[x(n)\bigr] = E\bigl[x(n)^{2}\bigr] - \bigl(E[x(n)]\bigr)^{2}. \qquad (37)$$
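The Bernoulli moments quoted above follow directly from the pmf; an exact check (our own sketch, with the assumed value $p = 2/5$):

```python
from fractions import Fraction

# E(X) = p and Var(X) = p(1-p) for X ~ Bernoulli(p), from the pmf.
p = Fraction(2, 5)
mean = 1 * p + 0 * (1 - p)                       # E(X)
var = (1**2 * p + 0**2 * (1 - p)) - mean**2      # E(X^2) - E(X)^2
print(mean, var)  # 2/5 and 6/25
```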

Conclusion
The application of the Z-transform for obtaining solutions to random linear difference equations has been examined in this study, and the random behavior of the solutions has been investigated with the uniform, geometric, Poisson and Bernoulli distributions. Expected values and variances of the solutions have been obtained and are shown in graphs. Hence, it has been shown that the Z-transform is an effective method for the solution of random linear difference equations.