Open Access

Factorizations for difference operators

  • Jeffrey Bergen1,
  • Mark Giesbrecht2,
  • Pappur N Shivakumar3 and
  • Yang Zhang3
Advances in Difference Equations 2015, 2015:57

https://doi.org/10.1186/s13662-015-0402-1

Received: 28 October 2014

Accepted: 3 February 2015

Published: 24 February 2015

Abstract

We consider factorization problems for difference operators in \(\mathbb{C}[x;\sigma]\), where σ is an automorphism of finite order. We study how ordinary polynomials in \(\mathbb{R}[x]\) factor in the ring of such difference operators and obtain an analogue of the fundamental theorem of algebra for the skew polynomial ring \(\mathsf{K}[x; \sigma]\) over a field K.

Keywords

difference operators, factorizations

1 Introduction

Let R be a ring, σ an endomorphism of R, and δ a left σ-derivation of R. That is, δ is an additive mapping from R to R and for any \(a, b \in\mathsf{R}\),
$$\delta(ab) = \sigma(a) \delta(b) + \delta(a)b. $$
The skew polynomial ring \(\mathsf{R}[x; \sigma, \delta]\) is the set of all polynomials
$$a_{0} +a_{1}x+ \cdots+a_{n}x^{n}, $$
where \(a_{0}, \ldots, a_{n} \in\mathsf{R}\), addition is defined as usual, and multiplication is defined by
$$xa = \sigma(a)x + \delta(a)\quad \mbox{for all } a \in\mathsf{R}. $$
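This multiplication rule extends to products of arbitrary skew polynomials: since \(\delta = 0\) in the difference case, \((ax^{i})(bx^{j}) = a\sigma^{i}(b)x^{i+j}\). As a small illustrative sketch (our own, not from the paper), the following Python model of \(\mathbb{C}[x; \sigma]\) takes σ to be complex conjugation and represents a polynomial by its coefficient list, lowest degree first:

```python
from typing import List

def sigma_pow(a: complex, i: int) -> complex:
    """Apply sigma i times; sigma is complex conjugation, of order 2,
    so only the parity of i matters."""
    return a.conjugate() if i % 2 else a

def skew_mul(f: List[complex], g: List[complex]) -> List[complex]:
    """Multiply f * g in C[x; sigma] with delta = 0, coefficients listed
    lowest degree first, via (a x^i)(b x^j) = a sigma^i(b) x^(i+j)."""
    out = [0j] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * sigma_pow(b, i)
    return out
```

For instance, `skew_mul([0, 1], [1j])` returns a list equal to `[0, -1j]`, reflecting the rule \(x \cdot i = \sigma(i)x = -ix\).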

Linear differential operators (\(\sigma= \mathrm{id}_{\mathsf{R}}\), the identity map on R) and linear difference operators (\(\delta= 0\), the identically zero function) are special cases of the above skew polynomials, which have been studied via algebraic methods since [1]. The study of skew polynomials is now an active and important area of algebra, with many applications in other areas such as solving differential/difference equations, engineering, coding theory, etc. More recent results and surveys can be found in [2–10]. Factorization of skew polynomials is also an active topic in computer algebra (see, for example, [11–14]), and algorithms for factoring linear difference and differential operators are of great interest.

One of the great strengths of skew polynomials is their analogy with regular commutative polynomials. A natural question thus arises: Is there an analogue of the fundamental theorem of algebra for skew polynomials? We provide a positive answer for difference operators with finite-order automorphisms over an algebraically closed field.

Throughout this paper, we assume that \(\mathsf{R}[x; \sigma]\) is a skew polynomial ring with automorphism σ as defined above.

2 Some basic results

It is clear that the polynomials \(x^{2}-1\) and \(x^{2}+1\) in \(\mathbb{R}[x]\) factor uniquely into monic, linear polynomials in \(\mathbb{C}[x]\). However, the situation is quite different when we view \(\mathbb{R}[x]\) as a subring of the skew polynomial ring \(\mathbb{C}[x; \sigma]\). Observe that if \(t \in\mathbb{C}\) with \(|t| = 1\) and \(\sigma(t)\) denotes the complex conjugate of t, then
$$x^{2}-1 = \bigl(x+\sigma(t)\bigr) (x - t) $$
in \(\mathbb{C}[x; \sigma]\). Thus \(x^{2} -1\) can be factored in an infinite number of ways in \(\mathbb{C}[x; \sigma]\). On the other hand, \(x^{2}+1\) remains irreducible in \(\mathbb{C}[x; \sigma]\). In this paper we contrast how polynomials factor in skew polynomial rings with how they factor in ordinary polynomial rings.
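This one-parameter family of factorizations is easy to check numerically. In the sketch below (σ is complex conjugation; `mul_linear` is our own hypothetical helper), expanding \((x+c)(x-t)\) by the rule \(x(-t) = -\sigma(t)x\) gives \(x^{2} + (c-\sigma(t))x - ct\), and every t on the unit circle recovers \(x^{2}-1\):

```python
import cmath

def mul_linear(c: complex, t: complex) -> list:
    """(x + c)(x - t) in C[x; sigma] with sigma = conjugation:
    x*(-t) = -conj(t)*x, so the product is x^2 + (c - conj(t))x - c*t.
    Coefficients are listed lowest degree first."""
    return [-c * t, c - t.conjugate(), 1]

# (x + conj(t))(x - t) = x^2 - t*conj(t) = x^2 - |t|^2, so each t with
# |t| = 1 factors x^2 - 1; sample eight points on the unit circle
for k in range(8):
    t = cmath.exp(2j * cmath.pi * k / 8)
    prod = mul_linear(t.conjugate(), t)
    assert all(abs(p - q) < 1e-12 for p, q in zip(prod, [-1, 0, 1]))
```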

We next introduce the terminology used throughout the remainder of this paper. We let K denote a field and σ an automorphism of K. Define \(\mathsf{K}^{\sigma}= \{ a \in\mathsf{K} \mid \sigma(a) = a\}\); when we examine the special case where \(\sigma^{2} = 1\), an important subset of \(\mathsf{K}^{\sigma}\) is the set of norms \(N(\mathsf{K}, \sigma) = \{t \sigma(t) \mid t \in\mathsf{K}\}\). When dealing with the field \(\mathbb{C}\), unless stated otherwise, we always assume that σ is complex conjugation.

For \(f(x) = a_{n} x^{n}+ \cdots+ a_{1}x +a_{0} \in\mathsf{K}[x; \sigma]\) and \(t \in\mathsf{K}\), the (standard) evaluation of \(f(x)\) at \(x = t\) is defined as \(f(t) = a_{n}t^{n} + \cdots+ a_{1}t +a_{0}\). When we study polynomials over fields, one of the first facts we prove is that linear factors correspond to roots. Note that this is not the case in skew polynomial rings: if σ denotes complex conjugation, then
$$f(x) = x^{2}-1 = (x-i) (x-i) $$
in \(\mathbb{C}[x; \sigma]\), yet \(f(i) \ne0\). To better understand the role of linear factors in skew polynomial rings, we define σ-evaluation as follows.

Definition 2.1

Let \(f(x) = a_{n}x^{n} +a_{n-1}x^{n-1} + \cdots+ a_{2}x^{2}+a_{1}x +a_{0} \in\mathsf{K}[x; \sigma]\) and \(t \in\mathsf{K}\). Then we define the σ-evaluation \(f(t; \sigma) \in\mathsf{K}\) as \(f(t; \sigma) = a_{n}(t\sigma(t) \cdots\sigma^{n-1}(t)) + a_{n-1}(t\sigma(t) \cdots\sigma^{n-2}(t)) + \cdots+a_{2}(t\sigma(t)) + a_{1}t + a_{0}\). If \(f(t; \sigma) = 0\), we say that t is a σ-root of \(f(x)\).
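Concretely, when σ has order 2 the product \(t\sigma(t)\cdots\sigma^{k-1}(t)\) alternates factors t and \(\sigma(t)\). A short sketch of the σ-evaluation (assuming σ is complex conjugation; coefficients listed lowest degree first):

```python
def sigma_eval(coeffs: list, t: complex) -> complex:
    """sigma-evaluation f(t; sigma) for sigma = complex conjugation.
    The x^k term contributes a_k * N_k(t), where N_k(t) = t * sigma(t)
    * ... * sigma^(k-1)(t), built up one factor at a time."""
    total, prod = 0j, 1 + 0j
    for k, a in enumerate(coeffs):             # coeffs: lowest degree first
        total += a * prod                      # prod == N_k(t) here
        prod *= t.conjugate() if k % 2 else t  # append factor sigma^k(t)
    return total
```

For \(f(x) = x^{2}-1\) this gives `sigma_eval([-1, 0, 1], 1j) == 0`: i is a σ-root, even though the standard evaluation \(f(i) = -2 \ne 0\), in line with the factorization \((x-i)(x-i)\) above.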

Observe that if \(\sigma= \mathrm{id}_{\mathsf{K}}\), so that \(\mathsf{K}[x; \sigma]=\mathsf{K}[x]\) is an ordinary polynomial ring, then a σ-root is the same as an ordinary root. Lemma 2.2 and Theorem 2.3 show that the role of σ-roots in the skew polynomial case generalizes the role of roots in the ordinary case.

Lemma 2.2

([15])

If \(f(x) = a_{n}x^{n} + \cdots+ a_{1}x+a_{0} \in\mathsf{K}[x; \sigma]\) and \(t \in\mathsf{K}\), then there exists a unique \(q(x) \in\mathsf {K}[x; \sigma]\) such that
$$f(x) = q(x) (x-t) + f(t; \sigma). $$

When \(f(x) = q(x)(x-t)\), instead of simply saying that \(x-t\) is a factor of \(f(x)\), we may sometimes use the more precise terminology that \(x-t\) is a (right) factor of \(f(x)\). We can now see the relationship between (right) factors and σ-roots.
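The division in Lemma 2.2 is a short recurrence: comparing coefficients in \(f(x) = q(x)(x-t) + r\) with \(q(x) = b_{n-1}x^{n-1} + \cdots + b_{0}\) gives \(b_{n-1} = a_{n}\), \(b_{k-1} = a_{k} + b_{k}\sigma^{k}(t)\) for \(1 \le k \le n-1\), and \(r = a_{0} + b_{0}t\). A sketch for σ = complex conjugation (our own illustration, coefficients lowest degree first):

```python
def divide_by_linear(coeffs: list, t: complex):
    """Right division in C[x; sigma] (sigma = conjugation):
    f = q*(x - t) + r via b_{k-1} = a_k + b_k * sigma^k(t)."""
    def sig(z: complex, k: int) -> complex:
        return z.conjugate() if k % 2 else z   # sigma has order 2
    n = len(coeffs) - 1                        # degree of f (assumed >= 1)
    q = [0j] * n
    q[n - 1] = coeffs[n]
    for k in range(n - 1, 0, -1):
        q[k - 1] = coeffs[k] + q[k] * sig(t, k)
    r = coeffs[0] + q[0] * t                   # r equals f(t; sigma)
    return q, r
```

Dividing \(x^{2}-1\) by \(x - i\) returns quotient \(x - i\) and remainder 0, so the remainder indeed agrees with the σ-evaluation.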

Theorem 2.3

Suppose \(f(x) \in\mathsf{K}[x; \sigma]\). Then \(x-t\) is a right factor of \(f(x)\) if and only if t is a σ-root of \(f(x)\).

Proof

Using Lemma 2.2, there exists a unique \(q(x) \in\mathsf{K}[x; \sigma]\) such that \(f(x) = q(x)(x-t) +f(t; \sigma)\). Therefore, if t is a σ-root of \(f(x)\), then \(f(t; \sigma) = 0\), and we can see that \(x-t\) is certainly a (right) factor of \(f(x)\). Conversely, if \(x-t\) is a (right) factor of \(f(x)\), then \(f(x) = h(x)(x-t)\) for some \(h(x) \in\mathsf{K}[x; \sigma]\). But then, by Lemma 2.2, the quotient \(h(x)\) and remainder (0) are unique, so \(f(t;\sigma)=0\). □

For ordinary polynomials of degree 2, being reducible is equivalent to having a root. In our situation, we now show that it is equivalent to having a σ-root.

Corollary 2.4

Suppose \(f(x) \in\mathsf{K}[x; \sigma]\) has degree 2; then \(f(x)\) is reducible in \(\mathsf{K}[x; \sigma]\) if and only if \(f(x)\) has a σ-root in K.

Proof

One direction follows immediately from Theorem 2.3, for if \(f(x)\) has a σ-root in K, then it has a right factor of degree one in \(\mathsf{K}[x; \sigma]\) and is therefore reducible. In the other direction, if \(f(x)\) is reducible and has degree 2, then it is easy to see that it must have a monic, linear right factor. Theorem 2.3 now asserts that \(f(x)\) has a σ-root in K. □

One of the examples that motivated this paper was the fact that \(x^{2}+1\) remains irreducible in \(\mathbb{C}[x; \sigma]\). Thus the situation of quadratic polynomials where \(\sigma^{2} = 1\) is of particular interest. We can now determine precisely when irreducible quadratics in \(\mathsf{K}^{\sigma}[x]\) remain irreducible in \(\mathsf{K}[x; \sigma]\).

Corollary 2.5

Let \(f(x) = x^{2}+ax+b \in\mathsf{K}^{\sigma}[x]\) and suppose \(\sigma^{2} =1\). Then \(f(x)\) is reducible in \(\mathsf{K}[x; \sigma]\) if and only if either (i) \(f(x)\) is reducible in \(\mathsf{K}^{\sigma}[x]\), or (ii) \(a =0 \) and \(-b \in N(\mathsf{K}, \sigma)\).

Proof

If \(f(x)\) is reducible in \(\mathsf{K}^{\sigma}[x]\), then it is certainly reducible in \(\mathsf{K}[x; \sigma]\). Furthermore, if \(a = 0\) and \(-b \in N(\mathsf{K}, \sigma)\), then \(-b = t\sigma(t)\) for some \(t \in\mathsf{K}\) and
$$f(x) = x^{2}-(-b) = x^{2} -\bigl(t\sigma(t)\bigr) = \bigl(x+ \sigma(t)\bigr) (x-t). $$
This proves one half of the result.
To see the converse, Corollary 2.4 tells us that if \(f(x)\) is reducible in \(\mathsf{K}[x; \sigma]\), then it has a σ-root \(t \in \mathsf{K}\). If t is a σ-root, then it satisfies the equation
$$t\sigma(t) + at + b = 0. $$
Since \(\sigma^{2} =1\), we know that \(t\sigma(t) \in \mathsf{K}^{\sigma}\). As \(b\in\mathsf{K}^{\sigma}\) as well, it follows that \(at \in\mathsf{K}^{\sigma}\). If \(a \ne0\), then the fact that \(a \in\mathsf{K}^{\sigma}\) implies that \(t \in\mathsf{K}^{\sigma}\). Since t is then an ordinary root of \(f(x)\) in \(\mathsf{K}^{\sigma}\), we see that \(f(x)\) is reducible in \(\mathsf{K}^{\sigma}[x]\). On the other hand, suppose \(a = 0\). The equation above now tells us that \(-b = t\sigma(t) \in N(\mathsf{K}, \sigma)\), thereby concluding the proof. □

The next corollary completely describes the situation for the examples \(x^{2}-1, x^{2}+1 \in\mathbb{R}[x] \subseteq\mathbb{C}[x; \sigma]\) that were mentioned at the start of this paper.

Corollary 2.6

Let \(f(x) = x^{2}+ax+b \in\mathbb{R}[x]\).
  (i) \(f(x)\) is reducible in \(\mathbb{C}[x; \sigma]\) if and only if \(f(x)\) is reducible in \(\mathbb{R}[x]\).

  (ii) If \(f(x)\) is reducible, then the factorization of \(f(x)\) in \(\mathbb{C}[x; \sigma]\) into monic, linear factors is unique when \(a \ne0\), whereas \(f(x)\) factors in an infinite number of ways into monic, linear factors when \(a =0\).
Proof

In light of Corollary 2.5, in order to prove part (i), it suffices to show that if \(a = 0\) and \(-b \in N(\mathbb{C}, \sigma)\), then \(f(x)\) is reducible in \(\mathbb{R}[x]\). However, in this situation, \(N(\mathbb{C}, \sigma)\) is the set of nonnegative real numbers. Therefore, if \(-b \in N(\mathbb{C}, \sigma)\), then \(-b = c^{2}\) for some \(c \in\mathbb{R}\). Thus
$$f(x) = x^{2} - (-b) = (x+c) (x-c), $$
as required.
For the proof of part (ii), let \(\operatorname{cis}(\theta) = \cos\theta+ i \sin\theta\) for every \(\theta\in\mathbb{R}\). Then, when we are in the case where \(a = 0\), we can rewrite the equation above as
$$f(x) = x^{2}-(-b) = (x+c) (x-c) = \bigl(x+c\bigl(\operatorname{cis}(- \theta)\bigr)\bigr) \bigl(x - c\bigl(\operatorname{cis}(\theta)\bigr)\bigr). $$
By letting θ range between 0 and 2π, we can see that \(f(x)\) does factor an infinite number of ways in \(\mathbb{C}[x; \sigma]\).

Finally, suppose \(a \ne0\); the argument in the proof of Corollary 2.5 indicates that any factorization of \(f(x)\) actually takes place in \(\mathbb{R}[x]\). Since there is only one way to factor \(f(x)\) in \(\mathbb{R}[x]\), it follows that the factorization of \(f(x)\) in \(\mathbb{C}[x; \sigma]\) is also unique. □

When we look back at Corollaries 2.5 and 2.6, it becomes natural to look for an example where reducibility in \(\mathsf{K}[x; \sigma]\) is not equivalent to reducibility in \(\mathsf{K}^{\sigma}[x]\).

Example 2.7

Let \(\mathsf{K}= \mathbb{Q}(i)\) and once again let σ denote complex conjugation. In this case, \(x^{2}-2\) and \(x^{2}-5\) are irreducible in \(\mathsf{K}^{\sigma}[x] = \mathbb{Q}[x]\). However, since \(2, 5 \in N(\mathbb{Q}, \sigma)\), these polynomials become reducible in \(\mathbb{Q}(i)[x; \sigma]\) as we have
$$x^{2}-2 = \bigl(x+(1-i)\bigr) \bigl(x-(1+i)\bigr) \quad \mbox{and} \quad x^{2}- 5 =\bigl(x+(2-i)\bigr) \bigl(x-(2+i)\bigr). $$
On the other hand, since \(-2, 3 \notin N(\mathbb{Q}(i), \sigma)\), the polynomials \(x^{2}+2\) and \(x^{2}-3\) are irreducible in \(\mathbb{Q}[x]\) and remain irreducible in \(\mathbb{Q}(i)[x; \sigma]\).
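Such identities are quick to verify mechanically; since the factors here have Gaussian-integer coefficients, the check below is exact (a sketch with σ = complex conjugation and our own helper `lin_mul`):

```python
def lin_mul(c: complex, t: complex) -> list:
    """(x + c)(x - t) in Q(i)[x; sigma], sigma = conjugation; using
    x*(-t) = -conj(t)*x the product is x^2 + (c - conj(t))x - c*t."""
    return [-c * t, c - t.conjugate(), 1]      # lowest degree first

# the two factorizations from Example 2.7, checked exactly
assert lin_mul(1 - 1j, 1 + 1j) == [-2, 0, 1]   # x^2 - 2
assert lin_mul(2 - 1j, 2 + 1j) == [-5, 0, 1]   # x^2 - 5
```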

3 Main theorems

In order to complete our description of the factoring of polynomials in \(\mathbb{C}[x; \sigma]\), we first need to prove a result that holds in any algebraically closed field.

Theorem 3.1

Let K be an algebraically closed field and let σ be an automorphism of K of order n. Then every non-constant reducible skew polynomial in \(\mathsf{K}[x; \sigma]\) can be written as a product of irreducible skew polynomials of degree less than or equal to n.

Proof

Since σ has order n, K has dimension n over the subfield \(\mathsf{K}^{\sigma}\). It is well known that the centre of \(\mathsf{K}[x; \sigma]\) is the polynomial ring \(\mathsf{K}^{\sigma}[x^{n}]\). Now suppose that \(f(x)\) is a non-constant element of \(\mathsf{K}[x; \sigma]\). Since \(\mathsf{K}[x; \sigma]\) is a finite module over the Noetherian ring \(\mathsf{K}^{\sigma}[x^{n}]\), \(\mathsf{K}[x; \sigma]\) contains no infinite direct sums of \(\mathsf{K}^{\sigma}[x^{n}]\)-submodules. As a result, the sum
$$f(x)\mathsf{K}^{\sigma}\bigl[x^{n}\bigr] + f(x)^{2} \mathsf{K}^{\sigma}\bigl[x^{n}\bigr] + f(x)^{3} \mathsf{K}^{\sigma}\bigl[x^{n}\bigr] + \cdots $$
cannot be direct. Therefore, there exist \(a_{1}(x), \ldots, a_{m}(x) \in\mathsf{K}^{\sigma}[x^{n}]\), with at least two nonzero \(a_{i}(x)\), such that
$$f(x)a_{1}(x) + f(x)^{2}a_{2}(x) + \cdots+ f(x)^{m} a_{m}(x) = 0. $$
Letting j be the least index with \(a_{j}(x) \ne 0\) and cancelling the common left factor \(f(x)^{j}\) (the \(a_{i}(x)\) are central, and \(\mathsf{K}[x; \sigma]\) is a domain), we obtain
$$a_{j}(x)+f(x)a_{j+1}(x) + \cdots+ f(x)^{m-j}a_{m}(x) = 0. $$
Observe that \(a_{j}(x)\) is a nonzero element of \(\mathsf{K}^{\sigma}[x^{n}] \cap f(x)\mathsf{K}[x; \sigma]\). Therefore, there exists \(g(x) \in\mathsf{K}[x; \sigma]\) such that \(f(x)g(x)\) is a nonzero element of \(\mathsf{K}^{\sigma}[x^{n}]\).
Since K is algebraically closed and n-dimensional over \(\mathsf{K}^{\sigma}\), non-constant polynomials in \(\mathsf{K}^{\sigma}[y]\) can all be factored into a product of polynomials of degree at most n. If we let \(y = x^{n}\), then
$$f(x)g(x) \in\mathsf{K}^{\sigma}\bigl[x^{n}\bigr] = \mathsf{K}^{\sigma}[y]. $$
Therefore, there exist non-constant \(h_{1}(y), \ldots, h_{m}(y) \in\mathsf{K}^{\sigma}[y]\), all of degree at most n, such that
$$f(x)g(x) = h_{1}(y)\cdots h_{m}(y). $$
For each i, the polynomial \(h_{i}(y) = h_{i}(x^{n}) \in\mathsf{K}^{\sigma}[x]\) can again be factored in \(\mathsf{K}^{\sigma}[x]\) as a product of non-constant polynomials of degree at most n. As a result, we have factored \(f(x)g(x)\) in \(\mathsf{K}[x; \sigma]\) as a product of polynomials of degree at most n. By Theorem 1 of Chapter II in [1], factorization into irreducibles in \(\mathsf{K}[x; \sigma]\) is unique up to order and similarity, and similar polynomials have the same degree; hence every irreducible factor of \(f(x)g(x)\) has degree at most n. In particular, \(f(x)\) can be factored in \(\mathsf{K}[x; \sigma]\) as a product of irreducible polynomials of degree at most n. □

Using Theorem 3.1 and Corollary 2.4, we now have the following.

Corollary 3.2

Every non-constant polynomial in \(\mathbb{C}[x; \sigma]\) is a product of linear and irreducible quadratic polynomials. Furthermore, the quadratic \(f(x) = x^{2}+a_{1}x+a_{0}\) is irreducible in \(\mathbb{C}[x; \sigma]\) if and only if there does not exist \(t \in\mathbb{C}\) such that \(t\sigma(t) +a_{1}t +a_{0} = 0\).

Proof

Theorem 3.1, with \(n = 2\), asserts that every non-constant polynomial in \(\mathbb{C}[x; \sigma]\) is a product of irreducible polynomials of degree at most 2, that is, of linear and irreducible quadratic polynomials. In addition, Corollary 2.4 tells us that \(f(x) = x^{2}+a_{1}x+a_{0}\) is irreducible in \(\mathbb{C}[x; \sigma]\) if and only if \(f(x)\) does not have a σ-root in \(\mathbb{C}\). However, when we look at the definition of σ-roots in the quadratic case, it is clear that \(f(x)\) does not have a σ-root if and only if there does not exist \(t \in\mathbb{C}\) such that \(t\sigma(t)+ a_{1}t + a_{0} = 0\). □
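The σ-root condition in Corollary 3.2 can be decided explicitly. Writing \(t = u + iv\), \(a_{1} = p + iq\), \(a_{0} = r + is\), the equation \(t\sigma(t) + a_{1}t + a_{0} = 0\) splits into the circle \(u^{2}+v^{2}+pu-qv+r = 0\) (real part) and the line \(qu+pv+s = 0\) (imaginary part), and a σ-root exists exactly when they intersect. The decision procedure below is our own numerical sketch of this criterion, not taken from the paper:

```python
def has_sigma_root(a1: complex, a0: complex, tol: float = 1e-9) -> bool:
    """Decide whether x^2 + a1*x + a0 has a sigma-root t in C, i.e.
    whether t*conj(t) + a1*t + a0 = 0 is solvable (sigma = conjugation)."""
    p, q = a1.real, a1.imag
    r, s = a0.real, a0.imag
    F = lambda u, v: u * u + v * v + p * u - q * v + r   # real part
    if abs(p) < tol and abs(q) < tol:
        # imaginary part forces s = 0; circle u^2 + v^2 = -r needs r <= 0
        return abs(s) < tol and r <= tol
    # parametrize the line q*u + p*v + s = 0 as base + tau*(p, -q)
    base = (0.0, -s / p) if abs(p) >= abs(q) else (-s / q, 0.0)
    g = lambda tau: F(base[0] + tau * p, base[1] - tau * q)
    # g is a real quadratic in tau; recover its coefficients by sampling
    c0 = g(0.0)
    c2 = (g(2.0) - 2 * g(1.0) + c0) / 2
    c1 = g(1.0) - c0 - c2
    return c1 * c1 - 4 * c2 * c0 >= -tol      # real root iff disc >= 0
```

This reports \(x^{2}-1\) reducible and \(x^{2}+1\), \(x^{2}+x+1\) irreducible in \(\mathbb{C}[x; \sigma]\), consistent with Corollary 2.6: \(x^{2}+x+1\) has no real root, so it remains irreducible here even though it splits in the ordinary ring \(\mathbb{C}[x]\).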

Declarations

Acknowledgements

We would like to thank the anonymous referees for their careful reading and valuable comments that have improved the readability of this article. The research of the first author was supported by a grant from the University Research Council at DePaul University.

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
Department of Mathematical Sciences, DePaul University
(2)
David R. Cheriton School of Computer Science, University of Waterloo
(3)
Department of Mathematics, University of Manitoba

References

  1. Ore, O: Theory of non-commutative polynomials. Ann. Math. (2) 34, 480-508 (1933)
  2. Bergen, J: Derivation algebras of rings related to Heisenberg algebras. J. Algebra Appl. 3(2), 181-191 (2004)
  3. Bergen, J, Grzeszczuk, P: Jacobson radicals of ring extensions. J. Pure Appl. Algebra 216(12), 2601-2607 (2012)
  4. Bergen, J, Grzeszczuk, P: Skew power series rings of derivation type. J. Algebra Appl. 10(6), 1383-1399 (2011)
  5. Cohn, PM: Free Rings and Their Relations. Academic Press, San Diego (1985)
  6. Cohn, PM: Skew Fields. Cambridge University Press, Cambridge (1995)
  7. Jacobson, N: Finite Dimensional Division Algebras over Fields. Springer, Berlin (2010)
  8. McConnell, JC, Robson, JC: Noncommutative Noetherian Rings. Graduate Studies in Mathematics, vol. 30. Am. Math. Soc., Providence (2001)
  9. Shivakumar, PN, Zhang, Y: On series solution of \(y'' + \phi(x) y = 0\) and applications. Adv. Differ. Equ. 2013, 47 (2013)
  10. Shivakumar, PN, Wu, Y, Zhang, Y: Shape of a drum, a constructive approach. WSEAS Trans. Math. 10(1), 22-31 (2011)
  11. Churchill, R, Zhang, Y: Irreducibility criteria for skew polynomials. J. Algebra 322(11), 3797-3822 (2009)
  12. Eberly, W, Giesbrecht, M: Efficient decomposition of separable algebras. J. Symb. Comput. 37, 35-81 (2004)
  13. von zur Gathen, J, Gerhard, J: Modern Computer Algebra, 3rd edn. Cambridge University Press, Cambridge (2013)
  14. Giesbrecht, M, Zhang, Y: Factoring and decomposing Ore polynomials over \(F_{q}(t)\). In: Proceedings of the ACM 28th International Symposium on Symbolic and Algebraic Computation (ISSAC ’03), pp. 127-134. ACM, New York (2003)
  15. Lam, TY, Leroy, A: Vandermonde and Wronskian matrices over division rings. J. Algebra 119(2), 308-336 (1988)

Copyright

© Bergen et al.; licensee Springer. 2015