Computation of Fourier transform representations involving the generalized Bessel matrix polynomials

Motivated by recent studies and developments of integral transforms with various special matrix functions, including the matrix orthogonal polynomials, as kernels, in this article we derive formulas for the Fourier cosine and Fourier sine transforms of matrix functions involving generalized Bessel matrix polynomials. With the help of these transforms, a number of results are obtained that extend the corresponding results in the standard (scalar) cases. The results given here are of a general character and can yield a number of known and new results in modern integral transforms.


Introduction
In the past few decades, orthogonal matrix polynomials have attracted a lot of research interest due to their close relations to, and various applications in, many areas of mathematics, engineering, probability theory, graph theory and physics; for example, see [1-9]. In [4], extensions to the matrix framework of the classical families of Legendre, Laguerre, Jacobi, Chebyshev, Gegenbauer and Hermite polynomials have been introduced. Meanwhile, one particular family of orthogonal polynomials that appears frequently in recent studies and applications [10-12] is the generalized Bessel polynomials, whose matrix form is also defined in [4,13]. Later on, several distinct works on the generalized Bessel matrix polynomials have been discussed (see [14-17]).
Nowadays, many integral transforms (e.g., the Fourier, Laplace, Beta, Hankel, Mellin and Whittaker transforms) with various special functions, including the new generalized special matrix functions, as kernels have begun to play an important role in modeling various physical, engineering, automation and biological phenomena, as well as in several other branches of science (see, for instance, [18-20]).
The Fourier transform (FT) is a type of integral transform used in solving different problems in mathematical physics, applied statistics and engineering (see [21,22]). The idea of the Fourier transform is a natural extension of that of the Fourier series. In particular, the Fourier transform can accommodate non-periodic functions, which Fourier series cannot. Recently, a number of results on Fourier transforms and their applications have been contributed by Nicola and Trapasso [23], Urieles et al. [24], Ghodadra and Fülöp [25], Bergold and Lasser [26], and Al-Lail and Qadir [27].
In the matrix setting, Fourier expansions and Fourier series in orthonormal matrix polynomials were introduced by B. Osilenker in [28,29]. Defez and Jódar [30,31] established basic properties of matrix Fourier series and Fourier approximation for functions of a matrix argument. Recently, Groenevelt and Koelink [32] discussed the generalized Fourier transform with hypergeometric functions and matrix-valued orthogonal polynomials as kernels. Also, applications of matrix summability to Fourier transforms were established by Ş. Yildiz [33].
Motivated by these aforementioned investigations of Fourier transforms with matrix-valued orthogonal polynomials, in this paper we study Fourier-type transforms of the generalized Bessel matrix polynomials Y_n(ξ; F, L), ξ ∈ C, for (square) parameter matrices F and L. In particular, we obtain a number of useful Fourier cosine and Fourier sine transforms of functions involving generalized Bessel matrix polynomials together with matrix powers, matrix exponentials, matrix trigonometric functions, matrix binomials and Bessel functions. Moreover, connections of the different results given here with simpler and earlier ones are also investigated.

Auxiliary toolbox
In this section, we recall some definitions, lemmas and terminology that will be used to prove the main results. Let C and N denote the sets of complex numbers and positive integers, respectively, and let N_0 = N ∪ {0}. Let C^n denote the n-dimensional complex vector space and C^{n×n} the space of all n × n square matrices with complex entries.
For a matrix F in C^{n×n}, the spectrum σ(F) is the set of all eigenvalues of F, and we denote

α(F) = max{Re(w) : w ∈ σ(F)},   β(F) = min{Re(w) : w ∈ σ(F)},

where α(F) refers to the spectral abscissa of F and β(F) = −α(−F). A matrix F is said to be positive stable if and only if β(F) > 0.
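As a quick numerical illustration (not taken from the paper; the helper names are ours), the spectral abscissa and the positive-stability test can be computed with NumPy:

```python
import numpy as np

def spectral_abscissa(F):
    """alpha(F): the largest real part among the eigenvalues of F."""
    return max(np.linalg.eigvals(F).real)

def is_positive_stable(F):
    """F is positive stable iff every eigenvalue has a positive real part."""
    return min(np.linalg.eigvals(F).real) > 0

F = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # upper triangular, eigenvalues 2 and 3

print(spectral_abscissa(F))                      # ~3.0
print(is_positive_stable(F))                     # True
# The smallest real part of sigma(F) equals -alpha(-F):
print(np.isclose(min(np.linalg.eigvals(F).real),
                 -spectral_abscissa(-F)))        # True
```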
Definition 2.2. [37] Let F and L be commuting matrices in C^{n×n} and w ∈ C; then the following identity holds, where I is the identity matrix in C^{n×n}.
Definition 2.4. [4,38] The reciprocal gamma function Γ^{−1}(w) = 1/Γ(w) is an entire function of the complex variable w. Consequently, the image Γ^{−1}(F) of Γ^{−1} acting on F ∈ C^{n×n} is a well-defined and invertible matrix, and the matrix Pochhammer symbol (shifted factorial) can be written as

(F)_n = F(F + I) ⋯ (F + (n − 1)I) = Γ(F + nI) Γ^{−1}(F),  n ∈ N,  (F)_0 = I.

Note that if F = −sI, where s is a positive integer, then (F)_n = 0 whenever n > s. Now, from the properties of the gamma matrix function, we give some lemmas that will be needed in the proofs of the theorems.
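The vanishing property for F = −sI can be checked directly from the product form of the matrix Pochhammer symbol; a small sketch (our own helper name), using the product definition (F)_0 = I, (F)_n = F(F+I)⋯(F+(n−1)I):

```python
import numpy as np

def matrix_pochhammer(F, n):
    """(F)_n = F (F + I) ... (F + (n-1)I), with (F)_0 = I."""
    I = np.eye(F.shape[0])
    P = np.eye(F.shape[0])
    for k in range(n):
        P = P @ (F + k * I)
    return P

s = 3
F = -s * np.eye(2)   # F = -sI

# The factor F + sI = 0 makes (F)_n vanish for every n > s:
print(np.allclose(matrix_pochhammer(F, s + 1), 0))            # True
# ... while (F)_s = (-3)(-2)(-1) I = -6I is still nonzero:
print(np.allclose(matrix_pochhammer(F, s), -6 * np.eye(2)))   # True
```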
Lemma 2.1. Let S be a matrix in C^{n×n} such that α(S) > 0, and let w ∈ C with Re(w) > 0. The following integral formula holds:

∫_0^∞ ξ^{S−I} e^{−iwξ} dξ = Γ(S) (iw)^{−S} = Γ(S) w^{−S} e^{−iπS/2}.   (2.7)

Separating real and imaginary parts in (2.7), we observe that

∫_0^∞ ξ^{S−I} cos(wξ) dξ = Γ(S) w^{−S} cos(Sπ/2),   (2.8)

∫_0^∞ ξ^{S−I} sin(wξ) dξ = Γ(S) w^{−S} sin(Sπ/2).   (2.9)

Putting S = I − R ∈ C^{n×n} in (2.8) and (2.9), we get

∫_0^∞ ξ^{−R} cos(wξ) dξ = Γ(I − R) w^{R−I} cos((I − R)π/2),   (2.10)

∫_0^∞ ξ^{−R} sin(wξ) dξ = Γ(I − R) w^{R−I} sin((I − R)π/2).   (2.11)

Similarly, we can present the following lemma.
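In the scalar case, formula (2.8) can be read as the Abel limit λ → 0⁺ of the absolutely convergent integral ∫_0^∞ ξ^{s−1} e^{−λξ} cos(wξ) dξ = Γ(s)(λ² + w²)^{−s/2} cos(s arctan(w/λ)), the classical scalar closed form behind Lemma 2.2 (stated here as background, not quoted from the paper). A quick numerical consistency check of the two closed forms:

```python
import numpy as np
from scipy.special import gamma

def damped_cos_integral(s, lam, w):
    """Closed form of int_0^inf x^(s-1) e^(-lam*x) cos(w*x) dx (scalar case)."""
    return gamma(s) * (lam**2 + w**2) ** (-s / 2) * np.cos(s * np.arctan2(w, lam))

s, w = 0.5, 2.0
undamped = gamma(s) * w ** (-s) * np.cos(s * np.pi / 2)   # scalar form of (2.8)
print(undamped)
print(damped_cos_integral(s, 1e-8, w))   # approaches the undamped value as lam -> 0+
```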
Lemma 2.2. Let S be a matrix in C^{n×n} such that α(S) > 0, and let λ, w ∈ C with Re(λ) > 0 and Re(w) > 0. The following integral formulas hold:

∫_0^∞ ξ^{S−I} e^{−λξ} cos(wξ) dξ = Γ(S) (λ² + w²)^{−S/2} cos(S arctan(w/λ)),   (2.12)

∫_0^∞ ξ^{S−I} e^{−λξ} sin(wξ) dξ = Γ(S) (λ² + w²)^{−S/2} sin(S arctan(w/λ)).   (2.13)

Definition 2.5. [4,34] Let k and r be finite positive integers. The generalized hypergeometric matrix function is defined by the matrix power series

kH_r(F_1, …, F_k; L_1, …, L_r; w) = Σ_{n=0}^∞ (F_1)_n ⋯ (F_k)_n [(L_1)_n]^{−1} ⋯ [(L_r)_n]^{−1} w^n / n!,   (2.14)

where (L_i)_n is invertible for every n ∈ N_0. Note that for k = 1, r = 0, we have the binomial-type matrix function 1H_0(F_1; −; w) as follows:

1H_0(F_1; −; w) = Σ_{n=0}^∞ (F_1)_n w^n / n! = (1 − w)^{−F_1},  |w| < 1.

Also, note that for k = 2, r = 1, we get the Gauss hypergeometric matrix function 2H_1 in the form

2H_1(F_1, F_2; L_1; w) = Σ_{n=0}^∞ (F_1)_n (F_2)_n [(L_1)_n]^{−1} w^n / n!.

Several of the special matrix functions, including the matrix orthogonal polynomials, can also be presented in terms of the generalized hypergeometric matrix function (cf. [4,34]).
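The scalar (1×1) case of the damped cosine integral in Lemma 2.2 is absolutely convergent, so it can be checked by direct quadrature; a minimal sketch, assuming the classical scalar closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, lam, w = 3.0, 1.0, 1.0

# Left side: direct numerical quadrature of the absolutely convergent integral.
num, _ = quad(lambda x: x ** (s - 1) * np.exp(-lam * x) * np.cos(w * x), 0, np.inf)

# Right side: the classical closed form (scalar case of the matrix identity).
closed = gamma(s) * (lam**2 + w**2) ** (-s / 2) * np.cos(s * np.arctan(w / lam))

print(num, closed)   # both ~ -0.5 for these parameters
```

The sine formula (2.13) can be verified the same way by replacing cos with sin in both the integrand and the closed form.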
Definition 2.6. [4,13,16] Let F and L be commuting matrices in C^{n×n} such that L is invertible. For any n ∈ N_0, the n-th generalized Bessel matrix polynomial Y_n(ξ; F, L) is defined as

Y_n(ξ; F, L) = 2H_0(−nI, F + (n − 1)I; −; −ξL^{−1}) = Σ_{r=0}^n (−nI)_r (F + (n − 1)I)_r (−ξL^{−1})^r / r!.   (2.15)

Remark 2.2. If F, L ∈ C^{1×1} = C, then the generalized Bessel matrix polynomial in (2.15) reduces to the generalized Bessel polynomial of [10-12].
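In the 1×1 case, (2.15) reduces to the classical generalized Bessel polynomial y_n(ξ; a, b) = Σ_{r=0}^n (−n)_r (a + n − 1)_r (−ξ/b)^r / r!; for a = b = 2 and n = 2 this gives the familiar Bessel polynomial 3ξ² + 3ξ + 1. A scalar sanity check of that reduction (the helper names are ours):

```python
import math

def rising(a, r):
    """Pochhammer symbol (a)_r = a (a+1) ... (a+r-1), with (a)_0 = 1."""
    out = 1.0
    for k in range(r):
        out *= a + k
    return out

def bessel_poly(n, a, b, x):
    """y_n(x; a, b): scalar (1x1) case of the generalized Bessel matrix polynomial (2.15)."""
    return sum(rising(-n, r) * rising(a + n - 1, r) * (-x / b) ** r / math.factorial(r)
               for r in range(n + 1))

x = 0.7
print(bessel_poly(2, 2.0, 2.0, x))   # ~4.57
print(3 * x**2 + 3 * x + 1)          # classical Bessel polynomial y_2(x) = 3x^2 + 3x + 1
```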
Definition 2.7. [35,36] Let F be a matrix in C^{n×n} satisfying the condition

β is not a negative integer for every β ∈ σ(F).   (2.16)

The Bessel matrix function J_F(w) of the first kind associated with F is given by

J_F(w) = Σ_{k=0}^∞ [(−1)^k / k!] Γ^{−1}(F + (k + 1)I) (w/2)^{F+2kI},   (2.17)

and the modified Bessel matrix functions I_F(w) and K_F(w) have been defined, respectively, by

I_F(w) = Σ_{k=0}^∞ (1/k!) Γ^{−1}(F + (k + 1)I) (w/2)^{F+2kI}   (2.18)

and

K_F(w) = (π/2) [sin(Fπ)]^{−1} [I_{−F}(w) − I_F(w)].   (2.19)

Definition 2.8. [21,22] Let f(ξ) be a function of ξ specified for ξ > 0. Then the complex Fourier transform of f(ξ) associated with the frequency w is defined by

F{f(ξ)}(w) = ∫_{−∞}^∞ e^{iwξ} f(ξ) dξ.   (2.21)

The cosine and sine transforms follow similarly as, respectively,

F_c{f(ξ)}(w) = ∫_0^∞ cos(wξ) f(ξ) dξ   (2.22)

and

F_s{f(ξ)}(w) = ∫_0^∞ sin(wξ) f(ξ) dξ.   (2.23)

Note that if f(ξ) is an even function, then F{f(ξ)}(w) = 2 F_c{f(ξ)}(w). The following lemma will be required in the proofs of the theorems.
Lemma 2.3. Let S be a positive stable matrix in C^{n×n} and w, λ ∈ C with Re(w) > 0 and Re(λ) > 0; then

∫_0^∞ (ξ² + λ²)^{−(S + I/2)} cos(wξ) dξ = √π Γ^{−1}(S + I/2) (w/(2λ))^S K_S(λw),

where K_S(w) is the modified Bessel matrix function in (2.19).
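The role of K_S here mirrors the classical scalar formula ∫_0^∞ cos(wξ)(ξ² + λ²)^{−1/2} dξ = K_0(λw), a standard Basset-type integral (stated here as background, not quoted from the paper). It can be verified with SciPy's Fourier-weighted quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0

lam, w = 1.0, 1.5
# weight='cos' invokes QUADPACK's routine for Fourier integrals on [0, inf).
val, err = quad(lambda x: 1.0 / np.sqrt(x**2 + lam**2), 0, np.inf,
                weight='cos', wvar=w)
print(val)           # matches K_0(lam * w)
print(k0(lam * w))
```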
Remark 2.3. Physically, the Fourier transform F(w) can be interpreted as an integral superposition of an infinite number of sinusoidal oscillations with different wavenumbers w (or different wavelengths τ = 2π/w). Thus, the definition of the Fourier transform is restricted to absolutely integrable functions. This restriction is too strong for many physical applications (see [21,22]).
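For absolutely integrable f the transforms can be evaluated numerically without difficulty; e.g., under the unnormalized convention F_c{f}(w) = ∫_0^∞ f(ξ) cos(wξ) dξ used above (normalization constants vary between references), the elementary pair F_c{e^{−λξ}}(w) = λ/(λ² + w²) and F_s{e^{−λξ}}(w) = w/(λ² + w²) can be checked directly:

```python
import numpy as np
from scipy.integrate import quad

lam, w = 1.0, 2.0
Fc, _ = quad(lambda x: np.exp(-lam * x) * np.cos(w * x), 0, np.inf)
Fs, _ = quad(lambda x: np.exp(-lam * x) * np.sin(w * x), 0, np.inf)

print(Fc, lam / (lam**2 + w**2))   # both ~0.2
print(Fs, w / (lam**2 + w**2))     # both ~0.4
```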

Statement and proof of main theorems
In this section, we investigate several new and interesting Fourier cosine and Fourier sine transforms of functions involving generalized Bessel matrix polynomials, asserted in the following theorems.

Theorem 3.1. Let S, F and L be commuting matrices in C^{n×n}, and let Y_n(λξ; F, L) be given in (2.15). For the function f(ξ) = ξ^S Y_n(λξ; F, L), we have

F_c{ξ^S Y_n(λξ; F, L)}(w) = Γ(S + I) w^{−(S+I)} Σ_{r=0}^n (−nI)_r (F + (n − 1)I)_r (S + I)_r (−λ(Lw)^{−1})^r cos((S + (r + 1)I)π/2) / r!,

where w, λ ∈ C with Re(w) > 0, Re(λ) > 0 and α(S) > −1.
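In the 1×1 case the theorem can be cross-checked term by term: the r-th monomial of ξ^s y_n(λξ; a, b) has the Abel-regularized cosine transform Γ(s + r + 1) w^{−(s+r+1)} cos((s + r + 1)π/2) by the scalar case of (2.8), and resumming these must reproduce the theorem's series. A sketch of that bookkeeping, with scalar stand-ins a, b for the matrices F, L (helper names are ours, and the series is written as we read it from the statement):

```python
import math

def rising(a, r):
    """Pochhammer symbol (a)_r = a (a+1) ... (a+r-1), with (a)_0 = 1."""
    out = 1.0
    for k in range(r):
        out *= a + k
    return out

def fc_theorem_series(n, a, b, s, lam, w):
    """Scalar reading of the theorem's series for F_c{x^s y_n(lam*x; a, b)}(w)."""
    total = 0.0
    for r in range(n + 1):
        total += (rising(-n, r) * rising(a + n - 1, r) * rising(s + 1, r)
                  * (-lam / (b * w)) ** r
                  * math.cos((s + r + 1) * math.pi / 2) / math.factorial(r))
    return math.gamma(s + 1) * w ** (-(s + 1)) * total

def fc_termwise(n, a, b, s, lam, w):
    """Same transform obtained by transforming each monomial of y_n separately."""
    total = 0.0
    for r in range(n + 1):
        coeff = rising(-n, r) * rising(a + n - 1, r) * (-lam / b) ** r / math.factorial(r)
        total += (coeff * math.gamma(s + r + 1) * w ** (-(s + r + 1))
                  * math.cos((s + r + 1) * math.pi / 2))
    return total

args = (2, 2.5, 2.0, 0.3, 0.7, 1.9)   # n, a, b, s, lam, w
print(fc_theorem_series(*args))
print(fc_termwise(*args))             # the two evaluations agree
```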
Proof. The relation in (3.5) follows readily by using the well-known identities in (2.2). In a similar way we can get the result in (3.6).
Proof. The proofs of the two results (3.8) and (3.9) follow from the two formulas (2.10) and (2.11) together with Definition 2.6.
Theorem 3.4. Let S, F and L be commuting matrices in C^{n×n}, and let Y_n(λξ; F, L) be given in (2.15). For the function given in (3.10), we have the transforms (3.11) and (3.12).
Proof. From (2.15), applying formula (2.22) to the right-hand side of (3.10) and then using (2.12), we arrive at formula (3.11). Likewise, the result in (3.12) follows by using (2.13).
Proof. To establish our result in (3.14), we substitute (3.13) into (2.22). Changing the order of summation and simplifying yields (3.16), which implies formula (3.14).
Theorem 3.6. Let Y_n(λξ; F, L) be given in (2.15). For the function given in (3.17), we have the transform (3.18).
Proof. Substituting (3.17) into (2.23) and simplifying, we obtain the desired result (3.18). This completes the proof of Theorem 3.6.
Theorem 3.7. Let S and F be positive stable and commuting matrices in C^{n×n}. For the function given in (3.19), we have the following two transforms.

Proof.
To demonstrate these results, we make use of (2.22) with (3.19); after simplification, the first assertion follows. In a similar way, using (2.23) with (3.19), we can get the result in (3.21). Hence, the proof of Theorem 3.7 is finished.
Theorem 3.8. Let S, F and L be commuting matrices in C^{n×n}. If f(ξ) is the function given in (3.22), then we have the transform (3.23).
Proof. The result follows by substituting (3.22) into (2.23) and applying the known Fourier sine transform ([18], p. 426). This completes the proof of Equation (3.23) asserted in Theorem 3.8.
Similarly, we can arrive at the following result.

Theorem 3.9. Let Y_n(ξ²; F, L) be given in (2.15). For the function given in (3.24), we have the transform (3.25), where w, µ, λ ∈ C with Re(w) > 0, Re(µ) > 0, Re(λ) > 0 and Re(w) > Re(λ), S is a positive stable matrix in C^{n×n} such that α(I − S) > 0, and S, F and L are commuting matrices in C^{n×n}.

Theorem 3.10. Let S, F and L be commuting matrices in C^{n×n}. For the function given in (3.26), we have the transform (3.27), where w, λ ∈ C with Re(w) > 0, Re(λ) > 0, S is a positive stable matrix in C^{n×n} such that α(S) > −1/2, and K_S(x) is the modified Bessel matrix function defined in (2.19).
Proof. The result (3.27) is established with the help of Lemma 2.3; after simplification, we obtain the stated transform. This completes the proof of the theorem.