
# Difference-differential operators for basic adaptive discretizations and their central function systems

## Abstract

The concept of inherited orthogonality is motivated and an optimality statement for it is derived. Basic adaptive discretizations are introduced. Various properties of difference operators which are directly related to basic adaptive discretizations are looked at. A Lie-algebraic concept for obtaining basic adaptive discretizations is explored. Some of the underlying moment problems of basic difference equations are investigated in greater detail.

## 1 Introduction and motivation

The wide area of ordinary differential equations and various types of ordinary difference equations has always been closely connected to special function systems. Essential contributions over the last century have linked concepts of functional analysis to the world of differential equations and difference equations.

Difference equations are usually understood in the sense of equidistant differences. However, new and attractive discretizations, such as adaptive discretizations, have come into play. The intention of this article is to address specific problems related to so-called basic adaptive discretizations: starting from conventional difference and differential equations, we move on to basic difference equations.

In Section 2, we motivate creating new types of orthogonal polynomials from a given orthogonal polynomial system. Ingredients like tripolynomial function classes, node integrals, and some properties of the classical Hermite polynomials are reviewed.

Section 3 addresses the concept of inherited orthogonality - transferring orthogonality from one generation of orthogonal polynomials to the next one. It contains an optimality statement for computing properties which links two different generations of functions.

In Section 4, the concept of basic adaptive discretization is presented and the underlying Lie-algebra structure of discrete Heisenberg algebras is given.

Having worked on special functions related to differential equations, we explore in some more detail the world of discrete functions being related to basic adaptive discretizations in Section 5. There, we look in greater detail at solutions of underlying moment problems.

Let us briefly mention some preceding work:

Reference [1] was an important starting point for the results of the present article. Reference [2] refers to the ladder operator formalism in discrete Schrödinger theory. A wide survey of results stemming from discrete Schrödinger theory is provided in [3]. In discrete Schrödinger theory, basic special functions like q-hypergeometric functions resp. related orthogonal polynomials play a prominent role when solving the underlying Schrödinger difference equations. Central properties of these functions are outlined for instance in [4, 5]. The application of basic difference equations to formulating discrete diffusion processes has been addressed in [6, 7].

## 2 Towards new generations of orthogonality

The main philosophy of this section shall be as follows:

Given a sequence of orthogonal polynomials, let us take one particular family member and call it the father polynomial. We are interested in the calculation of the node integrals for this father polynomial.

Calculating the moments of the respective positivity parts of the father polynomial, one obtains all information on further systems of orthogonal polynomials which are based on the father polynomial.

We call them daughter polynomials. They oscillate between the nodes of the father polynomial. The higher the index of a daughter polynomial in the new class of orthogonal polynomials gets, the wilder the daughter polynomial ‘dances’: all its nodes are located between the nodes of the father polynomial, i.e., the higher the oscillations of a particular daughter polynomial get, the more nodes of the daughter polynomial can be found between the nodes of the original father polynomial.

In order to proceed into this direction, let us put some ingredients together.

Definition 2.1 Let $F:=\{F_0, F_1, F_2, F_3,\ldots\}$ be an orthogonal polynomial system where $F_j$ has degree $j$. By $K(F)$, we understand the set of all function systems which are collinear with the elements of $F$. Note that 0 shall be excluded from this set. We call $K(F)$ the tripolynomial function class of the function system $F$.

Let $p:\mathbb{R}\to\mathbb{R}$ be a polynomial of degree $k$ with at least two zeros. The zeros of the polynomial shall be denoted by $z_1,\ldots,z_j$ where $j\le k-2$; they are also called nodes.

We call two nodes of the polynomial adjacent if there exists no further node of p between these two nodes.

The nodes $z_1,\ldots,z_j$ shall be arranged such that $z_1$ and $z_2$ are adjacent. In view of the given conditions, such an arrangement is always possible without loss of generality.

For $n\in\mathbb{N}_0$, we call the integrals

$\int_{z_1}^{z_2} x^n p(x)\,dx$
(1)

node integrals of the polynomial p.

To see how orthogonality and its oscillations are inherited from one generation to the next, we will combine the stated ingredients into the corresponding analytic statements of Theorem 3.2. This theorem contains an existence and optimality statement for predicting properties of daughter polynomials: in the case of the Hermite polynomials, the node integrals can be calculated from the original father polynomials in an optimal way. Let us briefly summarize some of their classical properties which will play an important role when deriving new results in the next section:

Lemma 2.2 (Properties of the classical Hermite polynomials)

Starting from the function

$G_0:\mathbb{R}\to\mathbb{R},\qquad x\mapsto G_0(x):=e^{-x^2},$
(2)

we define for $n\in\mathbb{N}_0$ and $x\in\mathbb{R}$ the functions $G_n$ successively as follows:

$G_{n+1}(x)=-G_n'(x).$
(3)

For $n\in\mathbb{N}_0$, the $n$th Hermite polynomial is then introduced as

$H_n:\mathbb{R}\to\mathbb{R},\qquad x\mapsto H_n(x):=e^{x^2}G_n(x).$
(4)

The Hermite polynomials satisfy for $x\in\mathbb{R}$ and $n\in\mathbb{N}$ the equations

$H_n'(x)=2n\,H_{n-1}(x),$
(5)

$H_{n+1}(x)=2x\,H_n(x)-2n\,H_{n-1}(x).$
(6)

Let $p$, $q$ be elements of the vector space $M$ of all real-valued polynomials over $\mathbb{R}$. We obtain an orthogonality statement between $p$ and $q$ with respect to the Euclidean scalar product

$(p,q):=\int_{-\infty}^{\infty}p(t)q(t)e^{-t^2}\,dt,$
(7)

namely: the Hermite polynomials are pairwise orthogonal with respect to (7). For two different $m,n\in\mathbb{N}_0$, we obtain

$\int_{-\infty}^{\infty}H_m(t)H_n(t)e^{-t^2}\,dt=0.$
(8)

Moreover, for all $m\in\mathbb{N}_0$, the following strict inequalities hold:

$0<\int_{-\infty}^{\infty}H_m(t)H_m(t)e^{-t^2}\,dt<\infty.$
(9)
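The orthogonality relations (8) and (9) can be checked numerically. The following sketch, which is our illustration rather than part of the original text, uses NumPy's Gauss-Hermite quadrature, which integrates polynomial integrands of the degrees involved exactly against the weight $e^{-t^2}$:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

# Gauss-Hermite rule: sum_i w_i * g(x_i) approximates the integral of
# g(t) e^{-t^2} over the real axis, exactly for polynomial g of degree < 120.
x, w = hermgauss(60)

def hermite_inner(m, n):
    """Approximate (H_m, H_n) = ∫ H_m(t) H_n(t) e^{-t^2} dt."""
    cm = [0.0] * m + [1.0]   # coefficient vector selecting H_m
    cn = [0.0] * n + [1.0]
    return float(np.sum(w * hermval(x, cm) * hermval(x, cn)))

# (8): different indices give zero; (9): equal indices give 2^m m! sqrt(pi)
print(abs(hermite_inner(2, 5)) < 1e-8)                            # True
print(abs(hermite_inner(3, 3) - 48 * np.sqrt(np.pi)) < 1e-8)      # True
```

The closed form $2^m m!\sqrt{\pi}$ used in the last line is the classical value of the squared norm in (9).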

## 3 The oscillations of inherited orthogonality

In the sequel, our aim is to derive some general statements on node integrals of orthogonal polynomial systems, i.e., we are interested in integrals of the type

$\int_{z_1}^{z_2}x^nP_j(x)\,dx,$
(10)

where $P_j$ is a fixed member of an orthogonal polynomial system $P$, and $z_1$, $z_2$ are adjacent nodes of $P_j$. For a fixed value $j\in\mathbb{N}_0$, we start from the defining equation

$a_{j+2}P_{j+2}(x)-x\,b_{j+1}P_{j+1}(x)+c_jP_j(x)=d_jP_{j+1}(x).$
(11)

Rewriting identity (11), we first obtain

$x\,b_{j+1}P_{j+1}(x)=a_{j+2}P_{j+2}(x)+c_jP_j(x)-d_jP_{j+1}(x).$
(12)

Dividing by the coefficient $b_{j+1}$, we obtain

$xP_{j+1}(x)=\frac{a_{j+2}}{b_{j+1}}P_{j+2}(x)+\frac{c_j}{b_{j+1}}P_j(x)-\frac{d_j}{b_{j+1}}P_{j+1}(x).$
(13)

Applying the procedure (13) successively to the calculation of $x^nP_{j+1}(x)$, we get

$x^nP_{j+1}(x)=c_{n+j+1}^{(n,j)}P_{n+j+1}(x)+c_{n+j}^{(n,j)}P_{n+j}(x)+\cdots+c_0^{(n,j)}P_0(x).$
(14)

Note that the coefficients

$c_{n+j+1}^{(n,j)},\;c_{n+j}^{(n,j)},\;\ldots,\;c_0^{(n,j)}$
(15)

are uniquely fixed through the calculation procedure (13). In terms of the sigma notation, we can reformulate Eq. (14) in the following short-hand way:

$x^nP_{j+1}(x)=\sum_{k=0}^{n+j+1}c_k^{(n,j)}P_k(x).$
(16)

The bracket notation $(n,j)$ on the coefficients $c_k^{(n,j)}$ indicates that the coefficients depend both on the fixed chosen number $j\in\mathbb{N}_0$ and on the power $n\in\mathbb{N}_0$ in $x^n$. Due to their importance, these coefficients deserve a special name, which is given by the following definition.

Definition 3.1 (Structure coefficients)

The coefficients $c_k^{(n,j)}$ stemming from the procedure specified in (14) through (16) are called structure coefficients of the polynomial $P_{j+1}$.

The structure coefficients depend on the fixed value of the power $n\in\mathbb{N}_0$ on the left-hand side of (16) as well as on the fixed polynomial index $j\in\mathbb{N}_0$ of $P_{j+1}$; moreover, they depend on the counting index $k$, which runs from 0 through $n+j+1$.

Calculating the node integrals on the left-hand side of (16) yields

$\int_{z_1}^{z_2}x^nP_{j+1}(x)\,dx=\int_{z_1}^{z_2}\Biggl(\sum_{k=0}^{n+j+1}c_k^{(n,j)}P_k(x)\Biggr)dx=\sum_{k=0}^{n+j+1}c_k^{(n,j)}\int_{z_1}^{z_2}P_k(x)\,dx.$
(17)
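To illustrate how the structure coefficients appearing in (16) can be computed in practice, the following sketch expands $x^nP_{j+1}$ in the Hermite basis; the choice of the Hermite system and NumPy's basis-conversion routines are our assumptions for this illustration:

```python
import numpy as np
from numpy.polynomial import hermite as H
from numpy.polynomial import polynomial as P

def structure_coefficients(n, j):
    """Coefficients c_k^{(n,j)} with x^n * H_{j+1}(x) = sum_k c_k H_k(x)."""
    hj1 = H.herm2poly([0.0] * (j + 1) + [1.0])  # H_{j+1} in the power basis
    xn = [0.0] * n + [1.0]                      # the monomial x^n
    return H.poly2herm(P.polymul(xn, hj1))      # expand back in the H_k basis

# x * H_1(x) = 2x^2 = 1*H_0(x) + 0*H_1(x) + (1/2)*H_2(x)
print(structure_coefficients(1, 0))
```

The returned vector has length $n+j+2$, i.e., it carries the coefficients $c_0^{(n,j)},\ldots,c_{n+j+1}^{(n,j)}$, in agreement with (16).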

This identity is the starting point for a fascinating analytic path. One may ask how to find an optimal algorithm linking the following three sequences of data:

• the node integrals which can be obtained by direct integration:

$\mu_{n,j}:=\int_{z_1}^{z_2}x^nP_j(x)\,dx$
(18)
• the values of all polynomials $P_m$ at two adjacent nodes of a special $P_j$:

$P_m(z_1),\qquad P_m(z_2)$
(19)
• the structure coefficients $c_k^{(n,j)}$.

To get this procedure started, we assume that $j$, $k$, $m$, and $n$ are suitably chosen indices.

This task can be approached on a more general level in the following theorem, leading even to an unexpected optimality statement.

Theorem 3.2 (New generations of orthogonality)

Let $P_j$ be the $j$th polynomial of an orthogonal polynomial system. Moreover, let $z_1$ and $z_2$ be two adjacent nodes of this polynomial. Without loss of generality, we assume that $P_j$ is nonnegative between $z_1$ and $z_2$. Under these assumptions, the following statements hold:

1. (1)

The evaluation of all elements $P_m$ ($m\in\mathbb{N}_0$) of the tripolynomial function system at two adjacent nodes $z_1$ and $z_2$ of the particular polynomial $P_j$ yields complete information on a sequence of new polynomials $T_n$ ($n\in\mathbb{N}_0$). These polynomials are defined for $z_1\le x\le z_2$ and have for all pairwise different $m,n\in\mathbb{N}_0$ the following properties:

$\int_{z_1}^{z_2}T_m(x)T_n(x)P_j(x)\,dx=0,$
(20)

$0<\int_{z_1}^{z_2}T_n(x)T_n(x)P_j(x)\,dx<\infty.$
(21)
2. (2)

Each of the $T_n$ - apart from $T_0$ - changes its sign between $z_1$ and $z_2$, i.e., is oscillatory between the integration boundaries $z_1$ and $z_2$.

3. (3)

In the case of an orthogonal polynomial system, each of the polynomials $P_j$ contains information on a complete set of new orthogonal polynomials: starting from $P_j$, the orthogonality is inherited by the $T_n$ ($n\in\mathbb{N}_0$).

4. (4)

In this context, we have the following Existence and Optimality Statement:

Using the data $P_m(z_1)$, $P_m(z_2)$ ($m\in\mathbb{N}_0$), one obtains in the case of the Hermite polynomials an optimal algorithm for calculating the node integrals

$\int_{z_1}^{z_2}x^nP_j(x)\,dx,\qquad n\in\mathbb{N}_0.$
(22)
1. (5)

Moreover, the following Uniqueness Statement holds:

The Hermite polynomials are the only tripolynomial function class of analysis which shows an optimality property when calculating (22) using the data $P_m(z_1)$, $P_m(z_2)$ while $m$ ranges over $\mathbb{N}_0$.

Proof (1) For a fixed value of $j\in\mathbb{N}_0$, the polynomial $P_j$ does not change its sign between two adjacent nodes. The existence of nodes is guaranteed through the orthogonality properties of the whole polynomial sequence $(P_k)_{k\in\mathbb{N}_0}$.

Suppose now that $z_1$ and $z_2$ are two adjacent nodes of $P_j$, where we assume without loss of generality that the father polynomial $P_j$ is nonnegative between $z_1$ and $z_2$. In particular, the father polynomial is positive in the open interval $(z_1,z_2)$ and vanishes at $z_1$ and $z_2$. According to the properties of continuous functions on compact intervals, all node integrals (22) exist for $P_j$ and are positive. By means of the outlined standard procedures, we can construct from these node integrals a new sequence of orthogonal polynomials, the daughter polynomials $(T_n)_{n\in\mathbb{N}_0}$. This reflects precisely the statements (20) and (21).
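The passage from node-integral moments to daughter polynomials can be made concrete numerically. In the sketch below, which is our illustration under assumed choices, the father polynomial is $H_3$ with the adjacent nodes $z_1=-\sqrt{3/2}$ and $z_2=0$ (between which $H_3$ is nonnegative), and the monomials are Gram-Schmidt-orthogonalized against the weight $H_3(x)\,dx$:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Father polynomial H_3(x) = 8x^3 - 12x, nonnegative between the
# adjacent nodes z1 = -sqrt(3/2) and z2 = 0.
H3 = np.polynomial.Polynomial([0.0, -12.0, 0.0, 8.0])
z1, z2 = -np.sqrt(1.5), 0.0

# Gauss-Legendre quadrature mapped to [z1, z2]
tq, wq = leggauss(40)
x = 0.5 * (z2 - z1) * tq + 0.5 * (z1 + z2)
w = 0.5 * (z2 - z1) * wq

def inner(p, r):
    """(p, r) with respect to the weight H_3(x) dx on [z1, z2]."""
    return float(np.sum(w * p(x) * r(x) * H3(x)))

# Gram-Schmidt on 1, x, x^2 yields the first daughter polynomials T_0, T_1, T_2
monomials = [np.polynomial.Polynomial.basis(d) for d in range(3)]
T = []
for b in monomials:
    for t in T:
        b = b - (inner(b, t) / inner(t, t)) * t
    T.append(b)

print(abs(inner(T[0], T[1])) < 1e-8)    # orthogonality of the daughters
print(T[1](z1) * T[1](z2 - 1e-9) < 0)   # T_1 changes sign inside (z1, z2)
```

The sign change of $T_1$ inside the open interval illustrates the oscillation statement of part (2).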

1. (2)

According to the orthogonality relations (20), it follows that the daughter polynomials must change their signs in the open interval $(z_1,z_2)$. And according to general results on orthogonal polynomials $(T_n)_{n\in\mathbb{N}_0}$, it follows moreover that the higher the index $n$ gets, the more oscillatory $T_n$ behaves.

2. (3)

Starting now from a tripolynomial function system with a given orthogonality measure, each of its pairwise orthogonal polynomials $P_j$ inherits, through the procedure outlined in (1) and (2), its orthogonality to the sequence $(T_n)_{n\in\mathbb{N}_0}$.

3. (4)

In the case of the classical Hermite polynomials, one sees that thanks to their differential equation (5) and making use of the expansion (17), there is an optimal effort for calculating the node integrals of a particular Hermite polynomial from the evaluation of all other Hermite polynomials at the two fixed nodes of the particular polynomial.

Polynomials not having a similar differential equation (5) need a longer sequence of computational steps to evaluate the node integrals of a particular polynomial from the values of all polynomials at the two nodes of the considered particular polynomial.

1. (5)

The concept of optimality is closely tied to the differential equation (5): the fact that the derivative of a polynomial in a tripolynomial system is a multiple of its predecessor is characteristic of the tripolynomial function class of the Hermite polynomials - and only of them - as direct calculation shows. This yields the uniqueness statement. □
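The derivative relation (5) underlying this optimality and uniqueness argument can be confirmed directly with NumPy's Hermite module, which works in the same physicists' normalization; this check is our addition:

```python
import numpy as np
from numpy.polynomial.hermite import hermder

# Differentiating H_5, expressed in the Hermite basis, returns 2*5 = 10
# times the basis vector of H_4, i.e. H_5' = 10 * H_4, as in (5).
d = hermder([0, 0, 0, 0, 0, 1])
print(np.allclose(d, [0, 0, 0, 0, 10]))   # True
```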

## 4 Adaptive discretization and basic Heisenberg algebras

So far in this article, we have worked on difference equations, namely on orthogonal polynomial systems connecting members of a function family; these functions are solutions to a particular differential equation.

This is, for instance, one of the standard scenarios in conventional quantum mechanics where the Hermite functions, the Laguerre functions, and the Legendre functions play particular roles in different physical situations. The main spirit among these functions is always a continuous one. We now move to describing discretized function systems and to discovering the algebraic structures behind one special type of fancy discretization: the basic adaptive discretization.

Let $\Omega$ be a nonempty subset of the real axis whose characteristic function $\chi_\Omega$ has the properties

$\chi_\Omega(qx)=\chi_\Omega\bigl(q^{-1}x\bigr)=\chi_\Omega(x),\qquad x\in\mathbb{R},$
(23)

where $\Omega$ may be the real axis itself. The number $0<q<1$ is always chosen as a fixed number. For a real-valued function $f:\Omega\to\mathbb{R}$, we define the first basic discrete grid operation by

$(Xf)(x):=xf(x)$
(24)

and the second basic discrete grid operation resp. its inverse through

$(Rf)(x):=f(qx),\qquad(Lf)(x):=f\bigl(q^{-1}x\bigr).$
(25)

We introduce as a metric structure on suitable subsets of the linear function space $F(\Omega,\mathbb{C})$ of all complex-valued functions on $\Omega$ the Euclidean scalar product

$(f,g)_\Omega:=\int_{-\infty}^{\infty}f(x)\overline{g(x)}\,d\chi_\Omega x$
(26)

with corresponding Lebesgue integration measure, and we hence obtain the canonical norm through

$\|f\|_\Omega:=\sqrt{(f,f)_\Omega}.$
(27)

From these structures, it becomes apparent what the corresponding Lebesgue spaces $L p (Ω)$ are.

Remark 4.2 Let us concentrate on two different realizations of the Lebesgue integration measure $d\chi_\Omega x$: in the first case a purely discrete measure, and in the second case a measure stemming from a set $\Omega$ with properties (23) which itself has positive Lebesgue measure, i.e., $\mu(\Omega)>0$. This second case corresponds to piecewise-continuous realizations of the integration measure; we will refer to them also in Section 5 by the name basic layers.

Hence, the two different integration measures are related to the respective scalar products of the form

$(f,g)_\Omega=\sum_{k=-\infty}^{\infty}rq^kf\bigl(rq^k\bigr)\overline{g\bigl(rq^k\bigr)}+\sum_{k=-\infty}^{\infty}sq^kf\bigl(-sq^k\bigr)\overline{g\bigl(-sq^k\bigr)}$
(28)

in the purely discrete scenario where r, s are two fixed positive numbers, and

$(f,g)_\Omega:=\int_{-\infty}^{\infty}f(x)\overline{g(x)}\chi_\Omega(x)\,dx$
(29)

in the piecewise-continuous basic layer case with $μ(Ω)>0$.

Already in the discrete case, and there in the situation of real-valued $f$, $g$ with compact support, it can be shown by direct inspection that

$(Xf,g)_\Omega=(f,Xg)_\Omega,\qquad(Rf,g)_\Omega=q^{-1}(f,Lg)_\Omega.$
(30)

To illustrate this fact in the case of $R$, for instance, let us without loss of generality restrict to those real-valued functions $f$, $g$ with compact support which vanish on the $s$-part of the lattice $\Omega$, such that

$(Rf,g)_\Omega=\sum_{k=-\infty}^{\infty}rq^k(Rf)\bigl(rq^k\bigr)g\bigl(rq^k\bigr)=\sum_{k=-\infty}^{\infty}rq^kf\bigl(rq^{k+1}\bigr)g\bigl(rq^k\bigr).$
(31)

We find furthermore that the expression on the very right-hand side of (31) equals

$\sum_{k=-\infty}^{\infty}rq^{k-1}f\bigl(rq^k\bigr)g\bigl(rq^{k-1}\bigr)=q^{-1}\sum_{k=-\infty}^{\infty}rq^kf\bigl(rq^k\bigr)(Lg)\bigl(rq^k\bigr),$
(32)

which confirms the statement - the more general situation of the full lattice being similar. This finally allows us to introduce $X$, $R$, $L$ as linear operators acting on all complex-valued functions which have compact support on the grid

$\Omega=\bigl\{rq^k\mid k\in\mathbb{Z}\bigr\}\cup\bigl\{-sq^k\mid k\in\mathbb{Z}\bigr\}.$
(33)

In this sense, there exists a formal adjointness star operation such that

$X^*=X,\qquad R^*=q^{-1}L=q^{-1}R^{-1},\qquad L^*=qR.$
(34)

In this sense, we speak of X as a formally symmetric operator.

Note that one can easily find the same adjointness relations (34) in the case of the integration measure (29) using similarly suitable domains for the three operators X, R, L.
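The adjointness relation $(Rf,g)_\Omega=q^{-1}(f,Lg)_\Omega$ from (30) can also be verified numerically on a truncated lattice. The sketch below, our illustration with sample values for $q$ and random compactly supported $f$, $g$, restricts to the $r$-part of the grid:

```python
import numpy as np

q, r = 0.5, 1.0
ks = np.arange(-20, 21)
xs = r * q ** ks.astype(float)      # truncated r-part of the lattice

rng = np.random.default_rng(0)
f = np.zeros_like(xs)
g = np.zeros_like(xs)
f[5:-5] = rng.random(len(xs) - 10)  # compact support away from the cutoff
g[5:-5] = rng.random(len(xs) - 10)

def ip(u, v):
    """Scalar product (28), restricted to the r-part and real-valued data."""
    return float(np.sum(xs * u * v))

def R(u):
    """(Ru)(r q^k) = u(r q^{k+1}): shift by one lattice index."""
    return np.concatenate([u[1:], [0.0]])

def L(u):
    """(Lu)(r q^k) = u(r q^{k-1})."""
    return np.concatenate([[0.0], u[:-1]])

print(np.isclose(ip(R(f), g), ip(f, L(g)) / q))   # (30) holds: True
```

Since the supports stay away from the truncation boundary, the finite sums reproduce the doubly infinite ones exactly.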

Lemma 4.3 (Properties of some basic discretizers)

Let for $m,n\in\mathbb{Z}$ the basic discretizers be the linear operators stemming from $X$, $R$, $L$, with compactly supported domains in $L^2(\Omega)$ related to the scalar product (28); in particular (cf. (39)),

$Q_n^m:=X^mR^n+q^{-n}R^{-n}X^m,$
(35)

together with the formally antisymmetric $D$-objects (36) and the formally symmetric $P$-objects (37).

We then have in all cases, with respect to (28), the adjointness properties

$\bigl(Q_n^m\bigr)^*=Q_n^m,\qquad\bigl(D_n^m\bigr)^*=-D_n^m,\qquad\bigl(P_n^m\bigr)^*=P_n^m.$
(38)

Proof Let us restrict for simplicity to the case of $Q_n^m$, the proof for the $D$-objects resp. $P$-objects being performed similarly. Since the operators $X,R,L=R^{-1}$ are defined on compactly supported domains of the $L^2(\Omega)$ under consideration, we obtain

$\bigl(Q_n^m\bigr)^*=\bigl(X^mR^n+q^{-n}R^{-n}X^m\bigr)^*=\bigl(R^n\bigr)^*\bigl(X^m\bigr)^*+q^{-n}\bigl(X^m\bigr)^*\bigl(R^{-n}\bigr)^*$
(39)
$=\bigl(R^*\bigr)^n\bigl(X^*\bigr)^m+q^{-n}\bigl(X^*\bigr)^m\bigl(R^*\bigr)^{-n}=q^{-n}R^{-n}X^m+q^{-n}X^mq^nR^n$
(40)
$=q^{-n}R^{-n}X^m+X^mR^n=Q_n^m.$
(41)

In deriving these properties, we have made detailed use of standard operator-theoretic results on compactly defined linear operators with respect to star operations. □

Remark 4.4 In the sequel - through the end of this section - we will also make use of the properties

$RX=qXR,\qquad LX=q^{-1}XL,$
(42)

which hold on the domains of $X$, $R$, $L$ and their products. Moreover, we define the anticommutator bracket and the commutator bracket between the objects appearing in Theorem 4.5 via

$\{A,B\}=AB+BA,\qquad[A,B]=AB-BA.$
(43)
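The commutation properties (42) hold pointwise wherever both sides are defined, which the following minimal sketch demonstrates with $X$, $R$, $L$ realized on ordinary Python callables; the sample value of $q$ and the test function are our assumptions:

```python
import math

q = 0.5   # sample value with 0 < q < 1

X = lambda f: (lambda x: x * f(x))   # (Xf)(x) = x f(x)
R = lambda f: (lambda x: f(q * x))   # (Rf)(x) = f(qx)
L = lambda f: (lambda x: f(x / q))   # (Lf)(x) = f(q^{-1} x)

u = lambda x: x ** 2 + 1.0           # sample test function
for x in [0.3, 1.0, 2.7]:
    assert math.isclose(R(X(u))(x), q * X(R(u))(x))   # RX = qXR
    assert math.isclose(L(X(u))(x), X(L(u))(x) / q)   # LX = q^{-1} XL
print("q-commutation relations (42) verified pointwise")
```

Indeed, $(RXu)(x)=qx\,u(qx)$ and $(XRu)(x)=x\,u(qx)$, so the factor $q$ appears exactly as stated in (42).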

Putting all these structures from Lemma 4.3 and Remark 4.4 together, one obtains by elementary but tedious calculations the full structure of the underlying basic Heisenberg algebra, which will reveal its beauty in the following Theorem 4.5. Note, in particular, that the Jacobi identity, an essential ingredient in verifying the Lie-algebra structure, can be addressed directly. This is thanks to the associative behavior of the operators $X$, $R$, $L$ in basic adaptive discretizations.

Theorem 4.5 (Lie-algebraic structures of basic Heisenberg relations)

Let in the sequel $j,k,m,n∈Z$ be arbitrary. We then have the following anticommutator relations between the formally symmetric Q-objects and the formally antisymmetric D-objects: (44) (45) (46)

The corresponding commutator relations look as follows: (47) (48) (49)

Using the formally symmetric P-objects, the anticommutator relations become (50) (51) (52)

and we finally end up with the Lie algebra between the formally symmetric Q-discretizers and the formally symmetric P-discretizers as follows: (53) (54) (55)

## 5 Moment problems and basic difference equations

Having looked at the basic discretization process by itself in the last section, the starting point for this section is now basic difference equations as concrete basic objects, namely

$\sum_{j=0}^{m}a_jx^{2j}f(qx)=\sum_{j=0}^{m+1}b_jx^{2j}f(x),\qquad m\in\mathbb{N}_0,$
(56)

with suitable nonnegative $a_j$- and $b_j$-coefficients. We want to investigate some specific moment problems related to these equations.

To do so, let us first have a look at the structures which will come in. As in the previous section, we will always assume $0<q<1$.

Definition 5.1 (Continuous structures in use)

Throughout the sequel, we will make use of the multiplication operator and the shift operator, their actions being given by

$(X\varphi)(x):=x\varphi(x),\qquad(R\varphi)(x):=\varphi(qx)$
(57)

on suitable definition ranges. We define the linear functional $K_0:L^1(\mathbb{R})\to\mathbb{R}$ by

$K_0(f):=\int_{-\infty}^{\infty}f(x)\,dx$
(58)

and successively, for arbitrary $n\in\mathbb{N}$, the linear functionals $K_n$ on their maximal domains $D(K_n)\subseteq L^1(\mathbb{R})$ via

$K_n(f):=\int_{-\infty}^{\infty}x^nf(x)\,dx,$
(59)

provided the integral in (59) exists in all cases.

We now formulate a technical result concerning solutions to (56) which will be a helpful tool in the subsequent investigations.

Lemma 5.2 (Extension property for continuous solutions)

Let us consider for a fixed number $m\in\mathbb{N}$ the basic difference equation

$\sum_{j=0}^{m}a_jX^{2j}Rf=\sum_{j=0}^{m+1}b_jX^{2j}f,$
(60)

where we assume that $a_0,a_1,\ldots,a_m$, $b_0,b_1,\ldots,b_{m+1}$ are nonnegative numbers. We require additionally $a_mb_{m+1}>0$ and $a_0b_0>0$. Let moreover, for two arbitrary but fixed $j,k\in\mathbb{Z}$, the sets

$J_j^+:=\bigl(q^{j+1},q^j\bigr),\qquad J_k^-:=\bigl(-q^k,-q^{k+1}\bigr)$
(61)

be given, and let the function $u:J_j^+\cup\{q^j\}\cup\{q^{j+1}\}\to\mathbb{R}_0^+$, $x\mapsto u(x)$, as well as the function $v:J_k^-\cup\{-q^k\}\cup\{-q^{k+1}\}\to\mathbb{R}_0^+$, $x\mapsto v(x)$, be continuous and in particular in agreement with (56) resp. (60), where at most one of $u$ and $v$ is allowed to be the zero function in the sense of the Lebesgue integral.

Then $u$ together with $v$ can be extended via (60) to one single positive solution $f\in L^1(\mathbb{R})$ which is in any case continuous on $\mathbb{R}\setminus\{0\}$. The action of all $K_n$ on both sides of (60) yields

$K_n\Biggl(\sum_{j=0}^{m}a_jX^{2j}Rf\Biggr)=K_n\Biggl(\sum_{j=0}^{m+1}b_jX^{2j}f\Biggr),$
(62)

transforming into the moment equation

$\sum_{j=0}^{m}a_jq^{-2j-n-1}K_{n+2j}(f)=\sum_{j=0}^{m+1}b_jK_{n+2j}(f).$
(63)

Proof Rewriting (60) in the form of (56), one recognizes that the extension process from $u$ together with $v$ to $f$ is standard. By conventional analytic arguments, the existence of all moments of $f$ can be concluded. Hence, in particular, we have $f\in L^1(\mathbb{R})$, where $f(0)$ has to be specified since it does not come out of the extension process.

The choice of $f(0)$ may in particular be arbitrary since it does not violate the property $f\in L^1(\mathbb{R})$. By standard analytic arguments of integration, one may conclude that (62) and (63) are well defined for all $n\in\mathbb{N}_0$. These observations conclude the verification of the technical Lemma 5.2. □
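The extension process of Lemma 5.2 can be sketched numerically for $m=1$: starting from a continuous positive seed profile on one layer, the equation propagates the values inward and outward layer by layer. All coefficients and the seed below are our sample choices, not taken from the article:

```python
import numpy as np

# Extension process for m = 1:
#   (a0 + a1 x^2) f(qx) = (b0 + b1 x^2 + b2 x^4) f(x)
# on the positive axis; the sample coefficients satisfy a0 b0 > 0, a1 b2 > 0.
q = 0.5
A = lambda x: 1.0 + x ** 2             # a0 + a1 x^2
B = lambda x: 1.0 + x ** 2 + x ** 4    # b0 + b1 x^2 + b2 x^4

x0 = np.linspace(q, 1.0, 65)[1:]       # sample points on the seed layer (q, 1]
f0 = 1.0 + np.sin(np.pi * x0)          # arbitrary continuous positive seed

values = [f0]
x, f = x0, f0
for _ in range(8):                     # inward: f(qx) = B(x) f(x) / A(x)
    f = B(x) * f / A(x)
    x = q * x
    values.append(f)
x, f = x0, f0
for _ in range(8):                     # outward: f(x) = A(x) f(qx) / B(x)
    x = x / q
    f = A(x) * f / B(x)
    values.append(f)

print(all(np.all(v > 0) for v in values))   # positivity survives: True
print(values[-1].max() < f0.max())          # decay towards infinity: True
```

Since $B$ has higher degree than $A$, the outward factors $A(x)/B(x)$ fall off like $x^{-2}$, which is the mechanism behind the existence of all moments claimed in the proof.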

Remark 5.3 We would like to point out that Lemma 5.2 already provides plenty of possible solutions to (60). Let, for instance, $(f_k)_{k\in\mathbb{N}_0}$ be a sequence of functions in the sense of Lemma 5.2 and $(\gamma_k)_{k\in\mathbb{N}_0}$ be a sequence of nonnegative numbers of which only finitely many are different from zero. The linear combination

$g:=\sum_{k=0}^{\infty}\gamma_kf_k$
(64)

is then again a solution to (60) in the sense of Lemma 5.2: the function $g$ is in any case continuous on $\mathbb{R}\setminus\{0\}$ and lies in the maximal domain of all $K_n$. Hence, the application of all the integration functionals $K_n$ ($n\in\mathbb{N}_0$) to the equation

$\sum_{j=0}^{m}a_jX^{2j}Rg=\sum_{j=0}^{m+1}b_jX^{2j}g$
(65)

yields

$\sum_{j=0}^{m}a_jq^{-2j-n-1}K_{n+2j}(g)=\sum_{j=0}^{m+1}b_jK_{n+2j}(g),\qquad n\in\mathbb{N}_0.$
(66)

Having shed some light on continuous solutions of the basic difference equation (60), let us now look at restrictions of the continuous solutions to special intervals. To do so, let us confine ourselves to subsets of the real axis whose characteristic function is invariant under the shift operator $R$ from (57) resp. its inverse. We have

Theorem 5.4 (Existence of basic layer solutions)

Let $f$ be a continuous positive solution of (60) on $\mathbb{R}\setminus\{0\}$. We look at a basic layer $\Omega$, a subset of the real axis with positive Lebesgue measure whose characteristic function $\chi_\Omega$ has the properties

$\chi_\Omega(qx)=\chi_\Omega\bigl(q^{-1}x\bigr)=\chi_\Omega(x),\qquad x\in\mathbb{R}.$
(67)

The application of all the integration functionals $K_n$ ($n\in\mathbb{N}_0$) to the equation

$\chi_\Omega(X)\sum_{j=0}^{m}a_jX^{2j}Rf=\chi_\Omega(X)\sum_{j=0}^{m+1}b_jX^{2j}f$
(68)

again leads to a moment equation of type (63) resp. (66),

$\sum_{j=0}^{m}a_jq^{-2j-n-1}K_{n+2j}(f_\Omega)=\sum_{j=0}^{m+1}b_jK_{n+2j}(f_\Omega),\qquad n\in\mathbb{N}_0,$
(69)

where we have used the abbreviation

$f_\Omega:=\chi_\Omega(X)f=\chi_\Omega\cdot f.$
(70)

Proof Let us illustrate the scenario with the example

$\bigl(a+bx^2\bigr)f(qx)=\bigl(c+dx^2+ex^4\bigr)f(x),\qquad x\in\mathbb{R},$
(71)

where $a$, $b$, $c$, $d$, $e$ fulfill the respective roles of the coefficients specified in the assumptions of Lemma 5.2. We now choose $\Omega$ such that the conditions of Theorem 5.4 are fulfilled. Hence, we obtain

$\chi_\Omega(x)\bigl(a+bx^2\bigr)f(qx)=\chi_\Omega(x)\bigl(c+dx^2+ex^4\bigr)f(x),\qquad x\in\mathbb{R}.$
(72)

According to the properties (67), we can replace the expression $\chi_\Omega(x)$ on the left-hand side of (72) by $\chi_\Omega(qx)$:

$\chi_\Omega(qx)\bigl(a+bx^2\bigr)f(qx)=\chi_\Omega(x)\bigl(c+dx^2+ex^4\bigr)f(x),\qquad x\in\mathbb{R}.$
(73)

Integration of the last equality yields

$\int_{-\infty}^{\infty}\chi_\Omega(qx)\bigl(a+bx^2\bigr)f(qx)\,dx=\int_{-\infty}^{\infty}\chi_\Omega(x)\bigl(c+dx^2+ex^4\bigr)f(x)\,dx.$
(74)

Now we see that it is advantageous to use the symmetry property (67). We obtain by substitution

$q^{-1}\int_{-\infty}^{\infty}\chi_\Omega(x)\bigl(a+bq^{-2}x^2\bigr)f(x)\,dx=\int_{-\infty}^{\infty}\chi_\Omega(x)\bigl(c+dx^2+ex^4\bigr)f(x)\,dx.$
(75)

This step can be generalized by the application of all moment generating functionals, yielding

$q^{-n-1}\bigl(aK_n(f_\Omega)+bq^{-2}K_{n+2}(f_\Omega)\bigr)=cK_n(f_\Omega)+dK_{n+2}(f_\Omega)+eK_{n+4}(f_\Omega).$
(76)

Here we see that the specific structure of $\Omega$ does not influence the moment equality; what all moment equalities of type (76) have in common, however, is the fact that $\Omega$ has the symmetry property specified by (67).

Rewriting Eq. (76), we finally end up with a non-autonomous difference equation determining the evaluation of $K_n$ at the specific $f_\Omega$:

$K_{n+4}(f_\Omega)=\frac{bq^{-n-3}-d}{e}K_{n+2}(f_\Omega)+\frac{aq^{-n-1}-c}{e}K_n(f_\Omega).$
(77)

□

It is now clear what the basic philosophy behind determining the moment functionals is, and hence it becomes apparent how the proof works in the more general situation of Theorem 5.4.
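The recursion (77) is easy to iterate numerically. The sketch below, our illustration, generates moments from four assumed seed values with sample coefficients; for these choices, the prefactors $bq^{-n-3}-d$ and $aq^{-n-1}-c$ are positive already at $n=0$, so positivity propagates along the whole sequence:

```python
# Iterating the non-autonomous recursion (77):
#   K_{n+4} = ((b q^{-n-3} - d) K_{n+2} + (a q^{-n-1} - c) K_n) / e
q = 0.5
a, b, c, d, e = 1.0, 1.0, 1.0, 1.0, 1.0   # sample coefficients
K = [1.0, 0.5, 2.0, 1.0]                  # assumed seed moments K_0 .. K_3

for n in range(40):
    K.append(((b * q ** (-n - 3) - d) * K[n + 2]
              + (a * q ** (-n - 1) - c) * K[n]) / e)

# For these sample values both prefactors are positive from n = 0 on,
# so positivity of the moment sequence is conserved.
print(all(k > 0 for k in K))   # True
```

Note how the four seed values split into two interleaved chains (even and odd indices), matching the observation that the whole sequence is fixed by $K_0,\ldots,K_3$.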

Remark 5.5 According to the assumptions of Theorem 5.4 resp. Lemma 5.2, we see that the sequence of moment values $(K_{2n}(f_\Omega))_{n\in\mathbb{N}_0}$ is indeed a sequence of positive numbers. In particular, the non-autonomous difference equation (77) conserves this property.

A sufficient condition to ensure the nonnegativity of the numbers $(K_{2n+1}(f_\Omega))_{n\in\mathbb{N}_0}$ is provided by choosing the function $f$ in Theorem 5.4 additionally as symmetric. If this condition is imposed, one can derive from all moments $(K_n(f_\Omega))_{n\in\mathbb{N}_0}$ a sequence of orthogonal polynomials, generated through the choice of $f$ resp. $\Omega$. Note, however, that this condition is not necessary.

There is another example of a sufficient condition: choose the function $f$ in Theorem 5.4 with the additional property $f=\chi_{\mathbb{R}^+}(X)f$, i.e., the function $f$ is assumed to vanish on the negative real axis. Under this condition, it can be guaranteed that all $(K_n(f_\Omega))_{n\in\mathbb{N}_0}$ are positive.

Given now a positive symmetric continuous solution of (71), the question arises how two moment sequences $(K_n(f_{\Omega_1}))_{n\in\mathbb{N}_0}$ and $(K_n(f_{\Omega_2}))_{n\in\mathbb{N}_0}$ may differ when $\Omega_1\ne\Omega_2$. It may of course happen that already $K_0(f_{\Omega_1})\ne K_0(f_{\Omega_2})$ or $K_2(f_{\Omega_1})\ne K_2(f_{\Omega_2})$, or both. Therefore, the two sequences may develop in a different manner under the generation process through (77). The underlying orthogonal polynomials will then also be different.

Let us now look briefly back at the special situation sketched out in the proof of Theorem 5.4. Having chosen $\Omega$ in a proper way, we learn from Eq. (77) that the sequence of all $(K_n(f_\Omega))_{n\in\mathbb{N}_0}$ is in the considered case uniquely determined through the specification of the four values $K_0(f_\Omega)$, $K_1(f_\Omega)$, $K_2(f_\Omega)$, $K_3(f_\Omega)$.

Assume that $f_1$, $f_2$, $f_3$, $f_4$ are four continuous positive solutions of (60) on $\mathbb{R}\setminus\{0\}$. We ask how we can combine these four functions into one positive continuous solution $f$ of (60) such that, after the choice of $\Omega$:

$K_0(f_\Omega)=\mu_0,\qquad K_1(f_\Omega)=\mu_1,\qquad K_2(f_\Omega)=\mu_2,\qquad K_3(f_\Omega)=\mu_3,$
(78)

where $\mu_0$, $\mu_1$, $\mu_2$, $\mu_3$ are four arbitrarily given but fixed positive numbers. This corresponds to choosing fixed real coefficients $\alpha$, $\beta$, $\gamma$, $\delta$ such that

$f:=\alpha f_1+\beta f_2+\gamma f_3+\delta f_4$
(79)

$\alpha K_i(f_{1,\Omega})+\beta K_i(f_{2,\Omega})+\gamma K_i(f_{3,\Omega})+\delta K_i(f_{4,\Omega})=\mu_i,\qquad i\in\{0,1,2,3\}.$
(80)

Having solved this system of linear equations for $\alpha$, $\beta$, $\gamma$, $\delta$, one may check the positivity of the arising $f_\Omega$ separately. Once this is verified, all other moments of $f_\Omega$ are determined through (77), and one can start constructing the underlying orthogonal polynomials.

This observation directly motivates another aspect of the arising moment problems: given two different symmetric positive continuous solutions $f_{\Omega_1}$ and $h_{\Omega_2}$ on two different basic layers $\Omega_1$ and $\Omega_2$, under which conditions will the two sequences $(K_n(f_{\Omega_1}))_{n\in\mathbb{N}_0}$ and $(K_n(h_{\Omega_2}))_{n\in\mathbb{N}_0}$ be the same? Or, in other words: under which conditions will the related orthogonal polynomials be the same? The following Corollary 5.6 of Theorem 5.4 sheds some more light on a systematic construction process for a rich variety of the demanded solutions:

Corollary 5.6 (Composing and combining basic layer solutions)

Let $(f^i)_{i\in\mathbb{N}}$ be a sequence of positive continuous solutions to (60) in the sense of Theorem 5.4. Moreover, let $(\Omega_j)_{j\in\mathbb{N}}$ be a sequence of sets all fulfilling property (67) and in addition being pairwise either coincident or disjoint, i.e.,

$i,j\in\mathbb{N},\;i\ne j\quad\Rightarrow\quad\Omega_i=\Omega_j\ \text{or}\ \Omega_i\cap\Omega_j=\emptyset.$
(81)

Let in addition the matrix elements $(a_{ij})_{i,j\in\mathbb{N}}$ be nonnegative numbers of which only finitely many are different from zero. The function

$F:=\sum_{i=1}^{\infty}\sum_{j=1}^{\infty}a_{ij}f_{\Omega_j}^i$
(82)

leads to the formally same moment equation as (63), (66), (69), namely

$\sum_{j=0}^{m}a_jq^{-2j-n-1}K_{n+2j}\Biggl(\sum_{i=1}^{\infty}\sum_{l=1}^{\infty}a_{il}f_{\Omega_l}^i\Biggr)=\sum_{j=0}^{m+1}b_jK_{n+2j}\Biggl(\sum_{i=1}^{\infty}\sum_{l=1}^{\infty}a_{il}f_{\Omega_l}^i\Biggr),\qquad n\in\mathbb{N}_0,$
(83)

short-handed:

$\sum_{j=0}^{m}a_jq^{-2j-n-1}K_{n+2j}(F)=\sum_{j=0}^{m+1}b_jK_{n+2j}(F),\qquad n\in\mathbb{N}_0.$
(84)

One might think of dropping the condition that $(a_{ij})_{i,j\in\mathbb{N}}$ has only finitely many entries different from zero. This more general approach can, for instance, be established by choosing the sequence $(\Omega_j)_{j\in\mathbb{N}}$ such that (81) is guaranteed and such that all the projections $\Omega_i^*:=\Omega_i\cap(0,1)$ fulfill in addition the property

$\bigcup_{i=1}^{\infty}\Omega_i^*=(0,1).$
(85)

The necessary convergence checks arising from the imposed topological structures then have to be tackled separately.

So far, we have considered the situation of piecewise continuous solutions to (60). We would like to point out that the non-autonomous basic difference equation (60) of course also has purely discrete solutions, which may stem from suitable projections of the continuous solutions that we have already considered. To this end, we first put together all the discrete tools that we need:

Definition 5.7 (Discrete structures in use)

As in the continuous case, we will make use of the multiplication operator and the shift operator together with its inverse, their actions being given by

$(X\varphi)(x) := x\varphi(x), \qquad (R\varphi)(x) := \varphi(qx), \qquad (L\varphi)(x) := \varphi(q^{-1}x)$
(86)

on the considered functions. Given two arbitrary but fixed positive numbers r, s, we are going to define the set

$\Omega_{(r,s)} := \{ r q^k \mid k \in \mathbb{Z} \} \cup \{ -s q^k \mid k \in \mathbb{Z} \}.$
(87)

For a suitable function $f : \Omega_{(r,s)} \to \mathbb{R}$, we define for all $n \in \mathbb{N}_0$ linear functionals via a discretized version of the continuous integral, namely

$T_n^{(r,s)}(f) := \sum_{k=-\infty}^{\infty} r q^k (r q^k)^n f(r q^k) + \sum_{k=-\infty}^{\infty} s q^k (-s q^k)^n f(-s q^k).$
(88)

Note that by speaking of suitable functions f, we explicitly require the existence of these expressions for all $n \in \mathbb{N}_0$.
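For numerical work, the bilateral sums in (88) have to be truncated. For $0 < q < 1$, the terms decay geometrically as $k \to \infty$, while for $k \to -\infty$ the decay of f at infinity must do the job. A sketch in Python, with a Gaussian standing in for a suitable f (an assumption for illustration only):

```python
import math

def T(n, f, r, s, q, K=120):
    """Truncated version of the functional T_n^{(r,s)} from (88):
    lattice sums over r*q**k and -s*q**k for |k| <= K."""
    total = 0.0
    for k in range(-K, K + 1):
        xp, xn = r * q**k, -s * q**k           # positive / negative lattice point
        total += r * q**k * xp**n * f(xp)      # first sum in (88)
        total += s * q**k * xn**n * f(xn)      # second sum in (88)
    return total

# Gaussian as a stand-in for a "suitable" f: it decays superpolynomially.
f = lambda x: math.exp(-x * x)
r, s, q = 1.0, 1.5, 0.5

# The truncation has converged: doubling K changes the values only negligibly.
for n in range(4):
    assert abs(T(n, f, r, s, q, K=60) - T(n, f, r, s, q, K=120)) < 1e-12
```

The convergence check mirrors the suitability requirement in the definition: a function with only polynomial decay at infinity would make the $k \to -\infty$ tail diverge.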

The sets from (87) are geometric progressions with a positive part and a negative part. We first point out the invariance properties

$R(\Omega_{(r,s)}) = \Omega_{(r,s)}, \qquad L(\Omega_{(r,s)}) = \Omega_{(r,s)},$
(89)

which direct inspection confirms right away. Thus (89) provides a possible discrete analog of (67).

We are now going to relate the introduced structures to solutions of the equation

$\sum_{j=0}^{m} a_j x^{2j} f(qx) = \sum_{j=0}^{m+1} b_j x^{2j} f(x), \quad m \in \mathbb{N}_0,$
(90)

with suitable nonnegative coefficients $a_j$ and $b_j$. Here, we assume that the value x is taken from the set (87). We thus look for discrete solutions of (60). The following Theorem 5.8 reveals a wealth of discrete solution structures. Some of the continuous scenarios will appear in similar form.
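Since (90) prescribes the ratio $f(qx)/f(x)$ at every lattice point, a discrete solution on $\Omega_{(r,s)}$ can be propagated from two seed values at r and −s; this is exactly the uniqueness mechanism behind the following theorem. A sketch with hypothetical coefficient choices ($m = 0$, $a_0 = b_0 = b_1 = 1$), chosen only for illustration:

```python
def lattice_solution(a, b, q, r, s, rho, sigma, K=20):
    """Propagate the seed values f(r) = rho and f(-s) = sigma along the
    lattice Omega_{(r,s)} using (90): f(qx) = (B(x)/A(x)) * f(x), where
    A(x) = sum_j a_j x^{2j} and B(x) = sum_j b_j x^{2j}."""
    A = lambda x: sum(aj * x**(2 * j) for j, aj in enumerate(a))
    B = lambda x: sum(bj * x**(2 * j) for j, bj in enumerate(b))
    f = {}
    for x0, seed in ((r, rho), (-s, sigma)):
        x, v = x0, seed
        f[x] = v
        for _ in range(K):            # towards 0: x -> qx
            v *= B(x) / A(x)
            x *= q
            f[x] = v
        x, v = x0, seed
        for _ in range(K):            # towards infinity: x -> x/q
            x /= q
            v *= A(x) / B(x)
            f[x] = v
    return f

# Hypothetical coefficients with m = 0 and a_0*b_0 > 0, a_m*b_{m+1} > 0:
a, b, q = [1.0], [1.0, 1.0], 0.5
f = lattice_solution(a, b, q, r=1.0, s=1.5, rho=1.0, sigma=2.0)

x = q**3                              # (90) holds at every constructed point:
assert abs(a[0] * f[q * x] - (b[0] + b[1] * x**2) * f[x]) < 1e-12
assert all(v > 0 for v in f.values())
```

With nonnegative coefficients and positive seeds, all propagation ratios are positive, which is why the constructed solution stays positive on the whole lattice.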

Theorem 5.8 (Discrete solutions of the BDE)

As in the continuous case, we refer to the basic difference equation (BDE)

$\sum_{j=0}^{m} a_j X^{2j} R f = \sum_{j=0}^{m+1} b_j X^{2j} f,$
(91)

where $m \in \mathbb{N}_0$ is a fixed number. We assume again that $a_0, a_1, \ldots, a_m$ as well as $b_0, b_1, \ldots, b_{m+1}$ are nonnegative numbers. We additionally require $a_m b_{m+1} > 0$ and $a_0 b_0 > 0$.

(1)

Under these assumptions, for every pair $(\rho,\sigma)$ of nonnegative real numbers with $\rho^2 + \sigma^2 > 0$, there exists precisely one solution $f_{(\rho,\sigma)} : \Omega_{(r,s)} \to \mathbb{R}^+$ to (91), where the pair of positive numbers $(r,s)$ is assumed fixed. The solution $f_{(\rho,\sigma)}$ is fixed through the requirements

$f_{(\rho,\sigma)}(r) = \rho, \qquad f_{(\rho,\sigma)}(-s) = \sigma.$
(92)
(2)

Moreover, for each of these $f_{(\rho,\sigma)}$ and all $n \in \mathbb{N}_0$, the expression $T_n^{(r,s)}(f_{(\rho,\sigma)})$ exists.

(3)

The action of all $T_n^{(r,s)}$ on both sides of (91) yields

$T_n^{(r,s)} \Bigl( \sum_{j=0}^{m} a_j X^{2j} R f_{(\rho,\sigma)} \Bigr) = T_n^{(r,s)} \Bigl( \sum_{j=0}^{m+1} b_j X^{2j} f_{(\rho,\sigma)} \Bigr),$
(93)

transforming into the moment equation

$\sum_{j=0}^{m} a_j q^{-2j-n-1} T_{n+j}^{(r,s)}(f_{(\rho,\sigma)}) = \sum_{j=0}^{m+1} b_j T_{n+j}^{(r,s)}(f_{(\rho,\sigma)}),$
(94)

which is formally the same as (63), (66), (69), (83).

(4)

Let $(r_i)_{i \in \mathbb{N}}$ and $(s_i)_{i \in \mathbb{N}}$ be two sequences of positive numbers such that

$i,j \in \mathbb{N},\ i \neq j \quad \Rightarrow \quad \Omega_{(r_i,s_i)} \cap \Omega_{(r_j,s_j)} = \emptyset.$
(95)

Let moreover $(t_i)_{i \in \mathbb{N}}$ be a sequence of nonnegative numbers, only finitely many of which are different from zero. Let in addition $(\rho_i)_{i \in \mathbb{N}}$ and $(\sigma_i)_{i \in \mathbb{N}}$ be two sequences of nonnegative numbers in the sense of statement (1) of this theorem, and let for all $i \in \mathbb{N}$ the functions $f_{(\rho_i,\sigma_i)}$ be defined on $\Omega_{(r_i,s_i)}$. The positive function

$F : \Omega = \bigcup_{i=1}^{\infty} \Omega_{(r_i,s_i)} \to \mathbb{R}^+, \qquad F := \sum_{i=1}^{\infty} t_i f_{(\rho_i,\sigma_i)}$
(96)

then fulfills, in analogy to (63), (66), (69), (83), (94):

$\sum_{j=0}^{m} a_j q^{-2j-n-1} \sum_{i=1}^{\infty} t_i T_{n+j}^{(r_i,s_i)}(f_{(\rho_i,\sigma_i)}) = \sum_{j=0}^{m+1} b_j \sum_{i=1}^{\infty} t_i T_{n+j}^{(r_i,s_i)}(f_{(\rho_i,\sigma_i)}).$
(97)

Proof (1) Looking back at Eq. (56), we see that a discrete solution is completely specified through application of (56) as soon as its values at the positions r and −s are provided. In the assumptions of Theorem 5.8, we have chosen the pair $(\rho,\sigma)$ such that $\rho^2 + \sigma^2 > 0$, i.e., the existence of a nontrivial solution is always guaranteed. In particular, solutions with vanishing part on the negative side of the lattice are allowed, as are solutions with vanishing part on the positive side of the lattice; the scenario of a part vanishing simultaneously on the positive lattice and the negative lattice, however, is excluded.

(2)

The existence of $T_n^{(r,s)}(f_{(\rho,\sigma)})$ for all $n \in \mathbb{N}_0$ follows from the asymptotic behaviour of $f_{(\rho,\sigma)}$, which is ultimately specified by (56); the argumentation follows standard convergence results for sequences and series in analysis.

(3)

Let us address the evaluation of $T_n^{(r,s)}(R f_{(\rho,\sigma)})$ in more detail. To do so, we concentrate on the right-hand side of the lattice, the proof for the left-hand side being similar: we can show

$T_n^{(r,s)}(R f_{(\rho,\sigma)}) = q^{-n-1} T_n^{(r,s)}(f_{(\rho,\sigma)})$
(98)

according to the following identity:

$\sum_{k=-\infty}^{\infty} r q^k (r q^k)^n (R f_{(\rho,\sigma)})(r q^k) = \sum_{k=-\infty}^{\infty} r q^{k-1} (r q^{k-1})^n f_{(\rho,\sigma)}(r q^k)$
(99)

and furthermore

$\sum_{k=-\infty}^{\infty} r q^{k-1} (r q^{k-1})^n f_{(\rho,\sigma)}(r q^k) = \sum_{k=-\infty}^{\infty} q^{-n-1} \, r q^k (r q^k)^n f_{(\rho,\sigma)}(r q^k).$
(100)

Comparing the last expression in (100) with the right-hand side of (98) confirms the identity (98) in total. Ordering and comparing the index structure in the occurring expressions then finally leads to identity (94).

(4)

The result (97) is a consequence of the linearity of all $T_n^{(r,s)}$ and the assumed pairwise disjointness in (95). Note in particular that in the assumptions of Theorem 5.8 we have allowed only finitely many superpositions of lattices in order to avoid more complicated convergence scenarios. □
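The index-shift mechanism in steps (99) and (100) can be confirmed numerically: for any function for which the sums converge, $T_n^{(r,s)}(Rf) = q^{-n-1} T_n^{(r,s)}(f)$ holds, not only for solutions of (91). A quick check with a Gaussian test function (an illustrative assumption) and truncated sums:

```python
import math

def T(n, f, r, s, q, K=120):
    """Truncated T_n^{(r,s)} as in (88)."""
    total = 0.0
    for k in range(-K, K + 1):
        xp, xn = r * q**k, -s * q**k
        total += r * q**k * xp**n * f(xp) + s * q**k * xn**n * f(xn)
    return total

r, s, q = 1.0, 1.5, 0.5
f = lambda x: math.exp(-x * x)        # any summable test function will do
Rf = lambda x: f(q * x)               # (Rf)(x) = f(qx), cf. (86)

# The pure index shift of (99)/(100): T_n(Rf) = q^{-n-1} T_n(f),
# up to a tiny truncation error at the boundary of the finite sum.
for n in range(4):
    assert abs(T(n, Rf, r, s, q) - q**(-n - 1) * T(n, f, r, s, q)) < 1e-9
```

The residual error stems solely from the truncation: shifting the summation index moves one boundary term in and one out, and both are negligible for a rapidly decaying f.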

Remark 5.9 Comparing the continuous scenario and the discrete scenario, we see that a continuous solution of (60) and a corresponding discrete solution of (91) may lead to the same moments. In this case they also lead to the same type of underlying orthogonal polynomials. In other words, these polynomials have piecewise continuous orthogonality measures on the one hand and purely discrete orthogonality measures on the other hand. According to the construction principles that we have outlined in the continuous and the discrete case, it even becomes clear that the underlying orthogonal polynomials may have orthogonality measures which are mixtures of a piecewise continuous and a purely discrete part. Let us summarize this remarkable fact in the following.

Corollary 5.10 (Mixture of continuous and discrete solutions)

For $0 < a < b$, we define

$\Omega_{(a,b)} := \bigcup_{n=-\infty}^{\infty} R^n((a,b))$
(101)

and similarly, for $0 < c < d$,

$\Omega_{(-d,-c)} := \bigcup_{n=-\infty}^{\infty} R^n((-d,-c)).$
(102)

Assume that we have chosen a, b, c, d such that for distinct integers i, j the sets $R^i((a,b))$ and $R^j((a,b))$, as well as $R^i((-d,-c))$ and $R^j((-d,-c))$, are pairwise disjoint.

Assume, moreover, that for two given positive numbers r, s the set $\Omega_{(r,s)}$ constructed according to (87) is contained neither in $\Omega_{(a,b)}$ nor in $\Omega_{(-d,-c)}$.

We may then look for a solution $f : \Omega_{(a,b)} \cup \Omega_{(-d,-c)} \cup \Omega_{(r,s)} \to \mathbb{R}^+$ fulfilling (60) which is continuous on $\Omega_{(a,b)} \cup \Omega_{(-d,-c)}$ according to Theorem 5.4 and discrete on $\Omega_{(r,s)}$ according to Theorem 5.8.

Under these assumptions, the functionals

$T^{(n)} := K_n + T_n^{(r,s)},$
(103)

acting on f

$T^{(n)}(f) := K_n(f_{\Omega_{(a,b)} \cup \Omega_{(-d,-c)}}) + T_n^{(r,s)}(f),$
(104)

are well defined for all $n \in \mathbb{N}_0$, leading from (60) to the same nonautonomous moment difference equation at which we have arrived in the other cases:

$\sum_{j=0}^{m} a_j q^{-2j-n-1} T^{(n+j)}(f) = \sum_{j=0}^{m+1} b_j T^{(n+j)}(f), \quad n \in \mathbb{N}_0.$
(105)
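The mixed functionals (103)/(104) can be realized numerically as a quadrature part over the continuous layers plus a truncated lattice sum. The sketch below assumes the normalization $K_n(g) = \int x^n g(x)\,dx$ over $\Omega_{(a,b)} \cup \Omega_{(-d,-c)}$ (the precise form of $K_n$ is fixed in Theorem 5.4) and uses a Gaussian placeholder instead of an actual solution of (60):

```python
import math

def K_cont(n, f, a, b, c, d, N=4000):
    """Hypothetical realization of K_n: trapezoidal quadrature of x^n * f(x)
    over (a, b) and (-d, -c); the exact normalization of K_n is assumed."""
    def trapz(lo, hi):
        h = (hi - lo) / N
        ys = [(lo + i * h)**n * f(lo + i * h) for i in range(N + 1)]
        return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return trapz(a, b) + trapz(-d, -c)

def T_disc(n, f, r, s, q, K=80):
    """Truncated lattice functional T_n^{(r,s)} from (88)."""
    total = 0.0
    for k in range(-K, K + 1):
        xp, xn = r * q**k, -s * q**k
        total += r * q**k * xp**n * f(xp) + s * q**k * xn**n * f(xn)
    return total

def T_mixed(n, f, a, b, c, d, r, s, q):
    """The mixed functional T^(n) = K_n + T_n^{(r,s)} of (103)."""
    return K_cont(n, f, a, b, c, d) + T_disc(n, f, r, s, q)

f = lambda x: math.exp(-x * x)        # placeholder; not a solution of (60)
val = T_mixed(2, f, a=1.0, b=2.0, c=1.0, d=2.0, r=0.7, s=0.9, q=0.5)
assert math.isfinite(val) and val > 0
```

Linearity of the quadrature and of the lattice sum carries over to the mixed functional, which is all that is needed to pass from (60) to the moment equation (105).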


## Acknowledgements

Special thanks are given to Harald Markum for raising interest in relations between noncommutativity and discretization, in particular in the context of Lie algebras. His talk during the Easter Conference in Neustift/Novacella in late March 2010 is greatly appreciated, and the discussions with him which followed during the Schladming Winter School 2011 have been of great scientific value. Moreover, special thanks are given to Michael Wilkinson for questions and encouraging words during the International Conference ‘Let’s Face Chaos Through Nonlinear Dynamics’ in Maribor in July 2011. Discussions with Sergei Suslov during a research period in Linz and Vienna in October 2011 are highly appreciated.

## Author information


### Corresponding author

Correspondence to Andreas Ruffing.

### Competing interests

The authors declare that they have no competing interests.

### Authors’ contributions

All authors have equally contributed to the scientific results which have been found in this article.

## Cite this article

Birk, L., Roßkopf, S. & Ruffing, A. Difference-differential operators for basic adaptive discretizations and their central function systems. Adv Differ Equ 2012, 151 (2012). https://doi.org/10.1186/1687-1847-2012-151 