Jordan decomposition and geometric multiplicity for a class of non-symmetric Ornstein-Uhlenbeck operators

Advances in Difference Equations 2014, 2014:34

https://doi.org/10.1186/1687-1847-2014-34

  • Received: 8 November 2013
  • Accepted: 6 January 2014
  • Published:

Abstract

In this paper, we calculate the Jordan decomposition for a class of non-symmetric Ornstein-Uhlenbeck operators whose drift coefficient matrix is a Jordan block and whose diffusion coefficient matrix is a constant multiple of the identity. For the 2-dimensional case, we present all the generalized eigenfunctions by mathematical induction. For the 3-dimensional case, we divide the calculation of the Jordan decomposition into three steps. The key step is the canonical projection onto the homogeneous Hermite polynomials, followed by the theory of systems of linear equations. Finally, we obtain the geometric multiplicity of the eigenvalues of the Ornstein-Uhlenbeck operator.

Keywords

  • Jordan decomposition
  • 2-dimensional non-symmetric Ornstein-Uhlenbeck operator
  • 3-dimensional non-symmetric Ornstein-Uhlenbeck operator

1 Introduction

For the symmetric Ornstein-Uhlenbeck operator, the eigenfunctions are the well-known Hermite polynomials [1]. The eigenfunctions of a class of finite-dimensional normal but non-symmetric Ornstein-Uhlenbeck operators have recently been found: they are the so-called complex Hermite polynomials [2] (also called the Hermite-Laguerre-Itô polynomials), and the idea there is to decompose the operator into a direct sum of at most 2-dimensional normal Ornstein-Uhlenbeck operators [3]. But if the Ornstein-Uhlenbeck operator is not normal, the general eigenfunctions are still unknown.

In the present paper, we consider the $d$-dimensional ($d\ge 2$) non-symmetric Ornstein-Uhlenbeck process
$$\begin{bmatrix}dX_1(t)\\dX_2(t)\\\vdots\\dX_d(t)\end{bmatrix}=\begin{bmatrix}-c&1&&\\&-c&\ddots&\\&&\ddots&1\\&&&-c\end{bmatrix}\begin{bmatrix}X_1(t)\\X_2(t)\\\vdots\\X_d(t)\end{bmatrix}dt+\sqrt{2\sigma^2}\begin{bmatrix}dB_1(t)\\dB_2(t)\\\vdots\\dB_d(t)\end{bmatrix}.$$
(1.1)
The associated Ornstein-Uhlenbeck operator is
$$A_d=(-cx_1+x_2)\frac{\partial}{\partial x_1}+(-cx_2+x_3)\frac{\partial}{\partial x_2}+\cdots+(-cx_{d-1}+x_d)\frac{\partial}{\partial x_{d-1}}-cx_d\frac{\partial}{\partial x_d}+\sigma^2\Bigl(\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}+\cdots+\frac{\partial^2}{\partial x_d^2}\Bigr).$$
(1.2)
Denote the drift matrix by $B=-c\,\mathrm{Id}+R$, with Id the identity and $R$ the nilpotent matrix with ones on the superdiagonal. Clearly, $A_d$ is a non-symmetric operator, since $B$ does not satisfy the reversibility^a condition for Ornstein-Uhlenbeck operators [4],
$$BQ=QB^{*},$$

where $Q=2\sigma^2\,\mathrm{Id}$ is the diffusion coefficient matrix and $B^{*}$ is the transpose matrix of $B$.
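The failure of the reversibility condition is easy to confirm numerically. The following pure-Python sketch (our own illustration; the values $c=1$, $\sigma^2=1$ and $d=3$ are arbitrary test choices, not from the paper) checks that $BQ\neq QB^{*}$ for the drift matrix above.

```python
# Check that B = -c*Id + R fails the reversibility condition B Q = Q B^T
# for Q = 2*sigma^2*Id; equivalent to B not being symmetric.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

c, sigma2, d = 1.0, 1.0, 3
B = [[-c if i == j else (1.0 if j == i + 1 else 0.0) for j in range(d)]
     for i in range(d)]
Q = [[2 * sigma2 if i == j else 0.0 for j in range(d)] for i in range(d)]

lhs = matmul(B, Q)
rhs = matmul(Q, transpose(B))
print(lhs == rhs)  # False: the operator is non-symmetric
```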

The associated Markov semigroup $(T(t))_{t\ge0}$ on the Banach space of bounded measurable functions is
$$(T(t)f)(x)=\frac{1}{(4\pi)^{d/2}(\det Q_t)^{1/2}}\int_{\mathbb{R}^d}e^{-\langle Q_t^{-1}y,\,y\rangle/4}f\bigl(e^{tB}x-y\bigr)\,dy,$$
(1.3)
where
$$Q_t=\sigma^2\int_0^t e^{sB}e^{sB^{*}}\,ds.$$
It is well known that $(T(t))_{t\ge0}$ extends to a strongly continuous semigroup of positive contractions on the Hilbert space $L^2_\mu=L^2(\mathbb{R}^d,d\mu)$, where $\mu$ is the unique invariant measure [5, 6]. We still denote by $(A_d,D)$ the generator of $(T(t))_{t\ge0}$ in $L^2_\mu$; it was shown in [7] that the spectrum consists of eigenvalues of finite multiplicity, $\sigma(A_d)=\{-nc:n\in\mathbb{N}\}$, and that all the generalized eigenfunctions are polynomials and form a complete system in $L^2_\mu$. Let $\gamma=-nc$. It follows from [[7], Theorem 4.1] that the algebraic multiplicity of $\gamma$ is
$$k_{A_d}(\gamma)=\binom{n+d-1}{d-1},$$
(1.4)
and it follows from [[7], Proposition 4.3, Theorem 4.1] that $\nu_{A_d}(\gamma)$, the index of the eigenvalue $\gamma$, is
$$\nu_{A_d}(\gamma)=1+(d-1)n.$$
(1.5)
A natural question is: what is the geometric multiplicity of the eigenvalue $\gamma$? In addition, since the spectral subspace associated with $\gamma$ (i.e., $\operatorname{Ker}(\gamma-A_d)^{\nu_{A_d}(\gamma)}$) is a finite-dimensional vector space over the real field $\mathbb{R}$, what is the Jordan decomposition (or Jordan canonical form) [8, 9] of $A_d$ restricted to this spectral subspace? That is to say, what are the integers $r\ge0$, $0<q_r\le q_{r-1}\le\cdots\le q_1\le q_0\le\nu_{A_d}(\gamma)$ and the generalized eigenfunctions $f_r,f_{r-1},\ldots,f_1,f_0$ such that
$$\{f_k,\ (\gamma-A_d)f_k,\ \ldots,\ (\gamma-A_d)^{q_k-1}f_k : k=0,1,\ldots,r\}$$
(1.6)
forms a basis of the spectral subspace associated to $\gamma$, and
$$(\gamma-A_d)^{q_k}f_k=0,\quad k=0,1,\ldots,r?$$
(1.7)

The integers $(q_r,q_{r-1},\ldots,q_1,q_0)$ are also called the Segre characteristic (or Segre type, Segre notation). $f_k$ is called a lead vector (or cyclic vector, or generator) of the Jordan chain $\{f_k,(\gamma-A_d)f_k,\ldots,(\gamma-A_d)^{q_k-1}f_k\}$ by some authors [9, 10].

In the present paper, we present an approach to calculating the Jordan decomposition and the generalized eigenfunctions (see Theorems 2.1, 3.1) for $d=2,3$.^b The proof of Theorem 2.1 is by direct calculation. The main techniques in the proof of Theorem 3.1 are the canonical projection and the theory of systems of linear equations. As far as we know, this approach to the Jordan decomposition of differential operators is new.

It is a difficult problem to obtain the geometric multiplicity of an eigenvalue of a differential operator from the perspective of functional analysis. It is well known that the spectral theory of the symmetric Ornstein-Uhlenbeck operator is a starting point of stochastic analysis (more precisely, of Malliavin calculus); thus analogous results for the non-symmetric Ornstein-Uhlenbeck operator are of interest. We will treat more general non-symmetric Ornstein-Uhlenbeck operators in the future.

2 The case of dimension 2

In this section, we treat the case $d=2$. Denote $\rho=\sigma^2/c$. The Hermite polynomials [1] are defined by the formula
$$H_n(x,\rho)=(-\rho)^n e^{x^2/2\rho}\frac{d^n}{dx^n}e^{-x^2/2\rho},\quad n=0,1,2,\ldots.$$
Clearly, they have the power series expression
$$H_n(x,\rho)=\sum_{k=0}^{[n/2]}\binom{n}{2k}(2k-1)!!\,x^{n-2k}(-\rho)^k,$$
and satisfy
$$\frac{d}{dx}H_n(x,\rho)=nH_{n-1}(x,\rho),\qquad H_{n+1}(x,\rho)=xH_n(x,\rho)-n\rho H_{n-1}(x,\rho),\qquad \Bigl(\rho\frac{d^2}{dx^2}-x\frac{d}{dx}\Bigr)H_n(x,\rho)=-nH_n(x,\rho).$$
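The three identities above can be checked mechanically. The following pure-Python sketch (our own illustration, with the arbitrary test value $\rho=1/2$) generates $H_n(x,\rho)$ from the three-term recurrence and verifies the first and third identities for small $n$.

```python
# Polynomials in x are coefficient lists [a0, a1, ...]; exact arithmetic
# via Fraction so the identities are verified without rounding error.
from fractions import Fraction

rho = Fraction(1, 2)  # arbitrary positive test value for rho = sigma^2/c

def H(n):
    """Hermite polynomial H_n(x, rho) via H_{k+1} = x H_k - k rho H_{k-1}."""
    a, b = [Fraction(1)], [Fraction(0), Fraction(1)]  # H_0, H_1
    if n == 0:
        return a
    for k in range(1, n):
        nxt = [Fraction(0)] + b                       # x * H_k
        for i, ci in enumerate(a):
            nxt[i] -= k * rho * ci                    # - k rho H_{k-1}
        a, b = b, nxt
    return b

def deriv(p):
    return [i * ci for i, ci in enumerate(p)][1:] or [Fraction(0)]

def padd(p, q):
    L = max(len(p), len(q))
    return [x + y for x, y in zip(p + [Fraction(0)] * (L - len(p)),
                                  q + [Fraction(0)] * (L - len(q)))]

for n in range(1, 8):
    assert deriv(H(n)) == [n * c for c in H(n - 1)]   # H_n' = n H_{n-1}
    d1 = deriv(H(n))
    lhs = padd([rho * c for c in deriv(d1)],          # rho H_n''
               [-c for c in [Fraction(0)] + d1])      # - x H_n'
    assert lhs == [-n * c for c in H(n)]              # = -n H_n
print("Hermite identities hold for n = 1..7")
```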
Theorem 2.1 The geometric multiplicity of the eigenvalue $\gamma$ is 1. Set
$$G_i(x)=\sum_{j=0}^{[i/2]}\frac{1}{2^j}\frac{1}{j!\,(i-2j)!}\Bigl(-\frac{\rho}{2c^2}\Bigr)^jH_{i-2j}(x),\quad i=0,1,\ldots.$$
(2.1)
Let $f=G_n$. Then $\{f,(\gamma-A_2)f,(\gamma-A_2)^2f,\ldots,(\gamma-A_2)^nf\}$ forms a basis of the spectral subspace associated to $\gamma$. Moreover,
$$(\gamma-A_2)^kf=(-1)^k\sum_{i=0}^{n-k}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k-i}\binom{k}{n-k-i}G_i(x)H_{2k-n+i}(y),$$
(2.2)

where $k=0,1,\ldots,n$ and $H_l(y)=0$ when $l<0$. In particular, $(\gamma-A_2)^nf=(-1)^nH_n(y)$ is the eigenfunction associated to $\gamma$.

Proof We need only prove Eq. (2.2). It is easy to check that $G_i(x)$ satisfies the recursion relation
$$(-ic-A_2)G_i(x)=-yG_{i-1}(x)+\frac{\rho}{2c}G_{i-2}(x),\qquad G_0(x)=1,\quad G_1(x)=x.$$
(2.3)
Clearly, for any differentiable functions $h(x)$, $g(y)$, we have
$$(\gamma-A_2)\bigl(h(x)g(y)\bigr)=g(y)(\gamma-A_2)h(x)-c\,h(x)\Bigl(\rho\frac{d^2}{dy^2}-y\frac{d}{dy}\Bigr)g(y).$$
Then, by the properties of the Hermite polynomials [1], we have
$$\begin{aligned}
(\gamma-A_2)\bigl(G_i(x)H_{2k-n+i}(y)\bigr)&=H_{2k-n+i}(y)\Bigl[-yG_{i-1}(x)+\frac{\rho}{2c}G_{i-2}(x)+(i-n)cG_i(x)\Bigr]\\
&\quad+c(2k-n+i)G_i(x)H_{2k-n+i}(y)\qquad\text{(by (2.3))}\\
&=H_{2k-n+i}(y)\Bigl[-yG_{i-1}(x)+\frac{\rho}{2c}G_{i-2}(x)+2(i+k-n)cG_i(x)\Bigr]\\
&=-G_{i-1}(x)\bigl[H_{2k-n+i+1}(y)+(2k-n+i)\rho H_{2k-n+i-1}(y)\bigr]\\
&\quad+H_{2k-n+i}(y)\Bigl[\frac{\rho}{2c}G_{i-2}(x)+2(i+k-n)cG_i(x)\Bigr].
\end{aligned}$$
By mathematical induction, we have
$$\begin{aligned}
(-1)^{k+1}(\gamma-A_2)^{k+1}f&=-\sum_{i=0}^{n-k}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k-i}\binom{k}{n-k-i}(\gamma-A_2)\bigl(G_i(x)H_{2k-n+i}(y)\bigr)\\
&=\sum_{i=1}^{n-k}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k-i}\binom{k}{n-k-i}G_{i-1}(x)\bigl[H_{2k-n+i+1}(y)+(2k-n+i)\rho H_{2k-n+i-1}(y)\bigr]\\
&\quad+\sum_{i=2}^{n-k}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k+1-i}\binom{k}{n-k-i}G_{i-2}(x)H_{2k-n+i}(y)\\
&\quad-\rho\sum_{i=0}^{n-k-1}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k-1-i}(n-i-k)\binom{k}{n-k-i}G_i(x)H_{2k-n+i}(y)\\
&=\sum_{i=0}^{n-k-1}\Bigl(-\frac{\rho}{2c}\Bigr)^{n-k-i-1}\binom{k+1}{n-k-i-1}G_i(x)H_{2(k+1)-n+i}(y).
\end{aligned}$$

 □

3 The case of dimension 3

In this section, we treat the case $d=3$. For convenience, we first fix some notation. Let $\mathcal{P}$ denote the space of all polynomials in the variables $(x,y,z)$, $\mathcal{P}_n$ the polynomials of degree at most $n$, and $\mathcal{H}_n$ the homogeneous polynomials of degree $n$. Then $\mathcal{P}=\bigcup_n\mathcal{P}_n$, and one has the usual direct sum decomposition of polynomials [7],
$$\mathcal{P}_n=\bigoplus_{m=0}^{n}\mathcal{H}_m.$$
(3.1)

Notation 1 Set $\gamma=-nc$, $r=[\frac{n}{2}]$ and $\rho=\sigma^2/c$.

By the expansion of monomials in Hermite polynomials [1],
$$x^n=\sum_{k=0}^{[n/2]}\binom{n}{2k}(2k-1)!!\,\rho^kH_{n-2k}(x),$$

the Hermite polynomials form another basis of $\mathcal{P}$.

Let $\widetilde{\mathcal{H}}_m=\operatorname{span}\{H_i(x)H_j(y)H_k(z):i+j+k=m\}$; then we have another direct sum decomposition of the polynomials, i.e.,
$$\mathcal{P}=\bigcup_n\mathcal{P}_n,\qquad \mathcal{P}_n=\bigoplus_{m=0}^{n}\widetilde{\mathcal{H}}_m.$$
(3.2)

We denote by $Q_m$ the canonical projection [11] of $\mathcal{P}$ onto $\widetilde{\mathcal{H}}_m$.

Theorem 3.1 Let
$$q_k=2n+1-4k,\quad k=0,1,2,\ldots,r.$$
(3.3)
Then there exist $\{f_k:k=0,\ldots,r\}$ such that
$$\{f_k,\ (\gamma-A_3)f_k,\ \ldots,\ (\gamma-A_3)^{q_k-1}f_k : k=0,\ldots,r\}$$
forms a basis of the spectral subspace associated to $\gamma$. Set $h_k=(\gamma-A_3)^{q_k-1}f_k$. Then $\{h_k:k=0,1,2,\ldots,r\}$ is a basis of the eigenspace of the eigenvalue $\gamma$ and satisfies
$$Q_nh_k=\sum_{i=0}^{k}(-2)^{k-i}\binom{k}{i}H_{k-i}(x)H_{2i}(y)H_{n-k-i}(z).$$
(3.4)

The proof of Theorem 3.1 is presented in Section 3.1. The following is a by-product.

Corollary 3.2 The geometric multiplicity of the eigenvalue $\gamma$ of the Ornstein-Uhlenbeck operator $A_3$ is $r+1$.

3.1 Proof of Theorem 3.1

Note that
$$A_3=c\Bigl(\rho\frac{\partial^2}{\partial x^2}-x\frac{\partial}{\partial x}\Bigr)+c\Bigl(\rho\frac{\partial^2}{\partial y^2}-y\frac{\partial}{\partial y}\Bigr)+c\Bigl(\rho\frac{\partial^2}{\partial z^2}-z\frac{\partial}{\partial z}\Bigr)+y\frac{\partial}{\partial x}+z\frac{\partial}{\partial y}.$$
It follows from the properties of the Hermite polynomials [1] that, with $m=i+j+k$,
$$(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)=(m-n)cH_i(x)H_j(y)H_k(z)-iH_{i-1}(x)\bigl[H_{j+1}(y)+j\rho H_{j-1}(y)\bigr]H_k(z)-jH_i(x)H_{j-1}(y)\bigl[H_{k+1}(z)+k\rho H_{k-1}(z)\bigr].$$
(3.5)

For convenience, Eq. (3.5) can be rewritten in the following way.

Proposition 3.3 If $\varphi\in\widetilde{\mathcal{H}}_m$, then $(\gamma-A_3)\varphi=Q_m(\gamma-A_3)\varphi+Q_{m-2}(\gamma-A_3)\varphi$. In particular,
$$(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)=Q_m(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)+Q_{m-2}(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr),$$
(3.6)
where $m=i+j+k$,
$$Q_m(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)=(m-n)cH_i(x)H_j(y)H_k(z)-iH_{i-1}(x)H_{j+1}(y)H_k(z)-jH_i(x)H_{j-1}(y)H_{k+1}(z),$$
(3.7)
and
$$Q_{m-2}(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)=-ij\rho H_{i-1}(x)H_{j-1}(y)H_k(z)-jk\rho H_i(x)H_{j-1}(y)H_{k-1}(z).$$
(3.8)
In particular, if $m=n$, then
$$Q_n(\gamma-A_3)\bigl(H_i(x)H_j(y)H_k(z)\bigr)=-iH_{i-1}(x)H_{j+1}(y)H_k(z)-jH_i(x)H_{j-1}(y)H_{k+1}(z).$$
(3.9)
Remark 1 Equations (3.7)-(3.9) invite the terminology of graph theory. In fact, by Eq. (3.9), we obtain a weighted directed acyclic graph (which can also be seen as a Hasse diagram) describing the evolution of the basis of $\widetilde{\mathcal{H}}_n$ under $Q_n(\gamma-A_3)$. For example, when $n=3$ and $n=4$,^c the directed acyclic graphs are, respectively,

where we denote by the triple of integers $(i,j,k)$ the Hermite polynomial $H_i(x)H_j(y)H_k(z)$. It follows from Eq. (3.9) that the weights of the arrows from $(i,j,k)$ to $(i,j-1,k+1)$ and to $(i-1,j+1,k)$ are $-j$ and $-i$, respectively.

One can read off many properties from the directed acyclic graph. For example, for the vertex $(i,j,k)$ with $i+j+k=n$, the height (defined as the distance from the vertex $(n,0,0)$ to $(i,j,k)$, and thus between 0 and $2n$) is $h=j+2k$. For simplicity, at each height of the graph we list the vertices $(i,j,k)$ in decreasing lexicographic order. Then the vertices $(i,j,k)$ and $(k,j,i)$ are symmetric about the $n$-th height of the graph.

It follows from Eq. (1.4), Corollary 3.2, and Theorem 3.1 that the order of the graph (the number of its vertices) is the algebraic multiplicity of $\gamma$, that $q_k$ is 1 plus the distance between the vertices $(n-k,0,k)$ and $(k,0,n-k)$, and that the number of vertices at the $n$-th height of the graph is the geometric multiplicity of $\gamma$.
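These counts are easy to confirm programmatically. The sketch below (our own illustration) builds the weighted directed acyclic graph of Remark 1 from Eq. (3.9) and checks, for $n=4$, that the order of the graph is the algebraic multiplicity $\binom{n+2}{2}$ and that the $n$-th height contains exactly $r+1$ vertices.

```python
def dag(n):
    """Vertices (i, j, k) with i + j + k = n, weighted edges per Eq. (3.9)."""
    verts = [(i, j, n - i - j) for i in range(n + 1) for j in range(n + 1 - i)]
    edges = {}
    for (i, j, k) in verts:
        if i > 0:
            edges[((i, j, k), (i - 1, j + 1, k))] = -i
        if j > 0:
            edges[((i, j, k), (i, j - 1, k + 1))] = -j
    return verts, edges

n = 4
verts, edges = dag(n)
r = n // 2
assert len(verts) == (n + 1) * (n + 2) // 2       # order = algebraic multiplicity
mid = [v for v in verts if v[1] + 2 * v[2] == n]  # height h = j + 2k equal to n
assert len(mid) == r + 1                          # = geometric multiplicity
print(sorted(mid, reverse=True))                  # -> [(2, 0, 2), (1, 2, 1), (0, 4, 0)]
```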

Proposition 3.4 The spectral subspace associated to $\gamma=-nc$ is contained in $\bigoplus_{i=0}^{[n/2]}\widetilde{\mathcal{H}}_{n-2i}$.

Proof Suppose that $f$ is a generalized eigenfunction, i.e., there exists an integer $k\ge1$ such that $(\gamma-A_3)^kf=0$. It follows from Eq. (3.7) that if the degree of $f$ is $m\neq n$, then $Q_m(\gamma-A_3)^kf\neq0$, a contradiction; hence $f\in\mathcal{P}_n$ and the degree of $f$ is exactly $n$.^d If there were an $i=0,1,\ldots,[\frac{n-1}{2}]$ such that $Q_{n-1-2i}f\neq0$, then by Eq. (3.6) we would have $Q_{n-1-2i}(\gamma-A_3)^kf\neq0$, a contradiction; hence $f\in\bigoplus_{i=0}^{[n/2]}\widetilde{\mathcal{H}}_{n-2i}$. □

Lemma 3.5 For any polynomial $g\in\widetilde{\mathcal{H}}_m$ with $m\neq n$, there exists a unique solution $f\in\widetilde{\mathcal{H}}_m$ of the equation $Q_m(\gamma-A_3)f=g$.

Proof Suppose that
$$g=\sum_{i+j+k=m}b_{ijk}H_i(x)H_j(y)H_k(z)$$
and $f=\sum_{i+j+k=m}a_{ijk}H_i(x)H_j(y)H_k(z)$. It follows from Eq. (3.7) that
$$Q_m(\gamma-A_3)f=\sum_{i+j+k=m}a_{ijk}\bigl[(m-n)cH_i(x)H_j(y)H_k(z)-iH_{i-1}(x)H_{j+1}(y)H_k(z)-jH_i(x)H_{j-1}(y)H_{k+1}(z)\bigr]=\sum_{i+j+k=m}b_{ijk}H_i(x)H_j(y)H_k(z).$$

By the linear independence of $\{H_i(x)H_j(y)H_k(z):i+j+k=m\}$, we obtain a system of $\binom{m+2}{2}$ linear equations in $\binom{m+2}{2}$ unknowns.

We sort $\{H_i(x)H_j(y)H_k(z):i+j+k=m\}$ in the order in which they appear in the directed acyclic graph of Remark 1. The coefficient matrix of the linear system is then lower triangular with nonzero diagonal entries $(m-n)c$. Thus the system has a unique solution. □

Suppose that $f=\sum_{s=0}^{[m/2]}f_{m-2s}$ with $f_s\in\widetilde{\mathcal{H}}_s$. It follows from Eq. (3.6) that
$$(\gamma-A_3)f=(\gamma-A_3)\sum_{s=0}^{[m/2]}f_{m-2s}=\sum_{s=0}^{[m/2]}\bigl[Q_{m-2s}(\gamma-A_3)f_{m-2s}+Q_{m-2s-2}(\gamma-A_3)f_{m-2s}\bigr]=Q_m(\gamma-A_3)f_m+\sum_{s=1}^{[m/2]}Q_{m-2s}(\gamma-A_3)\bigl[f_{m-2s}+f_{m+2-2s}\bigr].$$
Thus the equation $(\gamma-A_3)f=g$ is equivalent to the system of equations
$$Q_m(\gamma-A_3)f_m=g,$$
(3.10)
$$Q_{m-2s}(\gamma-A_3)f_{m-2s}=-Q_{m-2s}(\gamma-A_3)f_{m+2-2s},\quad s=1,2,\ldots,[m/2].$$
(3.11)

It follows from Lemma 3.5 that when $m\neq n$, Eq. (3.10) has a unique solution $f_m\in\widetilde{\mathcal{H}}_m$, and when $m-2s\neq n$, Eq. (3.11) has a unique solution $f_{m-2s}\in\widetilde{\mathcal{H}}_{m-2s}$.

Clearly, if f satisfies ( γ A 3 ) f = g , so does f + h where h is any eigenfunction of A 3 associated to γ. Thus we have the following proposition.

Proposition 3.6 For any $g\in\mathcal{P}$ with $Q_{n+2s}g=0$, $s=0,1,2,\ldots$, there exist solutions $f\in\mathcal{P}$ of the equation $(\gamma-A_3)f=g$. In addition, if $f$ is required to have the same degree as $g$, then the solution is unique.

Proposition 3.7 Set $\psi_k=\sum_{i=0}^{k}(-2)^{k-i}\binom{k}{i}H_{k-i}(x)H_{2i}(y)H_{n-k-i}(z)$, $k=0,1,\ldots,r$. Then $\psi_k$ satisfies the equation $Q_n(\gamma-A_3)\psi_k=0$.

Proof Suppose that $\psi_k=\sum_{i=0}^{k}a_iH_{k-i}(x)H_{2i}(y)H_{n-k-i}(z)$. By Eq. (3.9), we have
$$Q_n(\gamma-A_3)\psi_k=\sum_{i=0}^{k}a_i\bigl[-(k-i)H_{k-i-1}(x)H_{2i+1}(y)H_{n-k-i}(z)-2iH_{k-i}(x)H_{2i-1}(y)H_{n-k-i+1}(z)\bigr]=0.$$
By the linear independence of $\{H_i(x)H_j(y)H_k(z)\}$, this gives a system of $k$ homogeneous linear equations in the $k+1$ unknowns $a_0,\ldots,a_k$, with coefficient matrix
$$M_k=\begin{bmatrix}k&2&&&&\\&k-1&4&&&\\&&k-2&6&&\\&&&\ddots&\ddots&\\&&&&2&2(k-1)\\&&&&&1\quad 2k\end{bmatrix}.$$

Clearly, the solution space is 1-dimensional, and $a_i=(-2)^{k-i}\binom{k}{i}$, $i=0,\ldots,k$, is a solution. □
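The kernel computation can be double-checked directly: the bidiagonal system read off from $M_k$ is $(k-i)a_i+2(i+1)a_{i+1}=0$, $i=0,\ldots,k-1$, and the stated solution satisfies it.

```python
# Verify that a_i = (-2)**(k-i) * C(k, i) solves the bidiagonal system
# (k - i) a_i + 2(i + 1) a_{i+1} = 0 for each row of M_k.
from math import comb

for k in range(0, 8):
    a = [(-2)**(k - i) * comb(k, i) for i in range(k + 1)]
    assert all((k - i) * a[i] + 2 * (i + 1) * a[i + 1] == 0 for i in range(k))
print("kernel vector confirmed for k = 0..7")
```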

Now suppose that $h=\psi+\phi$ with $\psi\in\widetilde{\mathcal{H}}_n$ and $\phi\in\mathcal{P}_{n-2}$; then, by Eq. (3.6),
$$(\gamma-A_3)h=(\gamma-A_3)\psi+(\gamma-A_3)\phi=Q_n(\gamma-A_3)\psi+\bigl[Q_{n-2}(\gamma-A_3)\psi+(\gamma-A_3)\phi\bigr].$$
Therefore, the equation $(\gamma-A_3)h=0$ is equivalent to the two equations
$$Q_n(\gamma-A_3)\psi=0,\quad \psi\in\widetilde{\mathcal{H}}_n,$$
(3.12)
$$(\gamma-A_3)\phi=-Q_{n-2}(\gamma-A_3)\psi,\quad \phi\in\mathcal{P}_{n-2}.$$
(3.13)

By Proposition 3.7, Eq. (3.12) has $1+r$ independent solutions. It follows from Proposition 3.6 that for each $\psi$ there exists a unique $\phi$ satisfying Eq. (3.13). Thus we have the following corollary.

Corollary 3.8 The geometric multiplicity of the eigenvalue $\gamma=-nc$ is greater than or equal to $1+r$ (i.e., there are at least $1+r$ independent solutions of the equation $(\gamma-A_3)h=0$).

Denote by $h_k$, $k=0,1,\ldots,r$, the solutions of the equation $(\gamma-A_3)h=0$ given by Corollary 3.8. Clearly, $h_k=\psi_k+(\mathrm{Id}-Q_n)h_k$, where $\psi_k\in\widetilde{\mathcal{H}}_n$ is as in Proposition 3.7.

Suppose that $f_k=\varphi_k+g_k$ with $\varphi_k\in\widetilde{\mathcal{H}}_n$ and $g_k\in\mathcal{P}_{n-2}$; then
$$(\gamma-A_3)^{q_k-1}f_k=(\gamma-A_3)^{q_k-1}(\varphi_k+g_k)=Q_n(\gamma-A_3)^{q_k-1}\varphi_k+(\mathrm{Id}-Q_n)(\gamma-A_3)^{q_k-1}\varphi_k+(\gamma-A_3)^{q_k-1}g_k.$$
Therefore, the equation $(\gamma-A_3)^{q_k-1}f_k=h_k$ is equivalent to the two equations
$$Q_n(\gamma-A_3)^{q_k-1}\varphi_k=\psi_k,$$
(3.14)
$$(\gamma-A_3)^{q_k-1}g_k=(\mathrm{Id}-Q_n)h_k-(\mathrm{Id}-Q_n)(\gamma-A_3)^{q_k-1}\varphi_k.$$
(3.15)

Note that $\psi_k=\sum_{i=0}^{k}b_iH_{k-i}(x)H_{2i}(y)H_{n-k-i}(z)$. Set $\varphi_k=\sum_{i=0}^{k}a_iH_{n-k-i}(x)H_{2i}(y)H_{k-i}(z)$; then the l.h.s. of Eq. (3.14) defines a linear mapping from $\operatorname{span}\{H_{n-k-i}(x)H_{2i}(y)H_{k-i}(z):i=0,\ldots,k\}$ to $\operatorname{span}\{H_{k-i}(x)H_{2i}(y)H_{n-k-i}(z):i=0,\ldots,k\}$. This linear mapping is the evolution from the $2k$-th height to the $2(n-k)$-th height of the directed acyclic graph in Remark 1, and in the natural bases it is represented by a $(k+1)$-square matrix $S_{r-k}$ (a product of certain matrices; for details, see Section 3.2). By Proposition 3.10, the matrix $S_{r-k}$ is nonsingular, which implies that Eq. (3.14) has a solution. Since $g_k,(\mathrm{Id}-Q_n)h_k\in\mathcal{P}_{n-2}$, it follows from Proposition 3.6 that Eq. (3.15) has a solution. Then we have the following proposition.

Proposition 3.9 There exists an $f_k\in\mathcal{P}_n$ such that $(\gamma-A_3)^{q_k-1}f_k=h_k$.

Proof of Theorem 3.1 Note that the $\psi_k=Q_nh_k$ of Proposition 3.7 are linearly independent, hence so are the eigenfunctions $h_k$. Let $\{f_k:k=0,1,\ldots,r\}$ be as in Proposition 3.9. Then the generalized eigenfunctions $\{(\gamma-A_3)^jf_k:j=0,1,\ldots,q_k-1,\ k=0,1,\ldots,r\}$ are linearly independent (see the proof of [[12], p.264, Theorem 6.2]).

Note that $q_k=2n+1-4k$; then the algebraic multiplicity of the eigenvalue $\gamma$ decomposes as
$$\binom{n+2}{2}=\sum_{k=0}^{r}(2n+1-4k)=\sum_{k=0}^{r}q_k.$$
(3.16)

Thus $\{(\gamma-A_3)^jf_k:j=0,1,\ldots,q_k-1,\ k=0,1,\ldots,r\}$ forms a basis of the spectral subspace associated to $\gamma$. Together with Corollary 3.8, we see that the geometric multiplicity of the eigenvalue $\gamma$ equals $r+1$ (otherwise, the algebraic multiplicity would be greater than $\binom{n+2}{2}$). Then $\{h_k:k=0,\ldots,r\}$ forms a basis of the eigenspace of $\gamma$. Equation (3.4) is exactly the conclusion of Proposition 3.7. □
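The decomposition (3.16) of the algebraic multiplicity is a one-line arithmetic check:

```python
# Sum of the Segre characteristic q_k = 2n + 1 - 4k over k = 0..r equals
# the algebraic multiplicity C(n+2, 2), for n even and odd alike.
from math import comb

for n in range(0, 12):
    r = n // 2
    assert sum(2 * n + 1 - 4 * k for k in range(r + 1)) == comb(n + 2, 2)
print("decomposition (3.16) checked for n = 0..11")
```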

3.2 The linear mapping represented as a product of matrices

For example, by Eq. (3.9), when $n$ is odd the evolution from the $(n-1)$-th height to the $n$-th height (respectively, from the $n$-th to the $(n+1)$-th height) of the directed acyclic graph is
$$Q_n(\gamma-A_3)\sum_{i=0}^{r}a_iH_{r+1-i}(x)H_{2i}(y)H_{r-i}(z)=\sum_{i=0}^{r}a_i\bigl(-(r+1-i)H_{r-i}(x)H_{1+2i}(y)H_{r-i}(z)-2iH_{r+1-i}(x)H_{2i-1}(y)H_{r+1-i}(z)\bigr)=\sum_{i=0}^{r}\bigl(-(r+1-i)a_i-(2+2i)a_{i+1}\bigr)H_{r-i}(x)H_{1+2i}(y)H_{r-i}(z)\quad(\text{where }a_{r+1}=0),$$
$$Q_n(\gamma-A_3)\sum_{i=0}^{r}b_iH_{r-i}(x)H_{1+2i}(y)H_{r-i}(z)=\sum_{i=0}^{r}\bigl(-(r+1-i)b_{i-1}-(1+2i)b_i\bigr)H_{r-i}(x)H_{2i}(y)H_{r+1-i}(z)\quad(\text{where }b_{-1}=0).$$
Then, up to sign, the matrices associated to these linear mappings are $D_0$, $A_0$ (see below); since each composite evolution considered below consists of an even number of steps, these signs cancel and we omit them. When $n$ is even, the evolution from the $(n-1)$-th height to the $n$-th height (respectively, from the $n$-th to the $(n+1)$-th height) of the directed acyclic graph is
$$Q_n(\gamma-A_3)\sum_{i=0}^{r-1}c_iH_{r-i}(x)H_{2i+1}(y)H_{r-1-i}(z)=\sum_{i=0}^{r}\bigl(-(r+1-i)c_{i-1}-(1+2i)c_i\bigr)H_{r-i}(x)H_{2i}(y)H_{r-i}(z)\quad(\text{where }c_{-1}=c_r=0),$$
$$Q_n(\gamma-A_3)\sum_{i=0}^{r}d_iH_{r-i}(x)H_{2i}(y)H_{r-i}(z)=\sum_{i=0}^{r-1}\bigl(-(r-i)d_i-(2+2i)d_{i+1}\bigr)H_{r-i-1}(x)H_{2i+1}(y)H_{r-i}(z).$$

Then, up to sign, the matrices associated to these linear mappings are $B_1$, $C_1$ (see below).

The others are similar. In general, the matrix $S_{r-k}$ (see the discussion before Proposition 3.9) associated with the evolution from the $2k$-th height to the $2(n-k)$-th height of the directed acyclic graph is given as follows.

Proposition 3.10 Let $r=[\frac{n}{2}]$. If $n$ is odd, let $S_k$ be the $(r+1-k)$-square matrix given by
$$S_0=D_0A_0,\qquad S_k=D_kC_kS_{k-1}B_kA_k,\quad k=1,2,\ldots,r,$$
(3.17)
where $D_k$, $A_k$ are the $(r+1-k)$-square bidiagonal matrices
$$A_k=\begin{bmatrix}r+k+1&2&&&\\&r+k&4&&\\&&r+k-1&\ddots&\\&&&\ddots&n-(2k+1)\\&&&&2k+1\end{bmatrix},$$
(3.18)
and
$$D_k=\begin{bmatrix}1&&&&\\r-k&3&&&\\&r-k-1&5&&\\&&\ddots&\ddots&\\&&&1&n-2k\end{bmatrix},$$
(3.19)
and $B_k$, $C_k$ are the $(r+2-k)\times(r+1-k)$ and $(r+1-k)\times(r+2-k)$ bidiagonal matrices
$$B_k=\begin{bmatrix}1&&&&\\r+k&3&&&\\&r+k-1&5&&\\&&\ddots&\ddots&\\&&&2k+1&n-2k\\&&&&2k\end{bmatrix},$$
(3.20)
and
$$C_k=\begin{bmatrix}r-k+1&2&&&&\\&r-k&4&&&\\&&r-k-1&6&&\\&&&\ddots&\ddots&\\&&&&1&n-2k+1\end{bmatrix}.$$
(3.21)
If $n$ is even, let $S_k$ be the $(r+1-k)$-square matrix given by
$$S_0=\mathrm{Id}_{r+1},\qquad S_k=D_kC_kS_{k-1}B_kA_k,\quad k=1,2,\ldots,r,$$
(3.22)
where $D_k$, $A_k$ are the $(r+1-k)$-square bidiagonal matrices
$$A_k=\begin{bmatrix}r+k&2&&&\\&r+k-1&4&&\\&&r+k-2&\ddots&\\&&&\ddots&n-2k\\&&&&2k\end{bmatrix},$$
(3.23)
and
$$D_k=\begin{bmatrix}1&&&&\\r-k&3&&&\\&r-k-1&5&&\\&&\ddots&\ddots&\\&&&1&n+1-2k\end{bmatrix},$$
(3.24)
and $B_k$, $C_k$ are the $(r+2-k)\times(r+1-k)$ and $(r+1-k)\times(r+2-k)$ bidiagonal matrices
$$B_k=\begin{bmatrix}1&&&&\\r+k-1&3&&&\\&r+k-2&5&&\\&&\ddots&\ddots&\\&&&2k&n+1-2k\\&&&&2k-1\end{bmatrix},$$
(3.25)
and
$$C_k=\begin{bmatrix}r-k+1&2&&&&\\&r-k&4&&&\\&&r-k-1&6&&\\&&&\ddots&\ddots&\\&&&&1&n-2k+2\end{bmatrix}.$$
(3.26)

Then the matrices $S_k$ are nonsingular.

Remark 2 Set the column vector $u_k=\bigl[(-2)^k,\binom{k}{k-1}(-2)^{k-1},\ldots,\binom{k}{2}(-2)^2,\binom{k}{1}(-2),1\bigr]^{\mathsf T}$, where $k=0,1,\ldots,r$. We conjecture that the column vector $u_{r-k}$ is an eigenvector of $S_k$ associated to the eigenvalue $\lambda_k$, defined as follows: when $n$ is odd,
$$\lambda_0=1,\qquad \lambda_k=\frac{2k(2k+1)}{(4k-1)(4k+1)}\lambda_{k-1},\quad k=1,2,\ldots,r;$$
(3.27)
when $n$ is even,
$$\lambda_0=1,\qquad \lambda_k=\frac{2k(2k-1)}{(4k-3)(4k-1)}\lambda_{k-1},\quad k=1,2,\ldots,r.$$
If the conjecture is valid, then we can characterize the lead vectors $f_k$ more precisely, i.e.,
$$Q_nf_k=\sum_{i=0}^{k}(-2)^{k-i}\binom{k}{i}H_{n-k-i}(x)H_{2i}(y)H_{k-i}(z).$$

We apply the notation of [10, 13]. Let $A$ be a $p\times q$ matrix, $\alpha=\{i_1,\ldots,i_s\}$ and $\beta=\{j_1,\ldots,j_t\}$ with $1\le i_1<\cdots<i_s\le p$, $1\le j_1<\cdots<j_t\le q$. Denote by $A[i_1,\ldots,i_s;j_1,\ldots,j_t]$, or simply $A[\alpha,\beta]$, the submatrix of $A$ consisting of the entries in rows $i_1,\ldots,i_s$ and columns $j_1,\ldots,j_t$. Let $A(i_1,\ldots,i_s\,|\,j_1,\ldots,j_t)$ be the submatrix of $A$ obtained by deleting rows $i_1,\ldots,i_s$ and columns $j_1,\ldots,j_t$. For convenience, $A(i_1,\ldots,i_s\,|\,\cdot)$ (resp. $A(\cdot\,|\,j_1,\ldots,j_t)$) means deleting only the rows (resp. columns).

Proof We divide the proof into three steps.

Claim 1: All the minors (i.e., determinants of square submatrices) of the matrices $A_k$, $B_k$, $C_k$, $D_k$, $k=1,\ldots,r$, are nonnegative. Since $A_k$ ($B_k$) and $D_k^{\mathsf T}$ ($C_k^{\mathsf T}$) have the same type, we only need to treat the cases of $A_k$ and $B_k$. A square submatrix of $B_k$ coincides with a square submatrix of some $B_k(i\,|\,\cdot)$, $i=1,\ldots,r+2-k$, which is a direct sum of two matrices [13] of the same type as $D_k$ or $A_k$. Thus we only need to treat the case of $A_k$. In fact, $A_k[i_1,\ldots,i_s;j_1,\ldots,j_s]$ is a direct sum [13] of several triangular matrices with nonnegative entries, which implies that it has a nonnegative determinant.

Claim 2: All the minors of $S_k$ are nonnegative. In fact, we have the following form of the Binet-Cauchy formula:
$$\det(AB)[\alpha,\beta]=\sum_{\kappa}\det A[\alpha,\kappa]\,\det B[\kappa,\beta],$$
(3.28)
where $A$ (resp. $B$) is an $m\times n$ (resp. $n\times m$) matrix, $m\le n$, and $\kappa$ runs over all increasing sequences in $\{1,\ldots,n\}$ of the same length as $\alpha$ and $\beta$. By successively applying Eq. (3.28), we get
$$\det S_k[\alpha,\beta]=\det(D_kC_kS_{k-1}B_kA_k)[\alpha,\beta]=\sum_{\kappa_1,\kappa_2,\kappa_3,\kappa_4}\det D_k[\alpha,\kappa_1]\det C_k[\kappa_1,\kappa_2]\det S_{k-1}[\kappa_2,\kappa_3]\det B_k[\kappa_3,\kappa_4]\det A_k[\kappa_4,\beta].$$

Together with Claim 1, mathematical induction gives $\det S_k[\alpha,\beta]\ge0$.

Claim 3: $\det S_k>0$. Clearly, $\det S_0>0$. Since $\det A_k,\det D_k>0$, we only need to prove that $\det(C_kS_{k-1}B_k)>0$. By the Binet-Cauchy formula, we have
$$\det(C_kS_{k-1}B_k)=\sum_{i,j}\det C_k(\cdot\,|\,i)\,\det S_{k-1}(i\,|\,j)\,\det B_k(j\,|\,\cdot).$$

Clearly, $\det C_k(\cdot\,|\,i),\det B_k(j\,|\,\cdot)>0$. By the induction assumption $\det S_{k-1}\neq0$, together with Claim 2, there exists at least one $\det S_{k-1}(i\,|\,j)>0$. This ends the proof. □
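The minor form (3.28) of the Binet-Cauchy formula used in Claims 2 and 3 can be sanity-checked on random integer matrices; the sketch below (our own illustration) does so with a naive Laplace-expansion determinant.

```python
# Verify det((AB)[alpha, beta]) = sum_kappa det A[alpha, kappa] det B[kappa, beta]
# for all 2x2 minors of a random 3x5 times 5x3 product.
from itertools import combinations
from random import randint, seed

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def sub(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

seed(0)
m, n, s = 3, 5, 2                      # A is m x n, B is n x m, minors of order s
A = [[randint(-3, 3) for _ in range(n)] for _ in range(m)]
B = [[randint(-3, 3) for _ in range(m)] for _ in range(n)]
AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(m)]
      for i in range(m)]

for alpha in combinations(range(m), s):
    for beta in combinations(range(m), s):
        rhs = sum(det(sub(A, alpha, kappa)) * det(sub(B, kappa, beta))
                  for kappa in combinations(range(n), s))
        assert det(sub(AB, alpha, beta)) == rhs
print("Cauchy-Binet minor identity verified")
```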

Endnotes

a It is well known that reversibility is equivalent to symmetry of the generator for a Markov process with an invariant distribution.

b We expect that the approach can also solve the same problems for $d\ge4$.

c The directed acyclic graph can easily be drawn for any integer $n$; we omit it for brevity.

d The reader may also refer to [[7], Proposition 3.1].

Declarations

Acknowledgements

This work was supported by NSFC (No. 11101137).

Authors’ Affiliations

(1)
Business School of Central South University, Changsha, 410083, P.R. China
(2)
Hunan University of Science and Technology, Xiangtan, 411201, P.R. China

References

  1. Kuo HH: Introduction to Stochastic Integration. Springer, Berlin; 2006.
  2. Itô K: Multiple Wiener integral. J. Math. Soc. Jpn. 1951, 3(1):157-169. (Reprinted in: Kiyosi Itô Selected Papers. Edited by D.W. Stroock, S.R.S. Varadhan. Springer; 1987.) 10.2969/jmsj/00310157
  3. Chen Y, Liu Y: On the eigenfunctions of the complex Ornstein-Uhlenbeck operator. Kyoto J. Math. (2013, to appear). http://arxiv.org/pdf/1209.4990.pdf
  4. Chojnowska-Michalik A, Goldys B: Symmetric Ornstein-Uhlenbeck semigroups and their generators. Probab. Theory Relat. Fields 2002, 124(4):459-486. 10.1007/s004400200222
  5. Chojnowska-Michalik A, Goldys B: Nonsymmetric Ornstein-Uhlenbeck semigroup as second quantized operator. J. Math. Kyoto Univ. 1996, 36(3):481-498.
  6. Lunardi A: On the Ornstein-Uhlenbeck operator in L^2 spaces with respect to invariant measures. Trans. Am. Math. Soc. 1997, 349:155-169. 10.1090/S0002-9947-97-01802-3
  7. Metafune G, Pallara D: Spectrum of Ornstein-Uhlenbeck operators in L^p spaces with respect to invariant measures. J. Funct. Anal. 2002, 196:40-60. 10.1006/jfan.2002.3978
  8. Levelt AHM: Jordan decomposition for a class of singular differential operators. Ark. Mat. 1975, 13(1):1-27.
  9. Mathew PJ: On Some Aspects of the Differential Operator. Mathematics Theses (2006)
  10. Hoffman K, Kunze R: Linear Algebra. 2nd edition. Prentice Hall, New York; 1971.
  11. Roman S: Advanced Linear Algebra (GTM 135). 3rd edition. Springer, Berlin; 2008.
  12. Lang S: Linear Algebra. 3rd edition. Springer, Berlin; 2004.
  13. Zhang FZ: Matrix Theory: Basic Results and Techniques. 2nd edition. Springer, Berlin; 2011.
