Open Access

Feynman-Kac formula for switching diffusions: connections of systems of partial differential equations and stochastic differential equations

Advances in Difference Equations 2013, 2013:315

https://doi.org/10.1186/1687-1847-2013-315

Received: 15 July 2013

Accepted: 20 September 2013

Published: 8 November 2013

Abstract

This work develops Feynman-Kac formulae for switching diffusion processes. It first recalls the basic notion of a switching diffusion. Then the desired stochastic representations are obtained for boundary value problems, initial boundary value problems, and initial value problems, respectively. Some examples are also provided.

Keywords

switching diffusion; Feynman-Kac formula; Dirichlet problem; Cauchy problem

1 Introduction

Because of increasing demands and complexity in modeling, analysis, and computation, significant efforts have been devoted in recent years to searching for better mathematical models. It has been well recognized that many systems encountered in the new era cannot be represented by traditional ordinary differential equation and/or stochastic differential equation models alone. The states of such systems have two components, namely, state = (continuous state, discrete event state). The discrete dynamics may be used to depict a random environment or other stochastic factors that cannot be represented in traditional differential equation models. Dynamic systems of this kind are often referred to as hybrid systems. One representative of the class of hybrid systems is the switching diffusion process. A switching diffusion can be thought of as a number of diffusion processes coupled by a random switching process. At first glance, these processes seem similar to the well-known diffusion processes. A closer scrutiny shows, however, that switching diffusions behave very differently from traditional diffusions. Within the class of switching diffusion processes, when the discrete event process or switching process depends on the continuous state, the problem becomes much more difficult; see [1, 2]. Because of their importance, switching diffusions have drawn much attention in recent years. Many results have been obtained, such as smooth dependence on the initial data, recurrence, positive recurrence, ergodicity, stability, and numerical methods for stochastic differential equations with switching. Nevertheless, certain important concepts are not yet fully investigated. The Feynman-Kac formula is one such representative.

For diffusion processes, the Feynman-Kac formula provides a stochastic representation for solutions to certain second-order partial differential equations (PDEs). These representations are standard in any introductory text on stochastic differential equations (SDEs); see, for example, [3–6] and references therein. The Feynman-Kac formula has found a wide range of applications in such areas as stochastic control, mathematical finance, risk analysis, and related fields.

This work aims to derive the Feynman-Kac formula for switching diffusions. It provides a probabilistic approach to the study of weakly coupled elliptic systems of partial differential equations (see [7] for weakly coupled systems). Such systems arise in financial mathematics and in the form of the so-called diffusion-reaction equations, which describe the concentration of a substance under the influence of diffusion and chemical reactions. The case where the discrete process has two states can be found in [[8], Section 5.4]. Our effort is devoted to developing general results in which the switching process has a finite state space and is continuous-state dependent.

The rest of the paper is organized as follows. We begin by presenting the necessary background material and problem formulation for switching diffusions in Section 2; the setup is in line with that of [1]. Using the generalized Itô formula and Dynkin’s formula, we derive the Feynman-Kac formula in Section 3. We then treat the Dirichlet problem in Section 4 and the initial boundary value problem in Section 5. Finally, we study the Cauchy problem in Section 6.

2 Switching diffusions

Let $(\Omega, \mathcal{F}, P)$ be a probability space, and let $\{\mathcal{F}_t\}$ be a filtration on this space satisfying the usual conditions (i.e., $\mathcal{F}_0$ contains all the null sets and the filtration $\{\mathcal{F}_t\}$ is right continuous). The probability space $(\Omega, \mathcal{F}, P)$ together with the filtration $\{\mathcal{F}_t\}$ is denoted by $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$. Suppose that $\alpha(\cdot)$ is a stochastic process with right-continuous sample paths (a pure jump process), finite state space $M = \{1, \ldots, m_0\}$, and $x$-dependent generator $Q(x)$, so that for a suitable function $f(\cdot, \cdot)$,
$$ Q(x) f(x, \cdot)(i) = \sum_{j \in M,\, j \neq i} q_{ij}(x) \bigl( f(x,j) - f(x,i) \bigr) \quad \text{for each } i \in M. $$
(1)
Assume throughout the paper that $Q(x)$ satisfies the q-property [1]; that is, $Q(x) = (q_{ij}(x))$ satisfies the following conditions (a small numerical illustration is given after the list):
(i) $q_{ij}(x)$ is Borel measurable and uniformly bounded for all $i, j \in M$ and $x \in \mathbb{R}^n$;

(ii) $q_{ij}(x) \geq 0$ for all $x \in \mathbb{R}^n$ and $j \neq i$; and

(iii) $q_{ii}(x) = -\sum_{j \neq i} q_{ij}(x)$ for all $x \in \mathbb{R}^n$ and $i \in M$.
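To make the q-property and the action of $Q(x)$ in (1) concrete, here is a minimal numerical sketch. The two-state rate functions and the test function below are hypothetical illustrations, not part of the setup above; any bounded, Borel measurable rates with nonnegative off-diagonal entries and zero row sums would serve equally well.

```python
import numpy as np

def Q(x):
    """Illustrative x-dependent generator on a two-state space (states 0 and 1).
    Off-diagonal entries are nonnegative and bounded, and each row sums to zero,
    so conditions (i)-(iii) of the q-property hold."""
    q01 = 1.0 + np.sin(x[0]) ** 2   # hypothetical rate for 0 -> 1
    q10 = 2.0                       # hypothetical rate for 1 -> 0
    return np.array([[-q01, q01],
                     [q10, -q10]])

def apply_Q(f, x):
    """Action of Q(x) on f(x, .) as in (1):
    (Q(x) f(x, .))(i) = sum over j != i of q_ij(x) * (f(x, j) - f(x, i))."""
    Qx = Q(x)
    fx = np.array([f(x, i) for i in range(Qx.shape[0])])
    return Qx @ fx   # rows sum to zero, so the matrix product equals the sum in (1)

# quick check of the q-property at a sample point
x = np.array([0.3])
Qx = Q(x)
assert np.all(Qx - np.diag(np.diag(Qx)) >= 0)   # (ii): off-diagonal entries >= 0
assert np.allclose(Qx.sum(axis=1), 0.0)         # (iii): q_ii = -sum of off-diagonal row entries
print(apply_Q(lambda y, i: (i + 1) * y[0] ** 2, x))
```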
Let $w(\cdot)$ be an $\mathbb{R}^n$-valued standard Brownian motion defined on $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}, P)$, and let $b(\cdot,\cdot): \mathbb{R}^n \times M \to \mathbb{R}^n$ and $\sigma(\cdot,\cdot): \mathbb{R}^n \times M \to \mathbb{R}^{n \times n}$ be such that the two-component process $(X(\cdot), \alpha(\cdot))$ satisfies
$$ dX(t) = b(X(t), \alpha(t))\, dt + \sigma(X(t), \alpha(t))\, dw(t), \qquad (X(0), \alpha(0)) = (x, i), $$
(2)
and
$$ P\{\alpha(t+\delta) = j \mid \alpha(t) = i, X(s), \alpha(s), s \leq t\} = q_{ij}(X(t))\,\delta + o(\delta), \quad i \neq j. $$
(3)

The process given by (2) and (3) is called a switching diffusion or a regime-switching diffusion. Now, before carrying out our analysis, we state a theorem regarding existence and uniqueness of the solution of the aforementioned stochastic differential equation, which will be important in what follows.

Theorem 1 (Yin and Zhu [1])

Let $x \in \mathbb{R}^n$, $M = \{1, \ldots, m_0\}$, and let $Q(x) = (q_{ij}(x))$ be an $m_0 \times m_0$ matrix satisfying the q-property. Consider the two-component process $Y(t) = (X(t), \alpha(t))$ given by (2) with initial data $(x, i)$. Suppose that $Q(\cdot): \mathbb{R}^n \to \mathbb{R}^{m_0 \times m_0}$ is bounded and continuous, and that the functions $b(\cdot,\cdot)$ and $\sigma(\cdot,\cdot)$ satisfy
$$ |b(x,i)| + |\sigma(x,i)| \leq K (1 + |x|), \quad i \in M, $$
(4)
for some constant $K > 0$, and that for each $N > 1$ there exists a positive constant $M_N$ such that for all $i \in M$ and all $x, y \in \mathbb{R}^n$ with $|x| \vee |y| \leq M_N$,
$$ |b(x,i) - b(y,i)| \vee |\sigma(x,i) - \sigma(y,i)| \leq M_N |x - y|, $$
(5)

where $a \vee b = \max(a, b)$ for $a, b \in \mathbb{R}$. Then there exists a unique solution to (2), in which the evolution of the discrete component is given by (3).

Note that (4) and (5) are known as the linear growth and local Lipschitz conditions, respectively. We assume these conditions on $b(\cdot,\cdot)$ and $\sigma(\cdot,\cdot)$ for the remainder of the paper.
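For intuition, the following sketch simulates one path of (2)-(3): an Euler-Maruyama step for the continuous component, and a switch of the discrete component on each mesh interval with probability $q_{ij}(X(t))\,\delta + o(\delta)$, in line with (3). The two-regime drift, diffusion, and rate functions are hypothetical choices used only for illustration.

```python
import numpy as np

def simulate_switching_diffusion(x0, i0, T, dt, rng):
    """One scalar path of (X(t), alpha(t)) from (2)-(3).
    Euler-Maruyama for X; a first-order approximation of (3) for alpha:
    the chain leaves state i on [t, t+dt) with probability ~ q(x, i) * dt.
    The coefficients b, sigma and the rates q below are illustrative choices only."""
    b = lambda x, i: -x if i == 0 else 0.5 * x        # regime-dependent drift
    sigma = lambda x, i: 0.3 if i == 0 else 1.0       # regime-dependent diffusion
    q = lambda x, i: 1.0 + x**2 if i == 0 else 2.0    # rate of leaving state i (two states)

    n = int(T / dt)
    xs = np.empty(n + 1)
    alphas = np.empty(n + 1, dtype=int)
    xs[0], alphas[0] = x0, i0
    for k in range(n):
        x, i = xs[k], alphas[k]
        xs[k + 1] = x + b(x, i) * dt + sigma(x, i) * np.sqrt(dt) * rng.standard_normal()
        # switch to the other state with probability q(x, i) dt + o(dt)
        alphas[k + 1] = 1 - i if rng.random() < q(x, i) * dt else i
    return xs, alphas

rng = np.random.default_rng(0)
xs, alphas = simulate_switching_diffusion(x0=1.0, i0=0, T=1.0, dt=1e-3, rng=rng)
print(xs[-1], alphas[-1])
```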

2.1 Itô’s formula

Consider $(X(t), \alpha(t))$ given in (2), and let $a(x,i) = \sigma(x,i)\sigma'(x,i)$, where $\sigma'(x,i)$ denotes the transpose of $\sigma(x,i)$. Given any function $g(\cdot,i) \in C^2(\mathbb{R}^n)$ with $i \in M$, define the operator $\mathcal{L}$ by
$$ \mathcal{L} g(x,i) := \frac{1}{2} \operatorname{tr}\bigl( a(x,i) D^2 g(x,i) \bigr) + b(x,i) \cdot D g(x,i) + Q(x) g(x,\cdot)(i), $$
(6)

where $D g(\cdot,i) = (g_{x_1}, \ldots, g_{x_n})$ is the gradient of $g(\cdot,i)$, $D^2 g(\cdot,i)$ denotes the Hessian of $g(\cdot,i)$, and $Q(x) g(x,\cdot)(i)$ is given by (1). The choice of the notation $\mathcal{L}$ will become clear momentarily.

It turns out that the evolution of the discrete component can be represented as a stochastic integral with respect to a Poisson random measure $p(dt, dz)$ whose intensity is $dt \times m(dz)$, where $m(\cdot)$ is the Lebesgue measure on $\mathbb{R}$. We have
$$ d\alpha(t) = \int_{\mathbb{R}} h(X(t), \alpha(t), z)\, p(dt, dz), $$
(7)

where h is an integer-valued function; furthermore, this representation is equivalent to (3). For details, we refer the reader to [9] and [1].
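For completeness, we record one standard construction of $h$ along the lines of [1, 9]; since the paper defers the details to those references, the following should be read as a sketch rather than as the authors' own statement. For each $x$ and $i$, choose consecutive, disjoint intervals $\Delta_{ij}(x) \subset \mathbb{R}$, $j \in M$, $j \neq i$, with Lebesgue measure $m(\Delta_{ij}(x)) = q_{ij}(x)$, and set
$$ h(x,i,z) = \sum_{j \in M,\, j \neq i} (j - i)\, I_{\{z \in \Delta_{ij}(x)\}}. $$
A point of the Poisson random measure landing in $\Delta_{ij}(x)$ then moves $\alpha$ from $i$ to $j$, and such points arrive at rate $m(\Delta_{ij}(x)) = q_{ij}(x)$, which is consistent with (3).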

We now state the (generalized) Itô formula. For each $i \in M$ and $g(\cdot,i) \in C^2(\mathbb{R}^n)$, we have
$$ g(X(t), \alpha(t)) - g(X(0), \alpha(0)) = \int_0^t \mathcal{L} g(X(s), \alpha(s))\, ds + M_1(t) + M_2(t), $$
(8)
where
$$ \begin{aligned} M_1(t) &= \int_0^t \bigl\langle D g(X(s), \alpha(s)),\ \sigma(X(s), \alpha(s))\, dw(s) \bigr\rangle, \\ M_2(t) &= \int_0^t \int_{\mathbb{R}} \bigl[ g\bigl(X(s), \alpha(s) + h(X(s), \alpha(s), z)\bigr) - g(X(s), \alpha(s)) \bigr]\, \mu(ds, dz). \end{aligned} $$
The compensated (or centered) Poisson measure $\mu(ds, dz) = p(ds, dz) - ds \times m(dz)$ is a martingale measure. For $t \geq 0$ and $g(\cdot,i) \in C_0^2$ (the collection of $C^2$ functions with compact support) for each $i \in M$,
$$ E_{x,i}\, g(X(t), \alpha(t)) - g(x,i) = E_{x,i} \int_0^t \mathcal{L} g(X(s), \alpha(s))\, ds, $$
(9)
where $E_{x,i}$ denotes the expectation with initial data $(X(0), \alpha(0)) = (x,i)$. The above equation is known as Dynkin’s formula. The condition $g \in C_0^2$ ensures that
$$ g(X(t), \alpha(t)) - g(x,i) - \int_0^t \mathcal{L} g(X(s), \alpha(s))\, ds \quad \text{is a martingale.} $$
Furthermore, one can show that $\mathcal{L}$ agrees with its classical interpretation as the (infinitesimal) generator of the process $(X(t), \alpha(t))$, given by
$$ \mathcal{L} g(x,i) = \lim_{t \downarrow 0} \frac{E_{x,i}[g(X(t), \alpha(t))] - g(x,i)}{t}. $$
(10)
To see this, pick t sufficiently small so that α ( t ) agrees with the initial data. Then it follows that
$$ \frac{1}{t} \int_0^t \mathcal{L} g(X(s), \alpha(s))\, ds = \frac{1}{t} \int_0^t \mathcal{L} g(X(s), i)\, ds \to \mathcal{L} g(x,i), \quad t \downarrow 0, $$
by continuity. Hence, taking expectations and letting $t$ tend to zero, one gets
$$ \Bigl| \frac{1}{t}\, E \int_0^t \mathcal{L} g(X(s), \alpha(s))\, ds - \mathcal{L} g(x,i) \Bigr| \to 0 \quad \text{as } t \downarrow 0, $$
and, consequently, (10). Noting (9), when the deterministic time $t$ is replaced by a stopping time $\tau$ satisfying $\tau < \infty$ w.p.1 (recalling that $g(\cdot,i) \in C_0^2$), then
$$ E_{x,i}\, g(X(\tau), \alpha(\tau)) - g(x,i) = E_{x,i} \int_0^\tau \mathcal{L} g(X(s), \alpha(s))\, ds. $$
(11)

Note that if $\tau$ is the first exit time of the process from a bounded domain and satisfies $\tau < \infty$ w.p.1, then Dynkin’s formula holds for any $g(\cdot,i) \in C^2$ and each $i \in M$ without the compact support assumption. To proceed, we obtain the following system of Kolmogorov backward equations for switching diffusions; see also [2].

Theorem 2 (Kolmogorov backward equation)

Suppose that $g(\cdot,i) \in C_0^2(\mathbb{R}^n)$ for $i \in M$, and define
$$ u(x,t,i) = E_{x,i}[g(X(t), \alpha(t))]. $$
(12)
Then u satisfies
$$ \begin{cases} u_t = \mathcal{L} u & \text{for } t > 0,\ x \in \mathbb{R}^n,\ i \in M, \\ u(x,0,i) = g(x,i) & \text{for } x \in \mathbb{R}^n,\ i \in M. \end{cases} $$
(13)

A proof of the theorem can be found in [[2], Theorem 5.2]; see also Theorem 5.1 in the aforementioned reference.

Remark 1 We illustrate the proof of the theorem using the idea in [[6], p. 140]. Fix $t > 0$. Then, using (10) and the Markov property, we have
$$ \begin{aligned} \frac{E_{x,i}[u(X(r), t, \alpha(r))] - u(x,t,i)}{r} &= \frac{E_{x,i}\bigl[E_{X(r), \alpha(r)}[g(X(t), \alpha(t))]\bigr] - E_{x,i}[g(X(t), \alpha(t))]}{r} \\ &= \frac{E_{x,i}\bigl[E_{x,i}[g(X(t+r), \alpha(t+r)) \mid \mathcal{F}_r]\bigr] - E_{x,i}[g(X(t), \alpha(t))]}{r} \\ &= \frac{E_{x,i}[g(X(t+r), \alpha(t+r))] - E_{x,i}[g(X(t), \alpha(t))]}{r} \\ &= \frac{u(x,t+r,i) - u(x,t,i)}{r} \to u_t(x,t,i) \quad \text{as } r \downarrow 0. \end{aligned} $$

Thus, by the definition of $\mathcal{L}$ in (10), (13) is satisfied.

3 The Feynman-Kac formula

We now state the Feynman-Kac formula, which is a generalization of the Kolmogorov backward equation.

Theorem 3 (The Feynman-Kac formula)

Suppose that $g(\cdot,i) \in C_0^2(\mathbb{R}^n)$, and let $c(\cdot,i) \in C(\mathbb{R}^n)$ be bounded, for each $i \in M$. Define
$$ v(x,t,i) = E_{x,i}\Bigl[ \exp\Bigl( -\int_0^t c(X(s), \alpha(s))\, ds \Bigr) g(X(t), \alpha(t)) \Bigr]. $$
(14)
Then v satisfies
$$ \begin{cases} v_t = \mathcal{L} v - c v & \text{for } t > 0,\ x \in \mathbb{R}^n,\ i \in M, \\ v(x,0,i) = g(x,i) & \text{for } x \in \mathbb{R}^n,\ i \in M. \end{cases} $$
(15)
Proof To simplify the notation, let
$$ Y(t) = g(X(t), \alpha(t)), \qquad Z(t) = \exp\Bigl( -\int_0^t c(X(s), \alpha(s))\, ds \Bigr). $$
Now, following the argument in Remark 1, we fix t > 0 . We have
$$ \begin{aligned} &\frac{E_{x,i}[v(X(r), t, \alpha(r))] - v(x,t,i)}{r} \\ &\quad = \frac{E_{x,i}\bigl[E_{X(r), \alpha(r)}[Z(t) Y(t)]\bigr] - E_{x,i}[Z(t) Y(t)]}{r} \\ &\quad = \frac{E_{x,i}\Bigl[E_{x,i}\Bigl[\exp\Bigl(-\int_0^t c(X(s+r), \alpha(s+r))\, ds\Bigr) Y(t+r) \Bigm| \mathcal{F}_r\Bigr]\Bigr] - E_{x,i}[Z(t) Y(t)]}{r} \\ &\quad = \frac{E_{x,i}\Bigl[E_{x,i}\Bigl[\exp\Bigl(-\int_r^{t+r} c(X(s), \alpha(s))\, ds\Bigr) Y(t+r) \Bigm| \mathcal{F}_r\Bigr]\Bigr] - E_{x,i}[Z(t) Y(t)]}{r} \\ &\quad = \frac{E_{x,i}\Bigl[Z(t+r) \exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) Y(t+r)\Bigr] - E_{x,i}[Z(t) Y(t)]}{r} \\ &\quad = \frac{E_{x,i}[Z(t+r) Y(t+r)] - E_{x,i}[Z(t) Y(t)]}{r} + \frac{E_{x,i}\Bigl[Z(t+r) Y(t+r) \Bigl\{\exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1\Bigr\}\Bigr]}{r} \\ &\quad = \frac{v(x,t+r,i) - v(x,t,i)}{r} + \frac{E_{x,i}\Bigl[Z(t+r) Y(t+r) \Bigl\{\exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1\Bigr\}\Bigr]}{r}. \end{aligned} $$
First, clearly,
$$ \frac{v(x,t+r,i) - v(x,t,i)}{r} \to v_t(x,t,i), \quad r \downarrow 0. $$
Furthermore, we claim that
$$ \frac{E_{x,i}\Bigl[Z(t+r) Y(t+r) \Bigl\{\exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1\Bigr\}\Bigr]}{r} \to c(x,i)\, v(x,t,i). $$
To verify this claim, first, note that
$$ Z(t+r) Y(t+r) \to Z(t) Y(t), \quad r \downarrow 0, $$
by continuity. Now let
$$ f(r) = \exp\Bigl( \int_0^r c(X(s), \alpha(s))\, ds \Bigr) $$
for $r$ sufficiently small. Denote the first jump time of $\alpha(\cdot)$ by $\tau_1$. Since $\alpha(0) = i$, we have $\alpha(t) = i$ for all $t \in [0, \tau_1)$. It follows that
$$ f(r) = \exp\Bigl( \int_0^r c(X(s), i)\, ds \Bigr), \quad r \in [0, \tau_1). $$
Hence f is differentiable at the origin and
$$ \frac{d}{dr} f(0) = f(0)\, c(X(0), i) = c(x,i). $$
This in turn yields that
$$ Z(t+r) Y(t+r)\, \frac{1}{r}\Bigl( \exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1 \Bigr) = Z(t+r) Y(t+r)\, \frac{f(r) - f(0)}{r} \to Z(t) Y(t)\, c(x,i), \quad r \downarrow 0. $$
Furthermore, the assumptions on the functions $c(\cdot,i)$ and $g(\cdot,i)$ ensure that this family is bounded, so we may apply the bounded convergence theorem to obtain
$$ \begin{aligned} \lim_{r \downarrow 0} E_{x,i}\Bigl[ Z(t+r) Y(t+r)\, \frac{1}{r}\Bigl( \exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1 \Bigr) \Bigr] &= E_{x,i}\Bigl[ \lim_{r \downarrow 0} Z(t+r) Y(t+r)\, \frac{1}{r}\Bigl( \exp\Bigl(\int_0^r c(X(s), \alpha(s))\, ds\Bigr) - 1 \Bigr) \Bigr] \\ &= E_{x,i}[Z(t) Y(t)\, c(x,i)] = c(x,i)\, E_{x,i}[Z(t) Y(t)] = c(x,i)\, v(x,t,i), \end{aligned} $$

as claimed. Letting $r \downarrow 0$ in the identity above and using (10) on the left-hand side gives $\mathcal{L} v(x,t,i) = v_t(x,t,i) + c(x,i) v(x,t,i)$, which is (15). This completes the proof. □

So we have seen that the functions given by (12) and (14) necessarily satisfy certain initial value problems. The remainder of the paper is dedicated to giving stochastic representations for solutions to certain systems of partial differential equations (PDEs) related to the operator $\mathcal{L}$.
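As a sanity check on the representation (14) (with the sign convention used there), $v(x,t,i)$ can be estimated by Monte Carlo simulation: simulate paths of (2)-(3), accumulate the exponential functional along each path, and average. The sketch below uses hypothetical two-regime coefficients; it merely illustrates the stochastic representation and is not a numerical scheme proposed in the paper.

```python
import numpy as np

def feynman_kac_mc(x0, i0, t, g, c, n_paths=5000, dt=1e-3, seed=1):
    """Monte Carlo estimate of (14):
    v(x,t,i) = E_{x,i}[ exp(-int_0^t c(X(s), alpha(s)) ds) * g(X(t), alpha(t)) ].
    The coefficients b, sigma and the switching rates q are hypothetical choices."""
    b = lambda x, i: -x if i == 0 else 0.5 * x
    sigma = lambda x, i: 0.3 if i == 0 else 1.0
    q = lambda x, i: 1.0 if i == 0 else 2.0   # rate of leaving state i (two states)

    rng = np.random.default_rng(seed)
    n = int(t / dt)
    total = 0.0
    for _ in range(n_paths):
        x, i, int_c = x0, i0, 0.0
        for _ in range(n):
            int_c += c(x, i) * dt
            x += b(x, i) * dt + sigma(x, i) * np.sqrt(dt) * rng.standard_normal()
            if rng.random() < q(x, i) * dt:
                i = 1 - i
        total += np.exp(-int_c) * g(x, i)
    return total / n_paths

# example: quadratic terminal data and a constant killing rate
print(feynman_kac_mc(1.0, 0, t=1.0, g=lambda x, i: x**2 + i, c=lambda x, i: 0.5))
```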

4 The Dirichlet problem

Let $O \subset \mathbb{R}^n$ be a bounded open set, and consider the following Dirichlet problem:
$$ \begin{cases} \mathcal{L} u(x,i) + c(x,i) u(x,i) = \psi(x,i) & \text{in } O \times M, \\ u(x,i) = \varphi(x,i) & \text{on } \partial O \times M, \end{cases} $$
(16)

where ∂O denotes the boundary of O. To proceed, we impose assumption (A1).

(A1) The following conditions hold:
1. $\partial O \in C^2$;

2. for some $1 \leq j \leq r$ and all $i \in M$, $\min_{x \in \bar{O}} a_{jj}(x,i) > 0$;

3. $a(\cdot,i)$ and $b(\cdot,i)$ are uniformly Lipschitz continuous in $\bar{O}$ for each $i \in M$;

4. $c(x,i) \leq 0$, and $c(\cdot,i)$ is uniformly Hölder continuous in $\bar{O}$ for each $i \in M$;

5. $\psi(\cdot,i)$ is uniformly continuous in $\bar{O}$, and $\varphi(\cdot,i)$ is continuous on $\partial O$, both for each $i \in M$.

It follows that under (A1), the system of boundary value problems has a unique solution; see [3] or [5]. Our goal is to derive a stochastic representation for this problem, similar to the Feynman-Kac formula. In order to achieve this, we need the following lemma.

Lemma 1 Suppose that $\tau = \inf\{t \geq 0 : X^x(t) \notin O\}$; that is, $\tau$ is the first exit time from the open set $O$ of the switching diffusion given in (2) and (3). Then $\tau < \infty$ w.p.1.

Proof We use the idea in [3]. Consider the function $V: \mathbb{R}^n \times M \to \mathbb{R}$ defined by
$$ V(x,i) = -A \exp(\lambda x_1), \qquad A, \lambda > 0,\ i \in M. $$
Clearly $V(\cdot,i) \in C^\infty(O)$, and since $V$ is independent of $i \in M$,
$$ Q(x) V(x,\cdot)(i) = \sum_{j \neq i} q_{ij}(x) \bigl( V(x,j) - V(x,i) \bigr) = 0, $$
and, thus,
$$ \mathcal{L} V(x,i) = -A \exp(\lambda x_1) \Bigl[ \frac{1}{2} a_{11}(x,i) \lambda^2 + b_1(x,i) \lambda \Bigr]. $$
Note that as long as $\lambda > -2 b_1 / a_{11}$, we have $\frac{1}{2} a_{11} \lambda^2 + b_1 \lambda > 0$ and hence $\mathcal{L} V(x,i) < 0$. Thus, by choosing $\lambda$ and $A = A(\lambda)$ sufficiently large, we can make $\mathcal{L} V(x,i) \leq -1$ for each $i \in M$. As the function $V(\cdot,i)$ and its derivatives with respect to $x$ are bounded on $\bar{O}$, we may apply Dynkin’s formula to obtain
$$ E_{x,i} V(X(t \wedge \tau), \alpha(t \wedge \tau)) - V(x,i) = E_{x,i} \int_0^{t \wedge \tau} \mathcal{L} V(X(s), \alpha(s))\, ds \leq -E_{x,i}(t \wedge \tau), $$
where E x , i denotes the expectation taken with ( X ( 0 ) , α ( 0 ) ) = ( x , i ) . This yields that
$$ E_{x,i}(t \wedge \tau) \leq V(x,i) - E_{x,i} V(X(t \wedge \tau), \alpha(t \wedge \tau)) \leq 2 \max_{x \in \bar{O},\, i \in M} |V(x,i)| < \infty. $$

Taking the limit as $t \to \infty$ and using the monotone convergence theorem yields $E_{x,i} \tau < \infty$, which in turn leads to $\tau < \infty$ w.p.1. □

Theorem 4 Suppose that (A1) holds. Then with τ as in the previous lemma, the solution of the system of boundary value problems (16) is given by
$$ \begin{aligned} u(x,i) = {}& E_{x,i}\Bigl[ \varphi(X(\tau), \alpha(\tau)) \exp\Bigl( \int_0^\tau c(X(s), \alpha(s))\, ds \Bigr) \Bigr] \\ &- E_{x,i}\Bigl[ \int_0^\tau \psi(X(t), \alpha(t)) \exp\Bigl( \int_0^t c(X(s), \alpha(s))\, ds \Bigr) dt \Bigr]. \end{aligned} $$
(17)
Proof We apply Itô’s formula to the process
$$ \tilde{u}(X(t), t, \alpha(t)) := u(X(t), \alpha(t)) \exp\Bigl( \int_0^t c(X(s), \alpha(s))\, ds \Bigr). $$
To simplify notation, we let
$$ Z(t) = \exp\Bigl( \int_0^t c(X(s), \alpha(s))\, ds \Bigr). $$
We have
$$ \begin{aligned} E_{x,i}\, u(X(t \wedge \tau), \alpha(t \wedge \tau)) Z(t \wedge \tau) - u(x,i) &= E_{x,i} \int_0^{t \wedge \tau} \Bigl( \frac{\partial}{\partial s} + \mathcal{L} \Bigr) \bigl\{ u(X(s), \alpha(s)) Z(s) \bigr\}\, ds \\ &= E_{x,i} \int_0^{t \wedge \tau} Z(s) \bigl\{ u(X(s), \alpha(s))\, c(X(s), \alpha(s)) + \mathcal{L} u(X(s), \alpha(s)) \bigr\}\, ds \\ &= E_{x,i} \int_0^{t \wedge \tau} Z(s)\, \psi(X(s), \alpha(s))\, ds. \end{aligned} $$

Taking the limit as $t \to \infty$ and noting the boundary conditions, (17) follows. □
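In the same spirit, the Dirichlet representation (17) can be evaluated by simulating the switching diffusion until it leaves the domain. The sketch below works on the illustrative one-dimensional domain $O = (-1, 1)$ with hypothetical coefficients and with $c \leq 0$, as required by (A1); it is meant only to make (17) concrete.

```python
import numpy as np

def dirichlet_mc(x0, i0, phi, psi, c, n_paths=5000, dt=1e-3, seed=2):
    """Monte Carlo estimate of (17) on the illustrative domain O = (-1, 1):
    u(x,i) = E[ phi(X(tau), alpha(tau)) exp(int_0^tau c ds) ]
             - E[ int_0^tau psi(X(t), alpha(t)) exp(int_0^t c ds) dt ],
    where tau is the first exit time from O.
    The coefficients b, sigma, q below are hypothetical; c should be <= 0."""
    b = lambda x, i: -x if i == 0 else 0.0
    sigma = lambda x, i: 0.5 if i == 0 else 1.0
    q = lambda x, i: 1.0 if i == 0 else 2.0   # rate of leaving state i (two states)

    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        x, i, int_c, running = x0, i0, 0.0, 0.0
        while -1.0 < x < 1.0:
            running += psi(x, i) * np.exp(int_c) * dt   # accumulates the second term of (17)
            int_c += c(x, i) * dt
            x += b(x, i) * dt + sigma(x, i) * np.sqrt(dt) * rng.standard_normal()
            if rng.random() < q(x, i) * dt:
                i = 1 - i
        total += phi(x, i) * np.exp(int_c) - running
    return total / n_paths

# example data: zero boundary values, unit source, mild killing
print(dirichlet_mc(0.0, 0, phi=lambda x, i: 0.0,
                   psi=lambda x, i: 1.0, c=lambda x, i: -0.2))
```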

5 The initial boundary value problem

Consider next the initial boundary value problem given by
$$ \begin{cases} [\mathcal{L} + \frac{\partial}{\partial t}] u(x,t,i) + c(x,t,i) u(x,t,i) = \psi(x,t,i) & \text{in } O \times [0,T) \times M, \\ u(x,T,i) = \varphi(x,i) & \text{in } O \times M, \\ u(x,t,i) = \phi(x,t,i) & \text{on } \partial O \times [0,T] \times M, \end{cases} $$
(18)
where O is the same as before and
$$ \mathcal{L} f(x,t,i) = \frac{1}{2} \operatorname{tr}\bigl( a(x,t,i) D^2 f(x,t,i) \bigr) + b(x,t,i) \cdot D f(x,t,i) + Q(x) f(x,t,\cdot)(i). $$
(19)

We will use assumption (A2).

(A2) The following conditions hold:
1. $\langle a(x,t,i) y, y \rangle \geq \kappa |y|^2$ for each $i \in M$ and all $y \in \mathbb{R}^n$, where $\kappa > 0$;

2. $a_{lk}(\cdot,\cdot,i)$ and $b_l(\cdot,\cdot,i)$ are uniformly Lipschitz continuous in $\bar{O} \times [0,T]$, for each $i \in M$;

3. $c(\cdot,\cdot,i)$ and $\psi(\cdot,\cdot,i)$ are uniformly Hölder continuous in $\bar{O} \times [0,T]$, for each $i \in M$;

4. $\varphi(\cdot,i)$ is continuous on $\bar{O}$, and $\phi(\cdot,\cdot,i)$ is continuous on $\partial O \times [0,T]$, for each $i \in M$, where $\partial O$ denotes the boundary of $O$;

5. $\varphi(x,i) = \phi(x,T,i)$ for $x \in \partial O$.
Under (A2), it follows that the system of initial boundary value problems (18) has a unique solution; see [3] or [5]. In order to obtain a stochastic representation for the solution, we also require the drift and diffusion coefficients to be Lipschitz continuous in the time variable; namely, we require
$$ |b(x,t,i) - b(x,s,i)| \vee |\sigma(x,t,i) - \sigma(x,s,i)| \leq K |t - s|, \quad i \in M, $$

in addition to (4) and (5).

Now, for $(x,t,i) \in O \times [0,T) \times M$, consider the switching SDE given by
$$ dX(s) = b(X(s), s, \alpha(s))\, ds + \sigma(X(s), s, \alpha(s))\, dw(s), \quad s \in [t,T], $$
(20)

with initial data ( X ( t ) , α ( t ) ) = ( x , i ) . If we let σ ( x , t , i ) be the square root of a ( x , t , i ) , then the following is true.

Theorem 5 Suppose that (A2) holds, and let $\tau$ denote the first exit time of $X(\cdot)$ from $O$ after time $t$, truncated at $T$ (i.e., $\tau = \inf\{s \geq t : X(s) \notin O\} \wedge T$). Then the solution of the system of initial boundary value problems (18) is given by
$$ \begin{aligned} u(x,t,i) = {}& E_{x,i}\Bigl[ I_{\{\tau < T\}}\, \phi(X(\tau), \tau, \alpha(\tau)) \exp\Bigl( \int_t^\tau c(X(r), r, \alpha(r))\, dr \Bigr) \Bigr] \\ &+ E_{x,i}\Bigl[ I_{\{\tau = T\}}\, \varphi(X(T), \alpha(T)) \exp\Bigl( \int_t^T c(X(r), r, \alpha(r))\, dr \Bigr) \Bigr] \\ &- E_{x,i}\Bigl[ \int_t^{\tau \wedge T} \psi(X(s), s, \alpha(s)) \exp\Bigl( \int_t^s c(X(r), r, \alpha(r))\, dr \Bigr)\, ds \Bigr]. \end{aligned} $$
(21)
Proof Proceeding similarly to the previous theorem, we apply Itô’s formula to the process
$$ u(X(s), s, \alpha(s)) \exp\Bigl( \int_t^s c(X(r), r, \alpha(r))\, dr \Bigr), \quad s \in [t,T]. $$
To simplify notation, we let
$$ Z_t(s) = \exp\Bigl( \int_t^s c(X(r), r, \alpha(r))\, dr \Bigr). $$
We have
$$ \begin{aligned} E_{x,i}\, u(X(\tau \wedge T), \tau \wedge T, \alpha(\tau \wedge T)) Z_t(\tau \wedge T) - u(x,t,i) &= E_{x,i} \int_t^{\tau \wedge T} \Bigl( \frac{\partial}{\partial s} + \mathcal{L} \Bigr) \bigl\{ u(X(s), s, \alpha(s)) Z_t(s) \bigr\}\, ds \\ &= E_{x,i} \int_t^{\tau \wedge T} Z_t(s) \Bigl\{ u(X(s), s, \alpha(s))\, c(X(s), s, \alpha(s)) + \Bigl[ \frac{\partial}{\partial s} + \mathcal{L} \Bigr] u(X(s), s, \alpha(s)) \Bigr\}\, ds \\ &= E_{x,i} \int_t^{\tau \wedge T} Z_t(s)\, \psi(X(s), s, \alpha(s))\, ds. \end{aligned} $$
If we note that
$$ u(X(\tau \wedge T), \tau \wedge T, \alpha(\tau \wedge T)) Z_t(\tau \wedge T) = \begin{cases} u(X(\tau), \tau, \alpha(\tau)) Z_t(\tau), & \tau < T, \\ u(X(T), T, \alpha(T)) Z_t(T), & \tau = T \end{cases} = \begin{cases} \phi(X(\tau), \tau, \alpha(\tau)) Z_t(\tau), & \tau < T, \\ \varphi(X(T), \alpha(T)) Z_t(T), & \tau = T, \end{cases} $$
then by substituting the corresponding value of
$$ u(X(\tau \wedge T), \tau \wedge T, \alpha(\tau \wedge T)) Z_t(\tau \wedge T) $$

in the above derivation, one gets (21). □
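To make the three terms of (21) concrete, the following sketch assembles, for a single simulated path of (20) stopped at $\tau \wedge T$, the random variable whose expectation is (21). The path arrays and the exit flag are assumed to come from a separate path simulator (for instance, a time-inhomogeneous variant of the earlier sketch); the function and the synthetic data at the end are hypothetical illustrations, not part of the paper.

```python
import numpy as np

def payoff_from_path(ts, xs, alphas, exited, phi_b, phi_T, psi, c):
    """For one path of (20) on a time grid ts[0] = t, ..., ts[-1] = tau ^ T, return
    the quantity inside the expectations of (21):
      I{tau < T} * phi(X(tau), tau, alpha(tau)) * exp(int_t^tau c dr)
      + I{tau = T} * varphi(X(T), alpha(T)) * exp(int_t^T c dr)
      - int_t^{tau ^ T} psi(X(s), s, alpha(s)) * exp(int_t^s c dr) ds.
    `exited` is True when tau < T; phi_b and phi_T play the roles of phi and varphi."""
    dt = np.diff(ts)
    cs = np.array([c(x, s, a) for x, s, a in zip(xs[:-1], ts[:-1], alphas[:-1])])
    int_c = np.concatenate(([0.0], np.cumsum(cs * dt)))   # int_t^s c dr on the grid
    psis = np.array([psi(x, s, a) for x, s, a in zip(xs[:-1], ts[:-1], alphas[:-1])])
    cost = np.sum(psis * np.exp(int_c[:-1]) * dt)          # running-cost term of (21)
    if exited:   # boundary data when tau < T, terminal data when tau = T
        terminal = phi_b(xs[-1], ts[-1], alphas[-1]) * np.exp(int_c[-1])
    else:
        terminal = phi_T(xs[-1], alphas[-1]) * np.exp(int_c[-1])
    return terminal - cost

# tiny synthetic path, only to exercise the function (not a genuine simulation of (20))
ts = np.linspace(0.0, 1.0, 11)
xs = np.linspace(0.0, 0.9, 11)
alphas = np.zeros(11, dtype=int)
print(payoff_from_path(ts, xs, alphas, exited=False,
                       phi_b=lambda x, s, i: 0.0, phi_T=lambda x, i: x**2,
                       psi=lambda x, s, i: 1.0, c=lambda x, s, i: -0.1))
```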

6 The Cauchy problem

If we let $O = \mathbb{R}^n$ in the initial boundary value problem (18) of the previous section, we get the Cauchy problem
$$ \begin{cases} [\mathcal{L} + \frac{\partial}{\partial t}] u(x,t,i) + c(x,t,i) u(x,t,i) = \psi(x,t,i) & \text{in } \mathbb{R}^n \times [0,T) \times M, \\ u(x,T,i) = \varphi(x,i) & \text{in } \mathbb{R}^n \times M. \end{cases} $$
(22)

To proceed, we impose assumption (A3).

(A3) The following conditions hold:
1. The functions $a_{lk}(\cdot,\cdot,i)$ and $b_l(\cdot,\cdot,i)$ are bounded in $\mathbb{R}^n \times [0,T]$ and uniformly Lipschitz continuous in $(x,t,i)$ in compact subsets of $\mathbb{R}^n \times [0,T] \times M$, for each $i \in M$.

2. The functions $a_{lk}(\cdot,\cdot,i)$ are Hölder continuous in $x$, uniformly with respect to $(x,t,i)$ in $\mathbb{R}^n \times [0,T] \times M$, for each $i \in M$.

3. The function $c(\cdot,\cdot,i)$ is bounded in $\mathbb{R}^n \times [0,T]$ and uniformly Hölder continuous in $(x,t,i)$ in compact subsets of $\mathbb{R}^n \times [0,T] \times M$, for each $i \in M$.

4. The function $\psi(\cdot,\cdot,i)$ is continuous in $\mathbb{R}^n \times [0,T]$ for each $i \in M$, Hölder continuous in $x$ with respect to $(x,t,i) \in \mathbb{R}^n \times [0,T] \times M$, and satisfies $|\psi(x,t,i)| \leq K(1 + |x|^p)$ in $\mathbb{R}^n \times [0,T] \times M$.

5. The function $\varphi(\cdot,i)$ is continuous in $\mathbb{R}^n$ for each $i \in M$, and $|\varphi(x,i)| \leq K(1 + |x|^p)$, where $K$ and $p$ are positive constants.

Under (A3), it follows that the Cauchy problem has a unique solution; see [3] or [5]. Moreover, the following is true.

Theorem 6 Suppose that (A3) holds. Then the solution of the Cauchy problem in (22) is given by
$$ \begin{aligned} u(x,t,i) = {}& E_{x,i}\Bigl[ \varphi(X(T), \alpha(T)) \exp\Bigl( \int_t^T c(X(s), s, \alpha(s))\, ds \Bigr) \Bigr] \\ &- E_{x,i}\Bigl[ \int_t^T \psi(X(s), s, \alpha(s)) \exp\Bigl( \int_t^s c(X(r), r, \alpha(r))\, dr \Bigr)\, ds \Bigr]. \end{aligned} $$
(23)
Proof As before, by Itô’s formula, one has
$$ E_{x,i}\, u(X(T), T, \alpha(T)) Z_t(T) - u(x,t,i) = E_{x,i} \int_t^T \Bigl( \frac{\partial}{\partial s} + \mathcal{L} \Bigr) \bigl\{ u(X(s), s, \alpha(s)) Z_t(s) \bigr\}\, ds. $$

Now, proceeding as in the proof of the initial boundary value problem, we get (23). □

Remark 2 Note that, by taking $c = \psi = 0$, the Kolmogorov backward equation (13) is seen to be a special case of the Cauchy problem upon replacing $u$ by
$$ \tilde{u}(x,t,i) := u(x, T-t, i). $$

6.1 Examples

This section presents a couple of examples.

Example 1 Let $O \subset \mathbb{R}^n$ be an open set, and consider the following weakly coupled system:
$$ \begin{cases} \frac{1}{2} \Delta u(x,1) + q_{11}(x) u(x,1) + q_{12}(x) u(x,2) = -\psi(x,1) & \text{in } O, \\ \frac{1}{2} \Delta u(x,2) + q_{21}(x) u(x,1) + q_{22}(x) u(x,2) = -\psi(x,2) & \text{in } O, \\ u(x,1) = u(x,2) = 0 & \text{on } \partial O. \end{cases} $$
(24)
Here $Q(x) = \begin{pmatrix} q_{11}(x) & q_{12}(x) \\ q_{21}(x) & q_{22}(x) \end{pmatrix}$ satisfies the q-property. Such systems are studied in [10]. It follows that this Dirichlet problem has the unique solution
$$ u(x,i) = E_{x,i}\Bigl[ \int_0^\tau \psi(x + B(t), \alpha(t))\, dt \Bigr], $$

where $B(t)$ is a standard $n$-dimensional Brownian motion, $\tau$ is the first exit time of $x + B(t)$ from $O$, and $\alpha(t)$ is a two-state discrete process with generator $Q(x)$.
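The representation in Example 1 is straightforward to evaluate by simulation. A minimal sketch on the illustrative one-dimensional domain $O = (-1, 1)$ follows; the rates and the source $\psi$ are hypothetical choices, and the two states are labeled 0 and 1 in the code.

```python
import numpy as np

def example1_mc(x0, i0, n_paths=5000, dt=1e-3, seed=3):
    """Monte Carlo evaluation of u(x,i) = E_{x,i}[ int_0^tau psi(x + B(t), alpha(t)) dt ]
    on the illustrative domain O = (-1, 1), where B is a standard Brownian motion,
    tau is the exit time from O, and alpha is a two-state process with generator Q(x).
    The rates and the source psi below are hypothetical choices."""
    q01 = lambda y: 1.0 + y**2          # hypothetical rate for 0 -> 1
    q10 = lambda y: 2.0                 # hypothetical rate for 1 -> 0
    psi = lambda y, i: 1.0 if i == 0 else 2.0

    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        y, i, acc = x0, i0, 0.0          # y plays the role of x + B(t)
        while -1.0 < y < 1.0:
            acc += psi(y, i) * dt
            y += np.sqrt(dt) * rng.standard_normal()
            rate = q01(y) if i == 0 else q10(y)
            if rng.random() < rate * dt:
                i = 1 - i
        total += acc
    return total / n_paths

print(example1_mc(0.0, 0))   # estimate of u(0, 1) in the labeling of the example
```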

Example 2 Let
$$ \mathcal{L}_i g(x,i) = \frac{1}{2} \operatorname{tr}\bigl( a(x,i) D^2 g(x,i) \bigr) + b(x,i) \cdot D g(x,i), \quad i = 1, 2, $$
and consider the following stationary system, found in [8]:
$$ \begin{cases} \mathcal{L}_1 u(x,1) + q_{11}(x) u(x,1) + q_{12}(x) u(x,2) = 0 & \text{in } O, \\ \mathcal{L}_2 u(x,2) + q_{21}(x) u(x,1) + q_{22}(x) u(x,2) = 0 & \text{in } O, \\ u(x,i) = \varphi(x,i) & \text{on } \partial O. \end{cases} $$
It follows that the solution of the above problem has the form:
$$ u(x,i) = E_{x,i}\Bigl[ \varphi(X(\tau), \alpha(\tau)) \exp\Bigl\{ \int_0^\tau \tilde{q}(X(s), \alpha(s))\, ds \Bigr\} \Bigr], $$
where $\tilde{q}(x,i) = q_{ii}(x) + q_{ij}(x)$ (with $j \neq i$) and $\alpha(t)$ is a two-state process satisfying
$$ P\{\alpha(t+\delta) = j \mid \alpha(t) = i, X(s), \alpha(s), s \leq t\} = q_{ij}(X(t))\,\delta + o(\delta), \quad j \neq i. $$
Hence, if the generator $Q(x) = \begin{pmatrix} q_{11}(x) & q_{12}(x) \\ q_{21}(x) & q_{22}(x) \end{pmatrix}$ satisfies the q-property, then $\tilde{q}(x,i) = 0$ for all $x$, and the solution reduces to
$$ u(x,i) = E_{x,i}\, \varphi(X(\tau), \alpha(\tau)), $$
which agrees with the solution to the Dirichlet problem given by:
$$ \begin{cases} \mathcal{L} u(x,i) = 0 & \text{in } O \times \{1,2\}, \\ u(x,i) = \varphi(x,i) & \text{on } \partial O \times \{1,2\}. \end{cases} $$
Remark 3 In closing, we make the following remark. Recall that a vector $\gamma = (\gamma_1, \ldots, \gamma_n)$ with nonnegative integer components is referred to as a multi-index. Put $|\gamma| = \gamma_1 + \cdots + \gamma_n$, and define $D_x^\gamma$ as
$$ D_x^\gamma = \frac{\partial^{|\gamma|}}{\partial x_1^{\gamma_1} \cdots \partial x_n^{\gamma_n}}. $$

Let us state another condition.

(A0) For each $i \in M$, $b(\cdot,i)$ and $\sigma(\cdot,i)$ have continuous partial derivatives with respect to the variable $x$ up to the second order, and
$$ |D_x^\gamma b(x,i)| + |D_x^\gamma \sigma(x,i)| \leq K_0 (1 + |x|^\beta), $$

where $K_0$ and $\beta$ are positive constants and $\gamma$ is a multi-index with $|\gamma| \leq 2$.

In Theorems 2 and 3, we used the approach in [6] to derive the desired equations. If we assume that (A0) holds, then the functions defined by the stochastic representations (12) and (14) are smooth and classical solutions to the systems of parabolic equations (13) and (15), respectively; see [2] for further details.

Declarations

Acknowledgements

The research of N. Baran and G. Yin was supported in part by the Army Research Office under grant W911NF-12-1-0223. The research of C. Zhu was supported in part by the National Science Foundation under DMS-1108782, and a grant from the UWM Research Growth Initiative.

Authors’ Affiliations

(1)
Department of Mathematics, Wayne State University
(2)
Department of Mathematical Sciences, University of Wisconsin-Milwaukee

References

  1. Yin G, Zhu C: Hybrid Switching Diffusions: Properties and Applications. Springer, New York; 2010.
  2. Yin G, Zhu C: Properties of solutions of stochastic differential equations with continuous-state-dependent switching. J. Differ. Equ. 2010, 249: 2409-2439. doi:10.1016/j.jde.2010.08.008
  3. Friedman A: Stochastic Differential Equations and Applications. Dover, New York; 2006.
  4. Khasminskii RZ: Stochastic Stability of Differential Equations. 2nd edition. Springer, Berlin; 2012.
  5. Mao X: Stochastic Differential Equations and Their Applications. Horwood, Chichester; 1997.
  6. Øksendal B: Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin; 2010.
  7. Mitidieri E, Sweers G: Weakly coupled elliptic systems and positivity. Math. Nachr. 1995, 173(1): 259-286. doi:10.1002/mana.19951730115
  8. Freidlin M: Functional Integration and Partial Differential Equations. Annals of Mathematics Studies 109. Princeton University Press, Princeton; 1985.
  9. Skorokhod A: Asymptotic Methods in the Theory of Stochastic Differential Equations. Vol. 78. American Mathematical Society; 2008.
  10. Weinberger HF: Some remarks on invariant sets for systems. In Maximum Principles and Eigenvalue Problems in Partial Differential Equations. Pitman Research Notes in Mathematics 173. 1988, 189-207.

Copyright

© Baran et al.; licensee Springer. 2013

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.