Backward stochastic differential equations with Markov chains and related asymptotic properties
Advances in Difference Equations volume 2013, Article number: 285 (2013)
Abstract
This paper is concerned with the solvability of a new kind of backward stochastic differential equations whose generator f is affected by a finite-state Markov chain. We also present the asymptotic property of backward stochastic differential equations involving a singularly perturbed Markov chain with weak and strong interactions and then apply this result to the homogenization of a system of semilinear parabolic partial differential equations.
MSC:60H10, 35R60.
1 Introduction
Pioneered by the works of Pardoux and Peng [1] and Duffie and Epstein [2] on nonlinear backward stochastic differential equations (BSDEs) driven by a Brownian motion, BSDEs have been extensively studied because of their deep connections with mathematical finance, stochastic control, and partial differential equations (PDEs); see [3, 4], among others. Meanwhile, much research has been devoted to more general settings of continuous-time diffusions or jump diffusions, such as BSDEs driven by a Lévy process (see [5, 6]) and BSDEs driven by both a Brownian motion and a Poisson random measure (see [7, 8]). For BSDEs driven by Markov chains, [9] considered the following kind of BSDE:
where the driving martingale is associated with a continuous-time, finite-state Markov chain. Subsequently, [10–12] further studied this kind of BSDE.
In this paper, we study the BSDE whose generator f is directly affected by a continuous-time, finite-state Markov chain. Consider the following BSDE on a complete probability space:
Here B is a d-dimensional Brownian motion, and α is a continuous-time, finite-state Markov chain independent of B, with state space ℳ and a given generator. The generator f of BSDE (1) can be regarded as disturbed by a random environment, taking a set of discrete values described by the Markov chain α. One major objective of this paper is to find a pair of adapted processes as its solution. Here, for any process , and .
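The displayed form of BSDE (1) can be sketched as follows. This is a standard reconstruction consistent with the surrounding description (terminal value ξ, generator f modulated by the chain α, driven by B), not necessarily the paper's display term for term:

```latex
Y_t = \xi + \int_t^T f\!\left(s, \alpha_s, Y_s, Z_s\right)\,\mathrm{d}s
      - \int_t^T Z_s\,\mathrm{d}B_s, \qquad 0 \le t \le T .
```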
When studying the solvability of BSDE (1) with the Markov chain α, the family of σ-fields involved fails to be a filtration, so the conventional approach [1, 3] based on the Itô representation theorem and a contraction mapping cannot be applied directly. To tackle this problem, inspired by the method for backward doubly stochastic differential equations [13], we construct an enlarged filtration generated by the Brownian motion B and the Markov chain α and propose a corresponding extended Itô representation theorem. By virtue of this theoretical tool, we prove the existence of a solution adapted to the enlarged filtration and then verify the desired adaptedness.
When various factors in the random environment are considered, the underlying Markov chain inevitably has a large state space, and the corresponding BSDE (1) becomes increasingly complicated. In many practical situations, the states of the underlying Markov chain can be divided into several classes such that the chain fluctuates rapidly among states within the same class and jumps less frequently between classes. One hierarchical approach to reduce the complexity, explored by Yin and Zhang [14], introduces a small parameter ε and leads to a singularly perturbed Markov chain. Its asymptotic properties can be studied through a Markov chain with a considerably smaller state space. There is an extensive literature on singularly perturbed Markov chains and their applications; we refer the interested reader to [14, 15] for a rather complete view of this research field.
In this paper, we study the following BSDE with a singularly perturbed Markov chain:
Combining the asymptotic property of the singularly perturbed Markov chain, we show that the solution sequence converges weakly, with the limit formed by the solution of a simpler BSDE involving the limit aggregated Markov chain. To the best of our knowledge, this is the first study of such an asymptotic property in BSDE theory. As a straightforward application of our asymptotic result, we show the homogenization of a system of semilinear parabolic PDEs with a singularly perturbed Markov chain.
This paper is organized as follows. In Section 2, we study the solvability of BSDE (1) with a Markov chain. Section 3 gives the asymptotic property of BSDEs with a singularly perturbed Markov chain; two illustrative examples are also presented. In Section 4, we apply the asymptotic result to the homogenization of a system of semilinear parabolic PDEs with a singularly perturbed Markov chain. Finally, we give some concluding remarks in Section 5.
Throughout this paper, we introduce the following notations: is the space of -valued -adapted random variable ξ satisfying ; denotes the space of -valued -adapted stochastic processes satisfying ; is the space of -valued -adapted continuous stochastic processes satisfying .
2 BSDEs with Markov chains
In this section, we study the solvability of BSDE (1) with the Markov chain α:
For each , define . We make the following assumptions for the coefficients of BSDE (1).
(H1) (i) ; (ii) satisfies that , , . Also, there exists a constant such that , , ,
The main purpose of this section is to prove the following solvability of BSDE (1).
Theorem 2.1 Under hypothesis (H1), there exists a unique solution pair for BSDE (1).
To obtain the solvability of BSDE (1), we borrow ideas from the treatment of backward doubly stochastic differential equations (see [13]). Firstly, we construct an enlarged filtration generated by the Brownian motion B and the Markov chain α. Then we propose a related extended Itô representation theorem. After that, for the special case where f is independent of y and z, we show the existence of a solution of BSDE (1) and verify its adaptedness with respect to the original filtration. Finally, the solvability of BSDE (1) in the general case is proved.
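In symbols, the construction can be sketched as follows; the notation below is a plausible reconstruction based on the cited initially-enlarged-filtration result [16], not the paper's verbatim display:

```latex
\mathcal{G}_t := \mathcal{F}^{B}_{t} \vee \mathcal{F}^{\alpha}_{T}, \qquad 0 \le t \le T;
\qquad
\xi = \mathbb{E}\left[\xi \mid \mathcal{G}_0\right] + \int_0^T Z_s\,\mathrm{d}B_s
\quad \text{for } \xi \in L^2(\mathcal{G}_T),
```

with Z a square-integrable process adapted to the enlarged filtration, uniqueness following from the Itô isometry.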
Proposition 2.1 Let . Then there exists a unique stochastic process such that
This result can be derived from the martingale representation theorem for initially enlarged filtration (see Theorem 4.2 in [16]). For the readers’ convenience, we also give a more specific proof in the Appendix.
Similarly, for all , we can introduce the filtration defined by . We then have the following propositions.
Proposition 2.2 Let . Then there exists a unique stochastic process such that
Proposition 2.3 Under hypothesis (H1), the following BSDE
has a solution pair .
Proof Applying Hölder’s inequality, we have
Combining the assumptions of ξ and f in hypothesis (H1), we can conclude that . , define . Applying Proposition 2.1 yields that there exists a unique stochastic process such that
, letting
it can be verified that and satisfies BSDE (3). Besides, combining the Burkholder-Davis-Gundy inequality and the form of BSDE (3), we can conclude that Y is continuous and satisfies .
The remaining work is to show that the -adapted process is also adapted to . Recall that is defined by . We firstly show that , is -measurable. Combining the definitions of and yields . For ease of notation, we denote . Obviously, ϑ is -measurable. Let be a dense subset of with . For each integer , denote , , and . It is clear that and . Applying the Doob-Dynkin lemma (Lemma A.1) and the Itô representation theorem to , we have
with and . Therefore
Since , we have and .
Now, let us show the convergence of equation (4) as l tends to ∞. For its left-hand side, by Doob’s martingale convergence theorem (Lemma A.2), . Thus
For the right-hand side of equation (4),
Thus is a Cauchy sequence indexed by l in . Hence, by the Itô isometry, is a Cauchy sequence indexed by l in . Therefore the right-hand side of equation (4) converges to some -measurable random variable, which yields that is also -measurable.
To prove the adaptability of Z, we rewrite BSDE (3) as
with the right-hand side being -measurable. Applying Proposition 2.2 yields that is -adapted. Then, guaranteed by the continuity property of the Markov chain α, we can conclude that is also adapted to , and the proof is completed. □
Now let us consider the solvability of BSDE (1) for the general case.
Proof For , using the assumptions in hypothesis (H1) and Hölder’s inequality, we have
Thus we have . Setting
and following the proof of Proposition 2.3, we can conclude that the mapping
is well defined. Notice is a continuous-time finite-state Markov chain with the state space . As α varies with t, , . Under hypothesis (H1), we have , ,
Then, using the conventional approach (see [1, 3]), we can prove that this mapping is a contraction. The details are omitted for brevity. Thus BSDE (1) has a unique solution pair in .
Together with the form of BSDE (1) and the Burkholder-Davis-Gundy inequality, we can conclude that Y is continuous and Y∈ . □
3 BSDEs with singularly perturbed Markov chains
When the underlying Markov chain has a large state space, one natural method to reduce the complexity is a hierarchical approach, which leads to a singularly perturbed Markov chain (see [14]). In this section we study the asymptotic property of a BSDE with such a singularly perturbed Markov chain.
3.1 Preliminary for singularly perturbed Markov chains
Consider a continuous-time, ε-dependent singularly perturbed Markov chain with a time-invariant generator , . Then its state space can be written as such that , is a weakly irreducible generator corresponding to the states in . When the magnitudes of the generators and have the same order, the singularly perturbed Markov chain fluctuates rapidly within a single group and jumps less frequently among the groups. For notational convenience, for all , we denote the states in as . For more details on this singularly perturbed Markov chain, we refer the interested reader to [14]. Here, we only recall the following asymptotic property of singularly perturbed Markov chains.
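The two-time-scale behaviour can be illustrated numerically by sampling a chain whose generator splits into a fast part scaled by 1/ε and a slow part, and observing that the group label of the state changes far more slowly than the state itself. The generator blocks and rates below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def simulate_chain(Q, x0, T, rng):
    """Sample a path of a continuous-time finite-state Markov chain with
    generator Q on [0, T], started from state x0 (Gillespie-style)."""
    times, states = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        rate = -Q[x, x]                  # total jump rate out of state x
        if rate <= 0:                    # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)
        if t >= T:
            break
        p = np.clip(Q[x], 0.0, None)     # off-diagonal jump rates
        p[x] = 0.0
        x = int(rng.choice(len(p), p=p / p.sum()))
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

# Hypothetical two-time-scale generator Q_eps = Q_tilde/eps + Q_hat: fast
# switching inside the groups {0, 1} and {2, 3}, slow switching between them.
Q_tilde = np.array([[-1.0,  1.0,  0.0,  0.0],
                    [ 1.0, -1.0,  0.0,  0.0],
                    [ 0.0,  0.0, -2.0,  2.0],
                    [ 0.0,  0.0,  2.0, -2.0]])
Q_hat = np.array([[-0.5,  0.0,  0.5,  0.0],
                  [ 0.0, -0.5,  0.0,  0.5],
                  [ 0.5,  0.0, -0.5,  0.0],
                  [ 0.0,  0.5,  0.0, -0.5]])
eps = 0.01
rng = np.random.default_rng(0)
times, states = simulate_chain(Q_tilde / eps + Q_hat, 0, 10.0, rng)
groups = states // 2                     # aggregated process: group label only
```

On a typical run the state jumps on the order of a thousand times, while the aggregated group label switches only a handful of times, which is exactly the aggregation effect exploited in this section.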
Proposition 3.1 [14]
Define the aggregated process as follows: , , when . Then
(1) As ε → 0, the aggregated process converges weakly to a continuous-time Markov chain with the generator
Here , is the quasi-stationary distribution of and ;
(2) for any bounded deterministic function , ,
Here is the indicator function of the set A.
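The quasi-stationary distribution of a weakly irreducible generator (the system νQ = 0 with the components of ν summing to one, as in the endnote) can be computed by stacking the two conditions into one least-squares solve. A minimal sketch with a hypothetical 2-state generator:

```python
import numpy as np

def quasi_stationary(Q):
    """Quasi-stationary distribution of a weakly irreducible generator Q:
    the unique nonnegative row vector nu with nu @ Q = 0 and nu.sum() == 1."""
    m = Q.shape[0]
    A = np.vstack([Q.T, np.ones((1, m))])   # stack nu Q = 0 with nu 1 = 1
    b = np.zeros(m + 1)
    b[-1] = 1.0
    nu, *_ = np.linalg.lstsq(A, b, rcond=None)
    return nu

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])               # hypothetical generator
nu = quasi_stationary(Q)                    # -> approximately [2/3, 1/3]
```

Weak irreducibility guarantees that the stacked system has a unique nonnegative solution, so the least-squares solve recovers it exactly up to floating-point error.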
3.2 Asymptotic property of BSDEs with singularly perturbed Markov chains
Consider the Skorohod space of càdlàg functions equipped with the Jakubowski S-topology. On this space, we study the asymptotic property of the following BSDE with a singularly perturbed Markov chain, whose generator f does not depend on z:
In addition to hypothesis (H1), we make the following assumption for the coefficients in BSDE (5).
(H2) (i) ; (ii) There exists a constant such that, , .
Theorem 3.1 Under hypothesis (H1) and hypothesis (H2), the sequence of processes converges weakly to the process as on the space . Here is the solution pair to the following BSDE with the limit Markov chain :
where is a d-dimensional Brownian motion satisfying , and for .
It follows from Theorem 2.1 that both BSDEs (5) and (6) have unique solutions and . To prove the expected convergence in Theorem 3.1, we firstly show the tightness property for and then identify the limit.
Step 1: tightness of
Proposition 3.2 There exists a positive constant such that
This result can be easily obtained by the standard estimation methods. We omit the proof here and put the details in our technical report (Proposition 3.2) on the website: http://arxiv.org/abs/1009.5074.
For notational convenience, we set . Thus BSDE (5) can be rewritten as
Proposition 3.3 The sequence of processes indexed by is tight on the space equipped with the Jakubowski S-topology.
Before giving the proof, let us firstly recall the Meyer-Zheng tightness criteria (see [17, 18] for more details): on the filtered probability space , the sequence of quasi-martingales indexed by n is tight whenever
Here
with the supremum taken over all partitions of the interval .
Proof On the space , for fixed , define the filtration . We firstly prove the process is a -martingale. Let be a dense subset of with . For and fixed , we denote and . It is clear that and . Since is -measurable, by Lemmas A.1 and A.2, we have
and
where each is -measurable. Thus
Letting , we have . Thus is a -martingale and it gives . Using the Burkholder-Davis-Gundy inequality, for any ,
From Proposition 3.2, we deduce that .
, for any partition of , we have
Combining hypothesis (H1) and Proposition 3.2, we have . Thus the ‘Meyer-Zheng tightness criteria’ are fully satisfied, i.e.,
and the tightness of is obtained. □
Step 2: identification of the limit
Since is tight in , there exists a subsequence of , which we still denote by , converging weakly toward a càdlàg process .
Proposition 3.4 For the limit process , we have
-
(i)
there exists a countable subset D of such that for all ,
(8)
Here the definition of the Markov chain and its relation to the singularly perturbed Markov chain are presented in Proposition 3.1;
-
(ii)
for a d-dimensional Brownian motion , denote the filtration . If is -adapted and ξ is -measurable, then is an -martingale.
Proof Consider BSDE (7) satisfied by the process
and recall that converges weakly to . To obtain its weak limit as , we only need to consider the second term on the right-hand side
Here is the aggregated process for the singularly perturbed Markov chain given in Proposition 3.1, and for . Since converges weakly to , we have
In addition, as a direct consequence of Proposition 3.1, Proposition 3.2 and Proposition 3.3,
The limit of BSDE (7) as leads to assertion (i).
Now, let us prove assertion (ii). For any , let be a continuous mapping from . Here is the state space of aggregated process . , noticing that is a -martingale, and are -adapted, for all , we have
By the same analysis as for assertion (i), we can prove that converges weakly to as . Since converges weakly to as , we have
Here is a d-dimensional Brownian motion with . Under the assumptions that is -adapted and ξ is -measurable, the adaptedness of with respect to follows. Since the choices of , and were arbitrary, we can conclude that is an -martingale. □
We now return to finish the proof of Theorem 3.1. Denote . For all , applying Itô’s formula to on , we have
From Gronwall’s lemma, we obtain for all . Since is continuous, is càdlàg, and D is countable, we get , P-a.s. for all . The identity between and then follows directly.
Remark 3.1 In this paper, we study the asymptotic property of the solution to the BSDE whose generator f does not depend on z. In the general case where f also depends on z, proving the corresponding asymptotic property requires the tightness of the sequence of second solution components. To our knowledge, due to the lack of suitable estimates, it is hard to show this sequence to be tight. This will be one of our future research topics.
3.3 Examples
The limit Markov chain has a much smaller state space than the singularly perturbed Markov chain. Thus, by Theorem 3.1, the asymptotic property of BSDE (5) with the singularly perturbed Markov chain can be characterized by the simpler limit BSDE (6). This advantage is illustrated more clearly by the following two examples.
Example 3.1 Consider the case where the singularly perturbed Markov chain in BSDE (5) is a fast-varying noise process, i.e., the generator is weakly irreducible with the state space ℳ. From Theorem 3.1, as ε → 0, the corresponding limit BSDE is
where is the quasi-stationary distribution of . Thus its solution, an -adapted process Y, can be used to study the asymptotic distribution of the sequence of -adapted processes indexed by ε.
Example 3.2 Consider the following BSDE:
where the generator of the Markov chain α is
and the corresponding state space is . It is obvious that the transition rate between and is larger than the transition rates between and the other states. Owing to this property, we can use Theorem 3.1 to establish approximation properties for the solution of BSDE (10).
Firstly, we rewrite the generator Q in the form of the generator for a singularly perturbed Markov chain
Here , and the number 0.05 are chosen to guarantee and to be generators with the same order of magnitude.
Then, with the assigned and , we introduce an ε-dependent singularly perturbed Markov chain with the generator
The state space ℳ can be divided into two groups with and , and the related weakly irreducible generators are and .
After that, following the procedures in Sections 3.1 and 3.2, we define the aggregated process , with
We can calculate that the generator of the limit Markov chain is , and the quasi-stationary distributions of and are and . By Theorem 3.1, we obtain the following limit BSDE:
where and . Thus we can use the distribution of its solution as an approximation for that of the solution to the original BSDE (10).
Noticing that the limit averaged Markov chain has two states while the original one has three, we have reduced the complexity of model (10). This advantage becomes even clearer when the state space of the original Markov chain is much larger.
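The aggregation step in Example 3.2 can be sketched numerically. Following the averaging formula of Yin and Zhang [14] quoted in Proposition 3.1, the limit generator is obtained by sandwiching the slow part of the generator between the block row of quasi-stationary distributions and the stacked indicator columns of the groups. Since the example's displayed matrices were lost, the 3-state slow part and distributions below are hypothetical stand-ins:

```python
import numpy as np

def limit_generator(Q_hat, nus):
    """Aggregated generator bar_Q = diag(nu^1,...,nu^l) @ Q_hat @ 1_tilde,
    where nus[k] is the quasi-stationary distribution of the k-th fast block
    and 1_tilde stacks the indicator columns 1_{m_1},...,1_{m_l} of the groups."""
    sizes = [len(nu) for nu in nus]
    n, l = sum(sizes), len(nus)
    Nu = np.zeros((l, n))    # block-diagonal row of quasi-stationary dists
    One = np.zeros((n, l))   # stacked indicator columns of the groups
    start = 0
    for k, nu in enumerate(nus):
        Nu[k, start:start + sizes[k]] = nu
        One[start:start + sizes[k], k] = 1.0
        start += sizes[k]
    return Nu @ Q_hat @ One

# Hypothetical 3-state slow part with groups {s_11, s_12} and {s_21}.
Q_hat = np.array([[-1.0,  0.0,  1.0],
                  [ 0.0, -1.0,  1.0],
                  [ 1.0,  1.0, -2.0]])
nus = [np.array([0.5, 0.5]), np.array([1.0])]
bar_Q = limit_generator(Q_hat, nus)   # a 2x2 generator for the limit chain
```

Here the three-state model collapses to a two-state limit generator whose rows sum to zero, mirroring the state-space reduction described above.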
4 Application: homogenization of a system of semilinear parabolic PDEs with singularly perturbed Markov chains
It is well known that BSDEs provide a probabilistic approach to studying a class of semilinear parabolic PDEs (see [19–21]). In this section, as an application of the asymptotic results obtained above, we show the homogenization property of a sequence of semilinear parabolic PDEs with a singularly perturbed Markov chain.
For any , consider the following semi-linear parabolic PDE with the singularly perturbed Markov chain :
Here ℒ is the following second-order differential operator:
As in [19], we adopt the following notations: is the space of functions of class from to , is the space of functions of class whose partial derivatives of order less than or equal to k are bounded, and is the space of functions of class which, together with all their partial derivatives of order less than or equal to k, grow at most like a polynomial function of the variable x at infinity. Also, we make the following assumptions for the coefficients h, b, σ, and f.
(H3) (i) , , and ; (ii) For , , is of class . Moreover, , and the first-order partial derivative in u is bounded on , as well as their derivatives of order one and two with respect to x and u.
For the backward PDE (11), we consider the solution restricted to . For a fixed , let the process be the unique solution to equation (11). We have the following homogenization result.
Theorem 4.1 Under hypothesis (H3), as , the sequence of converges weakly to a process , which solves the following PDE:
Here is the limit averaged Markov chain with the state space given in Proposition 3.1, and for , .
Before giving the proof, we first present a relation between backward PDE (11) and the following FBSDE with the singularly perturbed Markov chain :
For , define , and . Then is defined on .
Proposition 4.1 For a fixed , the process solves the backward PDE (11), i.e., , .
The proof of this proposition, similar to Theorem 3.2 in [19], is omitted here, but it is fully presented in our technical report (Theorem 4.2) on the website: http://arxiv.org/abs/1009.5074. Now we are ready to prove Theorem 4.1.
Proof Noticing that the forward SDE (13) does not involve the singularly perturbed Markov chain, by hypothesis (H3) we can verify that the terminal value of BSDE (14), , satisfies hypotheses (H1)-(H2). Denote . Then also satisfies hypotheses (H1)-(H2). By Theorem 3.1, we obtain that for all , converges weakly to as , where satisfies
For all , denote . From Proposition 4.1, we know that the process solves backward PDE (12). Noticing that solves PDE (11), the result follows. □
5 Conclusion
In this paper, the solvability of a new kind of BSDE whose generator involves a finite-state Markov chain is studied. To overcome the difficulty of applying the conventional approach with a contraction mapping, an enlarged filtration is constructed and a related Itô representation result is proposed. Then, for the case where a singularly perturbed Markov chain is involved, we present the asymptotic property of the corresponding BSDE by virtue of the related property of the singularly perturbed Markov chain. This result provides a probabilistic approach to the homogenization property of a system of backward PDEs with a singularly perturbed Markov chain.
Note that in this paper we give the homogenization result for the PDE system with Markov chains only when a classical solution exists under sufficiently regular assumptions. In subsequent work, we will study weak solutions in a Sobolev space for the related PDE system and the homogenization problem by means of BSDEs with Markov chains.
As in [22], the Markov chain can be used to capture market trends, which are crucial factors affecting most investment decisions. We believe that our results have applications in such financial markets owing to the deep connection between BSDEs and finance. Applications of this kind of BSDE in optimal control would also be interesting to investigate in future research.
Appendix: A specific proof for Proposition 2.1
Let us first recall the Doob-Dynkin lemma and Doob’s martingale convergence theorem before giving the proof.
Lemma A.1 (Doob-Dynkin lemma)
Let be two given functions. Then Y is -measurable if and only if for some Borel measurable function g: .
Lemma A.2 (Doob’s martingale convergence theorem)
Let be a filtration on the space , and . Then
Proof The uniqueness part of Proposition 2.1 follows straightforwardly from the Itô isometry, so we only focus on the existence. Let , be two dense subsets of with . For each integer , we denote , , and . It is clear that , , , , and .
For , applying Lemma A.2, we get
By Lemma A.1, , there exists a Borel measurable function such that . Thus we can write as follows:
where is -measurable. For ease of notation, let , , and . For the above equation, applying the Itô representation theorem to each yields
Noticing that , , and α is independent of B, we have . Taking the conditional expectation of equation (16) with respect to , one can get
Thus equation (16) can be rewritten as
where
We are going to show that as , equation (17) converges to . For the left-hand side, applying Doob’s martingale convergence theorem and using the fact that and yields
Then
and the convergence of the first term on the right-hand side of equation (17) is guaranteed. The remaining work is to prove that is a Cauchy sequence. Actually, by the Itô isometry, we have
Noticing that , hence it converges to some , and the proof is completed. □
Endnote
A generator Q is called weakly irreducible if the system of equations and has a unique nonnegative solution. This nonnegative solution is called the quasi-stationary distribution of Q.
Abbreviations
- BSDEs: backward stochastic differential equations
- PDEs: partial differential equations
References
Pardoux E, Peng S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 1990, 14: 55-61. 10.1016/0167-6911(90)90082-6
Duffie D, Epstein L: Stochastic differential utility. Econometrica 1992, 60(2):353-394. 10.2307/2951600
El Karoui N, Peng S, Quenez MC: Backward stochastic differential equations in finance. Math. Finance 1997, 7: 1-71. 10.1111/1467-9965.00022
Eyraud-Loisel A: Backward stochastic differential equations with enlarged filtration: option hedging of an insider trader in a financial market with jumps. Stoch. Process. Appl. 2005, 115: 1745-1763. 10.1016/j.spa.2005.05.006
Nualart D, Schoutens W: BSDE’s and Feynman-Kac formula for Lévy processes with applications in finance. Bernoulli 2001, 7(5):761-776. 10.2307/3318541
Bahlali K, Eddahbi M, Essaky E: BSDE associated with Lévy processes and application to PDIE. J. Appl. Math. Stoch. Anal. 2003, 16: 1-17. 10.1155/S1048953303000017
Tang S, Li X: Necessary conditions for optimal control of stochastic systems with random jumps. SIAM J. Control Optim. 1994, 32(5):1447-1475. 10.1137/S0363012992233858
Kharroubi I, Ma J, Pham H, Zhang J: Backward SDEs with constrained jumps and quasi-variational inequalities. Ann. Probab. 2010, 38(2):794-840. 10.1214/09-AOP496
Cohen S, Elliott R: Solutions of backward stochastic differential equations on Markov chains. Commun. Stoch. Anal. 2008, 2(2):251-262.
Cohen S, Elliott R: On Markovian solutions to Markov chain BSDEs. Numer. Algebra Control Optim. 2012, 2: 257-267.
Cohen S, Szpruch L: Comparisons for backward stochastic differential equations on Markov chains and related no-arbitrage conditions. Ann. Appl. Probab. 2010, 20: 267-311. 10.1214/09-AAP619
Lu W, Ren Y: Anticipated backward stochastic differential equations on Markov chains. Stat. Probab. Lett. 2013, 83: 1711-1719. 10.1016/j.spl.2013.03.022
Pardoux E, Peng S: Backward doubly stochastic differential equations and systems of quasilinear SPDEs. Probab. Theory Relat. Fields 1994, 98(2):209-227. 10.1007/BF01192514
Yin G, Zhang Q: Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach. Springer, Berlin; 1997.
Yin G, Zhang Q: Continuous-Time Markov Chains and Applications: A Two-Time-Scale Approach. Springer, Berlin; 2013.
Amendinger J: Martingale representation theorems for initially enlarged filtrations. Stoch. Process. Appl. 2000, 89: 101-116. 10.1016/S0304-4149(00)00015-6
Meyer P, Zheng W: Tightness criteria for laws of semimartingales. Ann. Inst. Henri Poincaré Probab. Stat. 1984, 20(4):353-372.
Bahlali K, Elouaflin A, Pardoux E: Homogenization of semilinear PDEs with discontinuous averaged coefficients. Electron. J. Probab. 2009, 14(18):477-499.
Pardoux E, Peng S: Backward stochastic differential equations and quasilinear parabolic partial differential equations. Lecture Notes in Control and Inform. Sci. 176. Stochastic Partial Differential Equations and Their Applications 1992, 200-217.
Barles G, Buckdahn R, Pardoux E: Backward stochastic differential equations and integral-partial differential equations. Stoch. Stoch. Rep. 1997, 60: 57-83. 10.1080/17442509708834099
Pardoux E, Sow AB: Homogenization of a periodic degenerate semilinear elliptic PDE. Stoch. Dyn. 2011, 11: 475-493. 10.1142/S0219493711003401
Zhang Q, Yin G: Nearly-optimal asset allocation in hybrid stock investment models. J. Optim. Theory Appl. 2004, 121(2):419-444.
Acknowledgements
This work was supported by the Independent Innovation Foundation of Shandong University (No. 2012GN044), the Natural Science Foundation of China (Nos. 10921101, 61174092 and 61174078), and the National Science Fund for Distinguished Young Scholars of China (No. 11125102). It is our great pleasure to thank Prof. Qing Zhang of the University of Georgia and Prof. Chenggui Yuan of Swansea University for many useful discussions and suggestions.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
Authors contributed equally in writing this article. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Tang, H., Wu, Z. Backward stochastic differential equations with Markov chains and related asymptotic properties. Adv Differ Equ 2013, 285 (2013). https://doi.org/10.1186/1687-1847-2013-285