Open Access

One kind of multiple dimensional Markovian BSDEs with stochastic linear growth generators

Advances in Difference Equations 2015, 2015:265

Received: 8 January 2015

Accepted: 12 August 2015

Published: 27 August 2015


In this article, we deal with a multidimensional coupled Markovian BSDE system whose generators are of stochastic linear growth with respect to the volatility processes. An existence result is provided by means of approximation techniques.


Keywords: backward stochastic differential equations; stochastic differential equations; approximation techniques



1 Introduction

Backward stochastic differential equations (BSDEs) were first introduced by Bismut [1] in the linear case to solve optimal control problems. This notion was later generalized by Pardoux and Peng [2] to the general nonlinear form, and existence and uniqueness results were proved under the classical Lipschitz condition. A class of BSDEs was also introduced by Duffie and Epstein [3] from the viewpoint of recursive utility in economics. During the past twenty years, BSDE theory has attracted many researchers’ interest and has been developed in various directions. Among the abundant literature, we refer readers to the collection edited by El-Karoui and Mazliak [4] for the early works before 1996. Surveys on BSDE theory also include [5], written by El-Karoui, Hamadène and Matoussi and collected in book [6] (see Chapter 8), and the book by Yong and Zhou [7] (see Chapter 7). Some applications to optimization problems can be found in [5]. For other applications, such as those in economics, we refer to El-Karoui, Peng and Quenez [8]. Recently, a complete review of BSDE theory, as well as some new results on nonlinear expectation, was given in the survey paper by Peng [9].

One possible extension of the pioneering work [2] is to relax the uniform Lipschitz condition on the coefficient as much as possible. Mao [10] provided an existence and uniqueness result under a condition weaker than the Lipschitz one. Hamadène [11] introduced a one-dimensional BSDE with a locally Lipschitz generator. Later, Lepeltier and San Martin [12] provided an existence result for the minimal solution of a one-dimensional BSDE whose generator f is continuous and of linear growth in \((y, z)\). When f is uniformly continuous in z with respect to \((\omega,t)\) and independent of y, a uniqueness result was obtained by Jia [13]. BSDEs with generators of polynomial growth were studied by Briand [14]. The case of one-dimensional BSDEs whose coefficient is monotonic in y and non-Lipschitz in z is treated in [15]. Concerning BSDEs with continuous drivers of quadratic growth, a classical reference is Kobylanski [16], who investigated a one-dimensional BSDE with driver satisfying \(|f(t,y,z)|\leq C(1+|y|+|z|^{2})\) and bounded terminal value. This result was generalized by Briand and Hu [17] to the case of unbounded terminal value.

There are plenty of works on one-dimensional BSDEs. However, only limited results are available in the multidimensional case. We refer to Hamadène, Lepeltier and Peng [18] for an existence result for a Markovian BSDE system whose driver is of linear growth in \((y,z)\) and of polynomial growth in the state process, and to Bahlali [19, 20] for high-dimensional BSDEs with locally Lipschitz coefficients.

In the present article, we consider the following high-dimensional BSDE in a Markovian framework:
$$Y_{t}^{i}=g^{i}(X_{T})+\int _{t}^{T}H_{i}\bigl(s, X_{s}, Y_{s}^{1},\ldots,Y_{s}^{n},Z_{s}^{1}, \ldots,Z_{s}^{n}\bigr)\,ds-\int_{t}^{T} Z_{s}^{i}\,dB_{s} $$
for \(i=1,2,\ldots,n\), with process X as a solution of a stochastic differential equation (SDE for short). For each \(i=1,2,\ldots,n\), the coefficient \(H_{i}\) is continuous on \((y^{1},\ldots,y^{n},z^{1},\ldots,z^{n})\) and satisfies
$$\bigl\vert H_{i}\bigl(t,x,y^{1},\ldots,y^{n},z^{1}, \ldots,z^{n}\bigr)\bigr\vert \leq C\bigl(1+\vert x\vert \bigr) \bigl\vert z^{i}\bigr\vert + C\bigl(1+|x|^{\gamma }+\bigl\vert y^{i}\bigr\vert \bigr),\quad \gamma>0, $$
which means that \(H_{i}\) is of stochastic linear growth in \(Z^{i}\), or in other words, it is of linear growth ω by ω. A similar situation was considered in [21] in the context of a nonzero-sum stochastic differential game problem. However, in [21] the generator \(H_{i}\) is independent of \((y^{1},\ldots,y^{n})\). To the best of our knowledge, this general form of a high-dimensional coupled BSDE system with stochastic linear growth generators has not been considered in the literature, which is the main motivation of the present work.

The rest of this article is organized as follows. In Section 2, we give some notations and the assumptions on the coefficients; the properties of the forward SDE are also provided. The main existence result for the BSDE is proved in Section 3, where a measure domination result plays an important role. This domination result holds true when the diffusion coefficient of the SDE satisfies the uniform ellipticity condition. For the proof of the main result, we adopt an approximation scheme based on the well-known mollification technique: the irregular coefficients are approximated by a sequence of Lipschitz functions. We then obtain uniform estimates for the sequence of solutions as well as convergence results in appropriate spaces. Finally, we verify that the limit of the solutions is exactly a solution of the original BSDE, which completes the proof.

2 Notations and assumptions

In this section, we give some basic notations and the standing assumptions of this paper, as well as some useful results. Let \((\Omega, \mathcal{F}, \mathbf{P})\) be a probability space on which an m-dimensional Brownian motion \(B=(B_{t})_{0\leq t\leq T}\) is defined, with integer \(m\geq1\) and fixed \(T>0\). Let \(\mathbf{F}=\{\mathcal{F}_{t}, 0\leq t\leq T\}\) denote the natural filtration generated by B, augmented by the collection \(\mathcal{N}_{\mathbf{P}}\) of P-null sets, i.e., \(\mathcal{F}_{t}=\sigma\{B_{s}, s\leq t\}\vee\mathcal{N}_{\mathbf{P}}\).

Let \(\mathcal{P}\) be the σ-algebra on \([0,T]\times\Omega\) of \(\mathcal{F}_{t}\)-progressively measurable sets. Let \(p\in[1,\infty)\) be a real constant and \(t\in[0,T]\) be fixed. We then define the following spaces: \(\mathcal{L}^{p}\) = \(\{\xi: \mathcal{F}_{t}\mbox{-measurable and } \mathbf{R}^{m}\mbox{-valued random variable s.t. } \mathbf{E}[|\xi|^{p}]<\infty\}\); \(\mathcal{S}_{t,T}^{p} = \{\varphi=(\varphi_{s})_{t\leq s \leq T} : \mathcal{P} \mbox{-measurable and }\mathbf{R}^{m}\mbox{-valued s.t. } \mathbf{E}[\sup_{s\in[t,T]}|\varphi_{s}|^{p}]< \infty\}\) and \(\mathcal{H}_{t,T}^{p}=\{\varphi=(\varphi_{s})_{t \leq s\leq T}: \mathcal{P}\mbox{-measurable and }\mathbf{R}^{m}\mbox{-valued s.t. }\mathbf{E}[ (\int_{t}^{T}|\varphi_{s}|^{2}\,ds)^{\frac{p}{2}}]< \infty\}\). Hereafter, \(\mathcal{S}_{0,T}^{p}\) and \(\mathcal{H}_{0,T}^{p}\) are simply denoted by \(\mathcal{S}_{T}^{p}\) and \(\mathcal{H}_{T}^{p}\).

The following assumptions are in force throughout this paper. Let σ be the function defined as
$$\sigma: [0,T]\times \mathbf{R}^{m} \longrightarrow \mathbf{R}^{m\times m} $$
which satisfies the following assumption.

Assumption 2.1

  1. (i)

    σ is uniformly Lipschitz w.r.t. x, i.e., there exists a constant \(C_{1}\) such that, \(\forall t \in[0, T]\), \(\forall x , x^{\prime} \in \mathbf{R}^{m}\), \(| \sigma(t,x)-\sigma(t, x^{\prime})| \leq C_{1} |x- x^{\prime}|\).

  2. (ii)

    σ is invertible and bounded and its inverse is bounded, i.e., there exists a constant \(C_{\sigma}\) such that \(\forall(t,x)\in[0,T]\times \mathbf{R}^{m}\), \(\vert \sigma(t, x)\vert +\vert \sigma^{-1}(t,x)\vert \leq C_{\sigma}\).


Remark 2.1

(Uniform elliptic condition)

Under Assumption 2.1, we can verify that there exists a real constant \(\epsilon>0\) such that for any \((t,x)\in [0,T]\times \mathbf{R}^{m}\),
$$ \epsilon I\leq\sigma(t,x)\sigma^{\top}(t,x)\leq \epsilon^{-1} I, $$
where I is the identity matrix of dimension m.
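To make the two-sided bound concrete, here is a small numerical sanity check. The diagonal diffusion matrix \(\sigma(t,x)=\operatorname{diag}(1+\frac{1}{2}\sin x_{j})\) is a hypothetical example of a coefficient satisfying Assumption 2.1 (it plays no role in the paper itself), and \(\epsilon=1/4\) works for it.

```python
import math
import random

def sigma_diag(t, x):
    # Hypothetical diagonal diffusion matrix: entries lie in [0.5, 1.5],
    # so sigma is Lipschitz and bounded with bounded inverse (Assumption 2.1).
    return [1.0 + 0.5 * math.sin(xj) for xj in x]

def elliptic_bounds(t, x):
    # For diagonal sigma, sigma(t,x) sigma(t,x)^T is diagonal with entries
    # d_j^2, so its extreme eigenvalues are simply min and max of d_j^2.
    d_sq = [d * d for d in sigma_diag(t, x)]
    return min(d_sq), max(d_sq)

eps = 0.25  # d_j^2 ranges over [0.25, 2.25], which is contained in [eps, 1/eps]
random.seed(0)
ok = True
for _ in range(100):
    t = random.random()
    x = [random.gauss(0.0, 1.0) for _ in range(3)]
    lo, hi = elliptic_bounds(t, x)
    ok = ok and (eps <= lo) and (hi <= 1.0 / eps)
print(ok)  # prints True
```

The check only illustrates the inequality for one admissible σ; in the paper the existence of such an ε follows from the boundedness of σ and of its inverse.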
Suppose that we have a system whose dynamics are described by the following stochastic differential equation: for \((t,x)\in[0,T]\times \mathbf{R}^{m}\),
$$ \textstyle\begin{cases} X_{s}^{t,x}= x+ \int_{t}^{s} \sigma(u,X_{u}^{t,x})\,dB_{u},\quad s\in [t,T ] ; \\ X_{s}^{t,x}= x,\quad s\in[0,t]. \end{cases} $$
The solution \(X^{t,x}= (X_{s}^{t,x})_{s\leq T}\) exists and is unique under Assumption 2.1 (cf. Karatzas and Shreve [22], p.289). We recall a well-known integrability result for the solution: for any fixed \((t, x)\in[0,T]\times \mathbf{R}^{m}\) and \(p\geq2\),
$$ \mathbf{E}\Bigl[\sup_{0\leq s\leq T}\bigl\vert X_{s}^{t, x}\bigr\vert ^{p} \Bigr]\leq C \bigl(1+|x|^{p}\bigr), $$
where the constant C only depends on the Lipschitz coefficient and the bound of σ.
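For intuition, the SDE and the moment bound above can be probed numerically with an Euler-Maruyama scheme. The concrete σ below is a hypothetical bounded Lipschitz coefficient chosen only for illustration, and the threshold 100 in the final check is a crude sanity bound, not the sharp constant C.

```python
import math
import random

def sigma(t, x):
    # Hypothetical bounded Lipschitz diffusion coefficient (illustrative only).
    return [1.0 + 0.5 * math.sin(xj) for xj in x]

def euler_path(t, x, T=1.0, n_steps=200, rng=None):
    # Euler-Maruyama for X_s = x + int_t^s sigma(u, X_u) dB_u on [t, T];
    # X is frozen at x on [0, t].  Returns X_T and sup_{s<=T} |X_s|_inf.
    rng = rng or random
    h = (T - t) / n_steps
    xs = list(x)
    sup_abs = max(abs(v) for v in xs)
    for k in range(n_steps):
        u = t + k * h
        s = sigma(u, xs)
        dB = [rng.gauss(0.0, math.sqrt(h)) for _ in xs]
        xs = [xs[j] + s[j] * dB[j] for j in range(len(xs))]
        sup_abs = max(sup_abs, max(abs(v) for v in xs))
    return xs, sup_abs

# Monte Carlo probe of E[sup_s |X_s^{t,x}|^p] <= C (1 + |x|^p) for p = 2.
random.seed(1)
x0 = [0.5, -1.0, 2.0]
n_paths = 500
est = sum(euler_path(0.0, x0)[1] ** 2 for _ in range(n_paths)) / n_paths
print(est < 100.0)  # prints True
```

Since σ is bounded, the simulated supremum stays within a modest multiple of the initial condition, consistently with estimate (2.3).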
For integer \(n\geq1\), we first introduce the following Borel functions as the terminal coefficients of the n-dimensional BSDE under consideration:
$$g^{i}:\mathbf{R}^{m}\longrightarrow \mathbf{R},\quad i=1,2,\ldots,n, $$
which satisfies the following.

Assumption 2.2

The functions \(g^{i}\), \(i=1,2,\ldots,n\), are of polynomial growth with respect to x, i.e., there exist constants \(C_{g}\) and \(\gamma\geq0\) such that
$$\bigl\vert g^{i}(x)\bigr\vert \leq C_{g} \bigl(1+|x|^{\gamma}\bigr), \quad\forall x\in \mathbf{R}^{m} , \textit{ for } i=1,2,\ldots,n. $$
Now, we consider Borelian functions \(H_{i}\), \(i=1,2,\ldots,n\), from \([0,T]\times \mathbf{R}^{m}\times \mathbf{R}^{n}\times \mathbf{R}^{nm}\) into R as follows:
$$H_{i}\bigl(t, x, y^{1},\ldots,y^{n},z^{1}, \ldots,z^{n}\bigr),\quad i=1,2,\ldots,n, $$
which satisfy the following hypothesis.

Assumption 2.3

  1. (i)
    For each \((t,x,y^{1},\ldots,y^{n},z^{1},\ldots,z^{n})\in[0,T]\times \mathbf{R}^{m}\times \mathbf{R}^{n}\times \mathbf{R}^{nm}\), there exist constants \(C_{2}\), \(C_{h}\) and \(\gamma>0\) such that, for each \(i=1,2,\ldots,n\),
    $$ \bigl\vert H_{i}\bigl(t,x,y^{1}, \ldots,y^{n},z^{1},\ldots,z^{n}\bigr)\bigr\vert \leq C_{2}\bigl(1+\vert x\vert \bigr)\bigl\vert z^{i} \bigr\vert + C_{h}\bigl(1+\vert x\vert ^{\gamma}+\bigl\vert y^{i}\bigr\vert \bigr) ; $$
  2. (ii)

    the mapping \((y^{1},\ldots, y^{n}, z^{1},\ldots,z^{n})\in \mathbf{R}^{n}\times \mathbf{R}^{nm}\longmapsto H_{i}(t,x,y^{1},\ldots, y^{n},z^{1},\ldots,z^{n}) \in \mathbf{R}\) is continuous for any fixed \((t,x) \in[0,T]\times \mathbf{R}^{m}\).

For a fixed constant \(a\in \mathbf{R}^{m}\), and \(i=1,2,\ldots,n\), let us consider the following BSDE:
$$\begin{aligned} Y_{t}^{i} =&g^{i} \bigl(X_{T}^{0,a}\bigr)+\int_{t}^{T}H_{i} \bigl(s, X_{s}^{0,a}, Y_{s}^{1}, \ldots,Y_{s}^{n},Z_{s}^{1}, \ldots,Z_{s}^{n}\bigr)\,ds \\ &{}-\int_{t}^{T} Z_{s}^{i}\,dB_{s},\quad t\in[0,T]. \end{aligned}$$
From Assumptions 2.2 and 2.3, we see that this is a multidimensional coupled BSDE system in a Markovian framework with unbounded terminal value.

3 Existence of solutions for the multidimensional coupled BSDE system

In this section, we provide an existence result for BSDE (2.5); we treat the case \(n=2\) as an example, since the case \(n>2\) can be dealt with in the same way without any additional difficulty.

3.1 Measure domination

Before we state our main theorem, let us first recall a result related to measure domination.

Definition 3.1

(\(L^{q}\)-domination condition)

Let \(q\in(1,\infty)\) be fixed. For a given \(t_{1}\in[0,T]\), a family of probability measures \(\{\nu_{1}(s,dx), s\in[t_{1}, T]\}\) defined on \(\mathbf{R}^{m}\) is said to be \(L^{q}\)-dominated by another family of probability measures \(\{\nu_{0}(s,dx), s\in[t_{1}, T]\}\) if for any \(\delta\in(0, T-t_{1}]\) there exists a function \(\phi^{\delta}_{t_{1}}: [t_{1}+\delta, T]\times \mathbf{R}^{m} \rightarrow \mathbf{R}^{+}\) such that:
  1. (i)

    \(\nu_{1}(s, dx)\,ds= \phi^{\delta}_{t_{1}}(s, x)\nu_{0}(s, dx)\,ds\) on \([t_{1}+\delta, T]\times \mathbf{R}^{m}\).

  2. (ii)

    \(\forall k\geq1\), \(\phi^{\delta}_{t_{1}}(s,x) \in L^{q}([t_{1}+\delta, T]\times[-k, k]^{m}; \nu_{0}(s, dx)\,ds)\).


Lemma 3.1

Let \(a\in \mathbf{R}^{m}\), \((t,x)\in[0,T]\times \mathbf{R}^{m}\), \(s\in(t,T]\), and let \(\mu(t,x;s,dy)\) denote the law of \(X^{t,x}_{s}\), i.e.,
$$\forall A \in\mathcal{B}\bigl(\mathbf{R}^{m}\bigr),\quad \mu(t,x;s,A)= \mathbf{P}\bigl(X_{s}^{t,x}\in A\bigr). $$
Under Assumption  2.1 on σ, for any \(q\in(1,\infty )\), the family of laws \(\{\mu(t,x;s,dy), s\in[t,T]\}\) is \(L^{q}\)-dominated by \(\{\mu(0,a;s,dy), s\in[t,T]\}\) for fixed \(a\in \mathbf{R}^{m}\).


Proof See [21], Lemma 4.3 and Corollary 4.4, pp.14-15. □

3.2 High-dimensional coupled BSDEs system

Our main result in this section is the following theorem.

Theorem 3.1

Let \(a\in \mathbf{R}^{m}\) be fixed. Then under Assumptions 2.1, 2.2 and 2.3, there exist two pairs of \(\mathcal {P}\)-measurable processes \((Y^{i}, Z^{i})\) with values in \(\mathbf{R}^{1+m}\), \(i=1,2\), and two deterministic functions \(\varsigma^{i}(t,x)\) which are of polynomial growth, i.e., \(|\varsigma^{i}(t,x)|\leq C(1+|x|^{\gamma })\) with \(\gamma\geq0\), \(i=1,2\), such that
$$ \textstyle\begin{cases} \mathbf{P}\textit{-a.s.}, \quad\forall t\leq T, Y_{t}^{i}=\varsigma^{i}(t,X_{t}^{0,a}) \textit{ and } Z^{i} \textit{ is }dt\textit{-square integrable } \mathbf{P}\textit{-a.s.};\\ Y_{t}^{1} =g^{1}(X_{T}^{0,a})+\int_{t}^{T} H_{1}(s, X_{s}^{0,a}, Y_{s}^{1},Y_{s}^{2},Z_{s}^{1},Z_{s}^{2})\,ds-\int_{t}^{T} Z_{s}^{1}\,dB_{s}; \\ Y_{t}^{2} =g^{2}(X_{T}^{0,a})+\int_{t}^{T} H_{2}(s, X_{s}^{0,a}, Y_{s}^{1},Y_{s}^{2},Z_{s}^{1},Z_{s}^{2})\,ds-\int_{t}^{T} Z_{s}^{2}\,dB_{s}. \end{cases} $$
The result holds true for the case \(n>2\) as well, following the same approach.


The structure of this proof is as follows. We first use the mollify technique on the generator \(H_{i}\) to construct a sequence of BSDEs with generators which are Lipschitz continuous. Then, we provide uniform estimates of the solutions as well as the convergence property. Finally, we verify that the limits of the sequences are exactly the solutions for BSDE (3.1).

Step 1. Approximation.

Let ξ be an element of \(C^{\infty}(\mathbf{R}^{2+2m}, \mathbf{R})\) with compact support satisfying
$$\int_{\mathbf{R}^{2+2m}}\xi\bigl(y^{1},y^{2},z^{1},z^{2} \bigr)\,dy^{1}\,dy^{2}\,dz^{1}\,dz^{2}=1. $$
For \((t,x,y^{1},y^{2},z^{1},z^{2})\in[0,T]\times \mathbf{R}^{m}\times \mathbf{R}^{2+2m}\), we set
$$\begin{aligned}& \tilde{H}_{1n}\bigl( t,x,y^{1},y^{2},z^{1},z^{2} \bigr) \\& \quad=\int_{\mathbf{R}^{2+2m}}n^{4} H_{1}\bigl(t, \varphi_{n}(x),p^{1},p^{2},q^{1},q^{2} \bigr) \\& \quad\quad{}\times \xi \bigl(n\bigl(y^{1}-p^{1}\bigr),n\bigl(y^{2}-p^{2} \bigr),n\bigl(z^{1}-q^{1}\bigr),n\bigl(z^{2}-q^{2} \bigr) \bigr)\,dp^{1}\,dp^{2}\,dq^{1}\,dq^{2}, \end{aligned}$$
where the truncation function \(\varphi_{n}(x)=((x_{j}\vee(-n))\wedge n)_{j=1,2,\ldots,m}\) for \(x=(x_{j})_{j=1,2,\ldots,m}\in \mathbf{R}^{m}\). We next define \(\psi\in C^{\infty}(\mathbf{R}^{2+2m},\mathbf{R})\) by
$$ \psi\bigl(y^{1},y^{2},z^{1},z^{2}\bigr)= \textstyle\begin{cases} 1,& |y^{1}|^{2}+|y^{2}|^{2}+|z^{1}|^{2}+|z^{2}|^{2}\leq1,\\ 0, &|y^{1}|^{2}+|y^{2}|^{2}+|z^{1}|^{2}+|z^{2}|^{2}\geq4. \end{cases} $$
Then we define the measurable function sequence \((H_{1n})_{n\geq 1}\) as follows: \(\forall(t,x,y^{1},y^{2},z^{1},z^{2})\in[0,T]\times \mathbf{R}^{m}\times \mathbf{R}^{2+2m}\),
$$H_{1n}\bigl(t,x,y^{1},y^{2},z^{1},z^{2} \bigr)=\psi\biggl(\frac{y^{1}}{n},\frac{y^{2}}{n},\frac {z^{1}}{n}, \frac{z^{2}}{n}\biggr)\tilde{H}_{1n}\bigl(t,x,y^{1},y^{2},z^{1},z^{2} \bigr). $$
We have the following properties:
$$ \textstyle\begin{cases} \mbox{(a)} & H_{1n} \text{ is uniformly Lipschitz w.r.t. } (y^{1},y^{2},z^{1},z^{2}); \\ \mbox{(b)}& \vert H_{1n}(t,x,y^{1},y^{2},z^{1},z^{2})\vert \leq C_{2}(1+\vert \varphi _{n}(x)\vert )\vert z^{1}\vert +C_{h}(1+\vert \varphi_{n}(x) \vert ^{\gamma}+\vert y^{1}\vert ); \\ \mbox{(c)}& \vert H_{1n}(t,x,y^{1},y^{2},z^{1},z^{2})\vert \leq c_{n} \text{ for any } (t,x,y^{1},y^{2},z^{1},z^{2}); \\ \mbox{(d)}& \text{For any } (t,x)\in[0,T]\times \mathbf{R}^{m},\text{ and }\mathbf{K} \text{ a compact subset of } \mathbf{R}^{2+2m},\\ &\sup_{(y^{1},y^{2},z^{1},z^{2})\in\mathbf{K}}\vert H_{1n}(t,x,y^{1},y^{2},z^{1},z^{2})-H_{1}(t,x,y^{1},y^{2},z^{1},z^{2})\vert \rightarrow 0,\\ &\text{as } n\rightarrow\infty. \end{cases} $$

The approximating sequence \((H_{2n})_{n\geq1}\) is constructed in the same way.
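The mollification just described can be visualized in a scalar toy case. The sketch below uses the hypothetical generator \(H(y)=\sqrt{|y|}\), which is continuous and of linear growth but not Lipschitz at 0; it convolves H with a rescaled smooth bump kernel and checks the pointwise convergence on compacts stated in property (3.2)(d). The quadrature sizes are arbitrary choices.

```python
import math

def bump(u):
    # Smooth kernel with compact support (-1, 1); normalized numerically below.
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1.0 else 0.0

_N = 2001
_c = sum(bump(-1.0 + (k + 0.5) * (2.0 / _N)) for k in range(_N)) * (2.0 / _N)

def H(y):
    # Toy generator: continuous and of linear growth, but not Lipschitz at 0.
    return math.sqrt(abs(y))

def H_n(y, n, quad=400):
    # Mollification H_n = H * xi_n with xi_n(u) = n * bump(n*u) / c.
    # After the substitution u = n*(y - p), the convolution becomes an
    # integral over the kernel's support (-1, 1), computed by midpoint rule.
    h = 2.0 / quad
    total = 0.0
    for k in range(quad):
        u = -1.0 + (k + 0.5) * h
        total += H(y - u / n) * bump(u) / _c * h
    return total

# Pointwise convergence H_n -> H on a compact set, as in property (3.2)(d).
pts = (-1.0, -0.3, 0.2, 1.5)
errs = [max(abs(H_n(y, n) - H(y)) for y in pts) for n in (2, 8, 32)]
print(errs[-1] < 0.1)  # prints True
```

Each \(H_{n}\) is smooth, hence Lipschitz on compacts, while it inherits the growth bound of H, mirroring properties (a) and (b) above.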

For each \(n\geq1\) and \((t,x)\in[0,T]\times \mathbf{R}^{m}\), since \(H_{1n}\) and \(H_{2n}\) are uniformly Lipschitz w.r.t. \((y^{1},y^{2},z^{1},z^{2})\), by the result of Pardoux-Peng (see [2]), we know that there exist two pairs of processes \((Y^{in;(t,x)}, Z^{in;(t,x)})\in\mathcal {S}_{t,T}^{2}(\mathbf{R})\times\mathcal{H}_{t,T}^{2}(\mathbf{R}^{m})\), \(i=1,2\), which satisfy, for \(s\in[t,T]\),
$$ \textstyle\begin{cases} Y_{s}^{1n;(t,x)} =g^{1}(X_{T}^{t,x})+\int_{s}^{T} H_{1n}(r,X_{r}^{t,x}, Y_{r}^{1n;(t,x)},Y_{r}^{2n;(t,x)},Z_{r}^{1n;(t,x)},Z_{r}^{2n;(t,x)})\,dr\\ \hphantom{Y_{s}^{1n;(t,x)} =}{} -\int_{s}^{T} Z_{r}^{1n;(t,x)}\,dB_{r}; \\ Y_{s}^{2n;(t,x)} =g^{2}(X_{T}^{t,x})+\int_{s}^{T} H_{2n}(r,X_{r}^{t,x}, Y_{r}^{1n;(t,x)},Y_{r}^{2n;(t,x)},Z_{r}^{1n;(t,x)},Z_{r}^{2n;(t,x)})\,dr\\ \hphantom{Y_{s}^{2n;(t,x)} =}{} -\int_{s}^{T} Z_{r}^{2n;(t,x)}\,dB_{r}. \end{cases} $$
Meanwhile, properties (3.2)(a), (c) and the result of El-Karoui et al. (cf. [8]) yield that there exist two sequences of deterministic measurable functions \(\varsigma ^{1n}\ (\mbox{resp. } \varsigma^{2n}): [0,T]\times \mathbf{R}^{m} \rightarrow \mathbf{R}\) and \(\mathfrak{z}^{1n}\ (\mbox{resp. } \mathfrak{z}^{2n}): [0,T]\times \mathbf{R}^{m}\rightarrow \mathbf{R}^{m}\) such that for any \(s\in[t,T]\),
$$ Y_{s}^{1n;(t,x)}= \varsigma^{1n} \bigl(s,X_{s}^{t,x}\bigr) \quad\quad\bigl(\mbox{resp. } Y_{s}^{2n;(t,x)}=\varsigma^{2n}\bigl(s,X_{s}^{t,x} \bigr)\bigr) $$
$$Z_{s}^{1n;(t,x)}= \mathfrak{z}^{1n}\bigl(s,X_{s}^{t,x} \bigr)\quad\quad \bigl(\mbox{resp. }Z_{s}^{2n;(t,x)}=\mathfrak{z}^{2n} \bigl(s,X_{s}^{t,x}\bigr)\bigr). $$
Besides, we have the following deterministic expression: for \(i=1,2\) and \(n\geq1\),
$$ \varsigma^{in}(t,x)= \mathbf{E}\biggl[g^{i} \bigl(X_{T}^{t,x}\bigr)+\int_{t}^{T} F_{in}\bigl(s,X_{s}^{t,x}\bigr)\,ds \biggr], \quad\forall(t,x)\in[0, T]\times \mathbf{R}^{m}, $$
$$F_{in}(s,x)= H_{in}\bigl(s,x,\varsigma^{1n}(s,x), \varsigma ^{2n}(s,x),\mathfrak{z}^{1n}(s,x), \mathfrak{z}^{2n}(s,x)\bigr). $$

Step 2. Uniform integrability of \((Y^{1n;(t,x)}, Z^{1n;(t,x)})_{n\geq1}\) for fixed \((t,x)\in [0,T]\times \mathbf{R}^{m}\).

For each \(n\geq1\), let us first consider the following BSDE:
$$\begin{aligned} \bar{Y}_{s}^{1n} =&g^{1} \bigl(X_{T}^{t,x}\bigr)+\int_{s}^{T} \bigl[C_{2}\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{r}^{t,x} \bigr)\bigr\vert \bigr)\bigl\vert \bar{Z}_{r}^{1n}\bigr\vert \\ &{}+C_{h}\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{r}^{t,x}\bigr)\bigr\vert ^{\gamma}+ \bigl\vert \bar{Y}_{r}^{1n}\bigr\vert \bigr)\bigr]\,dr -\int _{s}^{T} \bar{Z}_{r}^{1n}\,dB_{r}. \end{aligned}$$
For any \(x\in \mathbf{R}^{m}\) and \(n\geq1\), the mapping \((y^{1},z^{1})\in \mathbf{R}\times \mathbf{R}^{m} \mapsto C_{2}(1+|\varphi_{n} (X_{r}^{t,x})|)|z^{1}| +C_{h}(1+|\varphi _{n}(X_{r}^{t,x})|^{\gamma}+ |y^{1}|)\) is Lipschitz continuous; therefore, the solution \((\bar{Y}^{1n}, \bar{Z}^{1n}) \in\mathcal{S}_{t,T}^{2}(\mathbf{R})\times\mathcal{H}_{t,T}^{2}(\mathbf{R}^{m})\) exists and is unique. Moreover, it follows from the result of El-Karoui et al. (see [8]) that \(\bar{Y}^{1n}\) can be characterized through a deterministic measurable function \(\bar{\varsigma}^{1n}: [0,T]\times \mathbf{R}^{m} \rightarrow \mathbf{R}\), that is, for any \(s\in[t,T]\),
$$ \bar{Y}_{s}^{1n}=\bar{\varsigma}^{1n} \bigl(s, X_{s}^{t,x}\bigr). $$
Next let us consider the process
$$B^{n}_{s}= B_{s}- \int_{0}^{s} 1_{[t,T]}(r)C_{2}\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{r}^{t,x}\bigr)\bigr\vert \bigr)\operatorname{sign}\bigl( \bar{Z}_{r}^{1n}\bigr)\,dr,\quad 0\leq s\leq T, $$
which is, thanks to Girsanov’s theorem, a Brownian motion under the probability \(\mathbf{P}^{n}\) on \((\Omega, \mathcal{F})\) whose density with respect to P is \(\mathcal{E}_{T}:=\mathcal{E}_{T}(\int_{0}^{T} C_{2}(1+|\varphi _{n}(X_{s}^{t,x})|)\operatorname{sign}(\bar{Z}_{s}^{1n}) 1_{[t,T]}(s)\,dB_{s})\), where for any \(z=(z^{i})_{i=1,\ldots,m}\in \mathbf{R}^{m}\), \(\operatorname{sign}(z)=(1_{[|z^{i}|\neq0]}\frac{z^{i}}{|z^{i}|})_{i=1,\ldots,m}\) and \(\mathcal{E}_{t}(\cdot)\) is defined by
$$ \mathcal{E}(M):= \bigl(\exp\bigl\{ M_{t}- \langle M \rangle_{t}/2\bigr\} \bigr)_{t\leq T} $$
for any \((\mathcal{F}_{t}, \mathbf{P})\)-continuous local martingale \(M=(M_{t})_{t\leq T}\). Then (3.6) becomes
$$ \bar{Y}_{s}^{1n}=g^{1}\bigl(X_{T}^{t,x} \bigr)+\int_{t}^{T} C_{h}\bigl(1+\bigl\vert \varphi _{n}\bigl(X_{r}^{t,x}\bigr)\bigr\vert ^{\gamma}+ \bigl\vert \bar{Y}_{r}^{1n}\bigr\vert \bigr)\,dr -\int_{s}^{T} \bar{Z}_{r}^{1n}\,dB_{r}^{n}, \quad s\in[t,T]. $$
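As a quick numerical aside on the Doléans-Dade exponential used in this Girsanov step: for a bounded integrand the density \(\mathcal{E}_{T}\) has expectation one, which a Monte Carlo sketch can illustrate. The constant integrand θ = 0.8, the step count and the 0.05 tolerance are all arbitrary illustrative choices.

```python
import math
import random

def doleans_exponential(theta, T=1.0, n=200, rng=None):
    # Simulate E_T = exp(M_T - <M>_T / 2) for M_t = int_0^t theta dB_s
    # with a constant (hence bounded) integrand theta, by Euler steps on [0, T].
    rng = rng or random
    h = T / n
    M, qv = 0.0, 0.0
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(h))
        M += theta * dB
        qv += theta * theta * h
    return math.exp(M - qv / 2.0)

# A Girsanov density has expectation one: Monte Carlo sanity check.
random.seed(42)
m = 20000
mean = sum(doleans_exponential(0.8) for _ in range(m)) / m
print(abs(mean - 1.0) < 0.05)  # prints True
```

Boundedness of the integrand (here guaranteed by the truncation \(\varphi_{n}\) in the text) is what makes \(\mathcal{E}\) a true martingale, so that \(\mathbf{P}^{n}\) is indeed a probability measure.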
Applying the Itô-Meyer formula to \((e^{C_{h} t}\bar{Y}_{t}^{1n})^{+}\), \(t\leq T\), we obtain
$$\begin{aligned}& \bigl(e^{C_{h} t}\bar{Y}_{t}^{1n}\bigr)^{+}+\int _{t}^{T}\,dL_{s}^{0} \\& \quad= \bigl(e^{C_{h} T} g^{1}\bigl(X_{T}^{t,x} \bigr)\bigr)^{+}-\int_{t}^{T}1_{\bar{Y}_{s}^{1n}\geq0}\,d\bigl(e^{C_{h}s} \bar{Y}_{s}^{1n}\bigr) \\& \quad= \bigl(e^{C_{h} T} g^{1}\bigl(X_{T}^{t,x} \bigr)\bigr)^{+}+\int_{t}^{T} C_{h}e^{C_{h} s} \bigl(1+\bigl\vert \varphi _{n}\bigl(X_{s}^{t,x} \bigr)\bigr\vert ^{\gamma}\bigr)\,ds-\int_{t}^{T} e^{C_{h} s} \bar{Z}_{s}^{1n}\,dB_{s}^{n}, \end{aligned}$$
where \(L_{t}^{0}\) is the local time at 0 of the continuous semimartingale \(e^{C_{h} t}\bar{Y}_{t}^{1n}\), which is an increasing process. Therefore, the term \(\int_{t}^{T}\,dL_{s}^{0}\) is nonnegative. Considering (3.7), we have
$$\begin{aligned} e^{C_{h} t}\bar{\varsigma}^{1n}(t,x) \leq&\bigl(e^{C_{h} t}\bar{\varsigma}^{1n}(t,x)\bigr)^{+} \\ \leq&\mathbf{E}^{n} \biggl[\bigl(e^{C_{h} T}g^{1} \bigl(X_{T}^{t,x}\bigr)\bigr)^{+}+\int_{t}^{T} C_{h}e^{C_{h} s}\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{s}^{t,x}\bigr)\bigr\vert ^{\gamma}\bigr)\,ds \Bigm| \mathcal{F}_{t} \biggr], \end{aligned}$$
where \(\mathbf{E}^{n}\) is the expectation under the probability \(\mathbf{P}^{n}\). Taking the expectation on both sides under \(\mathbf{P}^{n}\), and taking into account that \(\bar{\varsigma}^{1n}\) is deterministic and that \(f^{+}\leq|f|\) for any integrable function f, we obtain
$$\bar{\varsigma}^{1n} (t,x)\leq e^{-C_{h} t}\mathbf{E}^{n} \biggl[e^{C_{h} T}\bigl\vert g^{1}\bigl(X_{T}^{t,x} \bigr)\bigr\vert +\int_{t}^{T} C_{h}e^{C_{h} s}\bigl(1+\bigl\vert \varphi _{n} \bigl(X_{s}^{t,x}\bigr)\bigr\vert ^{\gamma}\bigr)\,ds \biggr]. $$
Since \(g^{1}\) is of polynomial growth, and \(e^{-C_{h}t}\leq1\) for \(t\in [0,T]\), we infer that, \(\forall(t,x)\in[0,T]\times \mathbf{R}^{m}\),
$$ \bar{\varsigma}^{1n}(t,x) \leq C \mathbf{E}^{n} \Bigl[\sup _{t\leq s\leq T} \bigl(1+\bigl\vert X_{s}^{t,x}\bigr\vert ^{\gamma}\bigr) \Bigr] \leq C \mathbf{E}\Bigl[\mathcal{E}_{T} \cdot\sup_{t\leq s\leq T} \bigl(1+\bigl\vert X_{s}^{t,x}\bigr\vert ^{\gamma}\bigr) \Bigr], $$
where the constant C depends only on T, \(C_{h}\) and \(C_{g}\). Next, by a result of Haussmann ([23], p.14; see also [21], Lemma 3.1), there exist some \(p_{0}\in(1,2)\) and a constant C, independent of n, such that \(\mathbf{E}[|\mathcal{E}_{T}|^{p_{0}}]\leq C\). Then, by Young’s inequality and estimate (2.3), we have
$$ \bar{\varsigma}^{1n}(t,x) \leq C_{p_{0}}\Bigl\{ \mathbf{E}\bigl[ | \mathcal{E}_{T}|^{p_{0}} \bigr] + \mathbf{E}\Bigl[ \sup _{t\leq s\leq T} \bigl(1+\bigl\vert X_{s}^{t,x}\bigr\vert ^{\frac{p_{0}\gamma}{p_{0}-1}} \bigr) \Bigr]\Bigr\} , $$
which yields
$$ \bar{\varsigma}^{1n}(t,x) \leq C\bigl(1+|x|^{\lambda}\bigr) \quad\text{with } \lambda = p_{0} \gamma/(p_{0}-1)>2. $$
Next, by the comparison theorem of BSDEs and property (3.2)(b), we actually have, for any \(s\in[t,T]\),
$$Y_{s}^{1n;(t,x)}=\varsigma^{1n}\bigl(s, X_{s}^{t,x}\bigr)\leq\bar{Y}_{s}^{1n}= \bar {\varsigma}^{1n}\bigl(s, X_{s}^{t,x}\bigr), $$
and by choosing \(s=t\) we get \(\varsigma^{1n}(t,x)\leq C(1+|x|^{\lambda})\) for \((t,x)\in[0,T]\times \mathbf{R}^{m}\). In a similar way, we can show that \(\varsigma ^{1n}(t,x)\geq-C(1+|x|^{\lambda})\) for any \((t,x)\in[0,T]\times \mathbf{R}^{m}\). In conclusion, \(\varsigma^{1n}\) is of polynomial growth in \((t,x)\) uniformly in n, i.e., there exist a constant C independent of n and \(\lambda >2\) such that
$$ \bigl\vert \varsigma^{1n}(t,x)\bigr\vert \leq C \bigl(1+|x|^{\lambda}\bigr). $$
Combining (3.11) and (2.3), we deduce that, for any \(\alpha>1\), \(i=1,2\),
$$ \mathbf{E}\Bigl[\sup_{t\leq s\leq T} \bigl\vert Y_{s}^{in;(t,x)}\bigr\vert ^{\alpha}\Bigr]\leq C. $$
On the other hand, by applying Itô’s formula to \((Y^{in;(t,x)})^{2}\) and considering the uniform estimate (3.12), we can infer in a standard way that, for any \(t\in[0,T]\), \(i=1,2\),
$$ \mathbf{E}\biggl[\int_{t}^{T} \bigl\vert Z_{s}^{in;(t,x)}\bigr\vert ^{2}\,ds \biggr]\leq C. $$
Step 3. For fixed \(a\in \mathbf{R}^{m}\), there exists a subsequence of \(((Y_{s}^{1n;(0,a)},Z_{s}^{1n;(0,a)})_{0\leq s\leq T})_{n\geq1}\) which converges in \(\mathcal{S}_{T}^{2}(\mathbf{R})\times\mathcal{H}_{T}^{2}(\mathbf{R}^{m})\) to \((Y^{1}_{s}, Z^{1}_{s})_{0\leq s\leq T}\), a solution of BSDE (3.1). Let us recall expression (3.5) for the case \(i=1\) and apply property (3.2)(b), combined with the uniform estimates (3.12), (3.13), (2.3) and Young’s inequality, to show that, for \(1< q<2\),
$$\begin{aligned}& \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert F_{1n}\bigl(s,X_{s}^{0,a}\bigr)\bigr\vert ^{q}\,ds \biggr] \\& \quad\leq C\mathbf{E} \biggl[\int_{0}^{T}\bigl(1+ \bigl\vert \varphi_{n}\bigl(X_{s}^{0,a} \bigr)\bigr\vert \bigr)^{q}\bigl\vert Z_{s}^{1n;(0,a)}\bigr\vert ^{q}+\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{s}^{0,a}\bigr)\bigr\vert ^{\gamma q}+\bigl\vert Y_{s}^{1n;(0,a)}\bigr\vert ^{q} \bigr)\,ds \biggr] \\& \quad\leq C\biggl\{ \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert Z_{s}^{1n;(0,a)}\bigr\vert ^{2}\,ds \biggr]+ \mathbf{E} \Bigl[ \sup_{0\leq s\leq T}\bigl\vert Y_{s}^{1n;(0,a)} \bigr\vert ^{q} \Bigr]+1\biggr\} \\& \quad\leq C. \end{aligned}$$
Therefore, there exists a subsequence \(\{n_{k}\}\) (for notation simplification, we still denote it by \(\{n\}\)) and a \(\mathcal{B}([0, T])\otimes\mathcal{B}(\mathbf{R}^{m})\)-measurable deterministic function \(F_{1}(s,y)\) such that
$$ F_{1n}\rightarrow F_{1} \quad\text{weakly in } \mathcal{L}^{q}\bigl([0,T]\times \mathbf{R}^{m}; \mu(0,a;s,dy)\,ds \bigr). $$
Next we aim to prove that \((\varsigma^{1n}(t,x))_{n\geq1}\) is a Cauchy sequence for each \((t,x)\in[0,T]\times \mathbf{R}^{m}\). Let \((t,x)\) be fixed, let \(\eta>0\), and let k, n and \(m\geq1\) be integers. From (3.5), we have
$$\begin{aligned} \bigl\vert \varsigma^{1n}(t,x)-\varsigma^{1m}(t,x) \bigr\vert =& \biggl\vert \mathbf{E} \biggl[\int_{t}^{T}\bigl(F_{1n} \bigl(s,X_{s}^{t,x}\bigr)-F_{1m}\bigl(s,X_{s}^{t,x} \bigr)\bigr)\,ds \biggr]\biggr\vert \\ \leq& \mathbf{E} \biggl[\int_{t}^{t+\eta}\bigl\vert F_{1n}\bigl(s,X_{s}^{t,x}\bigr)-F_{1m} \bigl(s,X_{s}^{t,x}\bigr)\bigr\vert \,ds \biggr] \\ &{} +\biggl\vert \mathbf{E} \biggl[\int_{t+\eta}^{T} \bigl(F_{1n}\bigl(s,X_{s}^{t,x}\bigr)-F_{1m} \bigl(s,X_{s}^{t,x}\bigr) \bigr)1_{\{\vert X_{s}^{t,x}\vert \leq k\}}\,ds \biggr] \biggr\vert \\ &{} +\biggl\vert \mathbf{E} \biggl[\int_{t+\eta}^{T} \bigl(F_{1n}\bigl(s,X_{s}^{t,x}\bigr)-F_{1m} \bigl(s,X_{s}^{t,x}\bigr) \bigr)1_{\{|X_{s}^{t,x}|>k\} }\,ds \biggr] \biggr\vert , \end{aligned}$$
where on the right-hand side, noticing (3.14), we obtain
$$\begin{aligned}& \mathbf{E} \biggl[\int_{t}^{t+\eta}\bigl\vert F_{1n}\bigl(s, X_{s}^{t,x}\bigr)- F_{1m}\bigl(s, X_{s}^{t,x}\bigr)\bigr\vert \,ds \biggr] \\& \quad\leq \eta^{\frac{q-1}{q}}\biggl\{ \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert F_{1n}\bigl(s,X_{s}^{t,x} \bigr)-F_{1m}\bigl(s,X_{s}^{t,x}\bigr)\bigr\vert ^{q}\,ds \biggr]\biggr\} ^{\frac {1}{q}}\\& \quad\leq C\eta^{\frac{q-1}{q}}. \end{aligned}$$
At the same time, Lemma 3.1 associated with the \(\mathcal {L}^{\frac{q}{q-1}}\)-domination property implies
$$\begin{aligned}& \biggl\vert \mathbf{E} \biggl[\int_{t+\eta}^{T} \bigl(F_{1n}\bigl(s,X_{s}^{t,x}\bigr)-F_{1m} \bigl(s,X_{s}^{t,x}\bigr) \bigr)1_{\{\vert X_{s}^{t,x}\vert \leq k\}}\,ds \biggr] \biggr\vert \\& \quad= \biggl\vert \int_{\mathbf{R}^{m}}\int_{t+\eta}^{T} \bigl(F_{1n}(s,y)-F_{1m}(s,y)\bigr)1_{\{\vert y \vert \leq k\}} \mu(t,x;s,dy)\,ds\biggr\vert \\& \quad= \biggl\vert \int_{\mathbf{R}^{m}}\int_{t+\eta}^{T} \bigl(F_{1n}(s,y)-F_{1m}(s,y)\bigr)1_{\{\vert y \vert \leq k\}} \phi_{t,x,a}(s,y)\mu(0,a;s,dy)\,ds\biggr\vert . \end{aligned}$$
Since \(\phi_{t,x,a}(s,y) \in\mathcal{L}^{\frac{q}{q-1}}([t+\eta, T]\times[-k, k]^{m} ; \mu(0,a; s, dy)\,ds)\) for \(k\geq1\), it follows from (3.15) that for each \((t,x)\in[0, T]\times \mathbf{R}^{m}\) we have
$$ \mathbf{E} \biggl[\int_{t+\eta}^{T} \bigl(F_{1n}\bigl(s, X_{s}^{t,x}\bigr)-F_{1m} \bigl(s, X_{s}^{t,x}\bigr) \bigr)1_{\{\vert X_{s}^{t,x}\vert \leq k\}}\,ds \biggr]\rightarrow0\quad\text{as } n,m\rightarrow\infty. $$
Finally, Hölder’s and Markov’s inequalities together with (3.14) and (2.3) yield
$$\begin{aligned}& \biggl\vert \mathbf{E} \biggl[\int_{t+\eta}^{T} \bigl(F_{1n}\bigl(s,X_{s}^{t,x}\bigr)-F_{1m} \bigl(s,X_{s}^{t,x}\bigr) \bigr)1_{\{\vert X_{s}^{t,x}\vert > k\}}\,ds \biggr]\biggr\vert \\& \quad\leq C\biggl\{ \mathbf{E} \biggl[\int_{t+\eta}^{T} 1_{ \{\vert X_{s}^{t,x}\vert >k \}}\,ds \biggr]\biggr\} ^{\frac {q-1}{q}}\biggl\{ \mathbf{E} \biggl[\int_{t+\eta}^{T}\bigl\vert F_{1n} \bigl(s,X_{s}^{t,x}\bigr)-F_{1m}\bigl(s,X_{s}^{t,x} \bigr)\bigr\vert ^{q}\,ds \biggr]\biggr\} ^{\frac {1}{q}} \leq Ck^{-\frac{q-1}{q}}. \end{aligned}$$
Therefore, for each \((t,x)\in[0,T]\times \mathbf{R}^{m}\), \((\varsigma ^{1n}(t,x))_{n\geq1}\) is a Cauchy sequence; hence there exists a Borel function \(\varsigma^{1}\) on \([0,T]\times \mathbf{R}^{m}\) such that for each \((t,x)\in[0,T]\times \mathbf{R}^{m}\), \(\lim_{n\rightarrow \infty}\varsigma^{1n}(t,x)=\varsigma^{1}(t,x)\). This implies that for \(t\in[0,T]\), \(\lim_{n\rightarrow\infty} Y_{t}^{1n;(0,a)}(\omega )=\varsigma^{1}(t,X_{t}^{0,a})\), P-a.s. Taking account of (3.12) and Lebesgue’s dominated convergence theorem, we obtain that the sequence \(((Y_{t}^{1n;(0,a)})_{0\leq t\leq T})_{n\geq1}\) converges to \(Y^{1}=(\varsigma^{1}(t, X_{t}^{0,a}))_{0\leq t\leq T}\) in \(\mathcal{L}^{p}([0,T]\times\Omega)\) for any \(p\geq1\), that is,
$$ \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert Y_{t}^{1n;(0,a)}-Y_{t}^{1}\bigr\vert ^{p} \,dt \biggr]\rightarrow0,\quad \text{as } n\rightarrow\infty. $$
Next, we will show that \(Z^{1n;(0,a)}=((\mathfrak {z}^{1n}(t,X_{t}^{0,a}))_{0\leq t\leq T})_{n\geq1}\) has a limit in \(\mathcal{H}_{T}^{2}(\mathbf{R}^{m})\), and that \((Y^{1n;(0,a)})_{n\geq1}\) is convergent in \(\mathcal{S}_{T}^{2}(\mathbf{R})\) as well. We first focus on the former claim. For \(n,m \geq1\) and \(0 \leq t\leq T\), applying Itô’s formula to \((Y_{t}^{1n}-Y_{t}^{1m})^{2}\) (we omit the superscript \((0,a)\) for convenience) and using (3.2)(b), we get
$$\begin{aligned}& \bigl\vert Y_{t}^{1n}-Y_{t}^{1m} \bigr\vert ^{2}+ \int_{t}^{T} \bigl\vert Z_{s}^{1n}-Z_{s}^{1m}\bigr\vert ^{2}\,ds \\& \quad= 2\int_{t}^{T}\bigl(Y_{s}^{1n}-Y_{s}^{1m} \bigr) \bigl(H_{1n}\bigl(s, X_{s}^{0,a},Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) \\& \quad\quad{}-H_{1m}\bigl(s, X_{s}^{0,a},Y_{s}^{1m},Y_{s}^{2m},Z_{s}^{1m},Z_{s}^{2m} \bigr)\bigr)\,ds-2\int_{t}^{T}\bigl(Y_{s}^{1n}-Y_{s}^{1m} \bigr) \bigl(Z_{s}^{1n}-Z_{s}^{1m} \bigr)\,dB_{s} \\& \quad\leq C\int_{t}^{T} \bigl\vert Y_{s}^{1n}-Y_{s}^{1m}\bigr\vert \bigl[ \bigl(\bigl\vert Z_{s}^{1n}\bigr\vert +\bigl\vert Z_{s}^{1m}\bigr\vert \bigr) \bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr) \\& \quad\quad{} +\bigl\vert Y_{s}^{1n}\bigr\vert +\bigl\vert Y_{s}^{1m}\bigr\vert + \bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)^{\gamma} \bigr]\,ds- 2\int _{t}^{T}\bigl(Y_{s}^{1n}-Y_{s}^{1m} \bigr) \bigl(Z_{s}^{1n}-Z_{s}^{1m} \bigr)\,dB_{s}. \end{aligned}$$
Since, by Young’s inequality, \(|xyz|\leq\frac{1}{p}|x|^{p}+\frac {1}{q}|y|^{q}+\frac{1}{r}|z|^{r}\) for any \(x,y,z\in \mathbf{R}\) and any exponents \(p,q,r>1\) with \(\frac{1}{p}+\frac{1}{q}+\frac {1}{r}=1\), we have, for any \(\varepsilon>0\),
$$\begin{aligned}& \bigl\vert Y_{t}^{1n}-Y_{t}^{1m} \bigr\vert ^{2}+\int_{t}^{T}\bigl\vert Z_{s}^{1n}-Z_{s}^{1m}\bigr\vert ^{2}\,ds \\& \quad\leq C \biggl\{ \frac{\varepsilon^{2}}{2}\int_{t}^{T} \bigl(\bigl\vert Z_{s}^{1n}\bigr\vert +\bigl\vert Z_{s}^{1m}\bigr\vert \bigr)^{2}\,ds+ \frac{\varepsilon^{4}}{4}\int_{t}^{T}\bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)^{4}\,ds \\& \quad\quad{}+\frac{1}{4\varepsilon^{8}}\int_{t}^{T}\bigl\vert Y_{s}^{1n}-Y_{s}^{1m}\bigr\vert ^{4}\,ds+\frac{\varepsilon}{2}\int_{t}^{T} \bigl(\bigl\vert Y_{s}^{1n}\bigr\vert +\bigl\vert Y_{s}^{1m}\bigr\vert \bigr)^{2}\,ds \\& \quad\quad{}+\frac{\varepsilon}{2}\int_{t}^{T}\bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)^{2\gamma }\,ds+ \frac{1}{2\varepsilon}\int_{t}^{T}\bigl\vert Y_{s}^{1n}-Y_{s}^{1m}\bigr\vert ^{2}\,ds \biggr\} \\& \quad\quad{}-2\int_{t}^{T}\bigl(Y_{s}^{1n}-Y_{s}^{1m} \bigr) \bigl(Z_{s}^{1n}-Z_{s}^{1m} \bigr)\,dB_{s}. \end{aligned}$$
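One natural choice reproducing the first three terms on the right-hand side (a reconstruction for the reader’s convenience, not spelled out in the original) is to apply the three-factor inequality with exponents \((4,2,4)\) after an ε-rescaling that leaves the product unchanged:

```latex
% Three-factor Young inequality with exponents (4,2,4) and epsilon-weights,
% applied with x = |Y_s^{1n}-Y_s^{1m}|, y = |Z_s^{1n}|+|Z_s^{1m}|, z = 1+|X_s^{0,a}|:
|xyz| = \Bigl|\frac{x}{\varepsilon^{2}}\Bigr|\cdot|\varepsilon y|\cdot|\varepsilon z|
      \leq \frac{1}{4\varepsilon^{8}}|x|^{4}
         + \frac{\varepsilon^{2}}{2}|y|^{2}
         + \frac{\varepsilon^{4}}{4}|z|^{4}.
```

The remaining products involve only two factors and are handled by the standard inequality \(|xy|\leq\frac{\varepsilon}{2}|x|^{2}+\frac{1}{2\varepsilon}|y|^{2}\).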
Taking now \(t=0\) in (3.17), taking expectations on both sides, and letting n and m tend to infinity, we deduce that
$$ \limsup_{n,m\rightarrow\infty} \mathbf{E} \biggl[\int _{0}^{T}\bigl\vert Z_{s}^{1n}-Z_{s}^{1m} \bigr\vert ^{2}\,ds \biggr]\leq C\biggl\{ \frac{\varepsilon ^{2}}{2}+ \frac{\varepsilon^{4}}{4}+\frac{\varepsilon}{2}\biggr\} , $$
due to (3.13), (2.3) and the convergence (3.16). As ε is arbitrary, the sequence \((Z^{1n})_{n\geq1}\) converges in \(\mathcal{H}_{T}^{2}\) to a process \(Z^{1}\).
Now, returning to inequality (3.17), taking the supremum over \([0,T]\) and using the Burkholder-Davis-Gundy (BDG) inequality, we obtain that
$$\begin{aligned}& \mathbf{E} \biggl[\sup_{0\leq t\leq T}\bigl\vert Y_{t}^{1n}-Y_{t}^{1m} \bigr\vert ^{2}+\int_{0}^{T}\bigl\vert Z_{s}^{1n}-Z_{s}^{1m}\bigr\vert ^{2}\,ds \biggr] \\& \quad\leq C\biggl\{ \frac{\varepsilon^{2}}{2}+\frac{\varepsilon^{4}}{4}+\frac {\varepsilon}{2}\biggr\} + \frac{1}{4}\mathbf{E} \Bigl[\sup_{0\leq t\leq T}\bigl\vert Y_{t}^{1n}-Y_{t}^{1m}\bigr\vert ^{2} \Bigr]+4\mathbf{E} \biggl[\int_{0}^{T} \bigl\vert Z_{s}^{1n}-Z_{s}^{1m}\bigr\vert ^{2}\,ds \biggr], \end{aligned}$$
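To make the constants \(\frac{1}{4}\) and 4 transparent, here is a sketch of the martingale estimate behind the display above, writing \(\Delta Y=Y^{1n}-Y^{1m}\), \(\Delta Z=Z^{1n}-Z^{1m}\), and taking the BDG constant to be \(c\) (the value \(c=2\) produces the constants shown; this is our reconstruction of the standard argument):

```latex
% BDG inequality followed by Young's inequality c*a*b <= a^2/4 + c^2*b^2,
% with a = sup_s |ΔY_s| and b = (∫_0^T |ΔZ_s|^2 ds)^{1/2}:
\begin{aligned}
2\,\mathbf{E}\Bigl[\sup_{0\leq t\leq T}\Bigl|\int_{t}^{T}\Delta Y_{s}\,\Delta Z_{s}\,dB_{s}\Bigr|\Bigr]
&\leq c\,\mathbf{E}\Bigl[\Bigl(\int_{0}^{T}|\Delta Y_{s}|^{2}\,|\Delta Z_{s}|^{2}\,ds\Bigr)^{1/2}\Bigr] \\
&\leq c\,\mathbf{E}\Bigl[\sup_{0\leq s\leq T}|\Delta Y_{s}|\Bigl(\int_{0}^{T}|\Delta Z_{s}|^{2}\,ds\Bigr)^{1/2}\Bigr] \\
&\leq \frac{1}{4}\,\mathbf{E}\Bigl[\sup_{0\leq s\leq T}|\Delta Y_{s}|^{2}\Bigr]
    + c^{2}\,\mathbf{E}\Bigl[\int_{0}^{T}|\Delta Z_{s}|^{2}\,ds\Bigr].
\end{aligned}
```

The last step uses \(cab\leq\frac{1}{4}a^{2}+c^{2}b^{2}\), which follows from \((\frac{a}{2}-cb)^{2}\geq0\).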
which implies that
$$\limsup_{n,m\rightarrow\infty}\mathbf{E} \Bigl[\sup_{0\leq t\leq T} \bigl\vert Y_{t}^{1n}-Y_{t}^{1m}\bigr\vert ^{2} \Bigr]=0, $$
since ε is arbitrary, in view of (3.18). Thus, the sequence \((Y^{1n})_{n\geq1}\) converges to \(Y^{1}\) in \(\mathcal{S}_{T}^{2}\); in particular, \(Y^{1}\) is a continuous process.

Finally, repeating the procedure for player \(i=2\), we also obtain the convergence of \((Z^{2n})_{n\geq1}\) (resp. \((Y^{2n})_{n\geq1}\)) in \(\mathcal{H}_{T}^{2}\) (resp. \(\mathcal{S}_{T}^{2}\)) to \(Z^{2}\) (resp. \(Y^{2}=\varsigma^{2}(\cdot, X^{0,a})\)).

Step 4. The limit process \((Y_{t}^{i},Z_{t}^{i})_{0\leq t\leq T}\), \(i=1,2\), is the solution of BSDE (3.1).

Indeed, we need to show that (for the case \(i=1\))
$$F_{1}\bigl(t,X_{t}^{0,a}\bigr)=H_{1} \bigl(t, X_{t}^{0,a}, Y_{t}^{1},Y_{t}^{2},Z_{t}^{1},Z_{t}^{2} \bigr) \quad dt\otimes d\mathbf{P}\mbox{-a.e.} $$
For \(k\geq1\), we have
$$\begin{aligned}& \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert H_{1n}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr)- H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1},Y_{s}^{2},Z_{s}^{1},Z_{s}^{2} \bigr)\bigr\vert \,ds \biggr] \\& \quad\leq\mathbf{E} \biggl[\int_{0}^{T} \bigl|H_{1n}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) \\& \quad\quad{} - H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) \bigr|\cdot 1_{\{ |Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|< k\}}\,ds \biggr] \\& \quad\quad{} +\mathbf{E} \biggl[\int_{0}^{T} \bigl|H_{1n}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) \\& \quad \quad{} - H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) \bigr| \cdot 1_{\{ |Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|\geq k\}}\,ds \biggr] \\& \quad\quad{} +\mathbf{E} \biggl[\int_{0}^{T} \bigl\vert H_{1}\bigl(s, X_{s}^{0,a} ,Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr)- H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1},Y_{s}^{2},Z_{s}^{1},Z_{s}^{2} \bigr)\bigr\vert \,ds \biggr] \\& \quad := I_{1}^{n}+I_{2}^{n}+I_{3}^{n}. \end{aligned}$$
The sequence \(I_{1}^{n}\), \(n\geq1\), converges to 0. On the one hand, for \(n\geq1\), property (3.2)(b) in Step 1 implies that
$$\begin{aligned}& \bigl\vert H_{1n}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) - H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr)\bigr\vert \\& \quad\quad{}\cdot1_{\{ |Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|< k\}} \\& \quad\leq C_{2}\bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)k+C_{h}\bigl(1+\bigl\vert X_{s}^{0,a} \bigr\vert ^{\gamma}+k\bigr). \end{aligned}$$
On the other hand, considering property (3.2)(d), we obtain that
$$\begin{aligned}& \bigl\vert H_{1n}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr) - H_{1}\bigl(s, X_{s}^{0,a}, Y_{s}^{1n},Y_{s}^{2n},Z_{s}^{1n},Z_{s}^{2n} \bigr)\bigr\vert \\& \quad\quad{}\cdot1_{\{ |Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|< k\}} \\& \quad\leq \mathop{\sup_{(y_{s}^{1},y_{s}^{2},z_{s}^{1},z_{s}^{2})}}_{ |y_{s}^{1}|+|y_{s}^{2}|+|z_{s}^{1}|+|z_{s}^{2}|< k} \bigl\vert H_{1n}\bigl(s, X_{s}^{0,a}, y_{s}^{1},y_{s}^{2},z_{s}^{1},z_{s}^{2} \bigr)- H_{1}\bigl(s, X_{s}^{0,a}, y_{s}^{1},y_{s}^{2},z_{s}^{1},z_{s}^{2} \bigr)\bigr\vert \\& \quad\rightarrow0 \end{aligned}$$
as \(n\rightarrow\infty\). Thanks to Lebesgue’s dominated convergence theorem, the sequence \(I_{1}^{n}\) in (3.19) converges to 0.
The sequence \(I_{2}^{n}\) in (3.19) is bounded by \(\frac {C}{k^{2(q-1)/q}}\) with \(q\in(1,2)\). Indeed, from property (3.2)(b), Hölder’s inequality and Markov’s inequality, for \(q\in(1,2)\), we get
$$\begin{aligned} I_{2}^{n} \leq& C \biggl\{ \mathbf{E} \biggl[\int _{0}^{T}\bigl(1+\bigl\vert X_{s}^{0,a} \bigr\vert \bigr)^{q}\bigl\vert Z_{s}^{1n}\bigr\vert ^{q}+\bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert ^{\gamma}+ \bigl\vert Y_{s}^{1n}\bigr\vert \bigr)^{q}\,ds \biggr] \biggr\} ^{\frac{1}{q}} \\ &{} \times \biggl\{ \mathbf{E} \biggl[\int_{0}^{T} 1_{\{ \vert Y_{s}^{1n}\vert +\vert Y_{s}^{2n}\vert +\vert Z_{s}^{1n}\vert + \vert Z_{s}^{2n}\vert \geq k\}}\,ds \biggr] \biggr\} ^{\frac{q-1}{q}} \\ \leq &C \biggl\{ \mathbf{E} \biggl[\int_{0}^{T} \bigl\vert Z_{s}^{1n}\bigr\vert ^{2}\,ds \biggr]+ \mathbf{E} \biggl[\int_{0}^{T} \bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)^{\frac{2q}{2-q}}\,ds \biggr] \\ &{} +\mathbf{E} \biggl[\int_{0}^{T}\bigl\vert Y_{s}^{1n}\bigr\vert ^{2}\,ds \biggr]+\mathbf{E} \biggl[\int_{0}^{T}\bigl(1+\bigl\vert X_{s}^{0,a}\bigr\vert \bigr)^{\gamma q}\,ds \biggr] \biggr\} ^{\frac{1}{q}} \\ &{} \times\frac{ \{\mathbf{E} [\int_{0}^{T}\vert Y_{s}^{1n}\vert ^{2}+\vert Y_{s}^{2n}\vert ^{2}+\vert Z_{s}^{1n}\vert ^{2}+ \vert Z_{s}^{2n}\vert ^{2}\,ds ] \} ^{\frac{q-1}{q}}}{(k^{2})^{\frac{q-1}{q}}} \\ \leq&\frac{C}{k^{\frac{2(q-1)}{q}}}. \end{aligned}$$
The last inequality is a straightforward result of estimates (2.3), (3.12) and (3.13).
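The indicator factor above is controlled by Markov’s (Chebyshev’s) inequality; spelled out, and using \((a+b+c+d)^{2}\leq4(a^{2}+b^{2}+c^{2}+d^{2})\), this reads (a reconstruction of the step producing the factor \(k^{-2(q-1)/q}\)):

```latex
% Markov (Chebyshev) step controlling the indicator term:
\begin{aligned}
\mathbf{E}\Bigl[\int_{0}^{T} 1_{\{|Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|\geq k\}}\,ds\Bigr]
&\leq \frac{1}{k^{2}}\,\mathbf{E}\Bigl[\int_{0}^{T}\bigl(|Y_{s}^{1n}|+|Y_{s}^{2n}|+|Z_{s}^{1n}|+|Z_{s}^{2n}|\bigr)^{2}\,ds\Bigr] \\
&\leq \frac{4}{k^{2}}\,\mathbf{E}\Bigl[\int_{0}^{T}|Y_{s}^{1n}|^{2}+|Y_{s}^{2n}|^{2}+|Z_{s}^{1n}|^{2}+|Z_{s}^{2n}|^{2}\,ds\Bigr],
\end{aligned}
```

and the right-hand side is bounded uniformly in n by (3.12) and (3.13); raising this bound to the power \(\frac{q-1}{q}\) yields the stated factor.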
The third sequence \(I_{3}^{n}\), \(n\geq1\), in (3.19) also converges to 0, at least along a subsequence. Actually, since the sequence \((Z^{1n})_{n\geq1}\) converges to \(Z^{1}\) in \(\mathcal{H}_{T}^{2}\), there exists a subsequence \((Z^{1n_{k}})_{k\geq1}\) which converges to \(Z^{1}\), \(dt\otimes d\mathbf{P}\)-a.e., and, furthermore, \(\sup_{k\geq1}|Z_{t}^{1n_{k}}(\omega)|\in\mathcal{H}_{T}^{2}\). On the other hand, \((Y^{1n_{k}})_{k\geq1}\) converges to \(Y^{1}\), \(dt\otimes d\mathbf{P}\)-a.e. Thus, by the continuity of the function \(H_{1}(t,x,y^{1},y^{2},z^{1},z^{2})\) w.r.t. \((y^{1},y^{2},z^{1},z^{2})\), we obtain that
$$\begin{aligned}& H_{1}\bigl(t,X_{t}^{0,a}, Y_{t}^{1n_{k}},Y_{t}^{2n_{k}},Z_{t}^{1n_{k}},Z_{t}^{2n_{k}} \bigr) \\& \quad\xrightarrow{k\rightarrow\infty} H_{1}\bigl(t,X_{t}^{0,a},Y_{t}^{1},Y_{t}^{2},Z_{t}^{1},Z_{t}^{2} \bigr) \quad dt\otimes d\mathbf{P}\mbox{-a.e.} \end{aligned}$$
In addition, we have
$$\sup_{k\geq 1}\bigl\vert H_{1}\bigl(t,X_{t}^{0,a},Y_{t}^{1n_{k}},Y_{t}^{2n_{k}},Z_{t}^{1n_{k}},Z_{t}^{2n_{k}} \bigr)\bigr\vert \in \mathcal{H}_{T}^{q}(\mathbf{R})\quad\text{for } 1< q< 2, $$
which follows from (3.14). Finally, by the dominated convergence theorem, we obtain
$$ H_{1}\bigl(t,X_{t}^{0,a}, Y_{t}^{1n_{k}},Y_{t}^{2n_{k}},Z_{t}^{1n_{k}},Z_{t}^{2n_{k}} \bigr) \xrightarrow{k\rightarrow\infty} H_{1}\bigl(t,X_{t}^{0,a},Y_{t}^{1},Y_{t}^{2},Z_{t}^{1},Z_{t}^{2} \bigr) \quad\text{in } \mathcal{H}_{T}^{q}, $$
which yields the convergence of \(I_{3}^{n}\) in (3.19) to 0.

It follows that the sequence \((H_{1n}(t,X_{t}^{0,a},Y_{t}^{1n},Y_{t}^{2n},Z_{t}^{1n},Z_{t}^{2n})_{0\leq t\leq T})_{n\geq1}\) converges to \((H_{1}(t, X_{t}^{0,a},Y_{t}^{1},Y_{t}^{2},Z_{t}^{1},Z_{t}^{2}))_{0\leq t\leq T}\) in \(\mathcal{L}^{1}([0,T]\times\Omega, dt\otimes d\mathbf{P})\) and then \(F_{1}(t,X_{t}^{0,a})=H_{1}(t,X_{t}^{0,a},Y_{t}^{1},Y_{t}^{2}, Z_{t}^{1},Z_{t}^{2})\), \(dt\otimes d\mathbf{P}\)-a.e. In the same way, we have \(F_{2}(t, X_{t}^{0,a})=H_{2}(t,X_{t}^{0,a},Y_{t}^{1},Y_{t}^{2},Z_{t}^{1},Z_{t}^{2})\), \(dt\otimes d\mathbf{P}\)-a.e. Thus, the processes \((Y^{i}, Z^{i})\), \(i=1,2\), are the solutions of the backward equation (3.1). □

4 Generalizations

As we can see from Assumption 2.3(i), this model deals with BSDE (2.5) with stochastic linear growth generators. However, when x takes the value \(X^{0,a}\), the coefficient of the component \(y^{i}\) in (2.4) is deterministic. Therefore, one possible generalization is to extend this model by adopting the following assumption in place of Assumption 2.3.

Assumption 4.1

  1. (i)
    There exist a constant \(C>0\) and \(\gamma\in(0,2)\) such that, for each \((t,x,y^{1},\ldots,y^{n},z^{1},\ldots,z^{n})\in[0,T]\times \mathbf{R}^{m}\times \mathbf{R}^{n}\times \mathbf{R}^{nm}\) and each \(i=1,2,\ldots,n\),
    $$ \bigl\vert H_{i}\bigl(t,x,y^{1},\ldots,y^{n},z^{1}, \ldots,z^{n}\bigr)\bigr\vert \leq C\bigl(1+\vert x\vert \bigr) \bigl\vert z^{i}\bigr\vert + C\bigl(1+|x|^{\gamma }\bigr)\bigl\vert y^{i}\bigr\vert ; $$
  2. (ii)

    the mapping \((y^{1},\ldots, y^{n}, z^{1},\ldots,z^{n})\in \mathbf{R}^{n}\times \mathbf{R}^{nm}\longmapsto H_{i}(t,x,y^{1},\ldots, y^{n},z^{1},\ldots,z^{n}) \in \mathbf{R}\) is continuous for any fixed \((t,x) \in[0,T]\times \mathbf{R}^{m}\).


Then we obtain a similar conclusion below. Notice, however, that we can only deal conveniently with the case where γ is strictly smaller than 2.

Theorem 4.1

Let \(a\in \mathbf{R}^{m}\) be fixed. Then, under Assumptions 2.1, 2.2 and 4.1, there exist two pairs of \(\mathcal {P}\)-measurable processes \((Y^{i}, Z^{i})\) with values in \(\mathbf{R}^{1+m}\), \(i=1,2\), and two deterministic functions \(\varsigma^{i}(t,x)\), \(i=1,2\), satisfying the growth property \(|\varsigma^{i}(t,x)|\leq e^{C(1+|x|^{\gamma })}\) with \(0<\gamma<2 \), such that equations (3.1) hold true.

Sketch of the proof

One can follow the same method as in the proof of Theorem 3.1. We only point out here some differences in Step 2 related to the growth property of the deterministic function \(\varsigma^{1n}\).

In Step 2, we consider the following BSDE sequence:
$$\begin{aligned} \bar{Y}_{s}^{1n} =&g^{1}\bigl(X_{T}^{t,x} \bigr)+\int_{s}^{T} \bigl[C\bigl(1+\bigl\vert \varphi_{n} \bigl(X_{r}^{t,x}\bigr)\bigr\vert \bigr)\bigl\vert \bar{Z}_{r}^{1n}\bigr\vert \\ &{}+C\bigl(1+\bigl\vert \varphi_{n}\bigl(X_{r}^{t,x} \bigr)\bigr\vert ^{\gamma}\bigr) \bigl\vert \bar{Y}_{r}^{1n} \bigr\vert \bigr]\, dr -\int_{s}^{T} \bar{Z}_{r}^{1n}\,dB_{r},\quad s\in[t,T]. \end{aligned}$$
After changing probability following the same line as (3.8), we obtain
$$ \bar{Y}_{s}^{1n}=g^{1}\bigl(X_{T}^{t,x} \bigr)+\int_{s}^{T} C\bigl(1+\bigl\vert \varphi _{n}\bigl(X_{r}^{t,x}\bigr)\bigr\vert ^{\gamma}\bigr) \bigl\vert \bar{Y}_{r}^{1n}\bigr\vert \,dr -\int_{s}^{T} \bar{Z}_{r}^{1n}\,dB_{r}^{n},\quad s\in[t,T]. $$
Considering \(\bar{Y}_{s}^{1n}=\bar{\varsigma}^{1n}(s, X_{s}^{t,x})\) for the deterministic function \(\bar{\varsigma}^{1n}\), we obviously have
$$\begin{aligned} \bar{\varsigma}^{1n}(t,x) =&\mathbf{E}^{n} \bigl[g^{1} \bigl(X_{T}^{t,x}\bigr)e^{\int _{t}^{T}C(1+\vert \varphi_{n}(X_{r}^{t,x})\vert ^{\gamma})\operatorname{sign}(\bar{Y}_{r}^{1n})\,dr} \bigr] \\ \leq&\mathbf{E}^{n} \bigl[e^{C\sup_{0\leq s\leq T}(1+\vert X_{s}^{t,x}\vert ^{\gamma })} \bigr] \\ =&\mathbf{E}\bigl[e^{C\sup_{0\leq s\leq T}(1+\vert X_{s}^{t,x}\vert ^{\gamma})}\mathcal {E}_{T} \bigr] \\ \leq& e^{C(1+|x|^{\gamma})}. \end{aligned}$$
This inequality is obtained from the integrability of \(\mathcal{E}_{T}\) in \(\mathcal{L}^{p_{0}}\) for some \(p_{0}\in(1,2)\) and the fact that \(1+|x|< e^{1+|x|}\) for any x (the constant C is allowed to change from line to line). Therefore, by the comparison theorem, we know that \(\varsigma^{1n}(t,x)\leq e^{C(1+|x|^{\gamma})}\). In a similar way, we can also show that \(\varsigma^{1n}(t,x)\geq -e^{C(1+|x|^{\gamma})}\).
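The passage from the expectation under P to the stated bound rests on Hölder’s inequality with \(q_{0}\) the conjugate exponent of \(p_{0}\); a sketch of this step, with C changing from line to line:

```latex
% Hölder's inequality with 1/p_0 + 1/q_0 = 1, p_0 in (1,2):
\mathbf{E}\bigl[e^{C\sup_{0\leq s\leq T}(1+|X_{s}^{t,x}|^{\gamma})}\,\mathcal{E}_{T}\bigr]
\leq \Bigl(\mathbf{E}\bigl[e^{q_{0}C\sup_{0\leq s\leq T}(1+|X_{s}^{t,x}|^{\gamma})}\bigr]\Bigr)^{1/q_{0}}
     \bigl(\mathbf{E}\bigl[\mathcal{E}_{T}^{p_{0}}\bigr]\bigr)^{1/p_{0}}
\leq e^{C(1+|x|^{\gamma})}.
```

The first factor on the right is finite precisely because γ is strictly smaller than 2.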
Combining this growth property and the fact that \(Y_{s}^{1n;(t,x)}=\varsigma^{1n}(s, X_{s}^{t,x})\), we conclude that, for any \(\alpha>1\) and \(0<\gamma<2\),
$$\mathbf{E}\Bigl[\sup_{t\leq s\leq T} \bigl\vert Y_{s}^{in;(t,x)} \bigr\vert ^{\alpha}\Bigr]\leq C, $$
since the estimate
$$ \mathbf{E}\Bigl[\sup_{t\leq s\leq T} e^{C(1+\vert X_{s}^{t,x}\vert ^{\gamma})} \Bigr]\leq C $$
holds only for positive γ strictly smaller than 2. When γ equals 2 or is bigger than 2, estimate (4.4) is not necessarily true for arbitrary \(T>0\); in this case, (4.4) may explode, and this is the reason why we only consider \(\gamma\in(0,2)\) in Assumption 4.1. However, for \(\gamma \geq2\), we can still expect that (4.4) holds true for T small enough.
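The restriction on γ can be illustrated by a standard Gaussian computation; as an illustration, take X to be a one-dimensional Brownian motion B (zero drift, unit diffusion coefficient), so that \(\gamma=2\) corresponds to a squared-exponential moment:

```latex
% For X = B a one-dimensional Brownian motion and c > 0:
\mathbf{E}\bigl[e^{c B_{T}^{2}}\bigr]
 = \frac{1}{\sqrt{2\pi T}}\int_{\mathbf{R}} e^{c x^{2}}\,e^{-x^{2}/(2T)}\,dx
 = \begin{cases}
     (1-2cT)^{-1/2}, & cT < \tfrac{1}{2},\\[2pt]
     +\infty, & cT \geq \tfrac{1}{2}.
   \end{cases}
```

So for γ=2 the exponential moment is finite only when T is small relative to the constant c, in line with the remark above.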

The rest of the proof presents no further difficulties, and we therefore omit it.



The authors thank the anonymous reviewers for their useful comments and suggestions. The first author was supported in part by the Natural Science Foundation for Young Scientists of Jiangsu Province, P.R. China (No. BK20140299). The second author was supported in part by National Natural Science Foundation of China (11221061 and 61174092), 111 project (B12023), the National Natural Science Fund for Distinguished Young Scholars of China (11125102) and the Chang Jiang Scholar Program of Chinese Education Ministry.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Mathematics, Shandong University, Jinan, P.R. China


References

  1. Bismut, J-M: Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 44(2), 384-404 (1973)
  2. Pardoux, E, Peng, S: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14(1), 55-61 (1990)
  3. Duffie, D, Epstein, LG: Stochastic differential utility. Econometrica 60, 353-394 (1992)
  4. El-Karoui, N, Mazliak, L: Backward Stochastic Differential Equations. Pitman Research Notes in Mathematics Series, vol. 364. Longman, Harlow (1997)
  5. El-Karoui, N, Hamadène, S, Matoussi, A: Backward stochastic differential equations and applications. In: Carmona, R (ed.) Indifference Pricing: Theory and Applications, pp. 267-320. Princeton University Press, Princeton (2008)
  6. Carmona, R: Indifference Pricing: Theory and Applications. Princeton University Press, Princeton (2009)
  7. Yong, J, Zhou, XY: Stochastic Controls: Hamiltonian Systems and HJB Equations, vol. 43. Springer, New York (1999)
  8. El-Karoui, N, Peng, S, Quenez, MC: Backward stochastic differential equations in finance. Math. Finance 7(1), 1-71 (1997)
  9. Peng, S: Backward stochastic differential equation, nonlinear expectation and their applications. In: Proceedings of the International Congress of Mathematicians, pp. 393-432 (2011)
  10. Mao, X: Adapted solutions of backward stochastic differential equations with non-Lipschitz coefficients. Stoch. Process. Appl. 58(2), 281-292 (1995)
  11. Hamadène, S: Équations différentielles stochastiques rétrogrades: le cas localement lipschitzien. Ann. Inst. Henri Poincaré B, Probab. Stat. 32(5), 645-659 (1996)
  12. Lepeltier, J-P, San Martin, J: Backward stochastic differential equations with continuous coefficient. Stat. Probab. Lett. 32(4), 425-430 (1997)
  13. Jia, G: A uniqueness theorem for the solution of backward stochastic differential equations. C. R. Math. 346(7), 439-444 (2008)
  14. Briand, P, Carmona, R: BSDEs with polynomial growth generators. Int. J. Stoch. Anal. 13(3), 207-238 (2000)
  15. Briand, P, Lepeltier, J-P, San Martin, J: One-dimensional backward stochastic differential equations whose coefficient is monotonic in y and non-Lipschitz in z. Bernoulli 13(1), 80-91 (2007)
  16. Kobylanski, M: Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 28(2), 558-602 (2000)
  17. Briand, P, Hu, Y: BSDE with quadratic growth and unbounded terminal value. Probab. Theory Relat. Fields 136(4), 604-618 (2006)
  18. Hamadène, S, Lepeltier, J-P, Peng, S: BSDEs with continuous coefficients and stochastic differential games. In: Pitman Research Notes in Mathematics Series, pp. 115-128 (1997)
  19. Bahlali, K: Backward stochastic differential equations with locally Lipschitz coefficient. C. R. Acad. Sci., Sér. 1 Math. 333(5), 481-486 (2001)
  20. Bahlali, K: Existence and uniqueness of solutions for BSDEs with locally Lipschitz coefficient. Electron. Commun. Probab. 7(17), 169-179 (2002)
  21. Hamadène, S, Mu, R: Existence of Nash equilibrium points for Markovian non-zero-sum stochastic differential games with unbounded coefficients. Stoch. Int. J. Probab. Stoch. Process. 87, 85-111 (2015). doi:10.1080/17442508.2014.915973
  22. Karatzas, I, Shreve, SE: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, New York (1991)
  23. Haussmann, UG: A Stochastic Maximum Principle for Optimal Control of Diffusions. Wiley, New York (1986)


© Mu and Wu 2015