# A regression-based Monte Carlo method to solve two-dimensional forward backward stochastic differential equations

## Abstract

The purpose of this paper is to investigate numerical solutions to two-dimensional forward backward stochastic differential equations (FBSDEs). Based on the Fourier cos-cos transform, approximations of the conditional expectations and their errors are studied with conditional characteristic functions. A new numerical scheme is proposed that uses the least-squares regression-based Monte Carlo method to compute the initial value of the FBSDEs. Finally, a numerical experiment in European option pricing is implemented to test the efficiency and stability of this scheme.

## Introduction

In this paper, we consider the numerical solutions to the two-dimensional decoupled forward backward stochastic differential equations (FBSDEs):

\begin{aligned} &X_{t} = X_{0} + \int _{0}^{t} \mu (X_{s})\,d s + \int _{0}^{t} \sigma (X_{s})\,d W_{s},\qquad X_{0}=x_{0}, \end{aligned}
(1)
\begin{aligned} &Y_{t} = g(X_{T}) + \int _{t}^{T} f(s,Y_{s},Z_{s}) \,d s - \int _{t}^{T} Z_{s} \,d W_{s},\qquad Y_{T}=g(X_{T}), \end{aligned}
(2)

where $$X_{t}=(X^{1}_{t},X^{2}_{t})^{*}, 0\leq t \leq T$$, is a two-dimensional forward component and $$Y_{t}, 0\leq t \leq T$$, is a one-dimensional backward component. $$\mu (X_{t}) =(\mu _{1}(X^{1}_{t}),\mu _{2}(X^{2}_{t}))^{*}, \sigma (X_{t}) =\operatorname{diag}(\sigma _{1}(X^{1}_{t}),\sigma _{2}(X^{2}_{t}))$$ are the drift and volatility terms. $$W_{t}=(W^{1}_{t},W^{2}_{t})^{*}, 0 \leq t \leq T$$, is a standard two-dimensional Brownian motion defined on a filtered probability space $$(\Omega ,\mathcal{F},\mathbf{P}, (\mathcal{F}_{t})_{0\leq t\leq T})$$, where $$\mathcal{F}_{t}$$ is the filtration generated by $$W_{t}$$. Here, $$(\cdot )^{*}$$ denotes the transpose of a vector.

Under standard conditions on f and g, Pardoux and Peng [1] proved that there exists a unique solution to nonlinear FBSDEs. However, it is often difficult to obtain analytic solutions, so numerical schemes are crucial. The key to any such scheme is how to discretize the conditional expectations. Up to now, there have been many methods to solve this problem. Zhang et al. [2] constructed a sparse-grid Gauss–Hermite quadrature rule together with hierarchical sparse-grid interpolation to approximate these conditional expectations. Fu et al. [3] gave a method of spectral sparse grid approximations to deal with high-dimensional conditional expectations. It is known that the Fourier transform is an important tool in option pricing, not only in the SDE framework but also in the ODE framework. With the Fourier cos transform, one can convert a conditional expectation into a series form. By constructing a function basis from the truncated series, one can approximate the conditional expectation. More efficient methods and fast algorithms follow this idea; for details, one can refer to [4–6]. These schemes are applied in many fields, such as option pricing [7–10], portfolio optimization [11, 12], and so on. Ruijter and Oosterlee [13] extended the Fourier cos method to two-dimensional FBSDEs, named the Fourier cos-cos method, and gave a numerical scheme for pricing European and Bermudan options under the GBM model and the Heston stochastic volatility model. Recently, Meng and Ding [14] investigated a modified Fourier sin-sin method to price rainbow options within two-dimensional BSDEs. Numerical experiments showed that its convergence and efficiency were as expected. Inspired by this literature, we extend the idea to two-dimensional FBSDEs by using the Fourier cos-cos transform and least-squares Monte Carlo regression to obtain the numerical solution to FBSDEs (1) and (2).
It is a supplement to our previous work in [15], and it can be extended to high-dimensional FBSDEs.

The paper is organized as follows. In Sect. 2, some assumptions on the FBSDEs are given to ensure the existence of a solution. For the discretization of forward equation (1), we use the classical Euler scheme, which was also used by Zhao et al. [16]; for backward equation (2), we use the theta scheme. In Sect. 3, we give the approximations of the conditional expectations arising from the discretization of backward equation (2) and analyze their errors. In Sect. 4, we present a numerical scheme based on least-squares Monte Carlo regression and provide an example in option pricing as a numerical experiment. In Sect. 5, we conclude our investigation.

## Discretization of FBSDEs

In this section, we denote by $$L_{T}^{2}(\mathbf{R}^{2})$$ the set of $$\mathcal{F}_{T}$$-measurable random variables $$X: \Omega \to \mathbf{R}^{2}$$ which are square integrable, and by $$\mathcal{H}_{T}^{2}(\mathbf{R})$$ the set of predictable processes $$\eta: \Omega \times [0, T] \to \mathbf{R}$$ such that

$$\mathbb{E} \biggl[ \int _{0}^{T} \vert \eta _{t} \vert ^{2}\,d t \biggr]< \infty ,$$

where $$|\cdot |$$ is the standard Euclidean norm. The terminal condition $$Y_{T}$$ in equation (2) is $$\mathcal{F}_{T}$$-measurable and square integrable. We make the following assumptions:

1. (A1)

The function $$g(x)$$ is globally Lipschitz continuous.

2. (A2)

The functions $$\mu (x)$$ and $$\sigma (x)$$ are uniformly Lipschitz continuous and satisfy a linear growth condition.

3. (A3)

The generator $$f(t,y,z)$$ satisfies the following continuity condition:

$$\bigl\vert f(t_{2},y_{2},z_{2}) - f(t_{1},y_{1},z_{1}) \bigr\vert \leq C_{f} \bigl( \vert t_{2}-t_{1} \vert ^{1/2} + \vert y_{2}-y_{1} \vert + \vert z_{2}-z_{1} \vert \bigr)$$

for any $$(t_{2},y_{2},z_{2}),(t_{1},y_{1},z_{1})\in [0,T]\times \mathbf{R} \times \mathbf{R}^{2}$$, where $$C_{f}>0$$ is a constant.

Assumptions (A1), (A2), and (A3) guarantee the existence and uniqueness of the solution $$(X_{t},Y_{t},Z_{t})$$ to FBSDEs (1)–(2). Now we are in a position to discretize FBSDEs (1) and (2) by using the Euler scheme. Given a partition $$\Delta: 0 = t_{0}< t_{1}<\cdots <t_{M}=T$$ with time steps $$\Delta t_{m} = t_{m}-t_{m-1}$$, denote $$X_{m}= X_{t_{m}}, Y_{m}= Y_{t_{m}}, Z_{m}= Z_{t_{m}}$$, and $$\Delta W_{m} = W_{t_{m}}-W_{t_{m-1}}$$. The classical Euler discretization for FSDE (1) is

$$X^{\Delta }_{m} = X^{\Delta }_{m-1} + \mu \bigl(X^{\Delta }_{m-1}\bigr)\Delta t_{m} + \sigma \bigl(X^{\Delta }_{m-1}\bigr)\Delta W_{m}$$

for $$m=1, \ldots , M$$. In the time interval $$[0, T]$$, we rewrite BSDE (2) to the following form:

$$Y_{m-1} = Y_{m} + \int _{t_{m-1}}^{t_{m}} f(s,Y_{s},Z_{s}) \,d s - \int _{t_{m-1}}^{t_{m}} Z_{s}\,d W_{s}.$$
(3)

Considering $$Y_{t}$$ to be an $$(\mathcal{F}_{t})$$-adapted process, we take conditional expectations on both sides of equation (3) with respect to filtration $$\mathcal{F}_{t_{m-1}}$$, and then we have an iteration backward equation

\begin{aligned} Y_{m-1} &= \mathbb{E}_{m-1}^{x} [Y_{m} ] + \int _{t_{m-1}}^{t_{m}} \mathbb{E}_{m-1}^{x} \bigl[f (s,Y_{s},Z_{s} ) \bigr]\,d s , \end{aligned}
(4)

where $$\mathbb{E}_{m-1}^{x}[\cdot ] = \mathbb{E} [\cdot \mid X^{\Delta }_{m-1}=x ]$$. Multiplying both sides of (3) by $$\Delta W^{*}_{m}$$ and taking conditional expectations, we have

\begin{aligned} 0={}& \mathbb{E}_{m-1}^{x} \bigl[Y_{m}\Delta W^{*}_{m} \bigr]+ \int _{t_{m-1}}^{t_{m}} \mathbb{E}_{m-1}^{x} \bigl[f (s,Y_{s},Z_{s} )\cdot \Delta W^{*}_{m} \bigr]\,d s \\ &{} -\mathbb{E}_{m-1}^{x} \biggl[ \int _{t_{m-1}}^{t_{m}} Z_{s}\,d W_{s} \cdot \Delta W^{*}_{m} \biggr]. \end{aligned}
(5)

Applying the theta discretization method to (4) and (5), we obtain a discrete solution $$(Y^{\Delta }_{m-1}, Z^{\Delta }_{m-1})$$ that approximates the solution $$(Y_{t_{m-1}},Z_{t_{m-1}})$$ to BSDE (2):

\begin{aligned} Y^{\Delta }_{m-1} ={}& \mathbb{E}_{m-1}^{x} \bigl[Y^{\Delta }_{m} \bigr] + \theta _{1} f \bigl(t_{m-1},Y^{\Delta }_{m-1},Z^{\Delta }_{m-1} \bigr) \Delta t_{m} \\ & {}+ (1-\theta _{1})\mathbb{E}_{m-1}^{x} \bigl[f \bigl(t_{m},Y^{ \Delta }_{m},Z^{\Delta }_{m} \bigr) \bigr]\Delta t_{m}, \end{aligned}
(6)
\begin{aligned} Z^{\Delta }_{m-1} ={}&{ -}\frac{1-\theta _{2}}{\theta _{2}} \mathbb{E}_{m-1}^{x} \bigl[Z^{\Delta }_{m} \bigr]+ \frac{1}{\theta _{2}}\mathbb{E}_{m-1}^{x} \bigl[Y^{\Delta }_{m}\Delta W^{*}_{m} \bigr] \frac{1}{\Delta t_{m}} \\ &{} +\frac{1-\theta _{2}}{\theta _{2}} \mathbb{E}_{m-1}^{x} \bigl[f \bigl(t_{m},Y^{\Delta }_{m},Z^{\Delta }_{m} \bigr)\Delta W^{*}_{m} \bigr] . \end{aligned}
(7)

Here, $$\theta _{1}$$ and $$\theta _{2}$$ are two parameters in the theta discretization scheme. As a consequence of the Feynman–Kac theorem, the terminal values $$Y_{M}$$ and $$Z_{M}$$ are both deterministic functions of $$X^{\Delta }_{M}$$, i.e., $$Y_{M} = g(X_{M})$$ and $$Z_{M} = \nabla g(X_{{M}})\cdot \sigma (X_{{M}})$$, where ∇ denotes the gradient operator with respect to the argument. Now, combining equations (6) and (7), we see that the solution $$(Y^{\Delta }_{m-1}, Z^{\Delta }_{m-1})$$ is represented by two kinds of conditional expectations

$$U(x)=\mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \bigr],\qquad V(x)= \mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{*}_{m} \bigr]$$

for some function $$\upsilon (x)$$. The first conditional expectation is scalar-valued, while the second is vector-valued. Motivated by the successful use of the Fourier cos-cos method in two-dimensional BSDEs, we use the Fourier transform to obtain approximation expressions for the above conditional expectations.
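To make the discretization concrete, here is a minimal Python sketch of the Euler scheme for the forward component, specialized to the GBM coefficients $$\mu _{j}(x_{j})=\mu _{j}x_{j}, \sigma _{j}(x_{j})=\sigma _{j}x_{j}$$ used in the numerical experiment of Sect. 4 (the function name and array layout are our own choices):

```python
import numpy as np

def euler_paths(x0, mu, sigma, T, M, L, rng):
    """Euler scheme for the forward SDE (1) with GBM coefficients
    mu_j(x) = mu_j * x_j and sigma_j(x) = sigma_j * x_j (diagonal volatility)."""
    dt = T / M
    dW = np.sqrt(dt) * rng.standard_normal((M, L, 2))  # Brownian increments
    X = np.empty((M + 1, L, 2))
    X[0] = x0
    for m in range(M):
        # X_m = X_{m-1} + mu(X_{m-1}) dt + sigma(X_{m-1}) dW_m
        X[m + 1] = X[m] + mu * X[m] * dt + sigma * X[m] * dW[m]
    return X, dW

rng = np.random.default_rng(0)
X, dW = euler_paths(x0=np.array([100.0, 100.0]), mu=np.array([0.05, 0.05]),
                    sigma=np.array([0.2, 0.2]), T=0.1, M=17, L=20000, rng=rng)
```

The simulated paths $$X^{\Delta }_{m,l}$$ and increments $$\Delta W_{m,l}$$ are exactly the inputs required by the backward regression scheme of Sect. 4.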

## Approximation of conditional expectation and error analysis

In this section, we give the approximations of the conditional expectations $$U(x),V(x)$$ and their error analysis. First, we consider $$U(x)$$. Let $$p(y|x)$$ denote the conditional density function of $$X_{m}^{\Delta }$$ given $$X_{m-1}^{\Delta }=x$$. The symbol ∑∑′ in the theorems below means that the first term of each sum is weighted by one-half, and $$\operatorname{Re}\{\cdot \}$$ denotes the real part of a complex number.

### Theorem 3.1

Let $$\phi (w_{1},w_{2}|x_{1},x_{2})$$ be the conditional characteristic function of $$p(y_{1},y_{2}|x_{1}, x_{2})$$, and denote $$\phi (w_{1},w_{2}|0,0)=\phi _{\mathrm{levy}}(w_{1},w_{2})$$. Then, for any rectangular area $$D=[a_{1}, b_{1}]\times [a_{2}, b_{2}] \subset \mathbf{R}^{2}$$, the conditional expectation $$U(x)$$ has the following expansion:

\begin{aligned} U(x)={}&\sum_{k_{1}=0}^{+\infty }\sum _{k_{2}=0}^{+\infty } {'}X_{k_{1},k_{2}}(x_{1},x_{2})B_{k_{1},k_{2}}- \sum_{k_{1}=0}^{+\infty }\sum _{k_{2}=0}^{+\infty }{'}Y_{k_{1},k_{2}}(x_{1},x_{2}) B_{k_{1},k_{2}} \\ &{}+ \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} , \end{aligned}
(8)

where

\begin{aligned} &X_{k_{1},k_{2}}(x_{1},x_{2})=\frac{1}{2}{ \operatorname{Re}} \biggl[ \phi _{\mathrm{levy}} \biggl( \frac{k_{1}\pi }{b_{1}-a_{1}}, \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}+i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \\ &\phantom{X_{k_{1},k_{2}}(x_{1},x_{2})=}{}+\phi _{\mathrm{levy}} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}},- \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}-i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \biggr], \\ &Y_{k_{1},k_{2}}(x_{1},x_{2})= \iint _{\mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}. \end{aligned}

And

$$B_{k_{1},k_{2}} = \frac{2}{b_{1}-a_{1}}\frac{2}{b_{2}-a_{2}} \iint _{D} \upsilon (y_{1},y_{2})\cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}$$

is a Fourier cosine coefficient of $$\upsilon (y_{1},y_{2})$$.

### Proof

Splitting the integration over the truncated finite region D, we have

\begin{aligned} U(x_{1},x_{2})&= \iint _{\mathbf{R}^{2}} \upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} \\ &= \iint _{D}\upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2}+ \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2}. \end{aligned}

Applying the Fourier cos-cos expansion to $$p(y_{1},y_{2}|x_{1},x_{2})$$ on D, we have

$$p(y_{1},y_{2}|x_{1},x_{2}) = \sum _{k_{1}=0}^{+\infty }\sum_{k_{2}=0}^{+ \infty }{'}A_{k_{1},k_{2}}(x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2} \pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr),$$

where $$A_{k_{1},k_{2}}(x_{1},x_{2})$$ is the Fourier cosine coefficient of $$p(y_{1},y_{2}|x_{1},x_{2})$$:

\begin{aligned} A_{k_{1},k_{2}}(x_{1},x_{2})={}& \frac{2}{b_{1}-a_{1}} \frac{2}{b_{2}-a_{2}} \iint _{D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1} \pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \\ &{}\times \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}. \end{aligned}
(9)

Then we have

\begin{aligned} & U(x)- \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} \\ &\quad = \frac{b_{1}-a_{1}}{2} \frac{b_{2}-a_{2}}{2} \sum_{k_{1}=0}^{+ \infty } \sum_{k_{2}=0}^{+\infty }{'}A_{k_{1},k_{2}}(x_{1},x_{2})B_{k_{1},k_{2}}. \end{aligned}
(10)

With the cos formula

\begin{aligned} 2 \cos \alpha \cos \beta =\cos (\alpha +\beta )+ \cos (\alpha -\beta ), \end{aligned}

the integral in equation (9) can be rewritten in the following form:

\begin{aligned} &2 \iint _{D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2} \\ &\quad=2 \iint _{\mathbf{R}^{2}} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1} \pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2} \\ &\qquad{}-2 \iint _{\mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2} \\ &\quad=\operatorname{Re} \biggl[ \iint _{\mathbf{R}^{2}} p(y_{1},y_{2}|x_{1},x_{2}) \exp \biggl(i\frac{k_{1}y_{1}\pi }{b_{1}-a_{1}}+i \frac{k_{2}y_{2}\pi }{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}\\ &\qquad{}\times \exp \biggl(-i \frac{k_{1}a_{1}\pi }{b_{1}-a_{1}}-i\frac{k_{2}a_{2}\pi }{b_{2}-a_{2}} \biggr) \biggr] \\ &\qquad{}+\operatorname{Re} \biggl[ \iint _{\mathbf{R}^{2}} p(y_{1},y_{2}|x_{1},x_{2}) \exp \biggl(i\frac{k_{1}y_{1}\pi }{b_{1}-a_{1}}-i \frac{k_{2}y_{2}\pi }{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}\\ &\qquad{}\times \exp \biggl(-i \frac{k_{1}a_{1}\pi }{b_{1}-a_{1}}+i\frac{k_{2}a_{2}\pi }{b_{2}-a_{2}} \biggr) \biggr] \\ &\qquad{}-2 \iint _{\mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2} \\ &\quad= \operatorname{Re} \biggl[ \phi \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}}, \frac{k_{2}\pi }{b_{2}-a_{2}}|x_{1},x_{2} \biggr) \cdot \exp \biggl(-i \frac{k_{1}a_{1}\pi }{b_{1}-a_{1}}-i\frac{k_{2}a_{2}\pi }{b_{2}-a_{2}}\biggr) \biggr] \\ &\qquad{}+\operatorname{Re} \biggl[ \phi \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}},- \frac{k_{2}\pi }{b_{2}-a_{2}}|x_{1},x_{2} \biggr) \cdot \exp \biggl(-i \frac{k_{1}a_{1}\pi }{b_{1}-a_{1}}+i\frac{k_{2}a_{2}\pi }{b_{2}-a_{2}}\biggr) \biggr] \\ &\qquad{}-2 \iint _{\mathbf{R}^{2}\backslash D} 
p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2} \\ &\quad= \operatorname{Re} \biggl[ \phi _{\mathrm{levy}} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}}, \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}+i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \biggr] \\ &\qquad{}+\operatorname{Re} \biggl[ \phi _{\mathrm{levy}} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}},- \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}-i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \biggr] \\ &\qquad{}-2 \iint _{\mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}. \end{aligned}

Substituting the above equation into (10), we can obtain the form of equation (8). □

Next, we consider the two-dimensional conditional expectation

$$V(x)=\mathbb{E}_{m-1}^{x_{1},x_{2}} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{*}_{m} \bigr].$$

The difficulty with $$V(x)$$ lies in handling the Brownian increment $$\Delta W_{m}$$. Since the components of $$\Delta W_{m}$$ are independent, we can handle them separately. Denote

$$V_{1}(x)=\mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{1}_{m} \bigr],\qquad V_{2}(x)=\mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{2}_{m} \bigr],$$

and assume that the given condition is $$(X^{1,\Delta }_{m-1},X^{2,\Delta }_{m-1})=(x_{1},x_{2})$$. Then, inverting the forward Euler scheme for equation (1), we define

\begin{aligned} &\rho _{x_{1}}\bigl(X^{1,\Delta }_{m}\bigr)= \frac{1}{\sigma _{1}(x_{1})} \bigl(X^{1, \Delta }_{m}-x_{1}-\mu _{1}(x_{1})\Delta t_{m} \bigr), \\ &\rho _{x_{2}}\bigl(X^{2,\Delta }_{m}\bigr)= \frac{1}{\sigma _{2}(x_{2})} \bigl(X^{2, \Delta }_{m}-x_{2}-\mu _{2}(x_{2})\Delta t_{m} \bigr). \end{aligned}

We find that

\begin{aligned} V_{1}(x)=\mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{1}_{m} \bigr]= \iint _{\mathbf{R}^{2}} \upsilon (y_{1},y_{2}) \rho _{x_{1}}(y_{1})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} \end{aligned}

and

\begin{aligned} V_{2}(x)=\mathbb{E}_{m-1}^{x} \bigl[\upsilon \bigl(X^{\Delta }_{m}\bigr) \Delta W^{2}_{m} \bigr]= \iint _{\mathbf{R}^{2}} \upsilon (y_{1},y_{2}) \rho _{x_{2}}(y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2}. \end{aligned}

These integrals are similar to $$U(x)$$ and can be calculated by the method of Theorem 3.1. Next we directly give the expansions of $$V_{1}(x)$$ and $$V_{2}(x)$$.

### Theorem 3.2

Under the assumptions of Theorem 3.1, for any rectangular area $$D=[a_{1}, b_{1}]\times [a_{2}, b_{2}] \subset \mathbf{R}^{2}$$, the components of conditional expectation $$V(x)$$ have the following expansions:

\begin{aligned} V_{j}(x)={}&\sum_{k_{1}=0}^{+\infty }\sum _{k_{2}=0}^{+\infty } {'}X_{k_{1},k_{2}}(x_{1},x_{2}) \widetilde{B}_{k_{1},k_{2}}(x_{j})- \sum _{k_{1}=0}^{+\infty }\sum_{k_{2}=0}^{+ \infty }{'}Y_{k_{1},k_{2}}(x_{1},x_{2}) \widetilde{B}_{k_{1},k_{2}}(x_{j}) \\ &{}+ \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2}) \rho _{x_{j}}(y_{j}) p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} , \end{aligned}

where

\begin{aligned} \widetilde{B}_{k_{1},k_{2}}(x_{j}) ={}& \frac{2}{b_{1}-a_{1}} \frac{2}{b_{2}-a_{2}} \iint _{D} \upsilon (y_{1},y_{2})\rho _{x_{j}}(y_{j}) \\ &{}\times \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \,d y_{1} \,d y_{2} \end{aligned}

are Fourier cosine coefficients of $$\upsilon (y_{1},y_{2})\rho _{x_{j}}(y_{j})$$, $$j=1,2$$.
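The identity behind these expressions is that $$\rho _{x_{j}}$$ inverts the Euler step and recovers the Brownian increment, so multiplying by $$\Delta W^{j}_{m}$$ inside the expectation is equivalent to multiplying by $$\rho _{x_{j}}(X^{j,\Delta }_{m})$$. A quick simulation check illustrates this; the coefficient functions, starting point, and test function below are arbitrary examples of our own:

```python
import numpy as np

# rho_{x_1} inverts the Euler step, so v(X_m)*dW^1 and v(X_m)*rho_{x_1}(X^1_m)
# agree path by path; the coefficients below are arbitrary GBM-type examples.
mu1, sig1 = lambda x: 0.1 * x, lambda x: 0.3 * x
mu2, sig2 = lambda x: 0.2 * x, lambda x: 0.4 * x
x1, x2, dt, L = 1.0, 2.0, 0.01, 100_000

rng = np.random.default_rng(1)
dW = np.sqrt(dt) * rng.standard_normal((L, 2))
X1 = x1 + mu1(x1) * dt + sig1(x1) * dW[:, 0]      # one Euler step of (1)
X2 = x2 + mu2(x2) * dt + sig2(x2) * dW[:, 1]

rho1 = (X1 - x1 - mu1(x1) * dt) / sig1(x1)        # recovers dW^1
v = np.cos(X1) * np.cos(X2)                       # an arbitrary test function
print(np.mean(v * dW[:, 0]), np.mean(v * rho1))   # the two estimates coincide
```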

### Remark 1

There are various approaches to approximating conditional expectations, such as polynomial basis functions, the Malliavin approach, and Monte Carlo sequence convergence (see [17–22]). Most of them rely on a time-spatial approximation, which takes much more time to implement, especially when dealing with high-dimensional conditional expectations. The expansions in Theorem 3.1 and Theorem 3.2 retain much of the distributional information and therefore have many advantages in dealing with high-dimensional FBSDEs.

Theorem 3.1 and Theorem 3.2 suggest how to approximate the conditional expectations. For suitable integers $$N_{1},N_{2}$$, the conditional expectations $$U(x),V(x)$$ can be approximated by the truncated sums

\begin{aligned} U(x)\approx {}&\sum_{k_{1}=0}^{N_{1}-1} \sum_{k_{2}=0}^{N_{2}-1}{'} \frac{1}{2}{\operatorname{Re}} \biggl[ \phi _{\mathrm{levy}} \biggl( \frac{k_{1}\pi }{b_{1}-a_{1}}, \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}+i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \\ &{}+\phi _{\mathrm{levy}} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}},- \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}-i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \biggr]B_{k_{1},k_{2}} \end{aligned}
(11)

with the error

\begin{aligned} \epsilon (x) = {}& \epsilon _{1}(x) + \epsilon _{2}(x) - \epsilon _{3}(x) \\ = {}& \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} \\ &{}+ \frac{b_{1}-a_{1}}{2}\frac{b_{2}-a_{2}}{2} \Biggl[\sum _{k_{2}=N_{2}}^{+ \infty }\sum_{k_{1}=0}^{N_{1}-1} {'}+\sum_{k_{1}=N_{1}}^{+\infty } \sum _{k_{2}=0}^{N_{2}-1}{'}+\sum _{k_{1}=N_{1}}^{+\infty }\sum_{k_{2}=N_{2}}^{+ \infty }{'} \Biggr] A_{k_{1},k_{2}}(x_{1},x_{2})B_{k_{1},k_{2}} \\ & {} - \sum_{k_{1}=0}^{N_{1}-1}\sum _{k_{2}=0}^{N_{2}-1}{'}Y_{k_{1},k_{2}}(x_{1},x_{2})B_{k_{1},k_{2}} \end{aligned}

and

\begin{aligned} V_{j}(x)\approx {}&\sum_{k_{1}=0}^{N_{1}-1} \sum_{k_{2}=0}^{N_{2}-1}{'} \frac{1}{2}{\operatorname{Re}} \biggl[ \phi _{\mathrm{levy}} \biggl( \frac{k_{1}\pi }{b_{1}-a_{1}}, \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \cdot \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}+i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \\ &{}+\phi _{\mathrm{levy}} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}},- \frac{k_{2}\pi }{b_{2}-a_{2}}\biggr) \\ &{}\times \exp \biggl(i k_{1}\pi \frac{x_{1}-a_{1}}{b_{1}-a_{1}}-i k_{2}\pi \frac{x_{2}-a_{2}}{b_{2}-a_{2}}\biggr) \biggr]\widetilde{B}_{k_{1},k_{2}}(x_{j}) \end{aligned}
(12)

with the error

\begin{aligned} \widetilde{\epsilon }^{j}(x) = {}& \widetilde{\epsilon }^{j}_{1}(x) + \widetilde{\epsilon }^{j}_{2}(x) - \widetilde{\epsilon }^{j}_{3}(x) \\ = {}& \iint _{\mathbf{R}^{2}\backslash D} \upsilon (y_{1},y_{2})\rho _{x_{j}}(y_{j})p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2} \\ &{}+ \frac{b_{1}-a_{1}}{2}\frac{b_{2}-a_{2}}{2} \Biggl[\sum _{k_{2}=N_{2}}^{+ \infty }\sum_{k_{1}=0}^{N_{1}-1} {'}+\sum_{k_{1}=N_{1}}^{+\infty } \sum _{k_{2}=0}^{N_{2}-1}{'}+\sum _{k_{1}=N_{1}}^{+\infty }\sum_{k_{2}=N_{2}}^{+ \infty }{'} \Biggr] A_{k_{1},k_{2}}(x_{1},x_{2})\widetilde{B}_{k_{1},k_{2}}(x_{j}) \\ & {} - \sum_{k_{1}=0}^{N_{1}-1}\sum _{k_{2}=0}^{N_{2}-1}{'}Y_{k_{1},k_{2}}(x_{1},x_{2}) \widetilde{B}_{k_{1},k_{2}}(x_{j}) \end{aligned}

for $$j=1,2$$.

Now, we give the error analysis of the approximations. Ruijter and Oosterlee [13] pointed out that the coefficients $$A_{k_{1},k_{2}}(x_{1},x_{2})$$ usually decay faster than $$B_{k_{1},k_{2}}$$. Thus, we find that the error $$\epsilon _{2}(x)$$ converges exponentially in $$N_{1}$$ and $$N_{2}$$ for density functions in the class $$C^{\infty }([a_{1}, b_{1}]\times [a_{2}, b_{2}])$$, i.e.,

\begin{aligned} \bigl\vert \epsilon _{2}(x) \bigr\vert < P_{1}\exp \bigl(-(N-1)\nu \bigr) \end{aligned}
(13)

for some positive constants $$P_{1},\nu$$, where $$N=\min \{N_{1},N_{2}\}$$. If a density function has a discontinuity in one of its derivatives, then the error $$\epsilon _{2}(x)$$ converges algebraically, i.e.,

\begin{aligned} \bigl\vert \epsilon _{2}(x) \bigr\vert < P_{2} (N-1)^{1-\beta } \end{aligned}
(14)

for some positive constants $$P_{2},\beta$$, where $$N=\min \{N_{1},N_{2}\}$$. On the other hand, according to [23], the coefficients $$B_{k_{1},k_{2}}$$ decay at least algebraically, which gives the following bound on the tail of the Fourier cosine series: for suitable positive constants $$N,n,P,Q$$, we have

\begin{aligned} & \Biggl\vert \sum_{k_{1}\geq N_{1} \text{ or } k_{2}\geq N_{2}}^{+\infty }{'} \cos \biggl(k_{1} \pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)B_{k_{1},k_{2}} \Biggr\vert \\ &\quad \leq \sum_{k_{1}\geq N_{1} \text{ or } k_{2}\geq N_{2}}^{+\infty }{'} \vert B_{k_{1},k_{2}} \vert \leq \frac{P}{(N-1)^{n-1}}\leq Q. \end{aligned}

After interchanging the summation and integration, we rewrite $$\epsilon _{3}(x)$$ in another form:

\begin{aligned} &\epsilon _{3}(x) \\ &\quad=\sum_{k_{1}=0}^{N_{1}-1}\sum _{k_{2}=0}^{N_{2}-1}{'} \iint _{ \mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \cos \biggl(k_{1} \pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2}\pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)\,d y_{1} \,d y_{2}B_{k_{1},k_{2}} \\ &\quad=\epsilon _{1}(x)- \iint _{\mathbf{R}^{2}\backslash D} \Biggl[\Biggl(\sum_{k_{2}=N_{2}}^{+ \infty } \sum_{k_{1}=0}^{N_{1}-1}{'} +\sum _{k_{1}=N_{1}}^{+\infty } \sum _{k_{2}=0}^{N_{2}-1}{'}+\sum _{k_{1}=N_{1}}^{+\infty }\sum_{k_{2}=N_{2}}^{+ \infty }{'} \Biggr) \\ &\qquad{}\times \cos \biggl(k_{1}\pi \frac{y_{1}-a_{1}}{b_{1}-a_{1}}\biggr) \cos \biggl(k_{2} \pi \frac{y_{2}-a_{2}}{b_{2}-a_{2}}\biggr)B_{k_{1},k_{2}} \Biggr] p(y_{1},y_{2}|x_{1},x_{2})\,d y_{1} \,d y_{2}. \end{aligned}

Denoting by $$\epsilon _{4}(x)= \iint _{\mathbf{R}^{2}\backslash D} p(y_{1},y_{2}|x_{1},x_{2}) \,d y_{1}\,d y_{2}$$ the probability mass outside D, it then follows that

\begin{aligned} \bigl\vert \epsilon _{3}(x) \bigr\vert \leq \bigl\vert \epsilon _{1}(x) \bigr\vert +3Q \bigl\vert \epsilon _{4}(x) \bigr\vert . \end{aligned}
(15)

From (11)–(15), with a properly chosen truncation of the integration range, the overall error $$\epsilon (x)$$ converges. By the same method, we can also prove that the overall error $$\widetilde{\epsilon }^{j}(x)\ (j=1,2)$$ converges.

Therefore, if we choose a suitable region $$[a_{1},b_{1}]\times [a_{2},b_{2}]$$, the errors of the approximations can be well controlled, and we can use approximations (11) and (12) as substitutes for the conditional expectations. The key issue is the choice of basis functions. In the next section, we state our basis functions and employ the least-squares Monte Carlo regression method to solve FBSDEs (1) and (2) numerically.
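To illustrate the truncated expansion (11), the following sketch treats the simplest case $$X^{\Delta }_{m}=x+\Delta W_{m}$$, for which $$\phi _{\mathrm{levy}}(w_{1},w_{2})=e^{-(w_{1}^{2}+w_{2}^{2})\Delta t/2}$$ is real and even, so the two $$\operatorname{Re}\{\cdot \}$$ terms in (11) collapse into a product of cosines. The test function $$\upsilon$$, interval, and truncation level are illustrative choices of our own, and the primed double sum is implemented by halving the first term of each sum:

```python
import numpy as np

a, b, N, dt = -8.0, 8.0, 32, 1.0          # truncation interval, terms, step
u = np.arange(N) * np.pi / (b - a)         # frequencies k*pi/(b-a)

# Fourier cosine coefficients B_{k1,k2} of v(y1,y2) = cos(y1)cos(y2) on
# [a,b]^2; the integrand is separable, so 1-D trapezoid quadrature suffices.
y = np.linspace(a, b, 40001)
wq = np.full(y.size, y[1] - y[0]); wq[0] = wq[-1] = 0.5 * (y[1] - y[0])
g = (np.cos(np.outer(u, y - a)) * np.cos(y)) @ wq
B = (2.0 / (b - a)) ** 2 * np.outer(g, g)

def U_cos(x1, x2):
    """Truncated cos-cos approximation (11) of U(x) = E[v(x + dW)],
    dW ~ N(0, dt*I), for which phi_levy is real and even."""
    phi = np.exp(-0.5 * dt * np.add.outer(u ** 2, u ** 2))   # phi_levy
    w = np.ones(N); w[0] = 0.5                               # primed sums
    E = np.outer(np.cos(u * (x1 - a)), np.cos(u * (x2 - a)))
    return float(np.sum(np.outer(w, w) * phi * E * B))

# closed form: E[cos(x1+W1)] E[cos(x2+W2)] = e^{-dt} cos(x1) cos(x2)
x1, x2 = 0.3, -0.7
print(U_cos(x1, x2), np.exp(-dt) * np.cos(x1) * np.cos(x2))
```

Because the Gaussian density is smooth on D, the truncated sum is already very close to the closed-form value at this modest truncation level, in line with the exponential bound (13).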

## Numerical experiment

In this section, we give the numerical scheme for FBSDEs (1) and (2) based on least-squares Monte Carlo regression and perform a numerical experiment in pricing a European option. In the following, we give the basis functions and the corresponding coefficients $$\alpha _{j,m}$$ at time $$t_{m}$$. The approximations of $$Y_{m},Z_{m}$$ are represented by truncated series.

First, we state the numerical scheme. Given the values $$(Y_{m},Z_{m})$$, we implement the following least-squares regression with finite-dimensional basis functions to approximate $$Y_{m-1},Z_{m-1}$$ at each time $$t_{m-1}$$:

\begin{aligned} & \bigl(Y^{\Delta }_{m-1},Z^{\Delta }_{m-1} \bigr) \\ &\quad = \mathop{\arg \inf}_{(Y,Z)\in L^{2}(\mathcal{F}_{t_{m-1}})} \mathbb{E} \bigl[ \bigl\vert Y^{\Delta }_{m} +(1-\theta )f \bigl(t_{m},Y^{\Delta }_{m},Z^{ \Delta }_{m} \bigr)\Delta t_{m} \\ & \qquad{}- Y + \theta f (t_{m-1},Y,Z )\Delta t_{m}- \theta Z \Delta W_{m} - (1-\theta )Z^{\Delta }_{m}\Delta W_{m} \bigr\vert ^{2} \bigr]. \end{aligned}
(16)

Notice that if $$\theta =1$$ in (16), the scheme reduces to the representation in Gobet et al. [24]. Many numerical experiments show that the theta scheme is of second-order convergence when $$\theta =0.5$$; following this idea, we also take $$\theta = 0.5$$. On the choice of basis functions, Gobet et al. use hypercubes and global polynomials as basis functions to test effectiveness and stability under the assumption that the assets follow geometric Brownian motions (GBMs). Unfortunately, they only give one-dimensional numerical experiments. We are interested in the stability and efficiency in a high-dimensional setting. From Sect. 3, we can use conditional characteristic functions to express the basis functions.

Next, we simplify the basis functions following Theorems 3.1 and 3.2. We assume that the underlying assets follow GBMs, i.e., $$\mu _{j}(x_{j}) = \mu _{j} x_{j}$$ and $$\sigma _{j}(x_{j}) = \sigma _{j} x_{j}$$ for $$j=1,2$$. Then the basis functions with respect to $$U(x)$$ are given by

$$\Phi _{m,k}(x) = \cos \theta _{1}\cdot \cos \theta _{2} \cdot \exp (\beta _{m,k} ),$$

and for $$V_{j}(x)\ (j=1,2)$$ the basis functions are given by

$$\widetilde{\Phi }^{j}_{m,k}(x)=\cos \theta _{1}\cdot \cos \theta _{2} \cdot \exp (\beta _{m,k} )/ (\sigma _{j}x_{j} ).$$

Here,

\begin{aligned} &\theta _{j}=\frac{k_{j} \pi }{b_{j}-a_{j}}(x_{j}+\mu _{j} \Delta t_{m}-a_{j}), \\ &\beta _{m,k}=-\frac{1}{2} \biggl(\frac{k_{1}\pi }{b_{1}-a_{1}} \biggr)^{2} \sigma ^{2}_{1} \Delta t_{m} -\frac{1}{2} \biggl( \frac{k_{2}\pi }{b_{2}-a_{2}} \biggr)^{2}\sigma ^{2}_{2} \Delta t_{m} \end{aligned}

for each $$k_{j} = 0,1,\ldots ,N-1$$, $$j=1,2$$. We combine the Monte Carlo method and Picard iterations to implement the procedure:

• Simulations. Generate L independent simulations of

$$\bigl(X^{\Delta }_{m,l}\bigr)_{1 \leq m \leq M,1\leq l\leq L},\qquad (\Delta W_{m,l})_{1 \leq m \leq M,1\leq l\leq L}.$$
• Initialization. The algorithm is initialized with $$Y^{\Delta }_{M,l}=g(X^{\Delta }_{M,l})$$. At each time $$t_{m-1}$$, the value $$(Y^{\Delta }_{m},Z^{\Delta }_{m})$$, represented via the basis functions and corresponding coefficients, is known. The coefficients of the basis functions are computed by the least-squares method.

• Backward iteration. Assume that $$Y^{\Delta ,I,I}_{m,L}$$ is built with L simulations. Denote $$\alpha ^{r,i,I}_{m}=(\alpha ^{r,i,I}_{1,m},\alpha ^{r,i,I}_{2,m}), \widetilde{\Phi }_{m,k}(x)=(\widetilde{\Phi }^{1}_{m,k}(x), \widetilde{\Phi }^{2}_{m,k}(x))$$. The symbol ⋆ denotes element-wise multiplication. Then do Picard iterations:

→:

The Picard iterations are initialized at $$i= 0$$ with $$(Y^{\Delta ,0,I}_{m-1,l},Z^{\Delta ,0,I}_{m-1,l})=(0,0)$$, i.e., $$\alpha ^{r,0,I}_{j,m} = 0, j=0,1,2$$.

→:

For $$i = 1,2,\ldots ,I$$, the coefficients $$\alpha ^{r,i,I}_{j,m}$$ are iteratively obtained as the argmin in $$(\alpha _{0,m},\alpha _{1,m},\alpha _{2,m})$$ of the quantity

\begin{aligned} &\frac{1}{L}\sum_{l=1}^{L} \Biggl[ Y^{\Delta }_{m,l} +0.5f\bigl(t_{m},Y^{ \Delta }_{m,l},Z^{\Delta }_{m,l} \bigr)\Delta t_{m} - 0.5Z^{\Delta }_{m,l} \Delta W_{m,l} \\ &\quad{}+ 0.5 f \Biggl(t_{m-1},\sum_{k_{1}=0}^{N-1} \sum_{k_{2}=0}^{N-1}{'} \alpha ^{1,i-1,I}_{0,m}\Phi _{m,k}(x),\sum _{k_{1}=0}^{N-1}\sum_{k_{2}=0}^{N-1}{'} \alpha ^{r,i-1,I}_{m}\star \widetilde{\Phi }_{m,k}(x) \Biggr)\Delta t_{m} \\ &\quad{} - \sum_{k_{1}=0}^{N-1} \sum _{k_{2}=0}^{N-1}{'}\alpha ^{1,i,I}_{0,m} \Phi _{m,k}(x)- 0.5 \Biggl(\sum_{k_{1}=0}^{N-1} \sum_{k_{2}=0}^{N-1}{'} \alpha ^{r,i,I}_{m}\star \widetilde{\Phi }_{m,k}(x) \Biggr) \Delta W_{m,l} \Biggr]^{2}. \end{aligned}
→:

Take $$\alpha ^{r}_{j,m} = \alpha ^{r,I,I}_{j,m}$$. Use the coefficients $$\alpha ^{r}_{j,m}$$, $$j = 0,1,2$$, to compute $$Y^{\Delta }_{m-1}$$ and $$Z^{\Delta }_{m-1}$$.

• Initial value. Compute the initial value $$(Y^{\Delta }_{0}, Z^{\Delta }_{0})$$.
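The regression step above can be sketched as follows, implementing $$\Phi _{m,k}(x)$$ exactly as defined in this section; the array shapes, parameter values, and the regression target are our own illustrative choices, and the one-half weights of the primed sums are absorbed into the regression coefficients:

```python
import numpy as np

def cos_basis(x, a, b, mu, sigma, dt, N):
    """Evaluate Phi_{m,k}(x) = cos(theta_1) cos(theta_2) exp(beta_{m,k}) for all
    0 <= k1, k2 < N at the points x (shape (L, 2)); returns an (L, N*N) matrix."""
    k = np.arange(N)
    u1 = k * np.pi / (b[0] - a[0])                           # k_1 pi/(b_1-a_1)
    u2 = k * np.pi / (b[1] - a[1])
    c1 = np.cos(np.outer(x[:, 0] + mu[0] * dt - a[0], u1))   # cos(theta_1)
    c2 = np.cos(np.outer(x[:, 1] + mu[1] * dt - a[1], u2))   # cos(theta_2)
    beta = np.exp(-0.5 * dt * np.add.outer((u1 * sigma[0]) ** 2,
                                           (u2 * sigma[1]) ** 2))
    Phi = c1[:, :, None] * c2[:, None, :] * beta[None, :, :]  # (L, N, N)
    return Phi.reshape(len(x), N * N)

# least-squares fit of simulated values against the basis (placeholder target)
rng = np.random.default_rng(2)
x = rng.uniform(90.0, 110.0, size=(5000, 2))
Phi = cos_basis(x, a=[50.0, 50.0], b=[150.0, 150.0],
                mu=[0.05, 0.05], sigma=[0.2, 0.2], dt=0.1 / 17, N=10)
yvals = np.maximum(np.sqrt(x[:, 0] * x[:, 1]) - 100.0, 0.0)   # example target
alpha, *_ = np.linalg.lstsq(Phi, yvals, rcond=None)
```

The coefficients returned by `np.linalg.lstsq` play the role of $$\alpha _{0,m}$$ in the Picard iteration above.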

Now we test the algorithm on the example of a European option. We run the scheme $$S=50$$ times and collect each time the value $$Y^{\Delta ,S}_{0}$$. The simulated values are denoted by $$\{Y^{\Delta ,S}_{0,s}: s=1,\ldots ,50 \}$$. The mean is denoted by

$$\overline{Y}^{\Delta ,S}_{0} = \frac{1}{50}\sum _{s=1}^{50}Y^{\Delta ,S}_{0,s}.$$

Following the literature [8], we choose $$a_{1}=a_{2}=a$$ and $$b_{1}=b_{2}=b$$, where

$$a = \min_{i} \Bigl[x^{i}_{0}+\xi ^{i}_{1}-10\sqrt{\xi ^{i}_{2}+ \sqrt{\xi ^{i}_{4}}} \Bigr],\qquad b= \max_{i} \Bigl[x^{i}_{0}+\xi ^{i}_{1}+10 \sqrt {\xi ^{i}_{2}+\sqrt{\xi ^{i}_{4}}} \Bigr],$$

and $$\xi ^{i}_{j}$$ denotes the jth cumulant of the random variable $$X^{i}_{T}$$. Denote by $$e_{Y}=Y_{0}-Y^{\Delta }_{0}$$ the error of the numerical solution of Y. In the experiment, we present an application of our scheme to a financial problem, i.e., pricing a European option and its hedging strategy. We consider the pricing of a basket call option in the Black–Scholes model. Consider an investor who holds two kinds of assets. Denote by $$p_{t}$$ and $$X_{t}=(X^{1}_{t},X^{2}_{t})$$ the bond price and the prices of two independent stocks, respectively, which satisfy

\begin{aligned} & d p_{t}=r p_{t} \,d t, \\ & d X^{i}_{t}= \mu _{i} X^{i}_{t} \,d t+ \sigma _{i} X^{i}_{t}\,d W^{i}_{t},\quad i=1,2, \end{aligned}

with the initial conditions $$p_{0}=p, X_{0}=x_{0}=(x^{1}_{0},x^{2}_{0}), t\in [0,T]$$. At time t, the investor holds wealth $$y_{t}$$, invests $$\pi ^{i}_{t}\ (i=1,2)$$ in the ith stock, and puts the remainder $$y_{t}-(\pi ^{1}_{t}+\pi ^{2}_{t})$$ into the bond. The processes $$y_{t}$$ and $$\pi ^{i}_{t}\ (i=1,2)$$ satisfy the following SDE:

\begin{aligned} -d y_{t}=- \Biggl[ry_{t}+\sum_{i=1}^{2}( \mu _{i}-r)\pi ^{i}_{t} \Biggr]\,d t- \sum _{i=1}^{2}\sigma _{i}\pi ^{i}_{t}\,d W^{i}_{t}. \end{aligned}

Setting $$z^{i}_{t}=\sigma _{i}\pi ^{i}_{t}\ (i=1,2)$$, the pair $$(y_{t},z_{t})$$ satisfies

\begin{aligned} -d y_{t}=- \Biggl[ry_{t}+\sum_{i=1}^{2} \frac{\mu _{i}-r}{\sigma _{i}}z^{i}_{t} \Biggr]\,d t-\sum _{i=1}^{2}z^{i}_{t}\,d W^{i}_{t} \end{aligned}

with the terminal condition $$y_{T}=\max \{\sqrt{X^{1}_{T}X^{2}_{T}}-K,0\}$$. If $$\mu _{i}=\mu ,\sigma _{i}=\sigma$$, then the analytic solution is given by a two-dimensional Black–Scholes formula. In our numerical experiment, we set

$$T=0.1,\qquad K=100,\qquad r=0.03,\qquad x^{1}_{0}=x^{2}_{0}=100,\qquad \mu =0.05,\qquad \sigma =0.2.$$
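The truncation interval $$[a,b]$$ above can be evaluated directly for these parameters. The sketch below implements the formulas for $$a$$ and $$b$$ verbatim; for illustration we plug in the GBM log-increment cumulants $$\xi _{1}=(\mu -\sigma ^{2}/2)T$$, $$\xi _{2}=\sigma ^{2}T$$, $$\xi _{4}=0$$, a common choice in COS-type methods and an assumption here, not necessarily the paper's exact cumulants:

```python
import math

# a = min_i [x0_i + xi1_i - 10*sqrt(xi2_i + sqrt(xi4_i))],
# b = max_i [x0_i + xi1_i + 10*sqrt(xi2_i + sqrt(xi4_i))]
def truncation_interval(x0, xi1, xi2, xi4):
    half = [10.0 * math.sqrt(c2 + math.sqrt(c4)) for c2, c4 in zip(xi2, xi4)]
    a = min(x + c1 - h for x, c1, h in zip(x0, xi1, half))
    b = max(x + c1 + h for x, c1, h in zip(x0, xi1, half))
    return a, b

mu, sigma, T = 0.05, 0.2, 0.1
xi1 = [(mu - 0.5 * sigma**2) * T] * 2   # first cumulant of the log-increment
xi2 = [sigma**2 * T] * 2                # second cumulant
xi4 = [0.0] * 2                         # fourth cumulant (zero for GBM)
a, b = truncation_interval([100.0, 100.0], xi1, xi2, xi4)
```

Since both assets share the same parameters here, the min and max over i are trivial and a single interval serves both dimensions.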

The absolute error $$|e_{Y}|$$ of experiments is listed in Table 1.
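The forward paths underlying these results can be generated exactly, since the GBM transition is known in closed form. A minimal simulation sketch (our illustration, not the paper's code):

```python
import numpy as np

# Exact simulation of two independent GBMs
# dX^i = mu_i X^i dt + sigma_i X^i dW^i on a uniform grid with M steps.
def simulate_gbm(x0, mu, sigma, T, M, L, seed=0):
    """x0, mu, sigma: arrays of length 2; returns shape (L, M + 1, 2)."""
    rng = np.random.default_rng(seed)
    dt = T / M
    dW = rng.normal(0.0, np.sqrt(dt), size=(L, M, 2))
    # log-increments of the exact solution of the GBM SDE
    incr = (mu - 0.5 * sigma**2) * dt + sigma * dW
    logX = np.concatenate([np.zeros((L, 1, 2)), np.cumsum(incr, axis=1)],
                          axis=1)
    return x0 * np.exp(logX)

paths = simulate_gbm(np.array([100.0, 100.0]), np.array([0.05, 0.05]),
                     np.array([0.2, 0.2]), T=0.1, M=17, L=1000)
```

Sampling log-increments rather than Euler steps avoids any discretization bias in the forward component, so the observed error is attributable to the backward scheme alone.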

Table 1 shows that the error is acceptable. Generally speaking, as $$M$$ and $$N$$ increase, the scheme remains stable but the computation time grows. In this example, $$N=20$$ and $$M=17$$ already give an acceptable error.
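As a closed-form benchmark for $$e_{Y}$$: under the risk-neutral measure the geometric mean $$\sqrt{X^{1}_{T}X^{2}_{T}}$$ of two independent GBMs with common volatility σ is itself lognormal with volatility $$\sigma /\sqrt{2}$$, so the one-dimensional Black–Scholes formula applies to its forward value. The sketch below is our derivation of such a two-dimensional Black–Scholes formula, not code from the paper:

```python
from math import exp, log, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Price of the payoff max(sqrt(X^1_T X^2_T) - K, 0) for two independent
# GBMs with the same sigma, discounted at the risk-free rate r.
def geometric_basket_call(x1, x2, K, r, sigma, T):
    s_hat = sigma / sqrt(2.0)  # volatility of the geometric mean
    # forward value of sqrt(X^1_T X^2_T) under the risk-neutral measure
    F = sqrt(x1 * x2) * exp((r - 0.5 * sigma**2) * T + 0.25 * sigma**2 * T)
    d1 = (log(F / K) + 0.5 * s_hat**2 * T) / (s_hat * sqrt(T))
    d2 = d1 - s_hat * sqrt(T)
    return exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))

y0 = geometric_basket_call(100.0, 100.0, 100.0, 0.03, 0.2, 0.1)
```

Note that only the risk-free rate r enters the benchmark; the drift μ affects the hedging strategy but not the option price.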

## Conclusion

In this paper, we extend the Fourier cos transform to propose a numerical method for two-dimensional FBSDEs by combining it with conditional characteristic functions. In this method, the Fourier cos-cos transform is used to handle two kinds of conditional expectations. Following the error analysis in [13], we prove that the errors in the approximation of the conditional expectations are well controlled in theory. The numerical experiment also shows that the scheme is efficient and stable.


## References

1. Pardoux, E., Peng, S.G.: Backward stochastic differential equations and quasilinear parabolic partial differential equations. In: Stochastic Partial Differential Equations and Their Applications, vol. 176, pp. 200–217. Springer, Berlin (1992)
2. Zhang, G., Gunzburger, M., Zhao, W.: A sparse-grid method for multi-dimensional backward stochastic differential equations. J. Comput. Math. 31, 221–248 (2013)
3. Fu, Y., Zhao, W., Zhou, T.: Efficient spectral sparse grid approximations for solving multi-dimensional forward backward SDEs. Discrete Contin. Dyn. Syst., Ser. B 22, 3439–3458 (2017)
4. Ding, D., U, S.C.: Efficient option pricing methods based on Fourier series expansions. J. Math. Res. Expo. 31, 12–22 (2011)
5. Fang, F., Oosterlee, C.W.: Pricing early-exercise and discrete barrier options by Fourier-cosine series expansions. Numer. Math. 114, 27–62 (2009)
6. Yang, Y., Su, W., Zhang, Z.: Estimating the discounted density of the deficit at ruin by Fourier cosine series expansion. Stat. Probab. Lett. 146, 147–155 (2019)
7. Chan, T.L.R.: Hedging and pricing early-exercise options with complex Fourier series expansion. N. Am. J. Econ. Finance 2019, Article ID 100973 (2019)
8. Ibrahim, S.N.I., Ng, T.W.: Fourier-based approach for power options valuation. Malaysian J. Math. Sci. 13, 31–40 (2019)
9. Lin, S., He, X.J.: A regime switching fractional Black–Scholes model and European option pricing. Commun. Nonlinear Sci. Numer. Simul. 85, Article ID 105222 (2020)
10. Ma, J., Wang, H.: Convergence rates of moving mesh methods for moving boundary partial integro-differential equations from regime-switching jump-diffusion Asian option pricing. J. Comput. Appl. Math. 370, Article ID 112598 (2020)
11. Drapeau, S., Luo, P., Xiong, D.: Characterization of fully coupled FBSDE in terms of portfolio optimization. Electron. J. Probab. 25, 1–26 (2020)
12. Xie, B., Yu, Z.: An exploration of $$L_{p}$$-theory for forward-backward stochastic differential equations with random coefficients on small durations. J. Math. Anal. Appl. 483(2), Article ID 123642 (2020)
13. Ruijter, M.J., Oosterlee, C.W.: Two-dimensional Fourier cosine series expansion method for pricing financial options. SIAM J. Sci. Comput. 34, 642–671 (2012)
14. Meng, Q.J., Ding, D.: An efficient pricing method for rainbow options based on two-dimensional modified sine-sine series expansions. Int. J. Comput. Math. 90, 1096–1113 (2013)
15. Ding, D., Li, X., Liu, Y.: A regression-based numerical scheme for backward stochastic differential equations. Comput. Stat. 32, 1357–1373 (2017)
16. Zhao, W., Chen, L., Peng, S.G.: A new kind of accurate numerical method for backward stochastic differential equations. SIAM J. Sci. Comput. 28, 1563–1581 (2006)
17. Bender, C., Zhang, J.: Time discretization and Markovian iteration for coupled FBSDEs. Ann. Appl. Probab. 18, 143–177 (2008)
18. Bouchard, B., Ekeland, I., Touzi, N.: On the Malliavin approach to Monte Carlo approximation of conditional expectations. Finance Stoch. 8, 45–71 (2004)
19. Bouchard, B., Touzi, N.: Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stoch. Process. Appl. 111, 175–206 (2004)
20. Crimaldi, I., Pratelli, L.: Convergence results for conditional expectations. Bernoulli 11, 737–745 (2005)
21. Li, Y., Yang, J., Zhao, W.: Convergence error estimates of the Crank–Nicolson scheme for solving decoupled FBSDEs. Sci. China Math. 60, 923–948 (2017)
22. Sun, Y., Zhao, W.: New second-order schemes for forward backward stochastic differential equations. East Asian J. Appl. Math. 8, 399–421 (2018)
23. Boyd, J.P.: Chebyshev & Fourier Spectral Methods. Dover, Mineola (2001)
24. Gobet, E., Lemor, J.P., Warin, X.: A regression-based Monte Carlo method to solve backward stochastic differential equations. Ann. Appl. Probab. 15, 2172–2202 (2005)

## Acknowledgements

We would like to thank the editor for handling this paper and the referees for their significant suggestions.

## Funding

The work is supported by the National Natural Science Foundation of China (81960618, 61773217), Ministry of Education of Humanities and Social Science Project (17YJC840015), Hubei Key Laboratory of Applied Mathematics (AM201807), Research Project of Hubei Provincial Department of Education (B2020341), Hunan Provincial Science and Technology Project Foundation (2019RS1033), the Scientific Research Fund of Hunan Provincial Education Department (18A013), and Research Project of College of Engineering and Technology Yangtze University (2019KY01, 2020KY07).

## Author information


### Corresponding author

Correspondence to Songbo Hu.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.



Li, X., Wu, Y., Zhu, Q. et al. A regression-based Monte Carlo method to solve two-dimensional forward backward stochastic differential equations. Adv Differ Equ 2021, 207 (2021). https://doi.org/10.1186/s13662-021-03361-5

### Mathematics Subject Classification

• 60H35
• 65C20
• 60H10

### Keywords

• Forward backward stochastic differential equations
• Fourier cos-cos transform
• Characteristic functions
• Least-squares regressions
• Monte Carlo