
  • Research
  • Open Access

Non-smooth analysis method in optimal investment-BSDE approach

Advances in Difference Equations (2018) 2018:460

https://doi.org/10.1186/s13662-018-1920-4

  • Received: 5 June 2018
  • Accepted: 4 December 2018
  • Published:

Abstract

In this paper, the investment process is modeled by a backward stochastic differential equation. We derive a necessary condition for the optimal investment problem by the method of non-smooth analysis. Furthermore, some applications of our result are given.

Keywords

  • Backward stochastic differential equation (BSDE)
  • Non-smooth analysis
  • Optimal investment

MSC

  • 60H10
  • 93E20
  • 49J52

1 Introduction

Single-period mean-variance portfolio selection was first introduced and studied by Markowitz [14]. Markowitz’s pioneering work [14] laid the foundation for modern financial portfolio theory. In 2000, Li and Ng [11] extended the Markowitz model to the dynamic setting. Since then, various continuous-time Markowitz models have been studied in the literature (see, e.g., [1, 12, 13, 21]).

Generally, two approaches are employed to solve the above problem in the continuous-time case: the forward (primal) approach and the backward (dual) approach. The first approach is inspired by indefinite stochastic linear quadratic control (see, e.g., [3, 20]) and builds a relationship between the mean-variance problem and a family of indefinite stochastic linear quadratic optimal control problems (see, e.g., [12, 13, 21]). The second approach was first studied by Bielecki et al. [1] and generalizes the well-known risk-neutral computational approach from the discrete-time case (see, e.g., [9, 17, 18]). The backward approach consists of two steps: first, compute the optimal terminal wealth; second, compute the replicating portfolio strategy corresponding to that optimal terminal wealth. As shown in [1], the optimal terminal wealth is first obtained by the Lagrange multiplier method, and the optimal replicating portfolio strategy is then obtained by solving a backward stochastic differential equation (BSDE for short). Along this line, in 2008, Ji and Peng [10] used Ekeland’s variational principle to obtain a necessary condition for the portfolio optimization problem by the backward approach. In this paper, we study the optimal investment problem by the method of non-smooth analysis, which allows us to treat more general portfolio optimization problems.

We now introduce the framework of the optimal investment problem that we study. Let \((\varOmega,{ \mathcal {F}},P)\) be a probability space and \((W_{t})_{t\geq0}\) a d-dimensional standard Brownian motion, and let \(({\mathcal {F}}_{t})_{t\geq0}\) be the filtration generated by the Brownian motion and all P-null subsets, i.e.,
$${{\mathcal {F}}_{t}}=\sigma\{W_{s};s\leq t\}\vee{ \mathcal {N}}, $$
where \({\mathcal {N}}\) is the set of all P-null subsets. Fix a real number \(T>0\) and denote \(\mathrm{F}:=({\mathcal {F}}_{t})_{t\in[0,T]}\). Let \(L^{2}(\mathcal{F}_{T})\) denote the space of \({\mathcal {F}}_{T}\)-measurable random variables \(\xi:\varOmega\mapsto R\) satisfying \(E[\xi^{2}]<\infty\); let \(H_{T}^{2}(R^{n})\) denote the space of F-predictable processes \(\varphi:\varOmega\times[0,T]\mapsto R^{n}\) satisfying \(E [\int_{0}^{T}|\varphi_{t}|^{2}\,\mathrm{d}t ]<\infty\); and let \(S_{T}^{2}(R)\) denote the space of F-progressively measurable RCLL processes \(\varphi:\varOmega \times[0,T]\mapsto R\) satisfying \(E [\sup_{0\leq t \leq T}|\varphi_{t}|^{2} ]<\infty\).
Given an initial capital x, the investment process can be modeled by the following one-dimensional BSDE on \([0,T]\):
$$ y_{t}=\xi+ \int_{t}^{T}g(s,y_{s},z_{s})\, \mathrm{d}s- \int_{t}^{T}z_{s}\cdot\,\mathrm{ d}W_{s} $$
(1)
with \(y_{0}\leq x\), where \(\xi\in L^{2}({\mathcal {F}}_{T})\) and \(g(\omega,t,y,z):\varOmega\times [0,T]\times R \times R^{d}\mapsto R\) is a function uniformly satisfying the Lipschitz condition, i.e., there exists a positive constant M such that for all \((y_{1},z_{1}),(y_{2},z_{2})\in R \times R^{d}\)
$$\begin{aligned} \bigl\vert g(\omega,t,y_{1},z_{1})-g( \omega,t,y_{2},z_{2}) \bigr\vert \leq M\bigl( \vert y_{1}-y_{2} \vert + \vert z_{1}-z_{2} \vert \bigr) \end{aligned}$$
(A.1)
and
$$\begin{aligned} g(\cdot,0,0)\in H_{T}^{2}(R). \end{aligned}$$
(A.2)
By Pardoux and Peng [15], if g satisfies (A.1) and (A.2), then, for any \(\xi\in L^{2}(\mathcal{F}_{T})\), BSDE (1) has a unique pair of adapted processes \((y_{t},z_{t})\in S_{T}^{2}(R)\times H_{T}^{2}(R^{d})\). Usually, we call \(\{y_{t}\}\) the wealth process and \(\{z_{t}\}\) the portfolio process.
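As a concrete illustration of how the initial value \(y_{0}\) of BSDE (1) can be computed, here is a minimal numerical sketch, not taken from the paper: a backward Euler scheme under the illustrative assumptions of a linear generator \(g(s,y,z)=ay\) and a deterministic terminal value ξ, in which case \(z\equiv0\) and the closed form \(y_{0}=\xi e^{aT}\) is available for comparison.

```python
import math

def bsde_backward_euler(xi, a, T, n_steps):
    """Backward Euler scheme for BSDE (1) with the illustrative
    linear generator g(s, y, z) = a*y and deterministic terminal
    value xi.  Then z_s = 0 and the conditional expectations in
    the scheme collapse to a scalar backward recursion."""
    dt = T / n_steps
    y = xi                        # y_T = xi
    for _ in range(n_steps):
        # one backward step: y_{t_k} = y_{t_{k+1}} + g(y_{t_{k+1}}) * dt
        y = y + a * y * dt
    return y                      # approximation of y_0

y0 = bsde_backward_euler(xi=1.0, a=0.05, T=1.0, n_steps=10_000)
exact = math.exp(0.05 * 1.0)      # closed-form y_0 = xi * exp(a*T)
```

For a random terminal value the same recursion would involve conditional expectations (e.g., estimated by regression), but the deterministic case already shows the backward flow of the scheme.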
Assume that the investor has initial wealth x and invests it in the financial market according to BSDE (1). Denote \(\mathcal{E}_{0,T}^{g}(\xi):=y_{0}\). Then the set of terminal values available to the investor is
$$\mathcal{A}(x):= \bigl\{ \xi\in L^{2}(\mathcal{F}_{T})| \mathcal{E}_{0,T}^{g}(\xi )\leq x \bigr\} . $$
In this paper, we study the following problem: if there exists \(\xi^{*}\in \mathcal{A}(x)\) that is an optimal objective of the problem
$$ \min_{\xi\in\mathcal{A}(x)}\rho(\xi), $$
(2)
what is a necessary condition for \(\xi^{*}\)? Here, ρ is a function defined on \(L^{2}(\mathcal{F}_{T})\) that represents a risk measure. We also assume that ρ satisfies the Lipschitz condition.

This paper is organized as follows. In Sect. 2, we give some results about non-smooth analysis that are used in this paper. In Sect. 3, we study the optimal investment problem on wealth and portfolio processes. With the help of the method of non-smooth analysis, a necessary condition for an optimal objective is obtained, which generalizes the result of Ji and Peng [10]. In Sect. 4, we give some examples as the applications of our result.

2 Some results about non-smooth analysis

In this section, we give some results about non-smooth analysis that are used in this paper. The following definitions, lemmas and propositions can be found in [4, 5].

Suppose that X is a Banach space with dual space \(X^{*}\). Obviously, \(L^{2}(\mathcal{F}_{T})\) is a Banach space with the norm \(\Vert \xi \Vert :=(E[\xi^{2}])^{\frac{1}{2}}\) for any \(\xi\in L^{2}(\mathcal{F}_{T})\), and, being a Hilbert space, \(L^{2}(\mathcal{F}_{T})\) is its own dual space.

Definition 1

Suppose that f is a function defined on X. Given \(x\in X\), if \(\limsup_{y\in X,y\rightarrow x,t\downarrow0}\frac{f(y+tv)-f(y)}{t}\in R\) for any \(v\in X\), denote
$$ f^{o}(x;v):=\limsup_{y\in X,y\rightarrow x,t\downarrow0} \frac{f(y+tv)-f(y)}{t}. $$
(3)
We call \(f^{o}(x;v)\) the generalized directional derivative of f at x in the direction v.
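Definition (3) can be probed numerically. The following sketch, with an illustrative grid-search scheme of our own choosing, approximates \(f^{o}(x;v)\) by taking the supremum of difference quotients over points y near x and small \(t>0\); for \(f(x)=|x|\) at \(x=0\), the generalized directional derivative is \(f^{o}(0;v)=|v|\).

```python
def clarke_dd(f, x, v, radius=1e-3, n=200):
    """Crude numerical estimate of the generalized directional
    derivative f^o(x; v): the sup of the difference quotients
    (f(y + t*v) - f(y)) / t over y near x and small t > 0."""
    best = float("-inf")
    for i in range(n + 1):
        y = x - radius + 2 * radius * i / n
        for t in (radius, radius / 10, radius / 100):
            best = max(best, (f(y + t * v) - f(y)) / t)
    return best

# f(x) = |x| is Lipschitz but not differentiable at 0;
# there f^o(0; v) = |v| for every direction v.
est = clarke_dd(abs, 0.0, -1.0)
```

Since \(|\,|y+tv|-|y|\,|\leq t|v|\), the quotients never exceed \(|v|\), and the sup is attained at points y on the appropriate side of 0, so the estimate is sharp.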

Remark 1

Suppose that a function \(f:X\mapsto R \) satisfies the Lipschitz condition, i.e., for any \(x_{1}\), \(x_{2}\in X\),
$$\bigl\vert f(x_{1})-f(x_{2}) \bigr\vert \leq M \Vert x_{1}-x_{2} \Vert $$
holds for some \(M>0\) depending on f. By Definition 1, we have the following:
(i) Obviously, \(f^{o}(x;v)\) is sub-linear in v on X; then, by the Hahn–Banach theorem, the generalized gradient (Clarke subdifferential) of f at x,
$$ \partial^{o} f(x):= \bigl\{ \zeta\in X^{*}| \zeta(v)\leq f^{o}(x;v), \forall v\in X \bigr\} $$
(4)
is nonempty and weak-star compact in \(X^{*}\).
(ii) If f attains its minimum value at some point \(x_{0}\in X\), i.e.,
$$f(x)\geq f(x_{0}), \quad\forall x\in X, $$
then the Fermat condition
$$0\in\partial^{o} f(x_{0}) $$
holds.
Given a set \(C\subset X\), the distance function \(d_{C}:X\mapsto R\) is defined as
$$d_{C}(x):=\inf_{y\in C} \Vert y-x \Vert , $$
for a given \(x\in X\).

Lemma 1

(Exact penalization)

Suppose that f is a Lipschitz function on X with coefficient M, that \(x^{*}\in C\subset X\), and that f attains its minimum over C at \(x^{*}\). Then, for any \(\hat {M}\geq M\), \(g(x):=f(x)+\hat{M} d_{C}(x)\) attains its minimum over X at \(x^{*}\). Conversely, if \(\hat{M}>M\) and C is closed, then any minimum point \(x_{0}\) of g over X must belong to C.

Remark 2

From Lemma 1 we know the following: suppose that f is a Lipschitz function on X with coefficient M, that C is a closed subset of X, and let \(g(y):=f(y)+\hat{M} d_{C}(y)\) for any \(y\in X\), where \(\hat{M}> M\). If there exists \(x^{*}\in C\) such that
$$ f \bigl(x^{*} \bigr)=\min_{x\in C}f(x), $$
(5)
then we have
$$ g \bigl(x^{*} \bigr)=\min_{x\in X}g(x)=f \bigl(x^{*} \bigr). $$
(6)
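A finite-dimensional toy check of the exact-penalization principle, with an illustrative choice of f, C, and penalty weight of our own: \(f(x)=x\) is 1-Lipschitz, its minimum over \(C=[0,1]\) is at \(x^{*}=0\), and with \(\hat{M}=2>1\) the penalized function \(g=f+\hat{M}d_{C}\) attains the same minimum over (a grid standing in for) all of R at a point of C.

```python
def d_C(x, lo=0.0, hi=1.0):
    """Distance to the closed set C = [lo, hi]."""
    if x < lo:
        return lo - x
    if x > hi:
        return x - hi
    return 0.0

def f(x):                 # Lipschitz with coefficient M = 1
    return x

M_hat = 2.0               # exact penalization needs M_hat > M
def g(x):
    return f(x) + M_hat * d_C(x)

# minimize the penalized g over a grid approximating the whole line
grid = [-2.0 + 4.0 * i / 4000 for i in range(4001)]
x_min = min(grid, key=g)
```

With \(\hat{M}>M\), leaving C raises g faster than f can decrease, so the unconstrained minimizer of g stays inside C and \(g(x_{\min})=f(x^{*})\), matching (6).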

Definition 2

Assume \(x\in C\). Given \(v\in X\), if \(d^{o}_{C}(x;v)=0\), then v is said to be a tangent vector to C at x. Denote
$$T_{C}(x):=\bigl\{ v\in X|d^{o}_{C}(x;v)=0\bigr\} . $$
\(T_{C}(x)\) is the tangent cone to C at x. By polarity, the normal cone is defined as
$$N_{C}(x):= \bigl\{ \zeta\in X^{*}| \zeta(v)\leq0, \forall v\in T_{C}(x) \bigr\} . $$

Proposition 1

Suppose that \(x\in C\). Then we have
$$N_{C}(x)= cl \biggl\{ \bigcup_{\lambda\geq0}\lambda \partial^{o} d_{C}(x) \biggr\} , $$
where cl denotes the weak-star closure.

Proposition 2

Given an \(x\in X\), suppose that h is Lipschitz in a neighborhood of x and \(0\notin \partial^{o} h(x)\). Let
$$ C:= \bigl\{ y\in X|h(y)\leq h(x) \bigr\} . $$
(7)
Then we have
$$ N_{C}(x)\subset\bigcup_{\lambda\geq0} \lambda\partial^{o} h(x). $$
(8)

Remark 3

Given an \(x\in X\), suppose that h is Lipschitz in a neighborhood of x and \(0\notin\partial^{o} h(x)\). Let \(C:=\{y\in X|h(y)\leq h(x)\}\). We also assume that f is a Lipschitz function on X with coefficient M and that f attains its minimum over C at x. Then, by Definition 1, Remark 2, Definition 2 and Proposition 2, the Fermat condition
$$ 0\in\partial^{o}f(x)+N_{C}(x) $$
(9)
holds, i.e., there exist a non-negative constant λ, \(\zeta\in \partial^{o}f(x)\) and \(\eta\in\partial^{o} h(x)\) such that
$$\zeta+\lambda\eta=0 $$
holds.

Definition 3

A function f defined on X is called strictly differentiable at \(x\in X\) if there exists \(x^{*}\in X^{*}\) such that
$$ \lim_{y\in X,y\rightarrow x,t\rightarrow0^{+}}\frac{f(y+tv)-f(y)}{t} = \bigl\langle x^{*}, v \bigr\rangle , $$
(10)
for any \(v\in X\).

Lemma 2

If a function f defined on X is strictly differentiable at \(x\in X\), then \(\partial^{o} f(x)=\{x^{*}\}\), where \(x^{*}\) is given by (10).

Lemma 3

Suppose that \(\{h_{i}, i=1,2,\ldots, n\}\) is a set of Lipschitz functions on X. Define
$$h(x):=\max\bigl\{ h_{i}(x)|i=1,2,\ldots, n\bigr\} ,\quad \forall x\in X. $$
For each \(x\in X\), let \(I(x):=\{i\in\{1,2,\ldots, n\}|h_{i}(x)=h(x)\}\) be the set of active indices at x. Then we have
$$ \partial^{o} h(x)\subset co \bigl\{ \partial^{o} h_{i}(x)|i\in I(x) \bigr\} , $$
(11)
where \(co A\) denotes the convex hull of A.
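Lemma 3 can be illustrated in one dimension with pieces of our own choosing: take \(h_{1}(x)=x\), \(h_{2}(x)=-x\), \(h_{3}(x)=x-1\). At \(x=0\) only \(h_{1}\) and \(h_{2}\) are active, so (11) bounds \(\partial^{o} h(0)\) by \(co\{1,-1\}=[-1,1]\); since a linear functional on a convex hull attains its maximum at an extreme point, the support function of that hull at v is just the maximum over the active gradients.

```python
def h1(x): return x
def h2(x): return -x
def h3(x): return x - 1.0        # inactive near x = 0

pieces = [h1, h2, h3]
grads  = [1.0, -1.0, 1.0]        # derivatives of the smooth pieces

def h(x):
    return max(p(x) for p in pieces)

def active(x, tol=1e-12):
    return [i for i, p in enumerate(pieces) if abs(p(x) - h(x)) < tol]

x0 = 0.0
I0 = active(x0)                  # h3 is not active at 0

def hull_support(v):
    # max of zeta*v over zeta in co{grads[i] : i in I0}, attained
    # at an extreme point, i.e. at one of the active gradients
    return max(grads[i] * v for i in I0)
```

One-sided difference quotients of h at 0 then stay below this support function, consistent with the inclusion (11).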

3 Maximum principle for optimal investment problem

In this section, our aim is to derive a necessary condition for the optimal problem (2). As usual, in order to use the method of non-smooth analysis, we need the following assumption:
$$\begin{aligned} \text{the risk measure } \rho \text{ is Lipschitz}. \end{aligned}$$
(A.3)

The following proposition shows that \(\mathcal{E}_{0,T}^{g}\) is Lipschitz in \(L^{2}(\mathcal{F}_{T})\). For more details, see [2, 22, 23].

Proposition 3

Suppose that g satisfies (A.1) and (A.2), that \(\xi_{i}\in L^{2}(\mathcal {F}_{T})\), \(i=1,2\), and that \((y^{i}_{t},z^{i}_{t})\) are the solutions of BSDE (1) with terminal values \(\xi_{i}\), respectively. Then there exists a constant \(C>0\) such that
$$ E \Bigl[\sup_{t\in[0,T]} \bigl\vert y^{2}_{t}-y^{1}_{t} \bigr\vert ^{2} \Bigr]+E \biggl[ \int_{0}^{T} \bigl\vert z^{2}_{t}-z^{1}_{t} \bigr\vert ^{2}\,\mathrm{ d}t \biggr] \leq CE \bigl[ \vert \xi_{2}-\xi_{1} \vert ^{2} \bigr]. $$
(12)

Remark 4

From Proposition 3, we know that if g satisfies (A.1) and (A.2), then, for any \(\xi_{1}\), \(\xi_{2}\in L^{2}(\mathcal{F}_{T})\),
$$\bigl\vert \mathcal{E}_{0,T}^{g}(\xi_{1})- \mathcal{E}_{0,T}^{g}(\xi_{2}) \bigr\vert \leq C^{\frac{1}{2}} \Vert \xi _{1}-\xi_{2} \Vert $$
holds.

The following theorem is our main result.

Theorem 1

Under conditions (A.1)–(A.3), if \(\xi^{*}\) is an optimal objective of problem (2), then there exist a non-negative constant λ, \(\zeta\in\partial ^{o} \rho(\xi^{*})\) and \(\eta\in\partial^{o} \mathcal{E}_{0,T}^{g}(\xi^{*})\) such that
$$ \zeta+ \lambda\eta= 0 $$
(13)
holds.

In order to prove Theorem 1, we need the following lemma.

Lemma 4

Suppose that g satisfies (A.1) and (A.2). Then, for any \(\xi\in L^{2}(\mathcal{F}_{T})\), we have \(0\notin\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)\).

Proof

For notational simplicity, we only consider the case \(d=1\). Suppose, to the contrary, that there exists \(\xi^{*}\in L^{2}(\mathcal{F}_{T})\) such that \(0\in\partial^{o} \mathcal{E}_{0,T}^{g}(\xi^{*})\). For any \(\xi\in L^{2}(\mathcal{F}_{T})\), let \(h(\xi):=\mathcal{E}_{0,T}^{g}(\xi)\); then
$$h^{o}\bigl(\xi^{*}; \eta\bigr)\geq0 $$
holds, for any \(\eta\in L^{2}(\mathcal{F}_{T})\). For each \(\xi\in L^{2}(\mathcal{F}_{T})\), suppose that \((y_{t},z_{t})\) and \((y_{t}^{r},z_{t}^{r})\) are the solutions of the following BSDEs on \([0,T]\):
$$\begin{aligned} &y_{t}=\xi+ \int_{t}^{T}g(s,y_{s},z_{s})\, \mathrm{ d}s- \int_{t}^{T}z_{s}\,\mathrm{ d}W_{s}, \\ &y_{t}^{r}=\xi+r\eta+ \int_{t}^{T}g\bigl(s,y_{s}^{r},z_{s}^{r} \bigr)\,\mathrm{ d}s- \int_{t}^{T}z_{s}^{r}\, \mathrm{ d}W_{s}, \end{aligned}$$
respectively. Thus, we have
$$\begin{aligned} y_{t}^{r}-y_{t}={}&r\eta+ \int_{t}^{T} \bigl[g\bigl(s,y_{s}^{r},z_{s}^{r} \bigr)-g(s,y_{s},z_{s}) \bigr]\,\mathrm{ d}s - \int_{t}^{T} \bigl(z_{s}^{r}-z_{s} \bigr)\,\mathrm{ d}W_{s} \\ ={}&r\eta+ \int_{t}^{T}\bigl[\alpha_{s} \bigl(y_{s}^{r}-y_{s}\bigr) +\beta_{s} \bigl(z_{s}^{r}-z_{s}\bigr)\bigr]\,\mathrm{ d}s- \int_{t}^{T}\bigl(z_{s}^{r}-z_{s} \bigr)\,\mathrm{ d}W_{s}, \end{aligned}$$
where
$$\begin{aligned} &\alpha_{s}=\frac{g(s,y_{s}^{r},z_{s})-g(s,y_{s},z_{s})}{y_{s}^{r}-y_{s}}1_{ \{ y_{s}^{r}-y_{s}\neq0 \}}, \\ &\beta_{s}=\frac{g(s,y_{s}^{r},z_{s}^{r})-g(s,y_{s}^{r},z_{s})}{z_{s}^{r}-z_{s}}1_{\{ z_{s}^{r}-z_{s}\neq0\}}, \end{aligned}$$
which imply \(\sup_{s\in[0,T]}|\alpha_{s}|\leq M\) and \(\sup_{s\in[0,T]}|\beta_{s}|\leq M\) from the Lipschitz condition (A.1). Applying Itô’s formula, we have
$$\mathcal{E}_{0,T}^{g}(\xi+r\eta)-\mathcal{E}_{0,T}^{g}( \xi)=E \biggl[r\eta\cdot \operatorname{exp} { \biggl\{ \int_{0}^{T} \biggl[\alpha_{s}- \frac{1}{2} \vert \beta_{s} \vert ^{2} \biggr]\, \mathrm{d}s+ \int_{0}^{T} \beta_{s} \, \mathrm{d}W_{s} \biggr\} } \biggr]. $$
Let \(\eta=-1\); then, by Definition 1, we can deduce that
$$h^{o}\bigl(\xi^{*};\eta\bigr)=\limsup_{\xi\rightarrow\xi^{*},r\downarrow0} \frac{\mathcal{E}_{0,T}^{g}(\xi+r\eta)-\mathcal{E}_{0,T}^{g}(\xi)}{r}< 0. $$

This contradicts \(h^{o}(\xi^{*}; \eta)\geq0\). The proof of Lemma 4 is complete. □

Proof of Theorem 1

By Lemma 4, we have \(0\notin\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)\), for any \(\xi\in L^{2}(\mathcal{F}_{T})\). Hence by Definition 1, Remark 2, Definition 2 and Proposition 1, we can see that, if \(\xi^{*}\) is an optimal objective of problem (2), then the Fermat condition
$$ 0\in\partial^{o} \rho \bigl(\xi^{*} \bigr)+ N_{\mathcal{A}(x)} \bigl(\xi^{*} \bigr) $$
(14)
holds.
Case 1: \(\mathcal{E}_{0,T}^{g}(\xi^{*})= x\). In this case, \(\mathcal{A}(x)= \{\xi\in L^{2}(\mathcal{F}_{T})|\mathcal{E}_{0,T}^{g}(\xi )\leq\mathcal{E}_{0,T}^{g}(\xi^{*}) \}\). Thus, by Proposition 2, there exist a non-negative constant λ, \(\zeta\in\partial^{o} \rho(\xi^{*})\) and \(\eta\in\partial^{o} \mathcal{E}_{0,T}^{g}(\xi^{*})\) such that
$$ \zeta+ \lambda\eta= 0 $$
(15)
holds.
Case 2: \(\mathcal{E}_{0,T}^{g}(\xi^{*})=x_{0}< x\). In this case, denote
$$\widetilde{C}:=\bigl\{ \xi\in L^{2}(\mathcal{F}_{T})| \mathcal{E}_{0,T}^{g}(\xi)\leq x_{0}\bigr\} . $$
Since \(\xi^{*}\) is an optimal objective of problem (2), \(\xi^{*}\) is also an optimal objective of problem
$$ \min_{\xi\in\widetilde{C}}\rho(\xi). $$
(16)
Thus, by Definition 1, Remark 2, Definition 2 and Proposition 2 again, Equation (15) also holds. The proof of Theorem 1 is complete. □

From Remark 4, we know that \(\mathcal{E}_{0,T}^{g}\) is Lipschitz on \(L^{2}(\mathcal{F}_{T})\). By Definition 1, we can see that \(\partial^{o} \mathcal {E}_{0,T}^{g}(\xi)\) is nonempty for any \(\xi\in L^{2}(\mathcal{F}_{T})\). Indeed, we have the following result.

Lemma 5

Suppose that \(\xi\in L^{2}(\mathcal{F}_{T})\) and g satisfies (A.1) and (A.2). Let \((y_{t},z_{t})\) be the solution of BSDE (1) with terminal value ξ. Then, for any \(\eta\in\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)\) and \(\zeta\in L^{2}(\mathcal{F}_{T})\), there exists \((\varphi_{t}, \psi_{t})\in\partial^{o} g(t,y_{t},z_{t})\) such that
$$\langle\eta, \zeta\rangle=\tilde{y}_{0} $$
holds, where \((\tilde{y}_{t}, \tilde{z}_{t})\) is the solution of the following BSDE on \([0,T]\):
$$ \tilde{y}_{t}= \zeta+ \int_{t}^{T} (\varphi_{s} \tilde{y}_{s} + \psi_{s}\tilde{z}_{s})\,\mathrm{ d}s - \int_{t}^{T}\tilde{z}_{s}\cdot\,\mathrm{ d}W_{s}. $$
(17)

Proof

For any given \(\omega\in\varOmega\) and \(t\in[0,T]\), let \(\hat{g}(t, \hat{y}, \hat{z})=g^{o}(t, y_{t},z_{t};\hat{y}, \hat{z})\), \(\forall(\hat{y}, \hat{z})\in R\times R^{d}\). By Definition 1, we can see that \(\hat{g}(t, \hat{y}, \hat{z})\) is Lipschitz, positively homogeneous and convex in \((\hat{y}, \hat{z})\), and hence
$$ \hat{g}(t, \hat{y}, \hat{z})=\max_{(\varphi_{t}, \psi _{t})\in\partial^{o} g(t,y_{t},z_{t})} \bigl\langle (\varphi_{t}, \psi_{t}), (\hat{y}, \hat{z}) \bigr\rangle . $$
(18)
For any \(\zeta\in L^{2}(\mathcal{F}_{T})\), we consider the following BSDE on \([0,T]\):
$$ \hat{y}_{t}=\zeta+ \int_{t}^{T}\hat{g}(s, \hat{y}_{s}, \hat{z}_{s}) \,\mathrm{ d}s- \int_{t}^{T}\hat{z}_{s}\cdot\,\mathrm{ d}W_{s}. $$
(19)
Let \(\hat{f}(\zeta):=\hat{y}_{0}\). Denote
$$ D:= \bigl\{ \eta\in L^{2}(\mathcal{F}_{T})| \langle\eta, \zeta \rangle=\tilde{y}_{0}, \forall(\varphi_{t}, \psi_{t})\in\partial^{o} g(t,y_{t},z_{t}) \bigr\} , $$
(20)
where \((\tilde{y}_{t}, \tilde{z}_{t})\) is the solution of the following BSDE on \([0,T]\):
$$\tilde{y}_{t}= \zeta+ \int_{t}^{T} (\varphi_{s} \tilde{y}_{s} + \psi_{s}\tilde{z}_{s})\,\mathrm{ d}s- \int_{t}^{T}\tilde{z}_{s}\cdot\,\mathrm{ d}W_{s}. $$
By the comparison theorem for BSDEs (see, e.g., El Karoui et al. [7]), we have \(\hat{f}(\zeta)\geq \langle\eta, \zeta\rangle\) for any \(\eta\in D\). Since ĝ is positively homogeneous and convex in \((\hat{y}, \hat{z})\), by Propositions 3.3 and 3.4 in Peng [16], we know that \(\hat{f}\) is positively homogeneous and convex on \(L^{2}(\mathcal{F}_{T})\), and hence
$$ \hat{f}(\zeta)=\max_{\eta\in D} \langle\eta, \zeta \rangle. $$
(21)
By Definition 1, we can obtain that
$$\bigl(\mathcal{E}_{0,T}^{g}\bigr)^{o}(\xi; \zeta)= \hat{f}(\zeta)=\max_{\eta\in D} \langle\eta, \zeta\rangle. $$
This means that \(\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)=D\). The proof of Lemma 5 is complete. □

In Ji and Peng [10], the authors assume that g is continuously differentiable in \((y,z)\). In this special case, we can obtain an explicit form of \(\partial^{o} \mathcal{E}_{0,T}^{g}\).

Lemma 6

Suppose that \(\xi\in L^{2}(\mathcal{F}_{T})\) and g satisfies (A.1) and (A.2). Let \((y_{t},z_{t})\) be the solution of BSDE (1) with terminal value ξ. If g is continuously differentiable in \((y,z)\in R\times R^{d}\), then we have
$$\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)= \{q_{T} \} $$
and for any \(\eta\in L^{2}(\mathcal{F}_{T})\),
$$ \langle q_{T}, \eta\rangle=\tilde{y}_{0}, $$
(22)
where \((\tilde{y}_{t},\tilde{z}_{t} )\) is the solution of the following BSDE on \([0,T]\):
$$\tilde{y}_{t}= \eta+ \int_{t}^{T} \bigl[g^{\prime}_{y}(s,y_{s},z_{s}) \tilde{y}_{s}+g^{\prime}_{z}(s,y_{s},z_{s}) \tilde{z}_{s} \bigr]\,\mathrm { d}s- \int_{t}^{T}\tilde{z}_{s}\cdot\,\mathrm{ d}W_{s}. $$

Proof

For any given \(\omega\in\varOmega\) and \(t\in[0,T]\), let \(\tilde{g}(t, \tilde{y}, \tilde{z})=g^{o}(t, y_{t},z_{t};\tilde{y}, \tilde{z})\), \(\forall(\tilde{y}, \tilde{z})\in R\times R^{d}\). Since g satisfies (A.1), (A.2), and is continuously differentiable in \((y,z)\in R\times R^{d}\), we know that g is strictly differentiable in \((y,z)\in R\times R^{d}\) and
$$\tilde{g}(t, \tilde{y}, \tilde{z})=g^{\prime}_{y}(t,y_{t},z_{t}) \tilde {y}+g^{\prime}_{z}(t,y_{t},z_{t}) \tilde{z} $$
holds. For any \(\eta\in L^{2}(\mathcal{F}_{T})\), let \((\tilde{y}_{t},\tilde{z}_{t})\) be the solution of the following BSDE on \([0,T]\):
$$\tilde{y}_{t}= \eta+ \int_{t}^{T} \tilde{g}(s, \tilde{y}_{s}, \tilde{z}_{s})\,\mathrm{ d}s- \int_{t}^{T}\tilde {z}_{s}\cdot\,\mathrm{ d}W_{s}. $$
Denote \(\mathcal{E}_{0,T}^{\tilde{g}}(\eta):=\tilde{y}_{0}\). By Definition 1, we obtain
$$ \bigl(\mathcal{E}_{0,T}^{g} \bigr)^{o}(\xi; \eta)= \mathcal{E}_{0,T}^{\tilde{g}}(\eta). $$
(23)
Since \(\tilde{g}\) is linear in \((\tilde{y},\tilde{z})\), by Propositions 3.3 and 3.4 in Peng [16], we know that \(\mathcal{E}_{0,T}^{\tilde{g}}\) is linear on \(L^{2}(\mathcal{F}_{T})\). By Proposition 3, we can see that \(\mathcal{E}_{0,T}^{\tilde{g}}\) is continuous on \(L^{2}(\mathcal{F}_{T})\). Thus, by the Riesz representation theorem, there exists \(q_{T}\in L^{2}(\mathcal{F}_{T})\) such that
$$ \mathcal{E}_{0,T}^{\tilde{g}}(\eta)=\langle q_{T}, \eta \rangle $$
(24)
holds for any \(\eta\in L^{2}(\mathcal{F}_{T})\). From (23) and (24), we have \(\partial^{o} \mathcal{E}_{0,T}^{g}(\xi)= \{q_{T} \}\). The proof of Lemma 6 is complete. □

By Theorem 1 and Lemma 6, we immediately obtain the following theorem.

Theorem 2

Suppose that ρ satisfies (A.3) and that g satisfies (A.1), (A.2) and is continuously differentiable in \((y,z)\). Assume that \(\xi^{*}\) is an optimal objective of problem (2), and let \((y^{*}_{t},z^{*}_{t})\) be the solution of BSDE (1) with terminal value \(\xi^{*}\). Then there exist a non-negative constant λ and \(\zeta\in \partial^{o} \rho(\xi^{*})\) such that
$$\zeta+ \lambda q^{*}_{T}= 0 $$
holds, where \(q^{*}_{T}\) can be obtained by (22).

4 Some examples

In this section, we give three examples as applications of our result. In the sequel, suppose that g satisfies (A.1), (A.2) and is continuously differentiable in \((y,z)\).

Problem A. Minimize
$$E\bigl[\varphi(\xi)\bigr]-c $$
subject to
$$\textstyle\begin{cases} E[u(\xi)]= c,\\ \mathcal{E}_{0,T}^{g}(\xi)= x,\\ \mathcal{E}_{0,T}^{g}(\xi)\geq0,\\ \xi\in L^{2}(\mathcal{F}_{T}), \end{cases} $$
where \(\varphi:R\mapsto R\) is Lipschitz and continuously differentiable, and \(u:R\mapsto R\) has bounded continuous derivative \(u^{\prime}\).

We can extend Problem A to a more general framework.

Problem B. Minimize
$$E\bigl[\varphi(\xi)\bigr]-c $$
subject to
$$\textstyle\begin{cases} E[u(\xi)]\leq c,\\ \mathcal{E}_{0,T}^{g}(\xi)\leq x,\\ \mathcal{E}_{0,T}^{g}(\xi)\geq0,\\ \xi\in L^{2}(\mathcal{F}_{T}), \end{cases} $$
where \(\varphi:R\mapsto R\) is Lipschitz and continuously differentiable, and \(u:R\mapsto R\) has bounded continuous derivative \(u^{\prime}\).

In order to study the optimization problems A and B, we need the following lemma.

Lemma 7

If \(u:R\mapsto R\) has bounded continuous derivative \(u^{\prime}\), then the function \(\rho(\xi):=E[u(\xi)]\), \(\xi\in L^{2}(\mathcal {F}_{T})\), is strictly differentiable, and \(\partial^{o} \rho(\xi)=\{u^{\prime}(\xi)\}\).

Lemma 7 follows easily from the Fubini theorem and the dominated convergence theorem, so we omit the proof.
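Lemma 7 can be sanity-checked empirically. The following sketch, with an illustrative finite sample standing in for ξ(ω) and the choice \(u=\tanh\) (our own example of a function with bounded continuous derivative), compares the finite-difference directional derivative of \(\rho(\xi)=E[u(\xi)]\) in direction η with the pairing \(E[u^{\prime}(\xi)\eta]\) predicted by \(\partial^{o}\rho(\xi)=\{u^{\prime}(\xi)\}\).

```python
import math

# fixed sample values standing in for xi(omega) and a direction eta
xi  = [-1.0, -0.3, 0.2, 0.8, 1.5]
eta = [ 0.5, -1.0, 0.7, 0.1, -0.4]

u = math.tanh                          # u has bounded continuous derivative

def du(x):
    return 1.0 - math.tanh(x) ** 2     # u'(x)

def rho(vals):
    """Empirical analogue of rho(xi) = E[u(xi)]."""
    return sum(u(v) for v in vals) / len(vals)

t = 1e-6
fd = (rho([x + t * e for x, e in zip(xi, eta)]) - rho(xi)) / t
pairing = sum(du(x) * e for x, e in zip(xi, eta)) / len(xi)
```

The two quantities agree up to the \(O(t)\) discretization error of the one-sided difference, as the strict differentiability in Lemma 7 predicts.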

Theorem 3

If the optimal objective \(\xi^{*}\) of Problem A exists, there exist two constants \(\lambda_{1}\) and \(\lambda_{2}\) such that
$$\varphi^{\prime}\bigl(\xi^{*}\bigr)+\lambda_{1}u^{\prime} \bigl(\xi^{*}\bigr)+\lambda_{2}q^{*}_{T}=0 $$
holds, where \(\partial^{o}\mathcal{E}_{0,T}^{g}(\xi^{*})=\{q^{*}_{T}\}\).

Proof

Denote \(\rho(\xi):=E[\varphi(\xi)]-c\), for any \(\xi\in L^{2}(\mathcal {F}_{T})\) and
$$U:=\bigl\{ \xi\in L^{2}(\mathcal{F}_{T})|h(\xi)\leq0\bigr\} , $$
where \(h(\xi):= \max\{E[u(\xi)]-c,c-E[u(\xi)],\mathcal{E}_{0,T}^{g}(\xi)-x,x-\mathcal {E}_{0,T}^{g}(\xi),-\mathcal{E}_{0,T}^{g}(\xi) \}\). If the optimal objective \(\xi^{*}\) of Problem A exists, then, by Lemmas 6 and 7, we have \(\partial^{o} \rho(\xi^{*})= \{\varphi^{\prime}(\xi^{*}) \}\), \(\partial^{o} E[u(\xi ^{*})]=\{u^{\prime}(\xi^{*})\}\) and \(\partial^{o}\mathcal{E}_{0,T}^{g}(\xi^{*})=\{q^{*}_{T}\}\).
By Theorem 2 and Lemma 3, there exist two constants \(\lambda_{1}\) and \(\lambda_{2}\) such that
$$\varphi^{\prime}\bigl(\xi^{*}\bigr)+\lambda_{1}u^{\prime} \bigl(\xi^{*}\bigr)+\lambda_{2}q^{*}_{T}=0 $$
holds. The proof of Theorem 3 is complete. □

Theorem 4

If the optimal objective \(\xi^{*}\) of Problem B exists, then there exist three constants \(\lambda\geq0\) and \(\alpha,\beta\in[0,1]\) satisfying \(\alpha+\beta \leq1\), such that
$$\textstyle\begin{cases} \varphi^{\prime}(\xi^{*})+\lambda q^{*}_{T}=0, \qquad\mathcal{E}_{0,T}^{g}(\xi^{*})-x>\max \{E[u(\xi^{*})]-c,-\mathcal{E}_{0,T}^{g}(\xi^{*}) \},\\ \varphi^{\prime}(\xi^{*})-\lambda q^{*}_{T}=0, \qquad-\mathcal{E}_{0,T}^{g}(\xi^{*})>\max\{ E[u(\xi^{*})]-c,\mathcal{E}_{0,T}^{g}(\xi^{*})-x \},\\ \varphi^{\prime}(\xi^{*})+\lambda u^{\prime}(\xi^{*})=0, \qquad E[u(\xi^{*})]-c>\max\{ \mathcal{E}_{0,T}^{g}(\xi^{*})-x,-\mathcal{E}_{0,T}^{g}(\xi^{*}) \},\\ \varphi^{\prime}(\xi^{*})+\lambda[\alpha u^{\prime}(\xi^{*})+(1-\alpha)q^{*}_{T} ]=0, \quad E[u(\xi^{*})]-c=\mathcal{E}_{0,T}^{g}(\xi^{*})-x>-\mathcal{E}_{0,T}^{g}(\xi ^{*}),\\ \varphi^{\prime}(\xi^{*})+\lambda[\alpha u^{\prime}(\xi^{*})+(\alpha-1)q^{*}_{T} ]=0, \qquad E[u(\xi^{*})]-c=-\mathcal{E}_{0,T}^{g}(\xi^{*})>\mathcal{E}_{0,T}^{g}(\xi ^{*})-x,\\ \varphi^{\prime}(\xi^{*})+\lambda(2\alpha-1)q^{*}_{T}=0, \qquad \mathcal{E}_{0,T}^{g}(\xi ^{*})-x=-\mathcal{E}_{0,T}^{g}(\xi^{*})>E[u(\xi^{*})]-c,\\ \varphi^{\prime}(\xi^{*})+\lambda[(\alpha-\beta)q^{*}_{T}+(1-\alpha-\beta )u^{\prime}(\xi^{*}) ]=0, \\\mathcal{E}_{0,T}^{g}(\xi^{*})-x=-\mathcal{E}_{0,T}^{g}(\xi^{*})=E[u(\xi^{*})]-c, \end{cases} $$
holds, where \(\partial^{o}\mathcal{E}_{0,T}^{g}(\xi^{*})=\{q^{*}_{T}\}\).

Proof

Denote \(\rho(\xi):=E[\varphi(\xi)]-c\), for any \(\xi\in L^{2}(\mathcal {F}_{T})\) and
$$U:=\bigl\{ \xi\in L^{2}(\mathcal{F}_{T})|h(\xi)\leq0\bigr\} , $$
where \(h(\xi):= \max\{E[u(\xi)]-c,\mathcal{E}_{0,T}^{g}(\xi)-x,-\mathcal {E}_{0,T}^{g}(\xi) \}\). If the optimal objective \(\xi^{*}\) of Problem B exists, then, by Lemmas 6 and 7, we have \(\partial^{o} \rho(\xi^{*})= \{\varphi^{\prime}(\xi^{*}) \}\), \(\partial^{o} E[u(\xi ^{*})]=\{u^{\prime}(\xi^{*})\}\) and \(\partial^{o}\mathcal{E}_{0,T}^{g}(\xi^{*})=\{q^{*}_{T}\}\).

Case 1: \(\mathcal{E}_{0,T}^{g}(\xi^{*})-x>\max\{E[u(\xi ^{*})]-c,-\mathcal{E}_{0,T}^{g}(\xi^{*}) \}\). By Theorem 2, there exists a non-negative constant λ such that \(\varphi^{\prime}(\xi^{*})+\lambda q^{*}_{T}=0\) holds.

Case 2: \(-\mathcal{E}_{0,T}^{g}(\xi^{*})>\max\{E[u(\xi ^{*})]-c,\mathcal{E}_{0,T}^{g}(\xi^{*})-x \}\). By Theorem 2, there exists a non-negative constant λ such that \(\varphi^{\prime}(\xi^{*})-\lambda q^{*}_{T}=0\) holds.

Case 3: \(E[u(\xi^{*})]-c>\max\{\mathcal{E}_{0,T}^{g}(\xi ^{*})-x,-\mathcal{E}_{0,T}^{g}(\xi^{*}) \}\). By Theorem 2, there exists a non-negative constant λ such that \(\varphi^{\prime}(\xi^{*})+\lambda u^{\prime}(\xi^{*})=0\) holds.

Case 4: \(E[u(\xi^{*})]-c=\mathcal{E}_{0,T}^{g}(\xi^{*})-x>-\mathcal {E}_{0,T}^{g}(\xi^{*})\). By Theorem 2 and Lemma 3, there exist two constants \(\lambda\geq0\) and \(\alpha\in[0,1]\) such that \(\varphi^{\prime}(\xi^{*})+\lambda[\alpha u^{\prime}(\xi^{*})+(1-\alpha)q^{*}_{T} ]=0\) holds.

Case 5: \(E[u(\xi^{*})]-c=-\mathcal{E}_{0,T}^{g}(\xi^{*})>\mathcal {E}_{0,T}^{g}(\xi^{*})-x\). By Theorem 2 and Lemma 3, there exist two constants \(\lambda\geq0\) and \(\alpha\in[0,1]\) such that \(\varphi^{\prime}(\xi^{*})+\lambda[\alpha u^{\prime}(\xi^{*})+(\alpha-1)q^{*}_{T} ]=0\) holds.

Case 6: \(\mathcal{E}_{0,T}^{g}(\xi^{*})-x=-\mathcal{E}_{0,T}^{g}(\xi ^{*})>E[u(\xi^{*})]-c\). By Theorem 2 and Lemma 3, there exist two constants \(\lambda\geq0\) and \(\alpha\in[0,1]\) such that \(\varphi^{\prime}(\xi^{*})+\lambda(2\alpha-1)q^{*}_{T}=0\) holds.

Case 7: \(\mathcal{E}_{0,T}^{g}(\xi^{*})-x=-\mathcal{E}_{0,T}^{g}(\xi ^{*})=E[u(\xi^{*})]-c\). By Theorem 2 and Lemma 3, there exist three constants \(\lambda\geq0\) and \(\alpha,\beta\in[0,1]\) satisfying \(\alpha+\beta \leq1\), such that \(\varphi^{\prime}(\xi^{*})+\lambda[(\alpha-\beta)q^{*}_{T}+(1-\alpha-\beta )u^{\prime}(\xi^{*}) ]=0\) holds. The proof of Theorem 4 is complete. □

In the classic investment problem, one often takes the variance as a risk measure, and the mean-variance method is used in much of the literature. However, by Delbaen [6] and Föllmer and Schied [8], such a risk measure is not perfect, and one often takes \(\rho(\cdot)\) in (2) to be a coherent or convex risk measure. Rosazza Gianin [19] proved that if g is a sub-additive and positively homogeneous function satisfying (A.1) and (A.2), and one defines \(\rho(\xi):=\mathcal{E}_{0,T}^{g}(-\xi)\) for any \(\xi\in L^{2}(\mathcal{F}_{T})\), then \(\rho(\cdot)\) is a convex risk measure.

Problem C. Suppose that f is a sub-additive and positively homogeneous function satisfying (A.1) and (A.2), independent of y, and that \(f^{\prime}_{z}\) is continuous in z. Minimize
$$\mathcal{E}_{0,T}^{f}(-\xi) $$
subject to
$$\textstyle\begin{cases} E[\xi]\geq c,\\ \mathcal{E}_{0,T}^{g}(\xi)\leq x,\\ \mathcal{E}_{0,T}^{g}(\xi)\geq0,\\ \xi\in L^{2}(\mathcal{F}_{T}). \end{cases} $$

Theorem 5

If the optimal objective \(\xi^{*}\) of Problem C exists, then there exist three constants \(\lambda\geq0\) and \(\alpha,\beta\in[0,1]\) satisfying \(\alpha+\beta \leq1\), such that
$$\textstyle\begin{cases} k_{T}^{*}+\lambda q^{*}_{T}=0, \qquad\mathcal{E}_{0,T}^{g}(\xi^{*})-x>\max\{c-E[\xi ^{*}],-\mathcal{E}_{0,T}^{g}(\xi^{*}) \},\\ k_{T}^{*}-\lambda q^{*}_{T}=0,\qquad -\mathcal{E}_{0,T}^{g}(\xi^{*})>\max\{c-E[\xi ^{*}],\mathcal{E}_{0,T}^{g}(\xi^{*})-x \},\\ k_{T}^{*}-\lambda=0, \qquad c-E[\xi^{*}]>\max\{\mathcal{E}_{0,T}^{g}(\xi ^{*})-x,-\mathcal{E}_{0,T}^{g}(\xi^{*}) \},\\ k_{T}^{*}+\lambda[-\alpha+(1-\alpha)q^{*}_{T} ]=0, \qquad c-E[\xi^{*}]=\mathcal {E}_{0,T}^{g}(\xi^{*})-x>-\mathcal{E}_{0,T}^{g}(\xi^{*}),\\ k_{T}^{*}+\lambda[-\alpha+(\alpha-1)q^{*}_{T} ]=0, \qquad c-E[\xi^{*}]=-\mathcal {E}_{0,T}^{g}(\xi^{*})>\mathcal{E}_{0,T}^{g}(\xi^{*})-x,\\ k_{T}^{*}+\lambda(2\alpha-1)q^{*}_{T}=0, \qquad \mathcal{E}_{0,T}^{g}(\xi^{*})-x=-\mathcal {E}_{0,T}^{g}(\xi^{*})>c-E[\xi^{*}],\\ k_{T}^{*}+\lambda[(\alpha-\beta)q^{*}_{T}+(\alpha+\beta-1) ]=0, \qquad \mathcal{E}_{0,T}^{g}(\xi^{*})-x=-\mathcal{E}_{0,T}^{g}(\xi^{*})=c-E[\xi^{*}], \end{cases} $$
holds, where \(\partial^{o}\mathcal{E}_{0,T}^{g}(\xi^{*})=\{q^{*}_{T}\}\) and \(\partial ^{o}\mathcal{E}_{0,T}^{f}(-\xi^{*})=\{k^{*}_{T}\}\). Let \((y^{*}_{t},z^{*}_{t})\) be the solution of BSDE (1) with generator f and terminal value \(-\xi^{*}\). For any \(\eta\in L^{2}(\mathcal{F}_{T})\), \(\langle k^{*}_{T}, -\eta\rangle=\bar{y}_{0}\), where \((\bar{y}_{t},\bar{z}_{t})\) is the solution of the following BSDE on \([0,T]\):
$$\bar{y}_{t}= -\eta+ \int_{t}^{T} f_{z}^{\prime} \bigl(s,z^{*}_{s}\bigr)\bar{z}_{s}\,\mathrm{ d}s- \int _{t}^{T}\bar{z}_{s}\cdot\,\mathrm{ d}W_{s}. $$

The proof of Theorem 5 is very similar to that of Theorem 4, so we omit it.

Declarations

Acknowledgements

The authors would like to thank the anonymous referees for their constructive suggestions and valuable comments.

Funding

The work of Helin Wu is supported by the Scientific and Technological Research Program of Chongqing Municipal Education Commission (KJ1400922). The work of Yong Ren is supported by the National Natural Science Foundation of China (11871076). The work of Feng Hu is supported by the National Natural Science Foundation of China (11801307) and the Natural Science Foundation of Shandong Province (ZR2016JL002 and ZR2017MA012).

Authors’ contributions

The authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Statistics, Chongqing University of Technology, Chongqing, China
(2)
Department of Mathematics, Anhui Normal University, Wuhu, China
(3)
School of Statistics, Qufu Normal University, Qufu, China

References

  1. Bielecki, T.R., Jin, H.Q., Pliska, S.R., Zhou, X.: Continuous time mean-variance portfolio selection with bankruptcy prohibition. Math. Finance 15, 213–244 (2005)
  2. Briand, P., Coquet, F., Hu, Y., Mémin, J., Peng, S.: A converse comparison theorem for BSDEs and related properties of g-expectation. Electron. Commun. Probab. 5, 101–117 (2000)
  3. Chen, S., Li, X., Zhou, X.: Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 36, 1685–1702 (1998)
  4. Clarke, F.H.: Optimization and Nonsmooth Analysis, 2nd edn. Classics in Applied Mathematics, vol. 5. SIAM, Philadelphia (1990)
  5. Clarke, F.H., Ledyaev, Y.S., Stern, R.J., Wolenski, P.R.: Nonsmooth Analysis and Control Theory. Graduate Texts in Mathematics, vol. 178. Springer, New York (1998)
  6. Delbaen, F.: Coherent risk measures on general probability spaces. In: Sandmann, K., Schönbucher, P.J. (eds.) Advances in Finance and Stochastics, pp. 1–37. Springer, Berlin (2002)
  7. El Karoui, N., Peng, S., Quenez, M.C.: Backward stochastic differential equations in finance. Math. Finance 7, 1–71 (1997)
  8. Föllmer, H., Schied, A.: Stochastic Finance: An Introduction in Discrete Time. De Gruyter, Berlin (2004)
  9. Harrison, J.M., Kreps, D.: Martingales and arbitrage in multiperiod securities markets. J. Econ. Theory 20, 381–408 (1979)
  10. Ji, S., Peng, S.: Terminal perturbation method for the backward approach to continuous time mean-variance portfolio selection. Stoch. Process. Appl. 118, 952–967 (2008)
  11. Li, D., Ng, W.L.: Optimal dynamic portfolio selection: multiperiod mean-variance formulation. Math. Finance 10, 387–406 (2000)
  12. Li, X., Zhou, X., Lim, A.E.B.: Dynamic mean-variance portfolio selection with no-shorting constraints. SIAM J. Control Optim. 40, 1540–1555 (2001)
  13. Lim, A.E.B., Zhou, X.: Mean-variance portfolio selection with random parameters. Math. Oper. Res. 27, 101–120 (2002)
  14. Markowitz, H.: Portfolio selection. J. Finance 7, 77–91 (1952)
  15. Pardoux, E., Peng, S.: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55–61 (1990)
  16. Peng, S.: Modelling derivatives pricing with their generating functions (2006). arXiv:math/0605599 [math.PR]
  17. Pliska, S.R.: A discrete time stochastic decision model. In: Fleming, W.H., Gorostiza, L.G. (eds.) Advances in Filtering and Optimal Stochastic Control. Lecture Notes in Control and Information Sciences, vol. 42, pp. 290–304. Springer, New York (1982)
  18. Pliska, S.R.: Introduction to Mathematical Finance. Blackwell, Oxford (1997)
  19. Rosazza Gianin, E.: Risk measures via g-expectations. Insur. Math. Econ. 39, 19–34 (2006)
  20. Yong, J., Zhou, X.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)
  21. Zhou, X., Li, D.: Continuous time mean-variance portfolio selection: a stochastic LQ framework. Appl. Math. Optim. 42, 19–33 (2000)
  22. Zong, Z.: A comonotonic theorem for backward stochastic differential equations in \(L^{p}\) and its applications. Ukr. Math. J. 64, 857–874 (2012)
  23. Zong, Z., Hu, F.: BSDEs under filtration-consistent nonlinear expectations and the corresponding decomposition theorem for ε-supermartingales in \(L^{p}\). Rocky Mt. J. Math. 43, 677–695 (2013)

Copyright

© The Author(s) 2018
