Open Access

Maximum principle for near-optimality of stochastic delay control problem

Advances in Difference Equations 2017, 2017:98

https://doi.org/10.1186/s13662-017-1155-9

Received: 18 November 2016

Accepted: 21 March 2017

Published: 31 March 2017

Abstract

This paper is concerned with near-optimality for stochastic control problems of linear delay systems with convex control domain and controlled diffusion. Necessary and sufficient conditions for a control to be near-optimal are established by Pontryagin’s maximum principle together with Ekeland’s variational principle.

Keywords

maximum principle; near-optimal control; stochastic differential delay equation; Ekeland’s variational principle

MSC

93Exx; 49Kxx; 60H30

1 Introduction

Many real-world systems depend on the past: their present states are determined not only by the current situation but also by the previous history. This phenomenon is called time delay. Indeed, time delays are common in both the natural and social sciences, arising in physics, engineering, biology, economics and finance; see for example [1–3].

Stochastic optimal control problems for time-delay systems have received a lot of research attention recently. However, this kind of control problem remains practically intractable in general due to its infinite-dimensional nature. Fortunately, when distributed (averaged) and pointwise time delays enter the state process, optimal control problems turn out to be solvable under certain conditions. For applications of the dynamic programming principle to this field, see [4, 5]; for applications of Pontryagin’s maximum principle, see [6–8]. Along this line, by a duality between linear stochastic differential delay equations (SDDEs) and anticipated backward stochastic differential equations (ABSDEs) established in [9], the maximum principle for stochastic delay optimal control problems was studied in [10–13].

Let us mention that it is inadequate to focus only on exact optimality. As is well known, optimal controls may fail to exist in many situations, and insisting on exact optimality is not only unrealistic but also unnecessary for many real systems. The following example shows that an optimal control may fail to exist even in deterministic optimal delay control problems. The system evolves by \(X_{t}=\int_{0}^{t} u_{s-\delta}\,ds\) for \(0\leq t\leq1\), where \(\delta=1/4\) and \(u_{\cdot}\) is chosen from the admissible control set \(\mathcal{U}\), the collection of measurable functions \(u: [0,1]\rightarrow\{-1,1\}\). We assume that \(u_{t}=1\) for \(-\delta\leq t<-\delta/2\) and \(u_{t}=-1\) for \(-\delta/2\leq t<0\) for every \(u_{\cdot}\in\mathcal{U}\). The objective is to minimize \(J(u_{\cdot})=\int_{\delta}^{1}(X_{t})^{2}\,dt\) over \(\mathcal{U}\). Let us show that \(\inf_{u_{\cdot}\in\mathcal{U}}J(u_{\cdot})=0\). First, \(X_{\delta}=0\). Now define a sequence of admissible controls \(\{u^{n}_{t}\}\), \(0\leq t\leq3\delta\), by \(u^{n}_{t}=(-1)^{k}\) for \(k/(4n)\leq t<(k+1)/(4n)\), \(0\leq k\leq3n-1\). The corresponding trajectory \(X^{n}_{\cdot}\) then satisfies \(\vert X^{n}_{t} \vert \leq1/(4n)\) for \(\delta\leq t\leq1\). Thus \(J(u^{n}_{\cdot})\leq3/(64n^{2})\), and so \(\inf_{u_{\cdot}\in\mathcal {U}}J(u_{\cdot})=0\). However, there does not exist \(u^{*}_{\cdot}\in \mathcal{U}\) satisfying \(J(u^{*}_{\cdot})=0\); otherwise we would have \(X^{*}_{t}=0\) for \(\delta\leq t\leq1\), which implies \(u^{*}_{t}=0\) for a.e. \(0\leq t\leq3\delta\), contradicting the fact that admissible controls take values in \(\{-1,1\}\).
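The vanishing of the infimum in this example can also be checked numerically. The following sketch discretizes the example above and evaluates \(J(u^{n}_{\cdot})\) for a given n; the grid size and the left Riemann-sum quadrature are illustrative choices, not part of the original argument.

```python
import numpy as np

def cost(n, M=4000):
    """Riemann-sum approximation of J(u^n) for the oscillating control u^n."""
    delta = 0.25
    dt = 1.0 / M
    t = np.linspace(0.0, 1.0, M + 1)

    def u(s):
        # prescribed history: u = 1 on [-delta, -delta/2), u = -1 on [-delta/2, 0);
        # afterwards u^n_t = (-1)^k on [k/(4n), (k+1)/(4n))
        if s < -delta / 2:
            return 1.0
        if s < 0.0:
            return -1.0
        return (-1.0) ** int(np.floor(4 * n * s))

    # X_t = int_0^t u_{s - delta} ds, approximated by a left Riemann sum
    integrand = np.array([u(s - delta) for s in t[:-1]])
    X = np.concatenate([[0.0], np.cumsum(integrand) * dt])

    # J(u^n) = int_delta^1 X_t^2 dt
    mask = t >= delta
    return float(np.sum(X[mask] ** 2) * dt)
```

Numerically, `cost(n)` stays below the bound \(3/(64n^{2})\) and tends to 0 as n grows, consistent with \(\inf_{u_{\cdot}}J(u_{\cdot})=0\) while no control attains it.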

As stated in [14], near-optimality has as many attractive features as exact optimality in view of both theory and applications. First, near-optimal controls may exist under mild assumptions. Second, by studying near-optimality it is possible to greatly simplify the optimization process with only a small loss in the objective of the decision makers, and a near-optimal solution can satisfactorily serve the ultimate purpose of the decision makers in most practical situations. Third, many more near-optimal controls are available than optimal ones, so it is possible to select among them appropriate ones that are easier for analysis and implementation.

Near-optimality for deterministic control problems was studied in [15–17]. Near-optimality for a class of stochastic control problems with controlled diffusion and nonconvex control domain was studied in [14], where necessary and sufficient conditions for near-optimality were established. Following [14], various kinds of near-optimal stochastic control problems have been investigated; see for example [18–23] for forward control systems and [24–29] for forward-backward systems.

In view of the importance and wide applicability of time-delay systems and near-optimality, this paper is the first attempt to study near-optimization for one kind of stochastic delay control problem. In the control problem, the control domain is convex, the control variable can enter the diffusion term of the control system, and both the state and the control variables involve delays. For simplicity and clarity, we only consider linear systems. Necessary as well as sufficient conditions for a control to be near-optimal are established. By using the maximum principle and Ekeland’s variational principle, we first establish a necessary condition for near-optimality, which reveals the ‘minimum’ qualification for an admissible control to be ε-optimal. Then we prove a sufficient verification theorem for near-optimality, which can help to verify whether a candidate control is indeed near-optimal and thus can help to find near-optimal controls. Finally, the theoretical results are applied to some illustrative examples.

The main features of this paper are as follows. This is the first attempt to study near-optimal controls of stochastic delay control problems via the maximum principle method and by means of ABSDEs. We establish necessary and sufficient conditions for near-optimal controls and give some examples. Since an exact optimal control can be regarded as the particular case of an ε-optimal control with \(\varepsilon=0\), this paper generalizes [10] in the linear system case. We give two sufficient conditions for near-optimality, neither of which contains the other in general. The functions l and Φ in the cost functional may be quadratic in x, which generalizes the corresponding assumptions in [14, 18, 21] and some other papers. In most of the existing literature, the error bound in the necessary condition for an admissible control to be ε-optimal is \(\varepsilon^{\gamma}\) with \(\gamma\in[0,\frac{1}{3})\) or \(\gamma\in[0,\frac{1}{3}]\), while in this paper it is improved to \(\varepsilon^{\gamma}\) with \(\gamma\in[0,\frac{1}{2}]\). In two illustrative examples, we give near-optimal controls in explicit form.

The rest of this paper is organized as follows. In Section 2, we give the formulation of the problem and present some preliminaries. We establish the necessary conditions for near-optimal controls in Section 3 and the sufficient conditions in Section 4. The theoretical results are applied to two examples in Section 5 and a conclusion is given in Section 6.

2 Formulation of the problem and preliminaries

For \(n\geq1\), we use \(\mathbb{R}^{n}\) to denote the n-dimensional Euclidean space with the usual norm \(\vert \cdot \vert \) and inner product \(\langle\cdot,\cdot\rangle\). Denote by \(A^{T}\) the transpose of a matrix A. Let \((\Omega, \mathcal {F}, \mathbb{P})\) be a probability space and \(\mathbb{E}\) the expectation with respect to \(\mathbb{P}\). By \(\{\mathcal{F}_{t}, t\geq0\}\) we denote the completed natural filtration of a standard Brownian motion \(\{W_{t}, t\geq0\}\), which is assumed to be scalar-valued for simplicity. For \(a< b\), denote by \(M^{2}(a,b;\mathbb{R}^{n})\) the set of n-dimensional adapted processes \(\{\phi_{t}, a\leq t \leq b \}\) satisfying \(\mathbb{E}\int_{a}^{b} \vert \phi_{t} \vert ^{2} \,dt<\infty\), and by \(S^{2}(a,b;\mathbb{R}^{n})\) the set of n-dimensional continuous adapted processes \(\{\psi_{t}, a\leq t \leq b \}\) satisfying \(\mathbb{E} [\sup_{a\leq t \leq b} \vert \psi_{t} \vert ^{2} ]<\infty\). We use C, \(C'\), \(C''\) to represent positive constants, which can be different from line to line.

Assume that \(\delta_{1}\) and \(\delta_{2}\) are positive constants, and \(\xi_{\cdot}: [-\delta_{1},0]\rightarrow\mathbb{R}^{n}\) is a continuous function. Given a bounded convex set \(U\subset\mathbb {R}^{k}\) and a measurable function \(\eta_{\cdot}: [-\delta_{2},0]\rightarrow U\), we define the admissible control set \(\mathcal{U}\) as the collection of U-valued adapted processes \(\{v_{t},-\delta_{2}\leq t\leq T\}\) satisfying \(v_{t}=\eta_{t}\) for \(-\delta_{2}\leq t\leq0\). For \(v_{\cdot}\in\mathcal{U}\), the controlled system evolves by
$$ \textstyle\begin{cases} dX_{t}^{v}=b (t,X_{t}^{v},X_{t-\delta_{1}}^{v},v_{t},v_{t-\delta_{2}} )\,dt+\sigma (t,X_{t}^{v},X_{t-\delta_{1}}^{v},v_{t},v_{t-\delta_{2}} )\,dW_{t},\quad 0\leq t\leq T,\\ X_{t}=\xi_{t},\quad -\delta_{1}\leq t\leq0, \end{cases} $$
(1)
with
$$\begin{aligned} &b(t,x,x_{\delta},v,v_{\delta})=A_{1}(t)x+B_{1}(t)x_{\delta}+C_{1}(t)v+D_{1}(t)v_{\delta}+E_{1}(t), \\ &\sigma(t,x,x_{\delta},v,v_{\delta})=A_{2}(t)x+B_{2}(t)x_{\delta}+C_{2}(t)v+D_{2}(t)v_{\delta}+E_{2}(t), \end{aligned}$$
where the coefficients \(A_{i}(\cdot)\), \(B_{i}(\cdot)\), \(C_{i}(\cdot)\), \(D_{i}(\cdot )\), \(i=1,2\), are bounded adapted processes with appropriate dimensions, and \(E_{1}(\cdot)\), \(E_{2}(\cdot)\in M^{2}(0,T;\mathbb{R}^{n})\). The solution \(X_{\cdot}^{v}\) of SDDE (1) is called the response of the control \(v_{\cdot}\), and \((X_{\cdot}^{v},v_{\cdot})\) is called an admissible pair. The cost functional is given by
$$ J(v_{\cdot})=\mathbb{E} \biggl[ \int_{0}^{T}l \bigl(t,X_{t}^{v},X_{t-\delta _{1}}^{v},v_{t},v_{t-\delta_{2}} \bigr)\,dt +\Phi \bigl(X_{T}^{v} \bigr) \biggr],\quad v_{\cdot}\in\mathcal{U}, $$
(2)
where \(l(\omega,t,x,x_{\delta},v,v_{\delta}):\Omega\times[0,T]\times \mathbb{R}^{n}\times\mathbb{R}^{n}\times U\times U\rightarrow\mathbb {R}\) is an adapted function and \(\Phi(\omega,x):\Omega\times\mathbb {R}^{n}\rightarrow\mathbb{R}\) is a measurable function. The objective of our control problem is to find an admissible control \(u_{\cdot}^{*}\in\mathcal{U}\) which satisfies
$$ J\bigl(u_{\cdot}^{*}\bigr)=V\triangleq\inf _{v_{\cdot}\in\mathcal{U}}J(v_{\cdot}). $$
(3)
The following assumption will be in force throughout this paper.
  1. (H1)

    The functions l and Φ are continuously differentiable in \((x,x_{\delta},v,v_{\delta})\), and there exist a positive constant C and a continuous function \(h(v,v_{\delta}): U\times U\rightarrow\mathbb{R}\) such that the partial derivatives of l and Φ are bounded by \(C(1+ \vert x \vert + \vert x_{\delta} \vert +h(v,v_{\delta}))\). Besides, \(\Phi(0)\) is \(\mathcal{F}_{T}\)-measurable, and \(\mathbb {E} \vert \Phi(0) \vert +\mathbb{E}\int_{0}^{T} \vert l(t,0,0,0,0) \vert \,dt<\infty\).

     

For later use, let us assume that \(B_{1}(t)\) and \(B_{2}(t)\) are well defined and bounded for \(T< t\leq T+\delta_{1}\), \(D_{1}(t)\) and \(D_{2}(t)\) are well defined and bounded for \(T< t\leq T+\delta_{2}\), \(l_{x_{\delta}}(t,x,x_{\delta},v,v_{\delta})=0\) for \(T< t\leq T+\delta_{1}\), and \(l_{v_{\delta}}(t,x,x_{\delta},v,v_{\delta})=0\) for \(T< t\leq T+\delta_{2}\).

By Theorem 2.1 in [11], SDDE (1) admits a unique solution \(X_{\cdot}^{v}\in S^{2}(0,T;\mathbb{R}^{n})\). Moreover, there exists \(C>0\) which is independent of \(v_{\cdot}\in\mathcal{U}\) such that
$$ \mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X_{t}^{v} \bigr\vert ^{2} \Bigr]\leq C,\quad \forall v_{\cdot}\in\mathcal{U}. $$
(4)
Then from (H1) it follows that J is well defined on \(\mathcal{U}\) and there exists \(C>0\) which is independent of \(v_{\cdot}\in\mathcal {U}\) such that \(\vert J(v_{\cdot}) \vert \leq C\).
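To make the objects above concrete, here is a minimal Monte Carlo sketch: an Euler-Maruyama scheme for a scalar instance of SDDE (1) together with an estimate of the cost functional (2). All numerical coefficients below (the values taken for \(A_i\), \(B_i\), \(C_i\), \(D_i\), the running cost \(l=x^{2}+v^{2}\), the terminal cost \(\Phi=x^{2}\), and the zero initial data) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_cost(v, n_paths=2000, N=200, T=1.0, d1=0.25, d2=0.25, seed=0):
    """Euler-Maruyama scheme for a scalar linear SDDE of the form (1),
    with a Monte Carlo estimate of the cost functional (2).
    Coefficient values and cost integrands are illustrative choices."""
    rng = np.random.default_rng(seed)
    dt = T / N
    k1 = int(round(d1 / dt))                  # state-delay offset in grid steps
    X = np.zeros((n_paths, N + 1))            # initial path xi = 0 on [-d1, 0]
    running = np.zeros(n_paths)
    for i in range(N):
        t = i * dt
        Xd = X[:, i - k1] if i >= k1 else 0.0            # X_{t - d1}
        vt = v(t)                                        # v_t
        vd = v(t - d2) if t >= d2 else 0.0               # v_{t - d2} (eta = 0)
        drift = -X[:, i] + 0.5 * Xd + vt + vd            # b: A1=-1, B1=0.5, C1=D1=1
        diffusion = 0.3 * X[:, i] + 0.2 * vt             # sigma: A2=0.3, C2=0.2
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        X[:, i + 1] = X[:, i] + drift * dt + diffusion * dW
        running += (X[:, i] ** 2 + vt ** 2) * dt         # l(x, v) = x^2 + v^2
    return float(np.mean(running + X[:, -1] ** 2))       # Phi(x) = x^2
```

With zero control and zero initial path the state stays at the origin, so the estimated cost is exactly 0, while any nonzero constant control yields a strictly positive cost; this mirrors the boundedness of J on \(\mathcal{U}\) noted above.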

For the study of near-optimality, let us give the related definitions; see [14].

Definition 1

For \(\varepsilon>0\), \(v^{\varepsilon}_{\cdot}\in \mathcal{U}\) is called ε-optimal if \(\vert J(v^{\varepsilon}_{\cdot})-V \vert \leq\varepsilon\). A family of admissible controls \(\{v^{\varepsilon}_{\cdot}\}\) parameterized by \(\varepsilon>0\) is called near-optimal if \(\vert J(v^{\varepsilon}_{\cdot})-V \vert \leq r(\varepsilon)\) holds for all sufficiently small ε, where \(r(\varepsilon)\rightarrow0\) as \(\varepsilon\rightarrow0\). If the error bound satisfies \(r(\varepsilon)=c\varepsilon^{\gamma}\) for some constants \(c>0\) and \(\gamma>0\), then \(v^{\varepsilon}_{\cdot}\) is called near-optimal with order \(\varepsilon^{\gamma}\).

Denote \(\Theta_{t}^{v}= (t,X_{t}^{v},X_{t-\delta _{1}}^{v},v_{t},v_{t-\delta_{2}} )\). Let us introduce the following adjoint equation:
$$ \textstyle\begin{cases} dY_{t}^{v}=- \{\mathbb{E}^{\mathcal{F}_{t}} [B_{1}(t+\delta _{1})^{T}Y_{t+\delta_{1}}^{v} +B_{2}(t+\delta_{1})^{T}Z_{t+\delta_{1}}^{v} +l_{x_{\delta}}(\Theta_{t+\delta_{1}}^{v}) ]\\ \hphantom{dY_{t}^{v}=}{} +A_{1}(t)^{T}Y_{t}^{v}+A_{2}(t)^{T}Z_{t}^{v}+l_{x}(\Theta_{t}^{v}) \}\,dt+Z_{t}^{v}\,dW_{t},\quad 0\leq t\leq T,\\ Y_{T}^{v}=\Phi_{x}(X_{T}^{v}),\\ Y_{t}^{v}=0,\quad\quad Z_{t}^{v}=0,\quad T< t\leq T+\delta_{1}, \end{cases} $$
(5)
whose solution is defined to be a pair of processes \((Y_{\cdot}^{v},Z_{\cdot}^{v})\in M^{2}(0,T;\mathbb{R}^{n})\times M^{2}(0,T;\mathbb{R}^{n})\) satisfying (5). Let us assume w.l.o.g. that \(Y_{t}^{v}\) and \(Z_{t}^{v}\) vanish for \(T< t\leq T+\max\{\delta_{1},\delta_{2}\}\) for all \(v_{\cdot}\in\mathcal{U}\).

Proposition 2

Assume (H1). Then the adjoint equation (5) admits a unique solution \((Y_{\cdot}^{v},Z_{\cdot}^{v})\) for any \(v_{\cdot}\in\mathcal{U}\). Moreover, there exists \(C>0\) which is independent of \(v_{\cdot}\in \mathcal{U}\) such that
$$ \mathbb{E} \biggl[\sup_{0\leq t\leq T} \bigl\vert Y_{t}^{v} \bigr\vert ^{2}+ \int_{0}^{T} \bigl\vert Z_{t}^{v} \bigr\vert ^{2}\,dt \biggr]\leq C,\quad \forall v_{\cdot}\in \mathcal{U}. $$
(6)

Proof

Set
$$\begin{aligned} g(t,y,z,\zeta_{s},\kappa_{r})&=A_{1}(t)^{T}y+A_{2}(t)^{T}z+l_{x} \bigl(\Theta_{t}^{v}\bigr) \\ &\quad{} + \mathbb{E}^{\mathcal{F}_{t}} \bigl[B_{1}(t+\delta_{1})^{T} \zeta_{s} +B_{2}(t+\delta_{1})^{T} \kappa_{r} +l_{x_{\delta}}\bigl(\Theta_{t+\delta_{1}}^{v} \bigr) \bigr]. \end{aligned}$$
First, g is Lipschitz continuous in \((y,z,\zeta_{s},\kappa_{r})\), so the assumption (H1) in [9] is satisfied. Next, we have
$$\mathbb{E} \int_{0}^{T} \bigl\vert g(t,0,0,0,0) \bigr\vert ^{2}\,dt\leq 2\mathbb{E} \int_{0}^{T} \bigl\vert l_{x}\bigl( \Theta_{t}^{v}\bigr) \bigr\vert ^{2}\,dt +2 \mathbb{E} \int_{0}^{T} \bigl\vert \mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\Theta_{t+\delta_{1}}^{v}\bigr) \bigr] \bigr\vert ^{2}\,dt. $$
Using Jensen’s inequality, Fubini’s theorem and a change of variables leads to
$$\mathbb{E} \int_{0}^{T} \bigl\vert \mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\Theta_{t+\delta_{1}}^{v}\bigr) \bigr] \bigr\vert ^{2}\,dt \leq\mathbb{E} \int_{\delta_{1}}^{T+\delta_{1}} \bigl\vert l_{x_{\delta}}\bigl( \Theta_{t}^{v}\bigr) \bigr\vert ^{2}\,dt \leq \mathbb{E} \int_{0}^{T+\delta_{1}} \bigl\vert l_{x_{\delta}}\bigl( \Theta _{t}^{v}\bigr) \bigr\vert ^{2}\,dt. $$
Since it is assumed that \(l_{x_{\delta}}(t,x,x_{\delta},v,v_{\delta})=0\) for \(T< t\leq T+\delta_{1}\), we have
$$\mathbb{E} \int_{0}^{T+\delta_{1}} \bigl\vert l_{x_{\delta}}\bigl( \Theta _{t}^{v}\bigr) \bigr\vert ^{2}\,dt= \mathbb{E} \int_{0}^{T} \bigl\vert l_{x_{\delta}}\bigl( \Theta_{t}^{v}\bigr) \bigr\vert ^{2}\,dt. $$
Thus,
$$\mathbb{E} \int_{0}^{T} \bigl\vert g(t,0,0,0,0) \bigr\vert ^{2}\,dt\leq 2\mathbb{E} \int_{0}^{T} \bigl( \bigl\vert l_{x} \bigl(\Theta_{t}^{v}\bigr) \bigr\vert ^{2}+ \bigl\vert l_{x_{\delta}}\bigl(\Theta_{t}^{v}\bigr) \bigr\vert ^{2} \bigr)\,dt. $$
Recall that U is a bounded set. Then, in view of (H1), we can use (4) to show that there exists \(C>0\), which is independent of \(v_{\cdot}\), such that
$$\mathbb{E} \int_{0}^{T} \bigl\vert g(t,0,0,0,0) \bigr\vert ^{2}\,dt\leq C. $$
Besides, \(\mathbb{E} \vert \Phi_{x}(X_{T}^{v}) \vert ^{2}\leq C' \mathbb {E}(1+ \vert X_{T}^{v} \vert ^{2})\leq C\). Consequently, by Theorem 4.2 in [9] we conclude that (5) admits a unique solution. Finally, the estimate (6) can easily be obtained by Proposition 4.4 in [9]. □
Let us define a metric d on \(\mathcal{U}\) by
$$d(u_{\cdot},v_{\cdot})=\sqrt{\mathbb{E} \int_{0}^{T} \vert u_{t}-v_{t} \vert ^{2}\,dt}. $$
It is well known that \((\mathcal{U},d)\) is then a complete metric space.

The next result gives the continuity of \(X_{\cdot}^{v}\) in \(v_{\cdot}\in \mathcal{U}\).

Proposition 3

Assume (H1). Then there exists \(C>0\) satisfying
$$\mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X_{t}^{u}-X^{v}_{t} \bigr\vert ^{2} \Bigr] \leq Cd(u_{\cdot},v_{\cdot})^{2}, \quad\forall u_{\cdot},v_{\cdot}\in \mathcal{U}. $$

Proof

Using the estimate (3) in [11], we get
$$\begin{aligned} \mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X_{t}^{u}-X^{v}_{t} \bigr\vert ^{2} \Bigr] &\leq\mathbb{E} \int_{0}^{T} \bigl\vert b\bigl(\Theta _{t}^{u}\bigr)-b\bigl(t,X_{t}^{u},X_{t-\delta_{1}}^{u},v_{t},v_{t-\delta_{2}} \bigr) \bigr\vert ^{2}\,dt \\ &\quad{} +\mathbb{E} \int_{0}^{T} \bigl\vert \sigma\bigl( \Theta_{t}^{u}\bigr)-\sigma \bigl(t,X_{t}^{u},X_{t-\delta_{1}}^{u},v_{t},v_{t-\delta_{2}} \bigr) \bigr\vert ^{2}\,dt. \end{aligned}$$
Then it follows that
$$\mathbb{E} \Bigl[\sup_{0\leq t\leq T} \bigl\vert X_{t}^{u}-X^{v}_{t} \bigr\vert ^{2} \Bigr]\leq C\mathbb{E} \int_{0}^{T} \vert u_{t}-v_{t} \vert ^{2}\,dt+C\mathbb{E} \int_{0}^{T} \vert u_{t-\delta_{2}}-v_{t-\delta _{2}} \vert ^{2}\,dt, $$
where by the definition of admissible controls, we can use a change of variables to get
$$\mathbb{E} \int_{0}^{T} \vert u_{t-\delta_{2}}-v_{t-\delta_{2}} \vert ^{2}\,dt =\mathbb{E} \int_{-\delta_{2}}^{T-\delta_{2}} \vert u_{t}-v_{t} \vert ^{2}\,dt\leq\mathbb{E} \int_{0}^{T} \vert u_{t}-v_{t} \vert ^{2}\,dt. $$
Thus, the proof is complete. □
Let us assume, moreover,
  1. (H2)

    \((\Phi_{x},l_{x},l_{x_{\delta}},l_{v},l_{v_{\delta}})\) are Lipschitz in \((x,x_{\delta},v,v_{\delta})\).

     

The following result shows that \((Y_{\cdot}^{v},Z_{\cdot}^{v})\) is continuous in \(v_{\cdot}\in\mathcal{U}\).

Proposition 4

Assume (H1) and (H2). Then there exists \(C>0\) such that
$$\mathbb{E} \biggl[\sup_{0\leq t\leq T} \bigl\vert Y_{t}^{u}-Y_{t}^{v} \bigr\vert ^{2}+ \int_{0}^{T} \bigl\vert Z_{t}^{u}-Z_{t}^{v} \bigr\vert ^{2}\,dt \biggr] \leq Cd(u_{\cdot},v_{\cdot})^{2}, \quad\forall u_{\cdot}, v_{\cdot}\in \mathcal{U}. $$

Proof

Set \(\bar{Y}_{t}=Y_{t}^{u}-Y_{t}^{v}\), \(\bar{Z}_{t}=Z_{t}^{u}-Z_{t}^{v}\). We prove the result by partitioning \([0,T]\) backward in time. First, for \(t\in I_{1}=[T-\delta_{1},T]\), \((\bar{Y}_{t},\bar{Z}_{t})\) solves the linear BSDE
$$ \textstyle\begin{cases} -d\bar{Y}_{t}=\{A_{1}(t)^{T}\bar{Y}_{t}+A_{2}(t)^{T}\bar{Z}_{t}+l_{x}(\Theta _{t}^{u})-l_{x}(\Theta_{t}^{v})\}\,dt-\bar{Z}_{t}\,dW_{t},\\ \bar{Y}_{T}=\Phi_{x}(X_{T}^{u})-\Phi_{x}(X_{T}^{v}). \end{cases} $$
The basic a priori estimate for BSDEs gives
$$\mathbb{E} \biggl[\sup_{t\in I_{1}} \vert \bar{Y}_{t} \vert ^{2}+ \int_{t\in I_{1}} \vert \bar{Z}_{t} \vert ^{2} \,dt \biggr] \leq C\mathbb{E} \biggl[ \bigl\vert \Phi_{x} \bigl(X_{T}^{u}\bigr)-\Phi_{x}\bigl(X_{T}^{v} \bigr) \bigr\vert ^{2}+ \int_{t\in I_{1}} \bigl\vert l_{x}\bigl( \Theta_{t}^{u}\bigr)-l_{x}\bigl(\Theta _{t}^{v}\bigr) \bigr\vert ^{2}\,dt \biggr]. $$
Then, in view of (H2), using Proposition 3 and a change of variables leads to
$$ \mathbb{E} \biggl[\sup_{t\in I_{1}} \vert \bar{Y}_{t} \vert ^{2}+ \int_{t\in I_{1}} \vert \bar{Z}_{t} \vert ^{2} \,dt \biggr] \leq Cd(u_{\cdot},v_{\cdot})^{2}. $$
(7)
Next, on \(I_{2}=[T-2\delta_{1},T-\delta_{1}]\), \((\bar{Y}_{\cdot},\bar {Z}_{\cdot})\) solves a BSDE with terminal value \(\bar{Y}_{T-\delta_{1}}\) and generator function \(f(t,y,z)=A_{1}(t)^{T}y+A_{2}(t)^{T}z+\Delta(t)\), where \(\Delta(t)=l_{x}(\Theta_{t}^{u})-l_{x}(\Theta_{t}^{v})+\mathbb{E}^{\mathcal {F}_{t}} [B_{1}(t+\delta_{1})^{T}\bar{Y}_{t+\delta_{1}} +B_{2}(t+\delta_{1})^{T}\bar{Z}_{t+\delta_{1}} +l_{x_{\delta}}(\Theta_{t+\delta_{1}}^{u})-l_{x_{\delta}}(\Theta_{t+\delta _{1}}^{v}) ]\). On the one hand, by (7), \(\mathbb{E} \vert \bar {Y}_{T-\delta_{1}} \vert ^{2}\leq Cd(u_{\cdot},v_{\cdot})^{2}\). On the other hand, by Jensen’s inequality and a change of variables we get
$$\begin{aligned} \mathbb{E} \int_{t\in I_{2}} \bigl\vert \Delta(t) \bigr\vert ^{2} \,dt &\leq C\mathbb{E} \int_{t\in I_{2}} \bigl\vert l_{x}\bigl(\Theta _{t}^{u}\bigr)-l_{x}\bigl(\Theta_{t}^{v} \bigr) \bigr\vert ^{2}\,dt \\ &\quad{} +C\mathbb{E} \int_{t\in I_{1}} \bigl( \vert \bar{Y}_{t} \vert ^{2}+ \vert \bar{Z}_{t} \vert ^{2}+ \bigl\vert l_{x_{\delta}}\bigl(\Theta_{t}^{u} \bigr)-l_{x_{\delta}}\bigl(\Theta_{t}^{v}\bigr) \bigr\vert ^{2} \bigr)\,dt. \end{aligned}$$
Then, by (H2), (7) and Proposition 3, we can use a change of variables again to get
$$\mathbb{E} \int_{t\in I_{2}} \bigl\vert \Delta(t) \bigr\vert ^{2} \,dt\leq Cd(u_{\cdot},v_{\cdot})^{2}. $$
So,
$$ \mathbb{E} \biggl[\sup_{t\in I_{2}} \vert \bar{Y}_{t} \vert ^{2}+ \int_{t\in I_{2}} \vert \bar{Z}_{t} \vert ^{2} \,dt \biggr] \leq C'\mathbb{E} \biggl[ \vert \bar{Y}_{T-\delta_{1}} \vert ^{2}+ \int_{t\in I_{2}} \bigl\vert \Delta(t) \bigr\vert ^{2} \,dt \biggr]\leq Cd(u_{\cdot},v_{\cdot})^{2}. $$
Thus, we derive
$$ \mathbb{E} \biggl[\sup_{T-2\delta_{1}\leq t\leq T} \vert \bar{Y}_{t} \vert ^{2}+ \int_{T-2\delta_{1}}^{T} \vert \bar {Z}_{t} \vert ^{2}\,dt \biggr]\leq Cd(u_{\cdot},v_{\cdot})^{2}. $$
In the same way, we obtain the result after finitely many steps. □

Next we prove that J is a continuous functional of \(v_{\cdot}\in \mathcal{U}\).

Proposition 5

Assume (H1). Then there exists \(C>0\) such that \(\vert J(u_{\cdot})-J(v_{\cdot}) \vert \leq Cd(u_{\cdot},v_{\cdot})\) holds for all \(u_{\cdot},v_{\cdot}\in\mathcal{U}\).

Proof

Set \(\bar{X}_{t}=X_{t}^{u}-X_{t}^{v}\), \(\bar{v}_{t}=u_{t}-v_{t}\). We have
$$\begin{aligned}& \Phi\bigl(X_{T}^{u}\bigr)-\Phi\bigl(X_{T}^{v} \bigr)= \int_{0}^{1}\bigl\langle \Phi_{x} \bigl(X_{T}^{v}+\lambda \bar{X}_{T} \bigr), \bar{X}_{T}\bigr\rangle \,d\lambda, \\& l\bigl(\Theta_{t}^{u}\bigr)-l\bigl(\Theta_{t}^{v} \bigr)= \int_{0}^{1}\bigl\{ \bigl\langle l_{x}( \Lambda_{t}),\bar {X}_{t}\bigr\rangle +\bigl\langle l_{x_{\delta}}(\Lambda_{t}),\bar{X}_{t-\delta_{1}}\bigr\rangle +\bigl\langle l_{v}(\Lambda_{t}),\bar{v}_{t}\bigr\rangle + \bigl\langle l_{v_{\delta}}(\Lambda_{t}),\bar{v}_{t-\delta_{2}}\bigr\rangle \bigr\} \,d\lambda, \end{aligned}$$
with \(\Lambda_{t}= (t,X_{t}^{v}+\lambda\bar{X}_{t},X_{t-\delta _{1}}^{v}+\lambda\bar{X}_{t-\delta_{1}},v_{t}+\lambda\bar{v}_{t},v_{t-\delta _{2}}+\lambda\bar{v}_{t-\delta_{2}} )\). By (H1), (4) and Proposition 3, we can use the Cauchy–Schwarz inequality to get
$$\mathbb{E} \bigl\vert \Phi\bigl(X_{T}^{u}\bigr)-\Phi \bigl(X_{T}^{v}\bigr) \bigr\vert \leq C d(u_{\cdot},v_{\cdot}). $$
With a similar method, together with a change of variables, we have
$$\mathbb{E} \int_{0}^{T} \bigl\vert l\bigl( \Theta_{t}^{u}\bigr)-l\bigl(\Theta_{t}^{v} \bigr) \bigr\vert \,dt\leq C d(u_{\cdot},v_{\cdot}). $$
Thus the proof is complete. □

Ekeland’s variational principle, stated below, will play a key role in what follows; see [15].

Lemma 6

Let \((S,d)\) be a complete metric space and \(F:S\rightarrow\mathbb{R}\) a lower-semicontinuous function that is bounded from below. Assume that \(v^{\varepsilon}\in S\) satisfies \(F(v^{\varepsilon})\leq\inf_{v\in S}F(v)+\varepsilon\) for some \(\varepsilon\geq0\). Then, for any \(\lambda>0\), there exists \(v^{\lambda}\in S\) such that \(F(v^{\lambda})\leq F(v^{\varepsilon})\), \(d(v^{\lambda},v^{\varepsilon})\leq\lambda\), and \(F(v^{\lambda})\leq F(v)+\frac{\varepsilon}{\lambda}d(v,v^{\lambda})\) for all \(v\in S\).

3 Necessary condition for near-optimality

This section is devoted to establishing necessary conditions for near-optimal controls of the stochastic control problem (1)-(3).

Recall from the previous section that \(J(v_{\cdot})\) is a continuous functional on the complete metric space \((\mathcal{U},d)\) that is bounded from below. Now let \(u^{\varepsilon}_{\cdot}\in\mathcal{U}\) be an ε-optimal control of problem (1)-(3) with \(\varepsilon>0\), that is, \(J(u^{\varepsilon}_{\cdot})\leq\inf_{v_{\cdot}\in\mathcal{U}}J(v_{\cdot})+\varepsilon\). Then applying Lemma 6 with \(\lambda=\sqrt{\varepsilon}\) yields the existence of \(\tilde{u}^{\varepsilon}_{\cdot}\in\mathcal {U}\) such that
$$\begin{aligned}& J\bigl(\tilde{u}^{\varepsilon}_{\cdot}\bigr)\leq J\bigl(u^{\varepsilon}_{\cdot}\bigr), \end{aligned}$$
(8)
$$\begin{aligned}& d\bigl(\tilde{u}^{\varepsilon}_{\cdot},u^{\varepsilon}_{\cdot}\bigr)\leq\sqrt {\varepsilon}, \end{aligned}$$
(9)
$$\begin{aligned}& J\bigl(\tilde{u}^{\varepsilon}_{\cdot}\bigr)\leq J(v_{\cdot})+ \sqrt{\varepsilon }d\bigl(v_{\cdot},\tilde{u}^{\varepsilon}_{\cdot}\bigr),\quad\forall v_{\cdot}\in \mathcal{U}. \end{aligned}$$
(10)
In what follows, we first study \(\tilde{u}^{\varepsilon}_{\cdot}\), and then turn to \(u^{\varepsilon}_{\cdot}\). Let \(u_{\cdot}\in M^{2}(-\delta_{2},T)\) satisfy \(\tilde{u}^{\varepsilon}_{\cdot}+ u_{\cdot}\in\mathcal{U}\). Then it is easy to see that \(u_{t}=0\) for \(-\delta_{2}\leq t<0\), and the convexity of \(\mathcal{U}\) shows that \(u_{\cdot}^{\theta}\triangleq \tilde{u}^{\varepsilon}_{\cdot}+\theta u_{\cdot}\in\mathcal{U}\) for any \(\theta\in[0,1]\). Since U is bounded, there exists \(C>0\), independent of ε and θ, such that \(d(u^{\theta}_{\cdot},\tilde{u}^{\varepsilon}_{\cdot})\leq C\theta\). So (10) leads to
$$ J\bigl(u^{\theta}_{\cdot}\bigr)-J\bigl( \tilde{u}^{\varepsilon}_{\cdot}\bigr)\geq-C\sqrt {\varepsilon}\theta. $$
(11)
Let \(X_{\cdot}^{\varepsilon}\), \(\tilde{X}_{\cdot}^{\varepsilon}\), \(X_{\cdot}^{\theta}\) be, respectively, the trajectories associated with \(u_{\cdot}^{\varepsilon}\), \(\tilde{u}_{\cdot}^{\varepsilon}\), \(u_{\cdot}^{\theta}\). Let \((Y_{\cdot}^{\varepsilon},Z_{\cdot}^{\varepsilon})\) and \((\tilde {Y}^{\varepsilon}_{\cdot},\tilde{Z}^{\varepsilon}_{\cdot})\) be, respectively, the solutions of the adjoint equation (5) associated with \((u_{\cdot}^{\varepsilon},X_{\cdot}^{\varepsilon})\) and \((\tilde{u}^{\varepsilon}_{\cdot},\tilde{X}^{\varepsilon}_{\cdot})\). Set \(\Theta_{t}^{\varepsilon}= (t,X_{t}^{\varepsilon},X_{t-\delta _{1}}^{\varepsilon},u_{t}^{\varepsilon},u_{t-\delta_{2}}^{\varepsilon})\), \(\tilde{\Theta}_{t}^{\varepsilon}= (t,\tilde{X}_{t}^{\varepsilon},\tilde{X}_{t-\delta_{1}}^{\varepsilon},\tilde{u}_{t}^{\varepsilon},\tilde {u}_{t-\delta_{2}}^{\varepsilon})\), \(\Theta_{t}^{\theta}= (t,X_{t}^{\theta},X_{t-\delta_{1}}^{\theta},u_{t}^{\theta},u_{t-\delta_{2}}^{\theta})\). Let us introduce the following variational equation:
$$ \textstyle\begin{cases} dX_{t}^{1}= [ A_{1}(t)X_{t}^{1}+B_{1}(t)X_{t-\delta_{1}}^{1}+C_{1}(t)u_{t}+D_{1}(t)u_{t-\delta _{2}} ]\,dt\\ \hphantom{dX_{t}^{1}=}{}+ [ A_{2}(t)X_{t}^{1}+B_{2}(t)X_{t-\delta_{1}}^{1}+C_{2}(t)u_{t}+D_{2}(t)u_{t-\delta _{2}} ]\,dW_{t}, \quad0\leq t\leq T,\\ X_{t}^{1}=0,\quad-\delta_{1}\leq t\leq0. \end{cases} $$
(12)
It is easy to check that (12) admits a unique solution \(X^{1}_{\cdot}\in S^{2}(0,T;\mathbb{R}^{n})\).

The following result is a necessary condition for \(\tilde {u}_{\cdot}^{\varepsilon}\).

Proposition 7

Assume (H1)-(H2). Then there exists \(C>0\), independent of ε, such that
$$\begin{aligned} &\mathbb{E} \int_{0}^{T}\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[D_{1}(t+\delta _{2})^{T}\tilde{Y}_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+ \delta_{2})^{T}\tilde {Z}_{t+\delta_{2}}^{\varepsilon}+l_{v_{\delta}} \bigl(\tilde{\Theta}_{t+\delta _{2}}^{\varepsilon}\bigr)\bigr] \\ &\quad{} +C_{1}(t)^{T}\tilde{Y}_{t}^{\varepsilon}+C_{2}(t)^{T} \tilde {Z}_{t}^{\varepsilon}+l_{v}\bigl(\tilde{ \Theta}_{t}^{\varepsilon}\bigr) ,v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle \,dt\geq-C\sqrt{\varepsilon},\quad \forall v\in U. \end{aligned}$$
(13)

Proof

Following the proof of Lemma 3.3 in [11], we have
$$\begin{aligned} \lim_{\theta\downarrow0}\theta^{-1} \bigl[J\bigl(u_{\cdot}^{\theta}\bigr)-J\bigl(\tilde{u}_{\cdot}^{\varepsilon}\bigr) \bigr] &=\mathbb{E} \biggl\{ \bigl\langle \Phi_{x}\bigl(\tilde{X}_{T}^{\varepsilon}\bigr),X_{T}^{1}\bigr\rangle + \int_{0}^{T} \bigl[\bigl\langle l_{x} \bigl(\tilde{\Theta }_{t}^{\varepsilon}\bigr),X_{t}^{1} \bigr\rangle \\ & \quad{} +\bigl\langle l_{x_{\delta}}\bigl(\tilde{\Theta}_{t}^{\varepsilon}\bigr),X_{t-\delta_{1}}^{1}\bigr\rangle +\bigl\langle l_{v} \bigl(\tilde{\Theta}_{t}^{\varepsilon}\bigr),u_{t}\bigr\rangle +\bigl\langle l_{v_{\delta}}\bigl(\tilde{\Theta}_{t}^{\varepsilon}\bigr), u_{t-\delta_{2}}\bigr\rangle \bigr]\,dt \biggr\} . \end{aligned}$$
Using a change of variables leads to
$$\mathbb{E} \int_{0}^{T}\bigl\langle l_{x_{\delta}}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr),X_{t-\delta_{1}}^{1} \bigr\rangle \,dt =\mathbb{E} \int_{-\delta_{1}}^{T-\delta_{1}}\bigl\langle \mathbb {E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\tilde{\Theta}_{t+\delta _{1}}^{\varepsilon}\bigr) \bigr],X_{t}^{1}\bigr\rangle \,dt. $$
Then, since \(X_{t}^{1}=0\) for \(-\delta_{1}\leq t<0\) and \(l_{x_{\delta}}(t,x,x_{\delta},v,v_{\delta})=0\) for \(T< t\leq T+\delta_{1}\), we have
$$\mathbb{E} \int_{0}^{T}\bigl\langle l_{x_{\delta}}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr),X_{t-\delta_{1}}^{1} \bigr\rangle \,dt =\mathbb{E} \int_{0}^{T}\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\tilde{\Theta}_{t+\delta_{1}}^{\varepsilon}\bigr) \bigr],X_{t}^{1}\bigr\rangle \,dt. $$
Similarly,
$$\mathbb{E} \int_{0}^{T}\bigl\langle l_{v_{\delta}}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr),u_{t-\delta_{2}}\bigr\rangle \,dt =\mathbb{E} \int_{0}^{T}\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{v_{\delta}}\bigl(\tilde{\Theta}_{t+\delta_{2}}^{\varepsilon}\bigr) \bigr],u_{t}\bigr\rangle \,dt. $$
Consequently, from (11) it follows that
$$\begin{aligned} &\mathbb{E}\bigl\langle \Phi_{x}\bigl( \tilde{X}_{T}^{\varepsilon}\bigr),X_{T}^{1}\bigr\rangle +\mathbb{E} \int_{0}^{T}\bigl\langle l_{x}\bigl( \tilde{\Theta}_{t}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\tilde{\Theta}_{t+\delta _{1}}^{\varepsilon}\bigr) \bigr],X_{t}^{1}\bigr\rangle \,dt \\ &\quad{} +\mathbb{E} \int_{0}^{T}\bigl\langle l_{v}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{v_{\delta}}\bigl(\tilde {\Theta}_{t+\delta_{2}}^{\varepsilon}\bigr) \bigr], u_{t}\bigr\rangle \,dt\geq-C\sqrt {\varepsilon}. \end{aligned}$$
(14)
On the other hand, applying Itô’s formula to \(\langle X_{t}^{1},\tilde {Y}_{t}^{\varepsilon}\rangle\) gives
$$\begin{aligned} &\mathbb{E}\bigl\langle \Phi_{x}\bigl(\tilde{X}_{T}^{\varepsilon}\bigr),X_{T}^{1}\bigr\rangle +\mathbb{E} \int_{0}^{T}\bigl\langle l_{x}\bigl( \tilde{\Theta}_{t}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\tilde{\Theta}_{t+\delta _{1}}^{\varepsilon}\bigr) \bigr], X_{t}^{1}\bigr\rangle \,dt \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle B_{1}(t)^{T} \tilde{Y}_{t}^{\varepsilon}, X^{1}_{t-\delta_{1}}\bigr\rangle -\bigl\langle \mathbb{E}^{\mathcal {F}_{t}}\bigl[B_{1}(t+ \delta_{1})^{T}\tilde{Y}_{t+\delta_{1}}^{\varepsilon}\bigr],X^{1}_{t}\bigr\rangle \bigr)\,dt \\ &\quad\quad{} +\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle B_{2}(t)^{T} \tilde {Z}_{t}^{\varepsilon}, X^{1}_{t-\delta_{1}}\bigr\rangle -\bigl\langle \mathbb {E}^{\mathcal{F}_{t}}\bigl[B_{2}(t+ \delta_{1})^{T}\tilde{Z}_{t+\delta _{1}}^{\varepsilon}\bigr],X^{1}_{t}\bigr\rangle \bigr)\,dt \\ &\quad\quad{} +\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T} \tilde {Y}_{t}^{\varepsilon}+C_{2}(t)^{T} \tilde{Z}_{t}^{\varepsilon},u_{t}\bigr\rangle \,dt + \mathbb{E} \int_{0}^{T}\bigl\langle D_{1}(t)^{T} \tilde{Y}_{t}^{\varepsilon}+D_{2}(t)^{T} \tilde{Z}_{t}^{\varepsilon},u_{t-\delta_{2}}\bigr\rangle \,dt. \end{aligned}$$
Then we can use a change of variables to get
$$\begin{aligned} &\mathbb{E}\bigl\langle \Phi_{x}\bigl(\tilde{X}_{T}^{\varepsilon}\bigr),X_{T}^{1}\bigr\rangle +\mathbb{E} \int_{0}^{T}\bigl\langle l_{x}\bigl( \tilde{\Theta}_{t}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\tilde{\Theta}_{t+\delta _{1}}^{\varepsilon}\bigr) \bigr], X_{t}^{1}\bigr\rangle \,dt \\ &\quad =\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T} \tilde{Y}_{t}^{\varepsilon}+C_{2}(t)^{T} \tilde{Z}_{t}^{\varepsilon}+\mathbb{E}^{\mathcal{F}_{t}} \bigl[D_{1}(t+\delta_{2})^{T}\tilde{Y}_{t+\delta _{2}}^{\varepsilon}+D_{2}(t+ \delta_{2})^{T}\tilde{Z}_{t+\delta_{2}}^{\varepsilon}\bigr] ,u_{t}\bigr\rangle \,dt. \end{aligned}$$
Combining this equality and (14) gives
$$\begin{aligned} &\mathbb{E} \int_{0}^{T}\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[D_{1}(t+\delta _{2})^{T}\tilde{Y}_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+ \delta_{2})^{T}\tilde {Z}_{t+\delta_{2}}^{\varepsilon}+l_{v_{\delta}} \bigl(\tilde{\Theta}_{t+\delta _{2}}^{\varepsilon}\bigr)\bigr] \\ &\quad{} +C_{1}(t)^{T}\tilde{Y}_{t}^{\varepsilon}+C_{2}(t)^{T} \tilde {Z}_{t}^{\varepsilon}+l_{v}\bigl(\tilde{ \Theta}_{t}^{\varepsilon}\bigr) ,u_{t}\bigr\rangle \,dt\geq-C \sqrt{\varepsilon}. \end{aligned}$$
Recall that \(u_{\cdot}\) is any process in \(M^{2}(-\delta_{2},T)\) satisfying \(\tilde{u}^{\varepsilon}_{\cdot}+u_{\cdot}\in\mathcal{U}\). For any \(v\in U\), let us define \(v_{t}=v\) when \(0< t\leq T\) and \(v_{t}=\eta_{t}\) when \(-\delta_{2}\leq t\leq0\). Replacing \(u_{t}\) in the previous inequality with \(v_{t}-\tilde{u}^{\varepsilon}_{t}\) leads to the conclusion. □
Let us define
$$\begin{aligned}& H(t,x,x_{\delta},y,z,v,v_{\delta})=\bigl\langle b(t,x,x_{\delta},v,v_{\delta}),y\bigr\rangle +\bigl\langle \sigma(t,x,x_{\delta},v,v_{\delta}),z \bigr\rangle +l(t,x,x_{\delta},v,v_{\delta}), \\& \mathcal{H}^{\varepsilon}(t,v)=H \bigl(\Sigma_{t}^{\varepsilon},v,u_{t-\delta_{2}}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[ H \bigl(\Sigma_{t+\delta _{2}}^{\varepsilon},u_{t+\delta_{2}}^{\varepsilon},v \bigr) \bigr], \end{aligned}$$
where \(\Sigma_{t}^{\varepsilon}= (t,X_{t}^{\varepsilon},X_{t-\delta _{1}}^{\varepsilon},Y_{t}^{\varepsilon},Z_{t}^{\varepsilon})\). By the dominated convergence theorem, \(\mathcal{H}^{\varepsilon}(t,v)\) is differentiable in v and
$$\mathcal{H}_{v}^{\varepsilon}(t,v)=H_{v} \bigl( \Sigma_{t}^{\varepsilon},v,u_{t-\delta_{2}}^{\varepsilon}\bigr) + \mathbb{E}^{\mathcal{F}_{t}} \bigl[ H_{v_{\delta}} \bigl(\Sigma _{t+\delta_{2}}^{\varepsilon},u_{t+\delta_{2}}^{\varepsilon},v \bigr) \bigr]. $$
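For the linear system considered in this paper, the gradient formula above unwinds, coefficient by coefficient, into the expression on the left-hand side of (15). As a quick sanity check, the following sketch verifies the partial derivative \(H_{v}\) by finite differences for a scalar linear-delay Hamiltonian; the coefficient values and the quadratic running cost are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: finite-difference check of H_v for a scalar
# linear-delay Hamiltonian
#   H(x, x_d, y, z, v, v_d) = b*y + sigma*z + l,
# with b = B1*x_d + C1*v + D1*v_d, sigma = B2*x_d + C2*v + D2*v_d
# and an illustrative running cost l = 0.5*(v**2 + v_d**2).
# All numeric values below are assumptions for illustration only.

B1, B2, C1, C2, D1, D2 = 0.7, 0.3, 0.4, 0.2, 0.5, 0.1

def H(x, x_d, y, z, v, v_d):
    b = B1 * x_d + C1 * v + D1 * v_d        # drift b(t, x, x_d, v, v_d)
    s = B2 * x_d + C2 * v + D2 * v_d        # diffusion sigma(...)
    l = 0.5 * (v ** 2 + v_d ** 2)           # illustrative running cost
    return b * y + s * z + l

def H_v(x, x_d, y, z, v, v_d):
    # analytic partial derivative in v: C1*y + C2*z + l_v
    return C1 * y + C2 * z + v

pt = (1.0, 0.5, -2.0, 0.8, 0.3, 0.6)        # arbitrary test point
x, x_d, y, z, v, v_d = pt
h = 1e-6
fd = (H(x, x_d, y, z, v + h, v_d) - H(x, x_d, y, z, v - h, v_d)) / (2 * h)
assert abs(fd - H_v(*pt)) < 1e-8            # central difference matches
```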

We are now in a position to establish the necessary condition for near-optimal controls of the stochastic control problem (1)-(3).

Theorem 8

Assume (H1)-(H2). There exists \(C>0\) such that for any \(\gamma\in[0,\frac{1}{2}]\), any \(\varepsilon>0\) and any ε-optimal control pair \((X_{\cdot}^{\varepsilon},u_{\cdot}^{\varepsilon})\) of the stochastic control problem (1)-(3), we have
$$\mathbb{E} \int_{0}^{T}\bigl\langle \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr),v-u_{t}^{\varepsilon}\bigr\rangle \,dt\geq-C\varepsilon ^{\gamma},\quad\forall v\in U. $$

Proof

The inequality is just
$$\begin{aligned} &\mathbb{E} \int_{0}^{T}\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[D_{1}(t+\delta_{2})^{T}Y_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+ \delta _{2})^{T}Z_{t+\delta_{2}}^{\varepsilon}+l_{v_{\delta}} \bigl(\Theta_{t+\delta _{2}}^{\varepsilon}\bigr) \bigr] \\ & \quad{} +C_{1}(t)^{T}Y_{t}^{\varepsilon}+C_{2}(t)^{T}Z_{t}^{\varepsilon}+l_{v} \bigl(\Theta_{t}^{\varepsilon}\bigr) ,v-u_{t}^{\varepsilon}\bigr\rangle \,dt\geq-C\varepsilon^{\gamma}, \quad\forall v\in U. \end{aligned}$$
(15)
In view of (13), we only need to show that the difference between the terms on the left-hand sides of (13) and (15) is not more than \(C\varepsilon^{\gamma}\) for some constant C that is independent of ε and γ. Without loss of generality, assume \(\varepsilon<1\); then \(\sqrt{\varepsilon}\leq\varepsilon^{\gamma}\) since \(\gamma\leq\frac{1}{2}\).
Firstly consider
$$\Gamma_{1}=\mathbb{E} \int_{0}^{T}\bigl\{ \bigl\langle C_{1}(t)^{T} \tilde {Y}_{t}^{\varepsilon},v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle -\bigl\langle C_{1}(t)^{T}Y_{t}^{\varepsilon},v-u_{t}^{\varepsilon}\bigr\rangle \bigr\} \,dt. $$
Note that \(\Gamma_{1}=\Gamma_{11}+\Gamma_{12}\) with
$$\Gamma_{11}=\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T} \bigl(\tilde {Y}_{t}^{\varepsilon}-Y_{t}^{\varepsilon}\bigr),v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle \,dt,\quad\quad \Gamma_{12}=\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T}Y_{t}^{\varepsilon}, u_{t}^{\varepsilon}-\tilde{u}_{t}^{\varepsilon}\bigr\rangle \,dt. $$
Since U is bounded, there exists \(C>0\), which is independent of ε, such that \(\Gamma_{11}\leq C\mathbb{E}\int_{0}^{T} \vert \tilde{Y}_{t}^{\varepsilon}-Y_{t}^{\varepsilon} \vert \,dt\). Then, by Proposition 4, applying the Cauchy-Schwarz inequality we get \(\Gamma_{11}\leq Cd (u_{\cdot}^{\varepsilon},\tilde{u}_{\cdot}^{\varepsilon})\), and furthermore \(\Gamma_{11}\leq C\sqrt{\varepsilon}\) due to (9). On the other hand, using the Cauchy-Schwarz inequality again, in view of (6) and (9), we get \(\Gamma_{12}\leq C\sqrt{\varepsilon}\). Thus, \(\Gamma_{1}\leq C\sqrt{\varepsilon}\leq C\varepsilon^{\gamma}\). Similarly, we can prove
$$\mathbb{E} \int_{0}^{T}\bigl\{ \bigl\langle C_{2}(t)^{T} \tilde{Z}_{t}^{\varepsilon},v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle -\bigl\langle C_{2}(t)^{T}Z_{t}^{\varepsilon},v-u_{t}^{\varepsilon}\bigr\rangle \bigr\} \,dt\leq C\sqrt{\varepsilon}. $$
Next, let us consider
$$\Gamma_{2}=\mathbb{E} \int_{0}^{T}\bigl\{ \bigl\langle l_{v} \bigl(\tilde{\Theta }_{t}^{\varepsilon}\bigr),v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle - \bigl\langle l_{v}\bigl(\Theta_{t}^{\varepsilon}\bigr),v-u_{t}^{\varepsilon}\bigr\rangle \bigr\} \,dt. $$
We have \(\Gamma_{2}=\Gamma_{21}+\Gamma_{22}\), where
$$\Gamma_{21}=\mathbb{E} \int_{0}^{T} \bigl\langle l_{v}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr)-l_{v}\bigl( \Theta_{t}^{\varepsilon}\bigr),v-\tilde {u}_{t}^{\varepsilon}\bigr\rangle \,dt, \quad \quad \Gamma_{22}=\mathbb{E} \int_{0}^{T} \bigl\langle l_{v}\bigl( \Theta_{t}^{\varepsilon}\bigr),u_{t}^{\varepsilon}- \tilde{u}_{t}^{\varepsilon}\bigr\rangle \,dt. $$
Note that
$$\Gamma_{21}\leq C\mathbb{E} \int_{0}^{T} \bigl\vert l_{v}\bigl( \tilde{\Theta }_{t}^{\varepsilon}\bigr)-l_{v}\bigl( \Theta_{t}^{\varepsilon}\bigr) \bigr\vert \,dt. $$
By (H2), we can use a change of variables and the Cauchy-Schwarz inequality to get
$$\Gamma_{21}\leq C\sqrt{\mathbb{E} \int_{0}^{T} \bigl\vert \tilde {X}_{t}^{\varepsilon}-X_{t}^{\varepsilon}\bigr\vert ^{2}\,dt}+Cd \bigl(u_{\cdot}^{\varepsilon}, \tilde{u}_{\cdot}^{\varepsilon}\bigr). $$
Then by Proposition 3 and (9) we get \(\Gamma _{21}\leq C\sqrt{\varepsilon}\). Besides, (H1) gives
$$\Gamma_{22}\leq C\mathbb{E} \int_{0}^{T} \bigl(1+ \bigl\vert X_{t}^{\varepsilon}\bigr\vert + \bigl\vert X_{t-\delta_{1}}^{\varepsilon}\bigr\vert \bigr) \bigl\vert u_{t}^{\varepsilon}- \tilde{u}_{t}^{\varepsilon}\bigr\vert \,dt, $$
so, by (4) and (9), we can use the Cauchy-Schwarz inequality again to get \(\Gamma_{22}\leq C\sqrt{\varepsilon}\). Thus, \(\Gamma_{2}\leq C\sqrt{\varepsilon}\leq C\varepsilon^{\gamma}\). Finally, let us consider
$$\begin{aligned} \Gamma_{3}&=\mathbb{E} \int_{0}^{T} \bigl\{ \bigl\langle \mathbb{E}^{\mathcal {F}_{t}}\bigl[D_{1}(t+\delta_{2})^{T} \tilde{Y}_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+\delta_{2})^{T} \tilde{Z}_{t+\delta_{2}}^{\varepsilon}+l_{v_{\delta}}\bigl(\tilde{ \Theta}_{t+\delta_{2}}^{\varepsilon}\bigr)\bigr] ,v-\tilde{u}_{t}^{\varepsilon}\bigr\rangle \\ & \quad{} -\bigl\langle \mathbb{E}^{\mathcal{F}_{t}}\bigl[D_{1}(t+\delta _{2})^{T}Y_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+ \delta_{2})^{T}Z_{t+\delta _{2}}^{\varepsilon}+l_{v_{\delta}} \bigl(\Theta_{t+\delta_{2}}^{\varepsilon}\bigr)\bigr] ,v-u_{t}^{\varepsilon}\bigr\rangle \bigr\} \,dt. \end{aligned}$$
In fact, by using Fubini’s theorem, a change of variables and recalling our assumptions we get
$$\begin{aligned} \Gamma_{3}&=\mathbb{E} \int_{\delta_{2}}^{T} \bigl\{ \bigl\langle D_{1}(t)^{T}\tilde {Y}_{t}^{\varepsilon}+D_{2}(t)^{T} \tilde{Z}_{t}^{\varepsilon}+l_{v_{\delta}}\bigl(\tilde{ \Theta}_{t}^{\varepsilon}\bigr) ,v-\tilde{u}_{t-\delta_{2}}^{\varepsilon}\bigr\rangle \\ & \quad{} -\bigl\langle D_{1}(t)^{T}Y_{t}^{\varepsilon}+D_{2}(t)^{T}Z_{t}^{\varepsilon}+l_{v_{\delta}} \bigl(\Theta_{t}^{\varepsilon}\bigr) ,v-u_{t-\delta_{2}}^{\varepsilon}\bigr\rangle \bigr\} \,dt \\ &=\mathbb{E} \int_{\delta_{2}}^{T} \bigl\{ \bigl\langle D_{1}(t)^{T}\tilde {Y}_{t}^{\varepsilon}, v- \tilde{u}_{t-\delta_{2}}^{\varepsilon}\bigr\rangle - \bigl\langle D_{1}(t)^{T}Y_{t}^{\varepsilon},v-u_{t-\delta_{2}}^{\varepsilon}\bigr\rangle \bigr\} \,dt \\ &\quad{} +\mathbb{E} \int_{\delta_{2}}^{T} \bigl\{ \bigl\langle D_{2}(t)^{T}\tilde {Z}_{t}^{\varepsilon}, v- \tilde{u}_{t-\delta_{2}}^{\varepsilon}\bigr\rangle - \bigl\langle D_{2}(t)^{T}Z_{t}^{\varepsilon},v-u_{t-\delta_{2}}^{\varepsilon}\bigr\rangle \bigr\} \,dt \\ &\quad{} +\mathbb{E} \int_{\delta_{2}}^{T} \bigl\{ \bigl\langle l_{v_{\delta}} \bigl(\tilde {\Theta}_{t+\delta_{2}}^{\varepsilon}\bigr),v-\tilde{u}_{t-\delta _{2}}^{\varepsilon}\bigr\rangle -\bigl\langle l_{v_{\delta}} \bigl(\Theta_{t}^{\varepsilon}\bigr), v-u_{t-\delta_{2}}^{\varepsilon}\bigr\rangle \bigr\} \,dt. \end{aligned}$$
Then similar to \(\Gamma_{1}\) and \(\Gamma_{2}\) we have \(\Gamma_{3}\leq C\varepsilon^{\gamma}\). Thus, (15) can be obtained, and the proof is complete. □

4 Sufficient conditions for near-optimality

In this section, we study under what conditions an admissible control turns out to be near-optimal. For this purpose, let us further assume:

(H3) l and Φ are convex in \((x,x_{\delta},v,v_{\delta})\).

(H4) l is Lipschitz in \((v,v_{\delta})\).
Theorem 9

Let \((X^{\varepsilon}_{\cdot},u^{\varepsilon}_{\cdot})\) be an admissible pair and \((Y_{\cdot}^{\varepsilon},Z_{\cdot}^{\varepsilon})\) the corresponding solution of the adjoint equation (5).
  1. (i)
    Assume (H1)-(H3). If \(u^{\varepsilon}_{\cdot}\) satisfies
    $$ \mathbb{E} \int_{0}^{T}\bigl\langle \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr),v_{t}-u_{t}^{\varepsilon}\bigr\rangle \,dt\geq-\varepsilon ,\quad\forall v_{\cdot}\in\mathcal{U}, $$
    (16)
    then \(J(u^{\varepsilon}_{\cdot})\leq V+\varepsilon\).
     
  2. (ii)
    Assume (H1)-(H4). If \(u^{\varepsilon}_{\cdot}\) satisfies
    $$ \inf_{v_{\cdot}\in\mathcal{U}}\mathbb{E} \int_{0}^{T}\bigl[\mathcal {H}^{\varepsilon}(t,v_{t})- \mathcal{H}^{\varepsilon}\bigl(t,u_{t}^{\varepsilon}\bigr)\bigr]\,dt \geq-\varepsilon^{2}, $$
    (17)
    then there exists \(C'>0\), which is independent of ε, such that \(J(u^{\varepsilon}_{\cdot})\leq V+C'\varepsilon\).
     

Proof

For any \(v_{\cdot}\in\mathcal{U}\), set \(\hat {v}_{t}=v_{t}-u^{\varepsilon}_{t}\) and \(\hat{X}_{t}=X_{t}^{v}-X^{\varepsilon}_{t}\). Applying Itô’s formula to \(\langle\hat{X}_{t},Y_{t}^{\varepsilon}\rangle\) yields
$$\begin{aligned} &\mathbb{E}\bigl\langle \Phi_{x}\bigl(X_{T}^{\varepsilon}\bigr),\hat{X}_{T}\bigr\rangle +\mathbb{E} \int_{0}^{T}\bigl\langle l_{x}\bigl( \Theta_{t}^{\varepsilon}\bigr) +\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{x_{\delta}}\bigl(\Theta_{t+\delta _{1}}^{\varepsilon}\bigr)\bigr], \hat{X}_{t}\bigr\rangle \,dt \\ &\quad =\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle B_{1}(t)^{T}Y_{t}^{\varepsilon}, \hat {X}_{t-\delta_{1}}\bigr\rangle -\bigl\langle \mathbb{E}^{\mathcal {F}_{t}} \bigl[B_{1}(t+\delta_{1})^{T}Y_{t+\delta_{1}}^{\varepsilon}\bigr],\hat {X}_{t}\bigr\rangle \bigr)\,dt \\ & \quad\quad{} +\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle B_{2}(t)^{T}Z_{t}^{\varepsilon}, \hat{X}_{t-\delta_{1}}\bigr\rangle -\bigl\langle \mathbb{E}^{\mathcal{F}_{t}} \bigl[B_{2}(t+\delta_{1})^{T}Z_{t+\delta _{1}}^{\varepsilon}\bigr],\hat{X}_{t}\bigr\rangle \bigr)\,dt \\ & \quad\quad{} +\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T}Y_{t}^{\varepsilon}+C_{2}(t)^{T}Z_{t}^{\varepsilon}, \hat{v}_{t}\bigr\rangle \,dt +\mathbb{E} \int_{0}^{T}\bigl\langle D_{1}(t)^{T}Y_{t}^{\varepsilon}+D_{2}^{T}(t)Z_{t}^{\varepsilon}, \hat{v}_{t-\delta_{2}}\bigr\rangle \,dt. \end{aligned}$$
Then by a change of variables we get
$$\begin{aligned} &\mathbb{E}\bigl\langle \Phi_{x}\bigl(X_{T}^{\varepsilon}\bigr),\hat{X}_{T}\bigr\rangle +\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle l_{x} \bigl(\Theta_{t}^{\varepsilon}\bigr),\hat {X}_{t}\bigr\rangle +\bigl\langle l_{x_{\delta}}\bigl(\Theta_{t}^{\varepsilon}\bigr), \hat{X}_{t-\delta _{1}}\bigr\rangle \bigr) \,dt \\ &\quad =\mathbb{E} \int_{0}^{T}\bigl\langle C_{1}(t)^{T}Y_{t}^{\varepsilon}+C_{2}(t)^{T}Z_{t}^{\varepsilon}+ \mathbb{E}^{\mathcal{F}_{t}} \bigl[D_{1}(t+\delta_{2})^{T}Y_{t+\delta_{2}}^{\varepsilon}+D_{2}(t+ \delta _{2})^{T}Z_{t+\delta_{2}}^{\varepsilon}\bigr], \hat{v}_{t}\bigr\rangle \,dt. \end{aligned}$$
On the other hand, thanks to (H3), we can use a change of variables again to get
$$\begin{aligned} J(v_{\cdot})-J\bigl(u_{\cdot}^{\varepsilon}\bigr)&\geq\mathbb{E} \bigl\langle \Phi _{x}\bigl(X_{T}^{\varepsilon}\bigr), \hat{X}_{T}\bigr\rangle +\mathbb{E} \int_{0}^{T} \bigl(\bigl\langle l_{x} \bigl(\Theta_{t}^{\varepsilon}\bigr),\hat{X}_{t}\bigr\rangle +\bigl\langle l_{x_{\delta}}\bigl(\Theta_{t}^{\varepsilon}\bigr), \hat{X}_{t-\delta _{1}}\bigr\rangle \bigr) \,dt \\ &\quad{} +\mathbb{E} \int_{0}^{T}\bigl\langle l_{v} \bigl( \Theta_{t}^{\varepsilon}\bigr)+\mathbb{E}^{\mathcal{F}_{t}} \bigl[l_{v_{\delta}} \bigl(\Theta _{t+\delta_{2}}^{\varepsilon}\bigr) \bigr], \hat{v}_{t}\bigr\rangle \,dt. \end{aligned}$$
Combining them gives
$$ J(v_{\cdot})-J\bigl(u_{\cdot}^{\varepsilon}\bigr) \geq\mathbb{E} \int_{0}^{T}\bigl\langle \mathcal{H}^{\varepsilon}_{v} \bigl(t,u_{t}^{\varepsilon}\bigr),\hat{v}_{t}\bigr\rangle \,dt. $$
(18)
So, if (16) holds, then \(J(u_{\cdot}^{\varepsilon})\leq J(v_{\cdot})+\varepsilon\). Thus, the conclusion (i) follows from the arbitrariness of \(v_{\cdot}\in \mathcal{U}\).
Next we prove (ii). To this end, define a metric \(\tilde{d}\) on \(\mathcal{U}\) by
$$\tilde{d}\bigl(u_{\cdot},u'_{\cdot}\bigr)=\mathbb{E} \int_{0}^{T}\nu_{t}^{\varepsilon}\bigl\vert u_{t}-u_{t}' \bigr\vert \,dt $$
with \(\nu_{t}^{\varepsilon}=1+ \vert Y_{t}^{\varepsilon} \vert + \vert Z_{t}^{\varepsilon} \vert +\mathbb{E}^{\mathcal{F}_{t}} [ \vert Y_{t+\delta_{2}}^{\varepsilon} \vert + \vert Z_{t+\delta_{2}}^{\varepsilon} \vert ]\). Then \((\mathcal{U},\tilde{d})\) is a complete metric space. Define a functional \(f: \mathcal{U}\rightarrow\mathbb {R}\) by
$$f(u_{\cdot})=\mathbb{E} \int_{0}^{T}\mathcal{H}^{\varepsilon}(t,u_{t}) \,dt. $$
Then, by (H4), there exists \(L>0\) such that \(\vert f(u_{\cdot})-f(u'_{\cdot}) \vert \leq L\tilde {d}(u_{\cdot},u'_{\cdot})\), which shows that f is continuous on \((\mathcal{U},\tilde{d})\). Besides, the assumption (17) shows that
$$f\bigl(u_{\cdot}^{\varepsilon}\bigr)\leq\inf_{u_{\cdot}\in\mathcal{U}}f(u_{\cdot})+\varepsilon^{2}. $$
Consequently, applying Lemma 6 for \(\lambda=\varepsilon\) leads to the existence of \(\tilde{u}_{\cdot}^{\varepsilon}\in\mathcal{U}\) which satisfies
$$\begin{aligned}& \tilde{d}\bigl(\tilde{u}_{\cdot}^{\varepsilon},u_{\cdot}^{\varepsilon}\bigr)\leq \varepsilon, \end{aligned}$$
(19)
$$\begin{aligned}& F\bigl(\tilde{u}_{\cdot}^{\varepsilon}\bigr)\leq\inf _{u_{\cdot}\in\mathcal {U}}F(u_{\cdot}), \end{aligned}$$
(20)
where
$$F(u_{\cdot})\triangleq f(u_{\cdot})+\varepsilon\tilde{d}\bigl( \tilde {u}_{\cdot}^{\varepsilon},u_{\cdot}\bigr) =\mathbb{E} \int_{0}^{T} \bigl[\mathcal{H}^{\varepsilon}(t,u_{t}) +\varepsilon\nu_{t}^{\varepsilon}\bigl\vert \tilde{u}_{t}^{\varepsilon}-u_{t} \bigr\vert \bigr]\,dt. $$
Note that (20) implies a pointwise maximum principle, that is, for a.e. \(t\in[0,T]\), a.s., \(\mathcal{H}^{\varepsilon}(t,v) +\varepsilon\nu_{t}^{\varepsilon} \vert \tilde{u}_{t}^{\varepsilon}-v \vert \) attains its minimum over U at \(\tilde {u}_{t}^{\varepsilon}\). By Propositions 2.3.2 and 2.3.3 in [30], this yields
$$0\in\partial_{v}\mathcal{H}^{\varepsilon}\bigl(t, \tilde{u}_{t}^{\varepsilon}\bigr) + \bigl[-\varepsilon \nu_{t}^{\varepsilon},\varepsilon\nu_{t}^{\varepsilon}\bigr], $$
where \(\partial\varphi(x)\) denotes Clarke’s generalized gradient of φ at x. Since \(\mathcal{H}^{\varepsilon}(t,v)\) is differentiable in v, the previous inclusion implies the existence of \(\beta_{t}^{\varepsilon}\in [-\varepsilon\nu_{t}^{\varepsilon},\varepsilon\nu_{t}^{\varepsilon}]\) such that \(\mathcal{H}^{\varepsilon}_{v}(t,\tilde{u}_{t}^{\varepsilon})=-\beta _{t}^{\varepsilon}\). Thus,
$$ \bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t, \tilde{u}_{t}^{\varepsilon}\bigr) \bigr\vert \leq\varepsilon \nu_{t}^{\varepsilon}. $$
(21)
Then, by (H4) and the equality
$$\begin{aligned} \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) &=\mathcal{H}^{\varepsilon}_{v}\bigl(t,\tilde{u}_{t}^{\varepsilon}\bigr) + \bigl[H_{v} \bigl(\Sigma_{t}^{\varepsilon},u_{t}^{\varepsilon},u_{t-\delta _{2}}^{\varepsilon}\bigr) -H_{v} \bigl(\Sigma_{t}^{\varepsilon}, \tilde{u}_{t}^{\varepsilon},u_{t-\delta _{2}}^{\varepsilon}\bigr) \bigr] \\ &\quad{} +\mathbb{E}^{\mathcal{F}_{t}} \bigl[H_{v_{\delta}} \bigl(\Sigma _{t+\delta_{2}}^{\varepsilon},u_{t+\delta_{2}}^{\varepsilon},u_{t}^{\varepsilon}\bigr) -H_{v_{\delta}} \bigl( \Sigma_{t+\delta _{2}}^{\varepsilon},u_{t+\delta_{2}}^{\varepsilon}, \tilde{u}_{t}^{\varepsilon}\bigr) \bigr], \end{aligned}$$
there exists \(C>0\), independent of ε, such that
$$\bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr\vert \leq\varepsilon\nu_{t}^{\varepsilon}+C \nu_{t}^{\varepsilon}\bigl\vert \tilde{u}_{t}^{\varepsilon}-u_{t}^{\varepsilon}\bigr\vert . $$
This, together with (6) and (19), leads to the existence of \(C''>0\), independent of ε, such that
$$\mathbb{E} \int_{0}^{T} \bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr\vert \,dt\leq C''\varepsilon. $$
Consequently, considering the boundedness of U, we can use (18) to derive
$$J(v_{\cdot})-J\bigl(u_{\cdot}^{\varepsilon}\bigr) \geq - \mathbb{E} \int_{0}^{T} \bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr\vert \vert \hat{v}_{t} \vert \,dt \geq -C'\varepsilon. $$
Since \(v_{\cdot}\in\mathcal{U}\) is arbitrarily chosen, this completes the proof. □

Remark 10

Theorem 9(i) shows that, under (H1)-(H3), an admissible control \(u_{\cdot}^{\varepsilon}\) of problem (1)-(3) is ε-optimal if it satisfies (16). By Theorem 9(ii), we know that, under (H1)-(H4), if an admissible control \(u_{\cdot}^{\varepsilon}\) of problem (1)-(3) satisfies
$$ \inf_{v_{\cdot}\in\mathcal{U}}\mathbb{E} \int_{0}^{T} \bigl[\mathcal{H}^{\varepsilon}(t,v_{t})- \mathcal{H}^{\varepsilon}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr]\,dt \geq- \bigl(\varepsilon/C' \bigr)^{2}, $$
then it is indeed ε-optimal. Note that neither of the conclusions in Theorem 9(i) and (ii) implies the other in general.

5 Applications

In this section, the theoretical results are applied to two examples.

Example 1

Take \(U=[0,1]\). Assume that \(X_{\cdot}\) satisfies
$$dX_{t}^{v}=v_{t-\delta}\,dW_{t},\quad 0\leq t\leq T;\quad\quad X_{t}^{v}=\xi_{t},\quad -\delta\leq t\leq0. $$
The objective is to minimize
$$J(v_{\cdot})=\mathbb{E} \biggl[ \int_{0}^{T}v_{t}\,dt+\frac {1}{2} \bigl(X_{T}^{v}\bigr)^{2} \biggr]. $$
In this case, the adjoint equation is described by
$$dY_{t}^{v}=Z_{t}^{v}\,dW_{t},\quad 0 \leq t\leq T;\quad \quad Y_{T}^{v}=X_{T}^{v};\quad \quad Y_{t}^{v}=Z_{t}^{v}=0, \quad T< t\leq T+ \delta. $$
Comparing the adjoint equation with the system equation, by the uniqueness of the solutions, we get \((Y_{t}^{v},Z_{t}^{v})=(X_{t}^{v},v_{t-\delta})\) for \(0\leq t\leq T\).
Note that \(H(t,x,x_{\delta},y,z,v,v_{\delta})=v_{\delta}z+v\) and
$$\mathcal{H}^{\varepsilon}(t,v)= \bigl(1+\mathbb{E}^{\mathcal {F}_{t}} \bigl[Z_{t+\delta}^{\varepsilon}\bigr] \bigr)v+ \bigl(Z_{t}^{\varepsilon}u_{t-\delta}^{\varepsilon}+\mathbb{E}^{\mathcal {F}_{t}} \bigl[u_{t+\delta}^{\varepsilon}\bigr] \bigr). $$
Thus, \(\mathcal{H}^{\varepsilon}_{v}(t,u_{t}^{\varepsilon})=1+\mathbb{E}^{\mathcal {F}_{t}} [Z_{t+\delta}^{\varepsilon}]\). Besides, since \(Z_{t}^{\varepsilon}=u^{\varepsilon}_{t-\delta}\) for \(0\leq t\leq T\) and \(Z_{t}^{\varepsilon}=0\) for \(T< t\leq T+\delta\), we have \(1+\mathbb{E}^{\mathcal{F}_{t}} [Z_{t+\delta}^{\varepsilon}]=f^{\varepsilon}(t)\), with \(f^{\varepsilon}(t)=1+u_{t}^{\varepsilon}\) for \(0\leq t\leq T-\delta\) and \(f^{\varepsilon}(t)=1\) for \(T-\delta< t\leq T\). Thus,
$$\inf_{v_{\cdot}\in\mathcal{U}}\mathbb{E} \int_{0}^{T}\mathcal {H}^{\varepsilon}_{v} \bigl(t,u_{t}^{\varepsilon}\bigr) \bigl(v_{t}-u_{t}^{\varepsilon}\bigr)\,dt =-\mathbb{E} \int_{0}^{T}f^{\varepsilon}(t)u^{\varepsilon}_{t}\,dt. $$
By Theorem 9(i), an admissible control \(u_{\cdot}^{\varepsilon}\) is ε-optimal if
$$\mathbb{E} \int_{0}^{T}f^{\varepsilon}(t)u^{\varepsilon}_{t}\,dt= \mathbb {E} \biggl[ \int_{0}^{T-\delta}\bigl(1+u_{t}^{\varepsilon}\bigr)u^{\varepsilon}_{t}\,dt+ \int_{T-\delta}^{T}u_{t}^{\varepsilon}\,dt \biggr]\leq\varepsilon $$
and thus if
$$\mathbb{E} \int_{0}^{T}u^{\varepsilon}_{t}\,dt\leq \varepsilon/2. $$
Finally, for sufficiently small ε, each of the following is an ε-optimal control:
$$u_{t}^{\varepsilon}=\frac{\varepsilon}{2T},\quad\quad \frac{\varepsilon t}{T^{2}},\quad\quad \frac{\min\{ \vert W_{t} \vert ,\varepsilon\}}{3T}. $$
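The first candidate control is deterministic, so the criterion \(\mathbb{E}\int_{0}^{T}f^{\varepsilon}(t)u^{\varepsilon}_{t}\,dt\leq\varepsilon\) can be checked by direct numerical integration; a minimal sketch, where the concrete values of T, δ and ε are illustrative assumptions:

```python
# Hedged sketch: check E \int_0^T f^eps(t) u_t dt <= eps for the
# constant control u_t = eps/(2T) of Example 1 (deterministic, so
# the expectation drops).  T, delta and eps are assumed values.
import numpy as np

T, delta, eps = 1.0, 0.25, 0.1
n = 100_000
t = (np.arange(n) + 0.5) * (T / n)            # midpoint grid on [0, T]

u = np.full(n, eps / (2 * T))                 # constant candidate control
f = np.where(t <= T - delta, 1.0 + u, 1.0)    # f^eps(t) from the text
integral = np.sum(f * u) * (T / n)

# integral = eps/2 + (T - delta)*eps**2/(4*T**2) = 0.051875 <= eps
assert integral <= eps
```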

Example 2

We consider a cash management problem. Denote by \(X_{\cdot}\) the cash flow of an agent, and by \(v_{\cdot}\) the control strategy, which is the rate of cash disturbance (cash inflow or outflow). Since time delays are unavoidable in practice, we assume that the dynamics of the cash flow are described by
$$ \textstyle\begin{cases} dX_{t}^{v}=[B_{1}(t)X_{t-\delta_{1}}^{v}+D_{1}(t)v_{t-\delta _{2}}]\,dt+[B_{2}(t)X_{t-\delta_{1}}^{v}+D_{2}(t)v_{t-\delta_{2}}]\,dW_{t},\quad 0\leq t\leq T,\\ X_{t}^{v}=\xi_{t},\quad-\delta_{1}\leq t\leq0, \end{cases} $$
where the time-varying coefficients are bounded adapted processes. Our objective is to minimize the following functional:
$$J(v_{\cdot})=\mathbb{E} \biggl[ \int_{0}^{T}\frac{1}{2}N(t) \bigl(v_{t}-\alpha (t)\bigr)^{2} \,dt-QX_{T}^{v} \biggr], $$
where \(N(\cdot)\) and \(\alpha(\cdot)\) are bounded adapted processes, and Q is a bounded \(\mathcal{F}_{T}\)-measurable random variable. \(N(\cdot)\) and Q are weight coefficients, and \(\alpha(\cdot)\) is interpreted as a dynamic benchmark. For clarity, we assume that \(U=[c,d]\) with suitable constants c and d, \(c\geq0\), \(N(t)>0\) and \(Q>0\). In this case, the objective contains two parts: one is to maximize an expected terminal reward, and the other is to minimize a quadratic criterion on the control strategy \(v_{\cdot}\), which prevents it from deviating too far from the benchmark. Let us assume w.l.o.g. that \(\alpha(t)\in U\) for all \(t\in[0,T]\), and \(v_{t}=c\) for every admissible control \(v_{\cdot}\) and \(t\in(T-\delta_{2},T]\).
It is easy to check that the assumptions (H1)-(H4) hold true for this example. The adjoint equation takes the following form:
$$ \textstyle\begin{cases} -dY_{t}=\mathbb{E}^{\mathcal{F}_{t}} [B_{1}(t+\delta_{1})Y_{t+\delta _{1}}+B_{2}(t+\delta_{1})Z_{t+\delta_{1}} ]\,dt-Z_{t}\,dW_{t},\quad 0\leq t\leq T,\\ Y_{T}=-Q,\\ Y_{t}=0,\quad\quad Z_{t}=0,\quad T< t\leq T+\delta_{1}. \end{cases} $$
Note that the solution is independent of the control. Similar to [11], if the coefficients Q, \(B_{1}(\cdot)\), \(B_{2}(\cdot)\) are Malliavin differentiable, then this ABSDE can be solved interval by interval in Malliavin’s sense to get its unique solution \((Y_{\cdot},Z_{\cdot})\).
The Hamiltonian H takes the following form:
$$H(t,x,x_{\delta},y,z,v,v_{\delta})=N(t) \bigl(v-\alpha (t) \bigr)^{2}/2+\bigl[D_{1}(t)y+D_{2}(t)z \bigr]v_{\delta}+\bigl[B_{1}(t)y+B_{2}(t)z \bigr]x_{\delta}. $$
Set \(\lambda(t)=\mathbb{E}^{\mathcal{F}_{t}} [D_{1}(t+\delta _{2})Y_{t+\delta_{2}}+D_{2}(t+\delta_{2})Z_{t+\delta_{2}} ]\) and \(\mathbb{H}(t,v)=N(t) (v-\alpha(t))^{2}/2+\lambda(t)v\). Then by the definition of \(\mathcal{H}^{\varepsilon}(t,v)\) we have
$$\mathcal{H}_{v}^{\varepsilon}(t,v)=N(t) \bigl(v-\alpha(t)\bigr)+ \lambda(t),\quad \quad \mathcal{H}^{\varepsilon}(t,v)-\mathcal{H}^{\varepsilon}(t,u) = \mathbb{H}(t,v)-\mathbb{H}(t,u). $$
Set \(P_{t}=(\alpha(t)N(t)-\lambda(t))/N(t)\) and \(\gamma_{t}^{\varepsilon}=\inf_{v_{\cdot}\in\mathcal{U}}\mathbb{E}\int _{0}^{T} [\mathcal{H}^{\varepsilon}(t,v_{t}) -\mathcal{H}^{\varepsilon}(t,u_{t}^{\varepsilon}) ] \,dt\). Then
$$\gamma_{t}^{\varepsilon}=\inf_{v_{\cdot}\in\mathcal{U}}\mathbb{E} \int _{0}^{T} \bigl[\mathbb{H}(t,v_{t}) -\mathbb{H}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr] \,dt. $$
By Remark 10, an admissible control \(u_{\cdot}^{\varepsilon}\) is ε-optimal if it satisfies
$$\gamma_{t}^{\varepsilon}\geq-\bigl(\varepsilon/C' \bigr)^{2}. $$
Particularly, if \(P_{t}\in U\) for all \(t\in[0,T]\), then it is easy to check that
$$\gamma_{t}^{\varepsilon}=-\mathbb{E} \int_{0}^{T} \frac{1}{2}N(t) \bigl(u_{t}^{\varepsilon}-P_{t} \bigr)^{2}\,dt. $$
Consequently, an adapted process \(u_{\cdot}^{\varepsilon}\) is ε-optimal if it takes values in U and satisfies
$$ \mathbb{E} \int_{0}^{T}N(t) \bigl(u_{t}^{\varepsilon}-P_{t} \bigr)^{2}\,dt\leq 2\bigl(\varepsilon/C' \bigr)^{2}. $$
(22)
By (22), in order to find an ε-optimal control, we need to compute \(C'\). To this end, we follow the proof of Theorem 9(ii). Let (17) hold. Recall that \(U=[c,d]\) with \(c\geq0\), and \(\alpha(t)\in U\). Since
$$\mathcal{H}^{\varepsilon}(t,v)-\mathcal{H}^{\varepsilon}(t,u)=\bigl[N(t) \bigl(u+v-2\alpha(t)\bigr)/2+\lambda(t)\bigr](v-u), $$
we have
$$\bigl\vert \mathcal{H}^{\varepsilon}(t,v)-\mathcal{H}^{\varepsilon}(t,u) \bigr\vert \leq\nu_{t} \vert v-u \vert , $$
where
$$\nu_{t}=1+dN(t)+\mathbb{E}^{\mathcal{F}_{t}} \bigl[ \bigl\vert D_{1}(t+\delta_{2})Y_{t+\delta_{2}} \bigr\vert + \bigl\vert D_{2}(t+\delta _{2})Z_{t+\delta_{2}} \bigr\vert \bigr]. $$
So \(f(u_{\cdot})\triangleq\mathbb{E}\int_{0}^{T}\mathcal{H}^{\varepsilon}(t,u_{t})\,dt\) satisfies
$$\bigl\vert f(u_{\cdot})-f(v_{\cdot}) \bigr\vert \leq \tilde{d}(u_{\cdot},v_{\cdot})\triangleq \mathbb{E} \int_{0}^{T}\nu_{t} \vert u_{t}-v_{t} \vert \,dt. $$
On the one hand, by (21) we have \(\vert \mathcal{H}^{\varepsilon}_{v}(t,\tilde {u}_{t}^{\varepsilon}) \vert \leq\varepsilon\nu_{t}\). On the other hand, \(\mathcal{H}^{\varepsilon}_{v}(t,u_{t}^{\varepsilon})=\mathcal {H}^{\varepsilon}_{v}(t,\tilde{u}_{t}^{\varepsilon}) +N(t)(u_{t}^{\varepsilon}-\tilde{u}_{t}^{\varepsilon})\). Thus,
$$\bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigr\vert \leq\varepsilon\nu_{t} +N(t) \bigl\vert u_{t}^{\varepsilon}-\tilde{u}_{t}^{\varepsilon}\bigr\vert \leq\varepsilon\nu_{t}+\nu_{t} \bigl\vert \tilde{u}_{t}^{\varepsilon}-u_{t}^{\varepsilon}\bigr\vert /d, $$
and so
$$\bigl\vert \mathcal{H}^{\varepsilon}_{v}\bigl(t,u_{t}^{\varepsilon}\bigr) \bigl(v_{t}-u_{t}^{\varepsilon}\bigr) \bigr\vert \leq d\varepsilon\nu_{t}+\nu _{t} \bigl\vert \tilde{u}_{t}^{\varepsilon}-u_{t}^{\varepsilon}\bigr\vert . $$
Therefore,
$$J(v_{\cdot})-J\bigl(u_{\cdot}^{\varepsilon}\bigr)\geq-\mathbb{E} \int_{0}^{T} \bigl\vert \mathcal{H}^{\varepsilon}_{v} \bigl(t,u_{t}^{\varepsilon}\bigr) \bigl(v_{t}-u_{t}^{\varepsilon}\bigr) \bigr\vert \,dt \geq-d\varepsilon\mathbb{E} \int_{0}^{T}\nu_{t}\,dt-\tilde{d} \bigl(\tilde {u}_{\cdot}^{\varepsilon},u_{\cdot}^{\varepsilon}\bigr). $$
Next, in view of (19), we have
$$J(v_{\cdot})-J\bigl(u_{\cdot}^{\varepsilon}\bigr)\geq - \biggl(1+d\mathbb{E} \int_{0}^{T}\nu_{t}\,dt \biggr)\varepsilon, $$
and thus
$$J\bigl(u_{\cdot}^{\varepsilon}\bigr)\leq V+ \biggl(1+d\mathbb{E} \int_{0}^{T}\nu _{t}\,dt \biggr)\varepsilon, $$
due to the arbitrariness of \(v_{\cdot}\in\mathcal{U}\). So \(C'\) could be any constant satisfying
$$C'\geq1+d\mathbb{E} \int_{0}^{T}\nu_{t}\,dt. $$
Finally, let us give a numerical simulation. Assume that the coefficients are all deterministic and time-invariant. Take \(c=0\), \(d=3\), \(T=2\), \(\delta_{1}=\delta _{2}=0.1\), \(B_{1}(t)=B_{2}(t)=0.7\), \(Q=1\), \(D_{1}(t)=D_{2}(t)=0.5\), \(N(t)=1\), \(\alpha(t)=0\). In this case, it is easy to check that \(Z_{t}=0\) for \(0\leq t\leq2.1\), and \(Y_{t}\) solves the following ODE:
$$Y_{t}'=-0.7Y_{t+0.1}, \quad 0\leq t\leq2;\quad\quad Y_{2}=-1;\quad \quad Y_{t}=0,\quad 2< t\leq2.1, $$
which can be solved explicitly by the method of steps, subdividing \([0,2]\) backward into intervals of length 0.1:
$$\begin{aligned} &Y_{t}=-1, \quad1.9\leq t\leq2, \\ &Y_{t}=-1-0.7(1.9-t), \quad1.8\leq t\leq1.9, \\ &\vdots \end{aligned}$$
The graph of \(Y_{t}\) is shown in Figure 1. Since \(0\leq-Y_{t}\leq e^{0.7(2-t)}\) on \([0,2]\), it is easy to check that \(1+d\int_{0}^{T}\nu_{t}\,dt<31\), so we can take \(C'=100\). Since \(P_{t}=-0.5Y_{t+0.1}\in U\), we can conclude that an adapted process \(u_{\cdot}^{\varepsilon}\) is ε-optimal if it takes values in U and satisfies
$$\mathbb{E} \int_{0}^{2} \bigl(u_{t}^{\varepsilon}+0.5Y_{t+0.1} \bigr)^{2}\,dt\leq2(\varepsilon/100)^{2}. $$
Let us give an example of ε-optimal control for sufficiently small ε:
$$ u_{t}^{\varepsilon}= \textstyle\begin{cases} -0.5Y_{t+0.1}+\varepsilon/100,&0\leq t\leq1.9;\\ 0,&1.9< t\leq2. \end{cases} $$
Figure 1: The function \(Y_{t}\).
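The computations in this example can be reproduced numerically. The following sketch solves the delay ODE for \(Y_{t}\) backward by the method of steps (a backward Euler discretization), bounds \(1+d\int_{0}^{T}\nu_{t}\,dt\), and verifies (22) for the displayed control; the grid step and the concrete value of ε are assumptions of this sketch.

```python
# Hedged numerical sketch for Example 2 with the data above:
# solve Y'_t = -0.7 * Y_{t+0.1} backward on [0, 2] by the method of
# steps (a backward Euler sweep), bound 1 + d*int_0^2 nu_t dt, and
# verify criterion (22) for the displayed control.
# The grid step h and the value of eps are assumptions of this sketch.
import numpy as np

h = 1e-4
m = int(round(0.1 / h))              # grid points per delay length 0.1
t = np.arange(0.0, 2.1 + h / 2, h)
i2 = int(round(2.0 / h))             # index of t = 2

Y = np.zeros_like(t)                 # Y_t = 0 on (2, 2.1]
Y[i2] = -1.0                         # terminal condition Y_2 = -Q = -1
for i in range(i2 - 1, -1, -1):      # Y' = -0.7*Y_{t+0.1}, backward Euler
    Y[i] = Y[i + 1] + h * 0.7 * Y[i + 1 + m]

# values produced by the method of steps in the text
assert abs(Y[int(round(1.95 / h))] + 1.0) < 1e-9   # Y_t = -1 on [1.9, 2]
assert abs(Y[int(round(1.8 / h))] + 1.07) < 1e-3   # Y_1.8 = -1 - 0.07

# nu_t = 1 + d*N(t) + 0.5*|Y_{t+0.1}| + 0.5*|Z_{t+0.1}| = 4 + 0.5*|Y_{t+0.1}|
Yshift = Y[m:i2 + m]                 # Y_{t+0.1} on the grid of [0, 2)
nu_int = np.sum(4.0 + 0.5 * np.abs(Yshift)) * h
assert 1 + 3 * nu_int < 31           # so taking C' = 100 is safe

# criterion (22) for the displayed control, with an assumed small eps
eps = 0.01
tt = t[:i2]
u = np.where(tt <= 1.9 + h / 2, -0.5 * Yshift + eps / 100, 0.0)
assert u.min() >= 0.0 and u.max() <= 3.0           # u takes values in U
lhs = np.sum((u + 0.5 * Yshift) ** 2) * h          # deterministic, E drops
assert lhs <= 2 * (eps / 100) ** 2                 # (22) holds
```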

6 Conclusion

We study near-optimal controls for a class of stochastic delay control problems with convex control domain. By the stochastic maximum principle and Ekeland's variational principle, we establish necessary conditions for a control to be near-optimal. Sufficient conditions are also given, which show when an admissible control is indeed near-optimal. Two illustrative examples are given, for which near-optimal controls are obtained in explicit form. Future work includes the nonconvex control domain case and linear-quadratic problems in terms of Riccati equations.

Declarations

Acknowledgements

The author would like to thank the editor and the reviewers for their valuable comments and suggestions that helped to improve this paper. This work is supported by the National Natural Science Foundation of China (11371228, 11471192), Natural Science Foundation of Shandong Province (ZR2015JL003, ZR2015JL021), Fostering Project of Dominant Discipline and Talent Team of Shandong Province Higher Education Institutions, Special Funds of Taishan Scholar Project (tsqn20161041) and Project of Shandong Province Higher Educational Science and Technology Program (J15LI02).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics

References

  1. Mohammed, SE: Stochastic Functional Differential Equations. Pitman, London (1984)
  2. Mao, XR: Stochastic Differential Equations and Their Applications. Horwood, New York (1997)
  3. Arriojas, M, Hu, Y, Mohammed, SE, Pap, G: A delayed Black and Scholes formula. Stoch. Anal. Appl. 25(2), 471-492 (2007)
  4. Elsanousi, I, Oksendal, B, Sulem, A: Some solvable stochastic control problems with delay. Stochastics 71(1-2), 69-89 (2000)
  5. Larssen, B: Dynamic programming in stochastic control of systems with delay. Stochastics 74(3), 651-673 (2002)
  6. Oksendal, B, Sulem, A, Zhang, TS: Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Adv. Appl. Probab. 43, 572-596 (2011)
  7. Agram, N, Haadem, S, Oksendal, B, Proske, F: A maximum principle for infinite horizon delay equations. SIAM J. Math. Anal. 45(4), 2499-2522 (2013)
  8. Shen, Y, Meng, QX, Shi, P: Maximum principle for mean-field jump-diffusion stochastic delay differential equations and its application to finance. Automatica 50(6), 1565-1579 (2014)
  9. Peng, SG, Yang, Z: Anticipated backward stochastic differential equations. Ann. Probab. 37(3), 877-902 (2009)
  10. Chen, L, Wu, Z: Maximum principle for the stochastic optimal control problem with delay and application. Automatica 46(6), 1074-1080 (2010)
  11. Yu, ZY: The stochastic maximum principle for optimal control problems of delay systems involving continuous and impulse controls. Automatica 48(10), 2420-2432 (2012)
  12. Huang, JH, Shi, JT: Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations. ESAIM Control Optim. Calc. Var. 18(4), 1073-1096 (2012)
  13. Du, H, Huang, JH, Qin, YL: A stochastic maximum principle for delayed mean-field stochastic differential equations and its applications. IEEE Trans. Autom. Control 58(12), 3212-3217 (2013)
  14. Zhou, XY: Stochastic near-optimal controls: necessary and sufficient conditions for near-optimality. SIAM J. Control Optim. 36(3), 929-947 (1998)
  15. Ekeland, I: On the variational principle. J. Math. Anal. Appl. 47(2), 324-353 (1974)
  16. Zhou, XY: Deterministic near-optimal controls. Part I: necessary and sufficient conditions for near optimality. J. Optim. Theory Appl. 85(2), 473-488 (1995)
  17. Zhou, XY: Deterministic near-optimal controls. Part II: dynamic programming and viscosity solution approach. Math. Oper. Res. 21(3), 655-674 (1996)
  18. Chighoub, F, Mezerdi, B: Near optimality conditions in stochastic control of jump diffusion processes. Syst. Control Lett. 60(11), 907-916 (2011)
  19. Hafayed, M, Abbas, S, Veverka, P: On necessary and sufficient conditions for near-optimal singular stochastic controls. Optim. Lett. 7(5), 949-966 (2013)
  20. Hafayed, M, Abbas, S: Stochastic near-optimal singular controls for jump diffusions: necessary and sufficient conditions. J. Dyn. Control Syst. 19(4), 503-517 (2013)
  21. Hafayed, M, Abbas, S: On near-optimal mean-field stochastic singular controls: necessary and sufficient conditions for near-optimality. J. Optim. Theory Appl. 160(3), 778-808 (2014)
  22. Hafayed, M, Abba, A, Abbas, S: On mean-field stochastic maximum principle for near-optimal controls for Poisson jump diffusion with applications. Int. J. Dyn. Control 2(3), 262-284 (2014)
  23. Meng, QX, Shen, Y: A revisit to stochastic near-optimal controls: the critical case. Syst. Control Lett. 82, 79-85 (2015)
  24. Bahlali, K, Khelfallah, N, Mezerdi, B: Necessary and sufficient conditions for near-optimality in stochastic control of FBSDEs. Syst. Control Lett. 58, 857-864 (2009)
  25. Huang, JH, Li, X, Wang, GC: Near-optimal control problems for linear forward-backward stochastic systems. Automatica 46(2), 397-404 (2010)
  26. Hafayed, M, Veverka, P, Abbas, S: On maximum principle of near-optimality for diffusions with jumps, with application to consumption-investment problem. Differ. Equ. Dyn. Syst. 20(2), 111-125 (2012)
  27. Hafayed, M, Veverka, P, Abbas, S: On near-optimal necessary and sufficient conditions for forward-backward stochastic systems with jumps, with applications to finance. Appl. Math. 59(4), 407-440 (2014)
  28. Zhang, LQ, Huang, JH, Li, X: Necessary condition for near optimal control of linear forward-backward stochastic differential equations. Int. J. Control 88(8), 1594-1608 (2015)
  29. Hafayed, M, Abba, A, Boukaf, S: On Zhou’s maximum principle for near-optimal control of mean-field forward-backward stochastic systems with jumps and its applications. Int. J. Model. Identif. Control 25(1), 1-16 (2016)
  30. Clarke, F: Optimization and Nonsmooth Analysis. SIAM, Philadelphia (1990)

Copyright

© The Author(s) 2017