
Maximum principle for controlled fractional Fokker-Planck equations

Abstract

In this paper, we obtain a maximum principle for controlled fractional Fokker-Planck equations. We prove the well-posedness of a stochastic differential equation driven by an α-stable process and derive estimates of its solutions via fractional calculus. A linear-quadratic example is given at the end of the paper.

1 Introduction

The real world is full of uncertainty, and stochastic models often capture it better than deterministic ones. Stochastic differential equations driven by Brownian motion have therefore been studied extensively. Despite their many advantages, models based on Brownian diffusion often fail to describe dynamical processes that exhibit anomalous features such as long-range correlations, lack of scale invariance, or discontinuous trajectories [1, 2]. To capture such anomalous properties of physical systems, one introduces fractional Fokker-Planck equations.

Recently, Magdziarz [3] and Lv et al. [4] obtained a stochastic representation of the fractional Fokker-Planck equation with time- and space-dependent drift and diffusion coefficients. They showed that the corresponding stochastic process is driven by an inverse α-stable subordinator and a Brownian motion. The fractional Fokker-Planck equation can be described by the following stochastic process (see [4]):

$$ dx(t)=f\bigl(x(t)\bigr)\, dS_{\alpha}(t)+g\bigl(x(t)\bigr)\, dB \bigl(S_{\alpha}(t)\bigr), $$

with initial value \(x(0)=\xi\). Here the inverse α-stable subordinator \(S_{\alpha}(t)\) is independent of the Brownian motion \(B(\tau)\); we define \(S_{\alpha}(t)\) precisely in Section 2.

In order to make the relevant decisions (controls) based on the most updated information, the decision makers (controllers) must select an optimal decision among all possible ones to achieve the best expected result related to their goals. Such optimization problems are called stochastic optimal control problems. The range of stochastic optimal control problems covers a variety of physical, biological, economic, and management systems.

Generally, one solves optimal control problems by the Pontryagin maximum principle. Starting with [5–8], backward stochastic differential equations have been used to describe the necessary conditions that an optimal control must satisfy; see also [9–11] and the references therein. In this paper, the α-stable processes involve some fractional calculus. We use fractional derivatives of Riemann-Liouville type to prove the well-posedness of the equations and to give some estimates.

In this paper, we consider an optimal control problem for fractional Fokker-Planck equations. We examine this problem because it has a very wide range of physical applications: for instance, surface growth, transport of fluid in porous media [12], two-dimensional rotating flow [13], diffusion on fractals [14], and even multidisciplinary areas such as the behavior of CTAB micelles dissolved in salted water [15] or econophysics [16].

This paper is organized as follows. In Section 2, we prove the well-posedness of stochastic differential equations driven by an α-stable process via Picard iteration, and then give some estimates of the solution of the controlled fractional Fokker-Planck equation. In Section 3, we establish necessary and sufficient conditions for optimal pairs. A linear-quadratic optimal control problem is studied in Section 4, where a Riccati differential equation is derived and the explicit expression of the optimal control is obtained. Section 5 concludes.

2 Preliminaries

2.1 Statement of the problem

Let \((\Omega, \mathcal{F}, P)\) be a probability space equipped with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\). The controlled stochastic system is described as follows:

$$ \left \{ \begin{array}{l} dx(t) = b(t, x(t), u(t))\, dS_{\alpha}(t)+\sigma(t, x(t), u(t))\, d B(S_{\alpha}(t)), \\ x(0) = \xi, \quad t\in[0, T], \end{array} \right . $$
(1)

where \(b, \sigma:[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb{R}^{n}\) are given functionals, ξ is the initial value, \(u(t)\) is the control process, and \(x(t)\) is the corresponding state process. The inverse α-stable subordinator is defined in the following way:

$$S_{\alpha}(t)=\inf\bigl\{ \tau>0:U_{\alpha}(\tau)>t\bigr\} , $$

where \(U_{\alpha}(\tau)\) is a strictly increasing α-stable Lévy process. \(U_{\alpha}\) is a pure-jump process whose Laplace transform is given by \(\mathbb{E}(e^{-kU_{\alpha}(\tau)})=e^{-\tau k^{\alpha}}\), \(0<\alpha<1\). For every jump of \(U_{\alpha}(\tau)\), there is a corresponding flat period of its inverse \(S_{\alpha}(t)\).
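The pair \((U_{\alpha}, S_{\alpha})\) is easy to simulate, which is convenient for checking the estimates of this paper numerically. The following Python sketch is illustrative only and is not part of the original analysis: it samples the increments of \(U_{\alpha}\) with the standard Chambers-Mallows-Stuck formula for one-sided stable laws and then inverts the path on a grid; all function names and grid parameters are our own choices.

```python
import numpy as np

def alpha_stable_subordinator(alpha, tau_max, n, rng):
    """Sample U_alpha on a uniform grid of operational time tau.
    Increments over a step dtau are dtau**(1/alpha) times a standard
    one-sided alpha-stable variable (Chambers-Mallows-Stuck formula)."""
    dtau = tau_max / n
    V = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    W = rng.exponential(1.0, size=n)
    S = (np.sin(alpha * (V + np.pi / 2)) / np.cos(V) ** (1 / alpha)
         * (np.cos(V - alpha * (V + np.pi / 2)) / W) ** ((1 - alpha) / alpha))
    U = np.concatenate([[0.0], np.cumsum(dtau ** (1 / alpha) * S)])
    return np.linspace(0.0, tau_max, n + 1), U

def inverse_subordinator(t_grid, tau_grid, U):
    """S_alpha(t) = inf{tau : U_alpha(tau) > t}; searchsorted returns the
    first index with U > t, so jumps of U become flat periods of S_alpha."""
    idx = np.minimum(np.searchsorted(U, t_grid, side='right'),
                     len(tau_grid) - 1)
    return tau_grid[idx]

rng = np.random.default_rng(0)
tau_grid, U = alpha_stable_subordinator(alpha=0.8, tau_max=10.0, n=10_000, rng=rng)
t_grid = np.linspace(0.0, 0.9 * U[-1], 500)
S = inverse_subordinator(t_grid, tau_grid, U)
# Averaged over many paths, S should match E[S_alpha(t)] = t**alpha / Gamma(1 + alpha).
```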

The space of admissible controls is defined as

$$\begin{aligned} \mathcal{U}[0, T] \triangleq& \biggl\{ u:[0,T] \times \Omega\rightarrow \mathbb{R}^{n}\Big|u \text{ is an } \mathcal{F}_{t}\text{-adapted stochastic process and } \\ & E\biggl(\int^{T}_{0}\bigl\vert u(t)\bigr\vert ^{2}\, dt\biggr) < +\infty \biggr\} . \end{aligned}$$

The cost functional is

$$ J\bigl(u(t)\bigr)=E \biggl\{ \int^{T}_{0}l \bigl(t, x(t) ,u(t)\bigr)\, dS_{\alpha}(t)+h\bigl(x(T)\bigr) \biggr\} , $$
(2)

where \(l:[0, T]\times\mathbb{R}^{n}\times\mathcal{U}[0, T]\rightarrow \mathbb{R}\) and \(h:\mathbb{R}^{n}\rightarrow\mathbb{R}\) are given continuously differentiable functionals. We introduce the following basic assumptions, which will be in force throughout the paper.

  1. (H1)

    b, σ, l, h are continuously differentiable with respect to x. There exists a constant \(L_{1} > 0\) such that, for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have:

    1. \(|\varphi(t, x, u)-\varphi(t, \hat{x}, \hat{u})| \leq L_{1}(|x-\hat{x}|+|u-\hat{u}|)\), \(\forall t\in[0, T]\), \(x,\hat{x}\in\mathbb{R}^{n}\), \(u, \hat{u}\in\mathcal{U}[0, T]\);

    2. \(|\varphi(t, x, u)|\leq C(1+|x|)\), \(x\in\mathbb{R}^{n}\), \(t \in[0, T]\).

  2. (H2)

    The maps b, σ, l, h are \(C^{2}\) in x with bounded derivatives (the common bound denoted by M). There exists a constant \(L_{2} > 0\) such that, for \(\varphi(t, x, u)=b(t, x, u)\), \(\sigma(t, x, u)\), we have

    $$\begin{aligned}& \bigl\vert \varphi_{x}(t, x, u)-\varphi_{x}(t, \hat{x}, \hat{u})\bigr\vert \leq L_{2}\bigl(\vert x-\hat{x}\vert +|u- \hat{u}|\bigr), \\& \quad \forall t\in[0, T], x,\hat{x}\in\mathbb {R}^{n}, u, \hat{u}\in\mathcal{U}[0, T]. \end{aligned}$$

Then we can pose the following optimal control problem.

Problem (A)

Find a pair \((x^{*}(t), u^{*}(t))\in\mathbb {R}^{n}\times\mathcal{U}[0, T]\) such that

$$ J\bigl(u^{*}(t)\bigr)=\inf_{u(t)\in\mathcal{U}[0, T]}J \bigl(u(t)\bigr). $$
(3)

Now we introduce the variational equation of (1), in which \(\hat{u}(t)\) denotes the perturbation of the control (see Section 2.3),

$$ \left \{ \begin{array}{l} d\hat{x}(t)=(b_{x}(t)\hat{x}(t)+b_{u}(t)\hat {u}(t))\, dS_{\alpha}(t)+(\sigma_{x}(t)\hat{x}(t) \\ \hphantom{d\hat{x}(t)=}{}+\sigma_{u}(t)\hat{u}(t))\, d B(S_{\alpha}(t)), \\ \hat{x}(0)=0, \quad t\in[0, T] , \end{array} \right . $$
(4)

and the adjoint equation of (1), respectively,

$$ \left \{ \begin{array}{l} dy(t) = -[b_{x}(t, x(t), u(t))y(t)+l_{x}(t, x(t), u(t)) \\ \hphantom{dy(t) =}{} +\sigma_{x}(t, x(t), u(t))z(t)]\, dS_{\alpha}(t) \\ \hphantom{dy(t) =}{} +z(t)\, d B(S_{\alpha}(t)), \\ y(T) = h_{x}(x(T)), \quad t\in[0, T]. \end{array} \right . $$
(5)

The Hamiltonian of our optimal control problem is obtained as follows:

$$ H(t, x, u, y, z)=l\bigl(t, x(t), u(t)\bigr)+b\bigl(t, x(t), u(t) \bigr)y(t)+\sigma\bigl(t, x(t), u(t)\bigr)z(t). $$
(6)

2.2 Well-posedness of the problem

To obtain our maximum principle, we need the following auxiliary results.

Proposition 2.1

(Itô formula; see [17, Theorem 2.4])

Suppose that \(x(\cdot)\) has a stochastic differential

$$ dx=F\, dS_{\alpha}+G\, dB(S_{\alpha}) $$

for \(F\in\mathbb{L}^{1}(0,T)\), \(G\in\mathbb{L}^{2}(0,T)\). Assume \(\psi:\mathbb{R}\times[0,T]\rightarrow\mathbb{R}\) is continuous and that \(\frac{\partial \psi}{\partial t}\), \(\frac{\partial \psi}{\partial x}\), \(\frac{\partial^{2} \psi}{\partial x^{2}}\) exist and are continuous. Set

$$Y(t):=\psi\bigl(x(t),t\bigr). $$

Then Y has the stochastic differential

$$ dY=\frac{\partial \psi}{\partial t}\, dt+\frac{\partial \psi}{\partial x}\bigl(F\, dS_{\alpha}+G\, dB(S_{\alpha})\bigr)+\frac{1}{2} \frac{\partial^{2} \psi}{\partial x^{2}}G^{2}\, dS_{\alpha}, \quad 0< \alpha<1. $$
(7)
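For instance, taking \(\psi(x, t)=x^{2}\) in (7), the scalar prototype of the computation of \(d(x^{T}(t)P(t)x(t))\) carried out in Section 4, we get

$$ d\bigl(x^{2}(t)\bigr)=\bigl(2x(t)F+G^{2}\bigr)\, dS_{\alpha}(t)+2x(t)G\, dB\bigl(S_{\alpha}(t)\bigr); $$

since \(\partial\psi/\partial t=0\), no \(dt\) term appears, and the quadratic-variation correction enters through the operational time \(S_{\alpha}\) rather than through t.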

Lemma 2.1

(See [4])

Let \(S_{\alpha}(t)\) be the inverse α-stable subordinator and let \(g(t)\) be an integrable function. Then

$$E\biggl[\int^{t}_{0}g\bigl(S_{\alpha}( \tau)\bigr)\, dS_{\alpha}(\tau)\biggr]=\frac{1}{\Gamma (\alpha)}\int ^{t}_{0}(t-\tau)^{\alpha-1}E\bigl[g \bigl(S_{\alpha}(\tau)\bigr)\bigr]\, d\tau. $$

Lemma 2.2

(See [4])

The following equation holds for any continuous function \(f(t)\):

$$E\biggl[\int^{t}_{0}f(\tau)g \bigl(S_{\alpha}(\tau)\bigr)\, dS_{\alpha}(\tau)\biggr]=\int ^{t}_{0}f(\tau)D^{1-\alpha}_{\tau}E \bigl[g\bigl(S_{\alpha}(\tau)\bigr)\bigr]\, d\tau. $$

Here the operator \(D^{1-\alpha}_{t}f(t)=\frac{1}{\Gamma(\alpha)}\frac {\partial}{\partial t}\int_{0}^{t}(t-s)^{\alpha-1}f(s)\, ds\) is the fractional derivative of Riemann-Liouville type. In particular, the Riemann-Liouville derivative of a constant C is not zero: \(D^{1-\alpha}_{t}C=\frac{t^{\alpha-1}}{\Gamma(\alpha)}C\).
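Indeed, a one-line computation confirms this:

$$D^{1-\alpha}_{t}C=\frac{1}{\Gamma(\alpha)}\frac{\partial}{\partial t}\int_{0}^{t}(t-s)^{\alpha-1}C\, ds=\frac{1}{\Gamma(\alpha)}\frac{\partial}{\partial t}\frac{Ct^{\alpha}}{\alpha}=\frac{t^{\alpha-1}}{\Gamma(\alpha)}C. $$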

Remark 2.1

Taking \(g\equiv1\) in Lemma 2.1, we get \(E[\int^{t}_{0}\, dS_{\alpha}(\tau)]=\frac{1}{\Gamma(\alpha)}\int^{t}_{0}(t-\tau)^{\alpha-1}\, d\tau=\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}\). This quantity is bounded for \(t\in[0, T]\) and \(\alpha\in(0, 1)\); we fix a constant P such that \(\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}\leq P\) on \([0, T]\).

Theorem 2.1

Let b and σ be measurable functions satisfying (H1) and (H2), let \(T>0\), and let the initial value \(X(0)\) be independent of the driving processes with \(E|X(0)|^{2}<\infty\). Then the stochastic differential equation

$$ dX(t)=b\bigl(t,X(t)\bigr)\, dS_{\alpha}(t)+\sigma\bigl(t, X(t)\bigr)\, dB\bigl(S_{\alpha}(t)\bigr),\quad t\in[0, T] $$
(8)

has a unique solution \(X(t)\).

Proof

Define the Picard iteration by \(Y^{(0)}(t)=X(0)\) and, for \(k\geq0\),

$$ Y^{(k+1)}(t)=X(0)+\int^{t}_{0}b \bigl(s, Y^{(k)}(s)\bigr)\, dS_{\alpha}(s)+\int ^{t}_{0}\sigma\bigl(s, Y^{(k)}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$
(9)

Then, for \(k\geq1\), \(t\leq T\), we have

$$\begin{aligned}& E\bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t)\bigr\Vert ^{2} \\& \quad = E\biggl\Vert \int^{t}_{0}\bigl(b\bigl(s, Y^{(k)}(s)\bigr)-b\bigl(s, Y^{(k-1)}(s)\bigr)\bigr)\, dS_{\alpha}(s) \\& \qquad {} +\int^{t}_{0}\bigl(\sigma\bigl(s, Y^{(k)}(s)\bigr)-\sigma\bigl(s, Y^{(k-1)}(s)\bigr)\bigr)\, dB \bigl(S_{\alpha}(s)\bigr)\biggr\Vert ^{2} \\& \quad \leq2\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}E\int^{t}_{0}\bigl\Vert b\bigl(s, Y^{(k)}(s)\bigr)-b\bigl(s, Y^{(k-1)}(s)\bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\& \qquad {} +2E\int^{t}_{0}\bigl\Vert \sigma\bigl(s, Y^{(k)}(s)\bigr)-\sigma\bigl(s, Y^{(k-1)}(s)\bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\& \quad \leq2 (P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert Y^{(k)}(s)-Y^{(k-1)}(s)\bigr\Vert ^{2}\, dS_{\alpha}(s) \end{aligned}$$

and

$$\begin{aligned} E\bigl\Vert Y^{(1)}(t)-Y^{(0)}(t)\bigr\Vert ^{2}&\leq\frac{4t^{\alpha}}{\alpha\Gamma (\alpha)}\bigl(1+E|X_{0}|^{2} \bigr)\frac{ t^{\alpha}}{\alpha\Gamma(\alpha)}+\frac {4 t^{\alpha}}{\alpha\Gamma(\alpha)}\bigl(1+E|X_{0}|^{2} \bigr) \\ &=\frac{4t^{\alpha}}{\alpha\Gamma(\alpha)}\bigl(1+E|X_{0}|^{2}\bigr) \biggl( \frac {t^{\alpha}}{\alpha\Gamma(\alpha)}+1\biggr) \\ &\leq A_{1}t^{\alpha}, \end{aligned}$$

where the constant \(A_{1}\) depends on L, P, and \(E|X_{0}|^{2}\). Iterating the previous estimate, we obtain by induction

$$ E\bigl\Vert Y^{(k+1)}(t)-Y^{(k)}(t)\bigr\Vert ^{2}\leq\bigl(A_{2}t^{\alpha}\bigr)^{k+1}. $$

Here the constant \(A_{2}\) depends on L, P, and \(E|X_{0}|^{2}\). We may assume \(A_{2}T^{\alpha}<\frac{1}{2}\) (otherwise we argue on successive subintervals). Let \(m\geq n \geq0\). Then

$$\begin{aligned} \bigl\Vert Y^{(m)}(t)-Y^{(n)}(t)\bigr\Vert _{L^{2}(0, T)}&=\Biggl\Vert \sum_{k=n}^{m-1}\bigl(Y^{(k+1)}(t)-Y^{(k)}(t)\bigr) \Biggr\Vert _{L^{2}(0, T)} \\ &\leq\sum_{k=n}^{m-1}\biggl(E\biggl[\int ^{T}_{0}\bigl\vert Y^{(k+1)}(t)-Y^{(k)}(t) \bigr\vert ^{2}\, dS_{\alpha}(t)\biggr]\biggr)^{\frac{1}{2}} \\ &\leq\sum_{k=n}^{m-1}\biggl(\int ^{T}_{0}\bigl(A_{2}t^{\alpha}\bigr)^{k+1}\, dS_{\alpha}(t)\biggr)^{\frac {1}{2}} \\ &\leq\sum_{k=n}^{m-1}\bigl(P\bigl(A_{2}T^{\alpha}\bigr)^{k+1} \bigr)^{\frac{1}{2}}\rightarrow0 \end{aligned}$$

as \(m, n\rightarrow\infty\). Therefore \(\{Y^{(n)}(t)\}_{n=0}^{\infty}\) is a Cauchy sequence and hence convergent in \(L^{2}(0, T)\). Define

$$X(t):= \lim_{n\rightarrow\infty}Y^{(n)}(t). $$

Next, we prove that \(X(t)\) satisfies (8). For all n and \(t\in[0, T]\), we have

$$Y^{(n+1)}(t)=X(0)+\int^{t}_{0}b\bigl(s, Y^{(n)}(s)\bigr)\, dS_{\alpha}(s)+\int^{t}_{0} \sigma\bigl(s, Y^{(n)}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$

Then we get

$$\int^{t}_{0}b\bigl(s, Y^{(n)}(s)\bigr) \, dS_{\alpha}(s)\rightarrow\int^{t}_{0}b \bigl(s, X(s)\bigr)\, dS_{\alpha}(s) \quad \mbox{as } n \rightarrow\infty. $$

Also

$$\int^{t}_{0}\sigma\bigl(s, Y^{(n)}(s) \bigr)\, dB\bigl(S_{\alpha}(s)\bigr)\rightarrow\int^{t}_{0} \sigma\bigl(s, X(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr) \quad \mbox{as } n \rightarrow\infty. $$

We conclude that for all \(t\in[0, T]\) we have

$$X(t)=X(0)+\int^{t}_{0}b\bigl(s, X(s)\bigr)\, dS_{\alpha}(s)+\int^{t}_{0}\sigma\bigl(s, X(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). $$

That is, \(X(t)\) satisfies (8).

Now we prove uniqueness. Let \(X_{1}(t)\) and \(X_{2}(t)\) be solutions of (8) with the same initial values. Then

$$\begin{aligned} E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2} =&E\biggl\Vert \int^{t}_{0} \bigl(b\bigl(s, X_{1}(s)\bigr)-b\bigl(s, X_{2}(s)\bigr)\bigr) \, dS_{\alpha}(s) \\ &{}+\int^{t}_{0}\bigl(\sigma\bigl(s, X_{1}(s)\bigr)-\sigma\bigl(s, X_{2}(s)\bigr)\bigr)\, dB \bigl(S_{\alpha }(s)\bigr)\biggr\Vert ^{2} \\ \leq&2\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}E\int^{t}_{0}\bigl\Vert b\bigl(s, X_{1}(s)\bigr)-b\bigl(s, X_{2}(s)\bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\ &{}+2E\int^{t}_{0}\bigl\Vert \sigma \bigl(s, X_{1}(s)\bigr)-\sigma\bigl(s, X_{2}(s)\bigr)\bigr\Vert ^{2}\, dS_{\alpha}(s) \\ \leq&2(P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2}\, dS_{\alpha}(s). \end{aligned}$$

From Lemmas 2.1 and 2.2, we get

$$\begin{aligned}& E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2} \\& \quad \leq2(P+1)\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}E\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2} \frac{s^{\alpha-1}}{\Gamma(\alpha)}\, ds \\& \quad \leq2(P+1)P L^{2}CE\int^{t}_{0} \bigl\Vert X_{1}(s)-X_{2}(s)\bigr\Vert ^{2}\, ds. \end{aligned}$$

By the Gronwall inequality, we conclude that

$$E\bigl\Vert X_{1}(t)-X_{2}(t)\bigr\Vert ^{2}=0 \quad \mbox{for all } t \in[0, T]. $$

The uniqueness is proved. □
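The Picard scheme above is also the blueprint for a numerical method. By the time-change representation of [3, 4], for coefficients \(b(x)\), \(\sigma(x)\) with no explicit t-dependence one may solve the parent equation \(dZ(\tau)=b(Z)\, d\tau+\sigma(Z)\, dB(\tau)\) by Euler-Maruyama in the operational time and then subordinate it. The sketch below is an illustration under these assumptions, not part of the proof; it reuses the helpers (and rng) from the sketch in Section 2.1.

```python
def euler_subordinated_sde(b, sigma, x0, alpha, tau_max, n, t_grid, rng):
    """Euler-Maruyama for the parent SDE dZ = b(Z) dtau + sigma(Z) dB(tau),
    followed by the time change X(t) = Z(S_alpha(t)), which gives a path of (8)."""
    tau_grid, U = alpha_stable_subordinator(alpha, tau_max, n, rng)
    dtau = tau_grid[1] - tau_grid[0]
    Z = np.empty(n + 1)
    Z[0] = x0
    dB = rng.normal(0.0, np.sqrt(dtau), size=n)   # Brownian increments in tau
    for j in range(n):
        Z[j + 1] = Z[j] + b(Z[j]) * dtau + sigma(Z[j]) * dB[j]
    S = inverse_subordinator(t_grid, tau_grid, U)
    return np.interp(S, tau_grid, Z)              # X(t) = Z(S_alpha(t))

# Example: mean-reverting drift with constant diffusion.
x_path = euler_subordinated_sde(b=lambda x: -x, sigma=lambda x: 0.5, x0=1.0,
                                alpha=0.8, tau_max=10.0, n=10_000,
                                t_grid=np.linspace(0.0, 5.0, 500), rng=rng)
```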

2.3 Some estimates of the solution

Let \(u^{*}\) and v be two admissible controls. For any \(\varepsilon\in \mathbb{R}\), we denote \(u^{\varepsilon}=u^{*}+\varepsilon(v-u^{*})\). Corresponding to \(u^{\varepsilon}\) and \(u^{*}\), there are two solutions \(x^{\varepsilon}(\cdot)\) and \(x^{*}(\cdot)\) to (1). That is,

$$\begin{aligned}& x^{*}(t)=\xi+\int_{0}^{t}b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\, dS_{\alpha}(s)+\int _{0}^{t}\sigma\bigl(s, x^{*}(s), u^{*}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr), \\& x^{\varepsilon}(t)=\xi+\int_{0}^{t}b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)\, dS_{\alpha}(s)+\int _{0}^{t}\sigma\bigl(s, x^{\varepsilon }(s), u^{\varepsilon}(s)\bigr)\, dB\bigl(S_{\alpha}(s)\bigr). \end{aligned}$$

Theorem 2.2

Let (H1) and (H2) hold. Then

$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2}=O\bigl(\varepsilon^{2}\bigr), \end{aligned}$$
(10)
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert \hat{x}(t)\bigr\vert ^{2}=O\bigl( \varepsilon^{2}\bigr), \end{aligned}$$
(11)
$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t)- \hat {x}(t)\bigr\vert ^{2}=O\bigl(\varepsilon^{2}\bigr). \end{aligned}$$
(12)

Proof

We have

$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2} \\& \quad = \sup_{t\in[0, T]}E\biggl\vert \int_{0}^{t} \bigl(b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)\, dS_{\alpha}(s) \\& \qquad {} +\int_{0}^{t}\bigl(\sigma\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-\sigma \bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)\, dB\bigl(S_{\alpha}(s) \bigr)\biggr\vert ^{2} \\& \quad \leq \sup_{t\in[0, T]}2E \biggl\{ \frac{t^{\alpha}}{\alpha\Gamma (\alpha)}\biggl\vert \int_{0}^{t}\bigl(b\bigl(s, x^{\varepsilon}(s), u^{\varepsilon }(s)\bigr)-b\bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)^{2}\, dS_{\alpha}(s)\biggr\vert \\& \qquad {} +\biggl\vert \int_{0}^{t}\bigl( \sigma\bigl(s, x^{\varepsilon}(s), u^{\varepsilon}(s)\bigr)-\sigma \bigl(s, x^{*}(s), u^{*}(s)\bigr)\bigr)^{2}\, dS_{\alpha}(s)\biggr\vert \biggr\} \\& \quad \leq \sup_{t\in[0, T]} 4\frac{t^{\alpha}}{\alpha\Gamma(\alpha )}L^{2}(P+1) \biggl\{ \int^{T}_{0}E\bigl(\bigl\vert x^{\varepsilon }(s)-x^{*}(s)\bigr\vert ^{2}\bigr)\, dS_{\alpha}(s) \\& \qquad {}+\frac{T^{\alpha}}{\alpha\Gamma(\alpha )}\varepsilon^{2}E \bigl(v-u^{*}\bigr)^{2} \biggr\} . \end{aligned}$$

From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get

$$\begin{aligned}& \sup_{t\in[0, T]}E\bigl\vert x^{\varepsilon}(t)-x^{*}(t) \bigr\vert ^{2} \\& \quad \leq4\frac{t^{\alpha}}{\alpha\Gamma(\alpha)}L^{2}(P+1) \biggl\{ \int ^{T}_{0}E\bigl(\bigl\vert x^{\varepsilon}(s)-x^{*}(s) \bigr\vert ^{2}\bigr)\frac{s^{\alpha-1}}{\Gamma (\alpha)}\, ds+\frac{T^{\alpha}}{\alpha\Gamma(\alpha)} \varepsilon ^{2}E\bigl(v-u^{*}\bigr)^{2} \biggr\} \\& \quad \leq C_{P, L}\varepsilon^{2}, \end{aligned}$$

where \(C_{P, L}\) is a constant that depends on P, L. This proves (10). Similarly, we can prove (11).

We set \(\eta(t)=x^{\varepsilon}(t)-x^{*}(t)-\hat{x}(t)\). Then

$$\begin{aligned} E\bigl\vert \eta(t)\bigr\vert ^{2} =&E\biggl\vert \int ^{t}_{0} \biggl\{ \int^{1}_{0}b_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta \bigl(x^{\varepsilon}(s)-x^{*}(s) \bigr) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-b_{x}(s)\hat{x}(s)-b_{u}(s)\hat{u}(s) \biggr\} \, dS_{\alpha}(s) \\ &{}+\int^{t}_{0} \biggl\{ \int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-\sigma_{x}(s)\hat{x}(s)-\sigma_{u}(s)\hat{u}(s) \biggr\} \, dB\bigl(S_{\alpha}(s)\bigr) \biggr\vert ^{2} \\ \leq& 2E\biggl\vert \int^{t}_{0} \biggl\{ \int ^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta-b_{x}(s)\biggr]\hat{x}(s) \\ &{}+\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-b_{u}(s)\hat{u}(s) \biggr\} \, dS_{\alpha}(s)\biggr\vert ^{2} \\ &{}+2E\biggl\vert \int^{t}_{0} \biggl\{ \int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta \bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\, \eta(s) \\ &{}+\biggl[\int^{1}_{0}\sigma_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta-\sigma_{x}(s)\biggr]\hat {x}(s) \\ &{}+\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\bigl(u^{\varepsilon}(s)-u^{*}(s)\bigr) \\ &{}-\sigma_{u}(s)\hat{u}(s) \biggr\} \, dB\bigl(S_{\alpha}(s)\bigr)\biggr\vert ^{2} \\ \leq&8E\int^{T}_{0}\biggl\{ \biggl(\int ^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\biggr)^{2}\eta^{2}(s) \\ &{}+\biggl[\int^{1}_{0}b_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon}(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta-b_{x}(s)\biggr]^{2} \hat{x}^{2}(s) \\ &{}+\biggl(\int^{1}_{0}b_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s)\bigr)\bigr)\, d\theta\biggr)^{2}\bigl(u^{\varepsilon}(s)-u^{*}(s) \bigr)^{2} \\ &{}+b_{u}(s)^{2}\hat{u}(s)^{2}\biggr\} \, dS_{\alpha}(s) \\ &{}+8E\int^{T}_{0}\biggl\{ \biggl(\int ^{1}_{0}\sigma_{x}\bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s)\bigr), u^{\varepsilon}(s)\bigr)\, d\theta\biggr)^{2}\eta^{2}(s) \\ &{}+\biggl[\int^{1}_{0}\sigma_{x} \bigl(s, x^{*}(s)+\theta\bigl(x^{\varepsilon }(s)-x^{*}(s) \bigr), u^{\varepsilon}(s)\bigr)\, d\theta-\sigma_{x}(s)\biggr]^{2} \hat {x}^{2}(s) \\ &{}+\biggl(\int^{1}_{0}\sigma_{u}\bigl(s, x^{*}(s), u^{*}(s)+\theta\bigl(u^{\varepsilon }(s)-u^{*}(s) \bigr)\bigr)\, d\theta\biggr)^{2}\bigl(u^{\varepsilon}(s)-u^{*}(s) \bigr)^{2} \\ &{}+\sigma_{u}(s)^{2}\hat{u}(s)^{2}\biggr\} \, dS_{\alpha}(s). \end{aligned}$$
(13)

From Lemmas 2.1 and 2.2 and the Gronwall inequality, we get

$$\begin{aligned} \sup_{t\in[0, T]}E\bigl\vert \eta(t)\bigr\vert ^{2}& \leq E\int^{T}_{0}\bigl(C_{1}\eta ^{2}(s)+\varepsilon^{2} C_{2}\bigr) \, dS_{\alpha}(s) \\ &\leq\int^{T}_{0}C_{1}E \eta^{2}(s)\frac{s^{\alpha-1}}{\Gamma(\alpha )}\, ds+\varepsilon^{2}C_{2} \frac{T^{\alpha}}{\Gamma(\alpha)\alpha } \\ &\leq\varepsilon^{2}M_{C_{1}, C_{2}}, \end{aligned}$$
(14)

where \(C_{1}=16(M^{2}+L^{2}C)\), \(C_{2}=(L^{2}C-M^{2})(v-\hat {u})^{2}+M^{2}(v-u^{*})^{2}\), \(M_{C_{1}, C_{2}}\) is a constant that depends on \(C_{1}\), \(C_{2}\). □

3 The maximum principle

Now we give sufficient conditions for optimality in Problem (A).

Theorem 3.1

Let (H1) and (H2) hold, let \((x^{*}(t), u^{*}(t))\) be an admissible pair, and let \((y(t), z(t))\) satisfy (5). Moreover, assume that the Hamiltonian H is convex in (x, u), that h is convex, and that

$$ H\bigl(t, x^{*}(t), u^{*}(t), y(t), z(t)\bigr)=\min_{u\in\mathcal{U}[0, T]}H\bigl(t, x^{*}(t), u, y(t), z(t)\bigr). $$
(15)

Then \(u^{*}(t)\) is an optimal control.

Proof

Fix \(u\in\mathcal{U}[0, T]\) with corresponding solution \(x=x^{(u)}\). Then

$$ J\bigl(x^{*}(t), u^{*}(t)\bigr)-J\bigl(x(t), u(t)\bigr)=I_{1}+I_{2}, $$
(16)

where

$$\begin{aligned}& I_{1}=E\int^{T}_{0}\bigl[l\bigl(t, x^{*}(t), u^{*}(t)\bigr)-l\bigl(t, x(t), u(t)\bigr)\bigr]\, dS_{\alpha }(t), \\& I_{2}=E\bigl(h\bigl(x^{*}(T)\bigr)-h\bigl(x(T)\bigr)\bigr). \end{aligned}$$

By the definition of H, we get

$$\begin{aligned} I_{1} =&E\biggl[\int^{T}_{0} \bigl\{ H\bigl(t, x^{*}(t), u^{*}(t), y(t), z(t)\bigr)-H \bigl(t, x(t), u(t), y(t), z(t)\bigr) \\ &{}-\bigl(b\bigl(t, x^{*}(t), u^{*}(t)\bigr)-b\bigl(t, x(t), u(t)\bigr) \bigr)y(t) \\ &{}-\bigl(\sigma\bigl(t, x^{*}(t), u^{*}(t)\bigr)-\sigma\bigl(t, x(t), u(t) \bigr)\bigr)z(t)\bigr\} \, dS_{\alpha}(t)\biggr]. \end{aligned}$$
(17)

We use the convexity of h to obtain the inequality

$$\begin{aligned} I_{2}&=E\bigl(h\bigl(x^{*}(T)\bigr)-h\bigl(x(T) \bigr)\bigr) \\ &\leq E\bigl(h_{x}\bigl(x^{*}(T)\bigr) \bigl(x^{*}(T)-x(T)\bigr)\bigr) \\ &= E\bigl(y(T) \bigl(x^{*}(T)-x(T) \bigr)\bigr). \end{aligned}$$
(18)

Applying the Itô formula to \(y(t)(x^{*}(t)-x(t))\) and taking the expectation, we get

$$\begin{aligned}& E\bigl(y(T) \bigl(x^{*}(T)-x(T)\bigr)\bigr) \\& \quad = E\bigl(y(T) \bigl(x^{*}(T)-x(T)\bigr)-y(0) \bigl(x^{*}(0)-x(0)\bigr)\bigr) \\& \quad = E\biggl(\int^{T}_{0} y(t) \bigl(b\bigl(t, x^{*}(t), u^{*}(t)\bigr)-b\bigl(t, x(t), u(t)\bigr)\bigr) \\& \qquad {}+z(t) \bigl(\sigma\bigl(t, x^{*}(t), u^{*}(t)\bigr)- \sigma\bigl(t, x(t), u(t)\bigr)\bigr) \\& \qquad {}-H_{x}\bigl(t, x(t), u(t), y(t), z(t)\bigr) \bigl(x^{*}(t)-x(t)\bigr)\, dS_{\alpha}(t)\biggr). \end{aligned}$$

Substituting the last equation into (16), we obtain

$$\begin{aligned}& J\bigl(x^{*}(t), u^{*}(t)\bigr)-J\bigl(x(t), u(t)\bigr) \\& \quad \leq E\int^{T}_{0}\bigl[\bigl(H\bigl(t, x^{*}(t), u^{*}(t), y(t), z(t)\bigr)-H\bigl(t, x(t), u(t), y(t), z(t)\bigr)\bigr) \\& \qquad {}-H_{x}\bigl(t, x(t), u(t), y(t), z(t)\bigr) \bigl(x^{*}(t)-x(t)\bigr)\bigr] \, dS_{\alpha}(t). \end{aligned}$$

Since \(H(t)\) is convex, we get

$$J\bigl(x^{*}(t), u^{*}(t)\bigr)-J\bigl(x(t), u(t)\bigr) \leq0. $$

Then \(u^{*}(t)\) is an optimal control. □

Next, we give necessary conditions for the stochastic control problem.

Theorem 3.2

Assume that b and σ satisfy (H1) and (H2) and that \(u^{*}(t) \in\mathcal{U}[0, T]\) is an optimal control for the problem (1)-(3). Then the solution \((y(t), z(t))\) of (5) satisfies, for every \(v\in\mathcal{U}[0, T]\) and \(\hat{u}(t)=\varepsilon(v(t)-u^{*}(t))\),

$$ E\int^{T}_{0}\bigl\langle H_{u}\bigl(t, x^{*}(t), u^{*}(t), y(t), z(t) \bigr), \hat{u}(t)\bigr\rangle \, dS_{\alpha}(t)=0. $$
(19)

Proof

To derive (19), we differentiate the cost functional in the direction v:

$$ {\biggl.\frac{d}{d\varepsilon} J^{u+\varepsilon v}(t)\biggr|_{\varepsilon=0}}=E\biggl\{ \int ^{T}_{0}\bigl(l_{x}(t) \hat{x}(t)+l_{u}(t)\hat{u}(t)\bigr)\, dS_{\alpha }(t)+h_{x} \bigl(x(T)\bigr)\hat{x}(T)\biggr\} . $$
(20)

Let \((y(t), z(t))\) be the solution of (5). Then applying the differential chain rule to \(\langle y(t), \hat{x}(t)\rangle\), we have the following duality relation:

$$\begin{aligned} h_{x}\bigl(x(T)\bigr)\hat{x}(T) =&y(T)\hat{x}(T)-y(0) \hat{x}(0) \\ =&\int^{T}_{0}\bigl[y(t)b_{u}(t) \hat{u}(t)+z(t)\sigma_{u}(t)\hat{u}(t)-\hat {x}(t)l_{x}(t) \bigr] \, d S_{\alpha}(t) \\ &{}+\int^{T}_{0}\bigl[y(t)\sigma_{x}(t) \hat{x}(t)+y(t)\sigma_{u}(t)\hat {u}(t) \\ &{}+\hat{x}(t)z(t)\bigr]\, d B\bigl(S_{\alpha}(t)\bigr). \end{aligned}$$
(21)

Substituting (21) into (20) and using the optimality of \(u^{*}(t)\), we obtain

$$\begin{aligned} {\biggl.\frac{d}{d\varepsilon} J^{u+\varepsilon v}(t)\biggr|_{\varepsilon=0}} =&E \int ^{T}_{0} \bigl(l_{u}(t) \hat{u}(t)+y(t)b_{u}(t)\hat{u}(t)+z(t)\sigma _{u}(t) \hat{u}(t) \bigr)\, dS_{\alpha}(t) \\ =&E\int^{T}_{0}\bigl\langle H_{u} \bigl(t, x^{*}(t), u^{*}(t), y(t), z(t)\bigr), \hat {u}(t) \bigr\rangle \, dS_{\alpha}(t)= 0. \end{aligned}$$
(22)

 □
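To see what (19) yields concretely, consider the linear-quadratic data of Section 4 below. There \(H=\frac{1}{2}(x^{T}Qx+u^{T}Ru)+(Ax+Cu)^{T}y+(Dx+Eu)^{T}z\), so \(H_{u}=Ru+C^{T}y+E^{T}z\), and (19) forces the candidate control

$$u^{*}(t)=-R^{-1}(t)\bigl(C^{T}(t)y(t)+E^{T}(t)z(t)\bigr). $$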

4 Application

In this section, we consider a linear-quadratic (LQ) optimal control problem as follows:

$$ \left \{ \begin{array}{l} dx(t) = (A(t)x(t)+C(t)u(t))\, dS_{\alpha }(t)+(D(t)x(t)+E(t)u(t))\, d B(S_{\alpha}(t)), \\ x(0) = \eta, \quad t\in[0,T], \end{array} \right . $$
(23)

where \(A(\cdot)\), \(C(\cdot)\), \(D(\cdot)\), \(E(\cdot)\) are given matrix-valued deterministic functions and η is the initial value. The cost functional is

$$\begin{aligned} J\bigl(x(t), u(t)\bigr) =&\frac{1}{2}E \biggl\{ \int ^{T}_{0}\bigl[ x^{T}(t)Q(t)x(t)+ u^{T}(t)R(t)u(t)\bigr]\, dS_{\alpha}(t) \\ &{}+ x^{T}(T)S(T)x(T) \biggr\} , \end{aligned}$$
(24)

where \(Q(t)\), \(R(t)\), \(S(t)\) are positive-definite matrices and \(x^{T}(t)\) denotes the transpose of \(x(t)\).

The optimal control of the LQ problem can be stated as follows.

Problem (B)

Find a pair \((x^{*}(t), u^{*}(t))\in\mathbb {R}^{n}\times\mathcal{U}[0, T]\) such that

$$ J\bigl(u^{*}(t)\bigr)=\inf_{u(t)\in\mathcal{U}[0, T]}J \bigl(u(t)\bigr). $$
(25)

We now derive the associated Riccati equation. Assume that P is a semimartingale with the decomposition

$$ dP(t)=\Sigma(t)\, dS_{\alpha}(t)+\Pi(t) \, dB \bigl(S_{\alpha}(t)\bigr), \quad t\in[0, T]. $$
(26)

Applying the Itô formula to \(x^{T}(t)P(t)x(t)\), we obtain

$$\begin{aligned}& d\bigl(x^{T}(t)P(t)x(t)\bigr) \\& \quad = \bigl\{ x^{T}\bigl(PA+A^{T}P+\Sigma+D^{T}PD+ \Pi D+D^{T}\Pi\bigr)x \\& \qquad {} +2u^{T}\bigl(C^{T}P+E^{T}PD+E^{T} \Pi\bigr)x+u^{T}E^{T}PEu \bigr\} \, dS_{\alpha }(t) \\& \qquad {} + \bigl\{ x^{T}D^{T}Px+u^{T}E^{T}Px+x^{T} \Pi x+x^{T}PDx+x^{T}PEu \bigr\} \, dB\bigl(S_{\alpha}(t) \bigr). \end{aligned}$$
(27)

We denote

$$\begin{aligned}& K=R+E^{T}PE, \\& L=C^{T}P+E^{T}PD+E^{T} \Pi. \end{aligned}$$

Integrating (27) from the initial time s to T, taking expectations, adding the result to (24), and completing the square, we get

$$\begin{aligned}& J\bigl(x(t),u(t)\bigr) \\& \quad = \frac{1}{2}E\int^{T}_{s} \bigl\{ x^{T}\bigl(PA+A^{T}P+\Sigma+D^{T}PD+\Pi D+D^{T}\Pi+Q\bigr)x \\& \qquad {} +2u^{T}\bigl(C^{T}P+E^{T}PD+E^{T} \Pi\bigr)x+u^{T}\bigl(E^{T}PE+R\bigr)u \bigr\} \, dS_{\alpha }(t) \\& \qquad {} +\frac{1}{2}x^{T}(s)P(s)x(s)+\frac {1}{2}E \bigl[x^{T}(T) \bigl(S(T)-P(T)\bigr)x(T)\bigr] \\& \quad = \frac{1}{2}E\int^{T}_{s} \bigl\{ x^{T}\bigl(PA+A^{T}P+\Sigma+D^{T}PD+\Pi D+D^{T}\Pi+Q-L^{T}K^{-1}L\bigr)x \\& \qquad {} +\bigl(u+K^{-1}Lx\bigr)^{T}K\bigl(u+K^{-1}Lx \bigr) \bigr\} \, dS_{\alpha}(t) \\& \qquad {} +\frac{1}{2}x^{T}(s)P(s)x(s)+\frac{1}{2}E \bigl[x^{T}(T) \bigl(S(T)-P(T)\bigr)x(T)\bigr]. \end{aligned}$$
(28)

Now suppose that \((P, \Pi)\) satisfies

$$\Sigma=-\bigl(PA+A^{T}P+D^{T}PD+\Pi D+D^{T} \Pi+Q-L^{T}K^{-1}L\bigr), $$

together with the terminal condition \(P(T)=S(T)\). Then we get the stochastic Riccati equation as follows:

$$ \left \{ \begin{array}{l} dP(t) = \{- (P(t)A(t)+A^{T}(t)P(t)+D^{T}(t)P(t)D(t)+\Pi(t)D(t) \\ \hphantom{dP(t) = }{} +D^{T}(t)\Pi(t)+Q(t) )+ (C^{T}(t)P(t)+E^{T}(t)P(t)D(t) \\ \hphantom{dP(t) = }{} +E^{T}(t)\Pi(t) )^{T} (R(t)+E^{T}(t)P(t)E(t) )^{-1} (C^{T}(t)P(t) \\ \hphantom{dP(t) = }{} +E^{T}(t)P(t)D(t)+E^{T}(t)\Pi(t) ) \}\, d S_{\alpha}(t)+\Pi(t) \, d B(S_{\alpha}(t)), \\ P(T) = S(T), \\ K(t) = R(t)+E^{T}(t)P(t)E(t)>0, \quad \mathbb{P}\mbox{-a.s.}, \forall t\in[0,T]. \end{array} \right . $$
(29)

Theorem 4.1

If the stochastic Riccati equation (29) admits a solution, then the stochastic LQ problem (23)-(24) is well-posed.

Proof

By assumption, \((P, \Pi)\) satisfies the Riccati equation (29) with \(K=R+E^{T}PE >0\). Then (28) reduces to

$$\begin{aligned}& J\bigl(x(t), u(t)\bigr) \\& \quad =\frac{1}{2}E\int^{T}_{s} \bigl(u+K^{-1}Lx\bigr)^{T}K\bigl(u+K^{-1}Lx\bigr)\, dS_{\alpha }(t)+\frac{1}{2}x^{T}(s)P(s)x(s) \\& \quad \geq\frac{1}{2}x^{T}(s)P(s)x(s) > -\infty,\quad \mathbb{P} \mbox{-a.s.} \end{aligned}$$

Therefore, the stochastic LQ problem is well-posed. □

Remark 4.1

We see that if the Riccati equation (29) admits a solution \((P, \Pi)\), then the optimal feedback control would be

$$\begin{aligned} u(t)&=-K^{-1}(t)L(t)x(t) \\ &=- \bigl(R(t)+E^{T}(t)P(t)E(t) \bigr)^{-1} \bigl(C^{T}(t)P(t)+E^{T}(t)P(t)D(t)+E^{T}(t)\Pi(t) \bigr)x(t). \end{aligned}$$
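When all the coefficients in (23)-(24) are deterministic, it is natural to look for a deterministic P, so that \(\Pi\equiv0\) and the \(dS_{\alpha}\)-part of (29) becomes a backward ordinary differential equation in the operational time τ. The following Python sketch is an illustration of this reduction in the scalar case, with parameter values chosen only for demonstration; it integrates the reduced equation by a backward Euler sweep and returns the feedback gain \(-K^{-1}L\) of Remark 4.1.

```python
import numpy as np

def riccati_feedback(A, C, D, E, Q, R, ST, tau_max, n):
    """Backward Euler sweep, in operational time tau, for the scalar
    reduction of (29) with Pi = 0:
        dP/dtau = -(2*A*P + D*P*D + Q - L**2 / K),
        K = R + E*P*E,  L = C*P + E*P*D,  P(tau_max) = ST.
    Returns P on the tau-grid and the feedback gain -L/K."""
    dtau = tau_max / n
    P = np.empty(n + 1)
    P[-1] = ST                              # terminal condition P(T) = S(T)
    for j in range(n, 0, -1):
        K = R + E * P[j] * E
        L = C * P[j] + E * P[j] * D
        dP = -(2.0 * A * P[j] + D * P[j] * D + Q - L * L / K)
        P[j - 1] = P[j] - dP * dtau         # step backward in tau
    gain = -(C * P + E * P * D) / (R + E * P * E)
    return P, gain

P, gain = riccati_feedback(A=-1.0, C=1.0, D=0.2, E=0.5,
                           Q=1.0, R=1.0, ST=1.0, tau_max=5.0, n=1000)
# u = gain * x is the candidate optimal feedback along the tau-grid.
```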

5 Conclusion

In this paper, we have presented results on controlled fractional Fokker-Planck equations. The well-posedness of the system was proved by Picard iteration, and estimates of the solution of the controlled system were given. Because some terms involve α-stable processes, we used fractional calculus to handle them. Necessary and sufficient conditions of Pontryagin type for optimal controls were proved. As an application, an LQ problem was solved.

References

  1. Barkai, E, Metzler, R, Klafter, J: From continuous time random walks to the fractional Fokker-Planck equation. Phys. Rev. E 61, 132-138 (2000)


  2. Shlesinger, MF, Zaslawsky, GM, Klafter, J: Strange kinetics. Nature 363, 31-37 (1993)


  3. Magdziarz, M: Stochastic representation of subdiffusion processes with time-dependent drift. Stoch. Process. Appl. 119, 3238-3252 (2009)


  4. Lv, L, Qiu, W, Ren, F: Fractional Fokker-Planck equation with space and time dependent drift and diffusion. J. Stat. Phys. 149, 619-628 (2012)


  5. Bismut, JM: Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 44, 384-404 (1973)


  6. Bismut, JM: An introductory approach to duality in optimal stochastic control. SIAM Rev. 20, 62-78 (1978)


  7. Bismut, JM: Mécanique Aléatoire. Lecture Notes in Mathematics, vol. 866. Springer, Berlin (1981)


  8. Peng, S: A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28, 966-979 (1990)


  9. Chen, SP, Li, XJ, Zhou, XY: Stochastic linear quadratic regulators with indefinite control weight costs. SIAM J. Control Optim. 36, 1685-1702 (1998)


  10. Fleming, WH, Soner, HM: Controlled Markov Processes and Viscosity Solutions. Springer, New York (2006)


  11. Yong, J, Zhou, X: Stochastic Control: Hamiltonian Systems and HJB Equations. Springer, New York (1999)


  12. Spohn, H: Surface dynamics below the roughening transition. J. Phys. 3, 69-81 (1993)


  13. Solomon, TH, Weeks, ER, Swinney, HL: Observation of anomalous diffusion and Lévy flights in a two-dimensional rotating flow. Phys. Rev. Lett. 71, 3975-3978 (1993)


  14. Stephenson, J: Some non-linear diffusion equations and fractal diffusion. Physica A 222, 234-247 (1995)


  15. Bouchaud, JP, Ott, A, Langevin, D, Urbach, W: Anomalous diffusion in elongated micelles and its Lévy flight interpretation. J. Phys. II 1, 1465-1482 (1991)


  16. Plerou, V, Gopikrishnan, P, Nunes Amaral, LA, Gabaix, X, Stanley, HE: Economic fluctuations and anomalous diffusion. Phys. Rev. E 62, 3023-3026 (2000)


  17. Zhang, YT, Chen, F: Stochastic stability of fractional Fokker-Planck equation. Physica A 410, 35-42 (2014)



Acknowledgements

The author expresses sincere thanks to Professor Yong Li for his comments and suggestions.

Author information

Correspondence to Qiuxi Wang.

Additional information

Competing interests

The author declares that they have no competing interests.

Rights and permissions

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.


Cite this article

Wang, Q. Maximum principle for controlled fractional Fokker-Planck equations. Adv Differ Equ 2015, 45 (2015). https://doi.org/10.1186/s13662-015-0382-1
