Open Access

Lagrange optimal controls and time optimal controls for composite fractional relaxation systems

Advances in Difference Equations 2017, 2017:233

Received: 12 May 2017

Accepted: 29 July 2017

Published: 11 August 2017


By means of resolvent theory and Schauder’s fixed point theorem, existence results for semilinear composite fractional relaxation systems are obtained. Then a new approach of constructing minimizing sequences twice is used to derive the optimal pairs without Lipschitz assumptions on the nonlinear functions and nonlocal items. Moreover, the reflexivity of the state space X is not required, thanks to a full use of compactness methods. Our results essentially improve and generalize recent results on optimal controls in the literature.


fractional composite relaxation systems; resolvent operators; optimal controls; time optimal controls; mild solutions


49J15 47A10 34K37

1 Introduction

Let X be a Banach space and Y a separable reflexive Banach space. This paper is devoted to investigating the Lagrange optimal controls and the time optimal controls subject to the following semilinear composite fractional relaxation system:
$$ \textstyle\begin{cases} y'(t) = A {}^{C} D^{\gamma }y(t)-y(t)+f(t,y(t))+B(t)u(t), &t\in [0,c], \\ y(0)+m(y)=y_{0}\in X, \\ u\in U_{ad}, \end{cases} $$
where \(0<\gamma <1\), \(y(t)\in X\) and \(u(t)\in Y\). The operator \(A:D(A)\subseteq X \rightarrow X\) generates a resolvent family \(\{\mathcal{R}_{1-\gamma }(t) \}_{t\geq 0}\), and \(B\in L^{\infty }([0,c], \mathcal{L}(Y, X))\). The admissible control set \(U_{ad}\) and the functions \(f:[0,c]\times X \rightarrow X\), \(m:C([0,c], X)\rightarrow X\) will be specified later.

Fractional differential equations play a critical role in many fields, such as physics, engineering and chemistry, in which they serve as a tool for modeling many phenomena, and so they have attracted more and more researchers. We refer the readers to the monographs [1–3], the recent papers [4–10] and the references therein. Our motivation for studying the fractional relaxation system comes from the existing work [11], in which system (1.1) without the control item and with A a positive constant is discussed. In particular, when \(\gamma =1/2\), the system becomes the classical Basset problem, which concerns the unsteady motion of a particle accelerating in a viscous fluid under gravity. Numerical analysis of the fractional relaxation system is carried out in [12–15].

In recent decades, although there have been many works on optimal control problems governed by fractional differential systems in infinite-dimensional Banach spaces, it is well recognized that most of the existing results on optimal controls are obtained under the following two conditions. One is that the mild solution of the corresponding system exists uniquely, which requires the Lipschitz continuity of the nonlinear functions and nonlocal items. The other is that both of the spaces X and Y are reflexive in time optimal problems. We refer the readers to the recent papers [12, 16–24] and the references therein.

It is our intention to deal with the solvability of system (1.1) by using the properties of resolvent developed by Fan [25]. Meanwhile, Lagrange optimal control problems and time optimal control problems governed by system (1.1) are studied. Two contributions are made here. One is that we remove the Lipschitz continuity of nonlinear function f and nonlocal item m without imposing any other conditions. Inspired by Zhu and Huang [26], we derive the optimal pairs by utilizing the new technique of setting up minimizing sequences twice. The other is that we make full use of the compactness of resolvent to compensate the lack of reflexivity of X, and the conclusion about time optimal controls also holds here. Hence, our results essentially improve the related results on this topic.

The present paper is organized as follows. The basic definitions and assumptions used throughout the paper are presented in Section 2. We establish the solvability of system (1.1) in Section 3. Lagrange optimal control problems subject to system (1.1) are investigated in Section 4. Time optimal control problems governed by (1.1) are studied in Section 5. We illustrate our results with an example in Section 6.

2 Preliminaries and basic assumptions

In this paper, let \(c>0\) be fixed and \(0<\gamma <1\). \(\mathbb{R}\) and \(\mathbb{R}^{+}\) are the sets of real numbers and nonnegative real numbers, respectively. The set of all continuous functions from \([0,c]\) to the Banach space X with \(\Vert y\Vert =\sup \{\Vert y(t)\Vert , t\in [0,c]\}\) is denoted by \(C([0,c],X)\), and the set of all Bochner integrable functions from \([0,c]\) to the Banach space X with \(\Vert f\Vert _{L^{p}} =(\int_{0}^{c}\Vert f(t)\Vert ^{p}\,\mathrm{d}t)^{1/p}\) is denoted by \(L^{p} ([0,c], X)\), where \(1\leq p< \infty \). Let \(L^{\infty } ([0,c], X)\) be the set of all essentially bounded functions on \([0,c]\) with values in X and \(\Vert f\Vert _{\infty }=\operatorname{esssup}\{\Vert f(t)\Vert , t \in [0,c]\}\), and let \(\mathcal{L}(X, Y)\) be the space of all linear and continuous operators from X to Y with the operator norm \(\Vert \cdot \Vert \); \(\mathcal{L}(X)\) denotes the space \(\mathcal{L}(X, X)\).

Let \(f: [0, \infty)\rightarrow X\) be an appropriate abstract function. The Caputo fractional order derivative is defined by
$${}^{C}D^{\gamma } f(t)= \int_{0}^{t} g_{1-\gamma }(t-\tau)f'(\tau) \,\mathrm{d}\tau, $$
provided the right-hand side exists, where \(g_{\gamma }(t):=\frac{t ^{\gamma -1}}{\Gamma (\gamma)}\), \(t>0\).
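Numerically, the Caputo derivative above is often approximated by the classical L1 scheme, which discretizes the weakly singular kernel \(g_{1-\gamma }\) on a uniform grid. The following Python sketch is only an illustration and not part of the paper; for \(f(t)=t\) it can be checked against the exact value \({}^{C}D^{\gamma }t=t^{1-\gamma }/\Gamma (2-\gamma)\).

```python
import math

def caputo_l1(f_vals, h, gamma):
    """Approximate the Caputo derivative {}^C D^gamma f on a uniform grid
    via the classical L1 scheme, a quadrature of the defining integral with
    kernel g_{1-gamma}(t - tau) = (t - tau)^{-gamma} / Gamma(1 - gamma).
    f_vals: samples f(t_k), t_k = k*h, k = 0..n.  Returns values at t_1..t_n."""
    n = len(f_vals) - 1
    c = h ** (-gamma) / math.gamma(2.0 - gamma)
    out = []
    for m in range(1, n + 1):
        s = 0.0
        for k in range(m):
            w = (k + 1) ** (1 - gamma) - k ** (1 - gamma)  # L1 weights
            s += w * (f_vals[m - k] - f_vals[m - k - 1])
        out.append(c * s)
    return out

# Check against the exact value {}^C D^gamma t = t^{1-gamma} / Gamma(2-gamma).
gamma, h, n = 0.5, 0.01, 100
approx = caputo_l1([k * h for k in range(n + 1)], h, gamma)
exact = [(k * h) ** (1 - gamma) / math.gamma(2.0 - gamma) for k in range(1, n + 1)]
err = max(abs(a - e) for a, e in zip(approx, exact))
```

Since the L1 weights telescope for linear f, the scheme reproduces this particular example up to rounding error.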

Definition 2.1

An operator family \(\{\mathcal{R}_{\gamma }(t)\}_{t\geq 0}\subset \mathcal{L}(X)\) is called a resolvent generated by a closed and linear operator A with the domain \(D(A)\) on X if it fulfills:
  (1)

    \(\mathcal{R}_{\gamma }(\cdot)y\in C([0, \infty), X)\) for \(y\in X\), and \(\mathcal{R}_{\gamma }(0)=I\);

  (2)

    \(\mathcal{R}_{\gamma }(t)D(A)\subset D(A)\), and for all \(y\in D(A)\) and \(t\geq 0\), there holds that \(A\mathcal{R}_{\gamma }(t)y= \mathcal{R}_{\gamma }(t)Ay\);

  (3)
    for all \(y\in D(A)\), \(t\geq 0\), the following resolvent equation holds:
    $$\mathcal{R}_{\gamma }(t)y=y+ \int_{0}^{t}g_{\gamma }(t-\tau)A \mathcal{R}_{\gamma }(\tau)y\,\mathrm{d}\tau. $$
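For a scalar operator \(A=a\), the resolvent reduces to the one-parameter Mittag-Leffler function, \(\mathcal{R}_{\gamma }(t)=E_{\gamma }(at^{\gamma })\), and the resolvent equation in item (3) can be verified term by term. The following Python check is a sketch under this scalar assumption, not part of the paper; it uses the identity \(\int_{0}^{t} g_{\gamma }(t-s)\,s^{b}/\Gamma (b+1)\,\mathrm{d}s = t^{b+\gamma }/\Gamma (b+\gamma +1)\).

```python
import math

def ml(gamma, z, terms=60):
    """One-parameter Mittag-Leffler function E_gamma(z), truncated series.
    For a scalar A = a, R_gamma(t) = E_gamma(a t^gamma) is the resolvent."""
    return sum(z ** k / math.gamma(gamma * k + 1) for k in range(terms))

def resolvent_rhs(gamma, a, t, terms=60):
    """Right-hand side of the resolvent equation for A = a:
    1 + a * (g_gamma * R)(t), computed term by term with the beta-integral
    identity int_0^t g_gamma(t-s) s^{gk}/Gamma(gk+1) ds = t^{gk+g}/Gamma(gk+g+1)."""
    conv = sum(a ** k * t ** (gamma * k + gamma) / math.gamma(gamma * k + gamma + 1)
               for k in range(terms))
    return 1.0 + a * conv

gamma, a, t = 0.5, -1.3, 0.7
lhs = ml(gamma, a * t ** gamma)   # R_gamma(t)
rhs = resolvent_rhs(gamma, a, t)  # 1 + a * (g_gamma * R_gamma)(t)
```

Both sides agree up to the (negligible) last truncated series term.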

By virtue of [27], we can infer that the resolvent equation is satisfied for all \(y\in X\). Now, the mild solution of system (1.1) will be given below using the definition of resolvent and the Laplace transform; the details can be seen in [13].

Definition 2.2

A function \(y\in C([0,c],X)\) is said to be a mild solution of (1.1) if
$$\begin{aligned} y(t)=y_{0}-m(y)- \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau)y( \tau) \,\mathrm{d}\tau + \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau, y(\tau)\bigr)+B(\tau) u(\tau)\bigr]\,\mathrm{d}\tau \end{aligned}$$
for each \(t\in [0,c]\) and \(u\in U_{ad}\).
Let \(r>0\) be a real number, and \(W_{r}=\{y\in C([0,c], X): \Vert y\Vert \leq r\}\). The following assumptions will be used throughout the paper.
  (HA)

    The resolvent \(\{\mathcal{R}_{1-\gamma }(t)\}_{t>0}\) generated by A is compact and uniformly continuous, and set \(M=\sup_{t\in [0,c]}\Vert \mathcal{R}_{1-\gamma }(t)\Vert \).

  (Hf)
    (1)

      \(f(\cdot, y)\) is measurable for all \(y\in X\), and \(f(t,\cdot)\) is continuous for a.e. \(t\in [0,c]\).

    (2)
      there exists a function \(\eta \in L^{1}([0,c], \mathbb{R}^{+})\) with \(\Vert \eta \Vert _{L^{1}([0,c])}<\frac{1}{M}\) such that
      $$\begin{aligned} \bigl\Vert f(t,y)-y\bigr\Vert \leq \eta (t) \bigl(1+\Vert y \Vert \bigr) \end{aligned}$$
      for all \(y\in X\).
  (Hm)

    m is a continuous and compact operator on \(C([0,c], X)\) with \(\Vert m(y)\Vert \leq N\) for all \(y\in C([0,c],X)\) and some \(N>0\).

  (HB)

    \(B\in L^{\infty }([0,c], \mathcal{L}(Y, X))\), and \(\Vert B\Vert _{ \infty }= M_{B}\).

    The admissible control set is defined by
    $$U_{ad}:=S_{U}^{p}=\bigl\{ u\in L^{p} \bigl([0,c], Y\bigr): u(t)\in U(t), \mbox{a.e. }t\in [0,c] \bigr\} , $$
    where \(1< p<\infty \), and the multivalued map U satisfies the condition (HU).
  (HU)
    \(U:[0,c]\rightarrow P_{lv}(Y)\) (the set of all nonempty, closed and convex subsets of Y) satisfies:
    (1)

      \(U(\cdot)\) is graph measurable;

    (2)

      \(U([0,c])=\{\nu \in Y: \nu \in U(t), t\in [0,c]\}\subseteq \Lambda \) for some bounded subset Λ of Y; that is, \(\Vert U([0,c])\Vert =\sup \{\Vert \nu \Vert : \nu \in U(t), t\in [0,c]\}\leq \tilde{M}\) for some \(\tilde{M}>0\).


In view of [28], we can infer that (HU) implies that \(U_{ad}\neq \emptyset \); clearly, \(U_{ad}\) is bounded, closed and convex, and \(Bu\in L^{p}([0,c], X)\) for each \(u\in U_{ad}\).

Remark 2.3

Let \(0<\gamma <1\). In view of Fan [25], Lemma 3.8, an analytic resolvent of analyticity type \((\omega_{0},\theta_{0})\) is uniformly continuous for all \(t>0\).

The following property of resolvent plays an important role in this paper.

Lemma 2.4


Let \(0<\gamma <1\). If (HA) is satisfied, then for each \(t>0\) there holds
  (1)

    \(\lim_{h\rightarrow 0^{+}}\Vert \mathcal{R}_{1-\gamma }(t+h)-\mathcal{R}_{1-\gamma }(t)\mathcal{R}_{1-\gamma }(h)\Vert =0\);

  (2)

    \(\lim_{h\rightarrow 0^{+}}\Vert \mathcal{R}_{1-\gamma }(t)-\mathcal{R}_{1-\gamma }(h)\mathcal{R}_{1-\gamma }(t-h)\Vert =0\).


3 The solvability of system (1.1)

In this section, by making use of the compactness and uniform continuity of the resolvent, the mild solutions of system (1.1) are obtained without the Lipschitz continuity of f and m. More importantly, no additional conditions on f and m are imposed.

Theorem 3.1

Let all the assumptions listed in Section  2 be fulfilled. Then, for every \(u\in U_{ad}\), system (1.1) possesses at least one mild solution.


Proof

For each \(y_{0}\in X\) and \(u\in U_{ad}\), we define the solution operator \(G:C([0,c], X)\rightarrow C([0,c], X)\) as
$$G y(t)=y_{0}-m(y)- \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau)y( \tau)\,\mathrm{d}\tau + \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl( \tau, y(\tau)\bigr)+B(\tau) u(\tau)\bigr]\,\mathrm{d}\tau. $$
It suffices to establish the existence of a fixed point of G. The proof will proceed in the following steps.
Step 1. We show that \(G(W_{r})\subseteq W_{r}\) with \(r>\frac{\Vert y_{0}\Vert +N+M\Vert \eta \Vert _{L^{1}([0,c]) }+ MM_{B}\Vert u\Vert _{L^{1}([0,c])}}{1-M\Vert \eta \Vert _{L^{1}([0,c])}}\). In fact, for every \(y\in W_{r}\), we have
$$\begin{aligned} \bigl\Vert Gy(t)\bigr\Vert \leq & \Vert y_{0}\Vert +N+M \Vert \eta \Vert _{L^{1}([0,c])}(1+r)+MM_{B}\Vert u\Vert _{L^{1}([0,c])} \\ \leq & r. \end{aligned}$$
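The choice of r works precisely because (Hf) assumes \(M\Vert \eta \Vert _{L^{1}([0,c])}<1\); rearranging the estimate above makes this explicit:

```latex
\|y_0\| + N + M\|\eta\|_{L^1([0,c])}(1+r) + M M_B \|u\|_{L^1([0,c])} \le r
\;\Longleftrightarrow\;
r\bigl(1 - M\|\eta\|_{L^1([0,c])}\bigr)
  \ge \|y_0\| + N + M\|\eta\|_{L^1([0,c])} + M M_B \|u\|_{L^1([0,c])},
```

which is exactly the stated lower bound on r.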
Step 2. We need to verify the continuity of G on \(C([0,c],X)\). Let \(\{y_{n}\}_{n=1}^{\infty }\subseteq C([0,c],X)\) be a sequence such that \(\lim_{n\rightarrow \infty }y_{n}=y\). Then
$$\begin{aligned} \bigl\Vert G y_{n}(t)-G y(t)\bigr\Vert \leq &\bigl\Vert m(y_{n})-m(y)\bigr\Vert \\ &{}+ M \int_{0}^{t}\bigl\Vert y_{n}(\tau)-y( \tau)\bigr\Vert \,\mathrm{d}\tau +M \int_{0} ^{t}\bigl\Vert f\bigl(\tau,y_{n}(\tau)\bigr)-f\bigl(\tau,y(\tau)\bigr)\bigr\Vert \,\mathrm{d} \tau. \end{aligned}$$
It follows from (Hf), (Hm) and the dominated convergence theorem that
$$\Vert Gy_{n}-Gy\Vert \rightarrow 0 $$
as \(n\rightarrow \infty \). This means that G is continuous.
Step 3. We establish the compactness of G. First, we show the equicontinuity of the set \(\Sigma =\{Gy: y\in W_{r}\}\) in \(C([0,c], X)\). For any \(0\leq t_{1}< t_{2}\leq c\), \(y\in W_{r}\) and \(\delta >0\) small enough, we have
$$\begin{aligned}& \bigl\Vert G y(t_{2})-Gy(t_{1})\bigr\Vert \\& \quad \leq \biggl\Vert \int_{0}^{t_{2}}\mathcal{R}_{1-\gamma }(t_{2}-\tau)\bigl[f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr]\,\mathrm{d}\tau - \int_{0}^{t_{1}}\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigl[f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr]\,\mathrm{d}\tau \biggr\Vert \\& \qquad {} +\biggl\Vert \int_{0}^{t_{2}}\mathcal{R}_{1-\gamma }(t_{2}-\tau)B(\tau)u(\tau)\,\mathrm{d}\tau - \int_{0}^{t_{1}} \mathcal{R}_{1-\gamma }(t_{1}-\tau)B(\tau) u(\tau)\,\mathrm{d}\tau \biggr\Vert \\& \quad \leq \int_{0}^{t_{1}-\delta }\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)-\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigr\Vert \bigl\Vert f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {}+ \int_{t_{1}-\delta }^{t_{1}}\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)-\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigr\Vert \bigl\Vert f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {} + \int_{t_{1}}^{t_{2}}\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau) \bigl(f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr)\bigr\Vert \,\mathrm{d}\tau + \int_{t_{1}}^{t_{2}}\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)B(\tau)u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {} + \int_{0}^{t_{1}-\delta }\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)-\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigr\Vert \bigl\Vert B(\tau)u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {} + \int_{t_{1}-\delta }^{t_{1}}\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)-\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigr\Vert \bigl\Vert B(\tau) u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \quad \leq \bigl((1+r)\Vert \eta \Vert _{L^{1}([0,c])}+M_{B}\Vert u\Vert _{L^{1}([0,c])} \bigr) \sup_{0\leq \tau \leq t_{1}-\delta }\bigl\Vert \mathcal{R}_{1-\gamma }(t_{2}-\tau)-\mathcal{R}_{1-\gamma }(t_{1}-\tau)\bigr\Vert \\& \qquad {} +2M \bigl((1+r)\Vert \eta \Vert _{L^{1}([t_{1}-\delta, t_{1}])}+M_{B}\Vert u \Vert _{L^{1}([t_{1}-\delta, t_{1}])} \bigr) \\& \qquad {} +M \bigl((1+r)\Vert \eta \Vert _{L^{1}([t_{1}, t_{2}])}+M_{B}\Vert u \Vert _{L^{1}([t_{1}, t_{2}])} \bigr). \end{aligned}$$
In view of the arbitrariness of δ and the uniform continuity of \(\mathcal{R}_{1-\gamma }(t), t>0\), one gets
$$\lim_{t_{1}\rightarrow t_{2}}\bigl\Vert G y(t_{2})-G y(t_{1})\bigr\Vert =0, $$
uniformly for \(y\in W_{r}\).
Next, we verify the precompactness of the set \(\Sigma (t)=\{Gy(t): y \in W_{r}\}\) in X. Obviously, the set \(\Sigma (0)=\{y_{0}-m(y): y \in W_{r}\}\) is relatively compact in X due to the compactness of m. In the case of \(t\in (0,c]\), for each \(0<\varepsilon < \frac{t}{2}\) small enough, we define the set \(\Sigma^{\varepsilon }(t)\) in X as follows:
$$\Sigma^{\varepsilon }(t)=\bigl\{ G^{\varepsilon }y(t): y\in W_{r}\bigr\} , $$
$$\begin{aligned} G^{\varepsilon }y(t) =&y_{0}-m(y)-\mathcal{R}_{1-\gamma }( \varepsilon) \int_{0}^{t-\varepsilon }\mathcal{R}_{1-\gamma } (t- \varepsilon - \tau)y(\tau)\,\mathrm{d}\tau \\ &{}+ \mathcal{R}_{1-\gamma }(\varepsilon) \int_{0}^{t-\varepsilon } \mathcal{R}_{1-\gamma }(t- \varepsilon -\tau)\bigl[f\bigl(\tau,y(\tau)\bigr)+B( \tau) u(\tau)\bigr] \,\mathrm{d}\tau. \end{aligned}$$
One can deduce that \(\Sigma^{\varepsilon }(t)\) is relatively compact in X by virtue of the compactness of resolvent \(\mathcal{R}_{1-\gamma }(\varepsilon)\) and the nonlocal item m. Moreover, for each \(y\in W_{r}\), we have
$$\begin{aligned}& \bigl\Vert G y(t)-G^{\varepsilon }y(t)\bigr\Vert \\& \quad \leq \biggl\Vert \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau,y( \tau)\bigr)-y(\tau)\bigr]\,\mathrm{d}\tau \\& \qquad {}- \mathcal{R}_{1-\gamma }(\varepsilon) \int_{0}^{t-\varepsilon } \mathcal{R}_{1-\gamma }(t- \varepsilon -\tau) \bigl[f\bigl(\tau,y(\tau)\bigr)-y( \tau)\bigr]\,\mathrm{d} \tau \biggr\Vert \\& \qquad {} +\biggl\Vert \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau)B( \tau) u(\tau) \,\mathrm{d}\tau -\mathcal{R}_{1-\gamma }(\varepsilon) \int_{0}^{t- \varepsilon }\mathcal{R}_{1-\gamma }(t- \varepsilon -\tau) B(\tau) u( \tau)\,\mathrm{d}\tau \biggr\Vert \\& \quad \leq \int_{0}^{t-2\varepsilon }\bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau)- \mathcal{R}_{1-\gamma }(\varepsilon) \mathcal{R}_{1-\gamma }(t- \varepsilon -\tau)\bigr\Vert \bigl\Vert f\bigl(\tau,y(\tau)\bigr)-y(\tau) \bigr\Vert \,\mathrm{d}\tau \\& \qquad {}+ \int_{t-2\varepsilon }^{t-\varepsilon }\bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau)-\mathcal{R}_{1-\gamma }(\varepsilon) \mathcal{R}_{1-\gamma }(t- \varepsilon -\tau)\bigr\Vert \bigl\Vert f\bigl(\tau,y(\tau)\bigr)-y(\tau) \bigr\Vert \,\mathrm{d}\tau \\& \qquad {} + \int_{t-\varepsilon }^{t}\bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau) \bigl(f\bigl(\tau,y(\tau)\bigr)-y(\tau)\bigr)\bigr\Vert \,\mathrm{d}\tau + \int_{t-\varepsilon }^{t} \bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau)B(\tau) u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {} + \int_{0}^{t-2\varepsilon }\bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau)- \mathcal{R}_{1-\gamma }(\varepsilon) \mathcal{R}_{1-\gamma }(t- \varepsilon -\tau)\bigr\Vert \bigl\Vert B(\tau)u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \qquad {}+ \int_{t-2\varepsilon }^{t-\varepsilon }\bigl\Vert \mathcal{R}_{1-\gamma }(t- \tau) -\mathcal{R}_{1-\gamma }(\varepsilon)\mathcal{R}_{1-\gamma }(t- \varepsilon -\tau)\bigr\Vert \bigl\Vert B(\tau) u(\tau)\bigr\Vert \,\mathrm{d}\tau \\& \quad \leq \bigl(\Vert \eta \Vert _{L^{1}([0,c])}(1+r)+M_{B}\Vert 
u\Vert _{L^{1}([0,c])}\bigr) \\& \qquad {}\times \sup_{0\leq \tau \leq t-2\varepsilon }\bigl\Vert \mathcal{R}_{1-\gamma }(t-\tau)-\mathcal{R}_{1-\gamma }(\varepsilon) \mathcal{R}_{1- \gamma }(t-\varepsilon -\tau)\bigr\Vert \\& \qquad {} +\bigl(M+M^{2}\bigr) \bigl((1+r)\Vert \eta \Vert _{L^{1}([t-2\varepsilon, t-\varepsilon ])}+M_{B}\Vert u\Vert _{L^{1}([t-2\varepsilon, t-\varepsilon ])} \bigr) \\& \qquad {} +M \bigl((1+r)\Vert \eta \Vert _{L^{1}([t-\varepsilon, t])}+M_{B}\Vert u \Vert _{L ^{1}([t-\varepsilon, t])} \bigr). \end{aligned}$$
Exploiting Lemma 2.4 yields \(\Vert Gy(t)-G^{\varepsilon }y(t)\Vert \rightarrow 0\) as \(\varepsilon \rightarrow 0\), that is, the set \(\Sigma^{\varepsilon }(t)\) is arbitrarily close to the set \(\Sigma (t)\) for \(t>0\). This together with the precompactness of \(\Sigma^{\varepsilon }(t)\) gives rise to the precompactness of \(\Sigma (t)\) in X. Thanks to the Arzela-Ascoli theorem, one has that the operator G is compact.

Now, applying Schauder’s fixed point theorem, one gets that G possesses at least one fixed point in \(W_{r}\), which is the mild solution of system (1.1). This completes the proof. □

Remark 3.2

The properties of resolvent (Lemma 2.4) are of great importance in the process of deriving the compactness of the solution operator G, through which the methods used in the sense of integer order differential equations can be successfully applied here.

Remark 3.3

In view of Theorem 3.1, for each \(u\in U_{ad}\), if \(y\in C([0,c],X)\) is a corresponding mild solution of system (1.1), then one has
$$\bigl\Vert y(t)\bigr\Vert \leq \Vert y_{0}\Vert +N+M\Vert \eta \Vert _{L^{1}([0,c])}+M \int_{0}^{t} \eta (\tau)\bigl\Vert y(\tau)\bigr\Vert \,\mathrm{d}\tau +MM_{B}\tilde{M}c. $$
Using Gronwall’s lemma, one can obtain that \(\Vert y(t)\Vert \leq R\), where
$$R=\bigl(\Vert y_{0}\Vert +N+M\Vert \eta \Vert _{L^{1}([0,c])}+MM_{B}\tilde{M}c\bigr)e^{M\Vert \eta \Vert _{L^{1}([0,c])}}, $$
which is independent of u.
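For the record, the Gronwall step in the remark above can be written out; \(C_{0}\) is shorthand introduced here, not notation from the paper:

```latex
% Shorthand: C_0 := \|y_0\| + N + M\|\eta\|_{L^1([0,c])} + M M_B \tilde{M} c
\|y(t)\| \le C_0 + M\int_0^t \eta(\tau)\,\|y(\tau)\|\,\mathrm{d}\tau
\;\Longrightarrow\;
\|y(t)\| \le C_0 \exp\!\Bigl(M\int_0^t \eta(\tau)\,\mathrm{d}\tau\Bigr)
        \le C_0\, e^{M\|\eta\|_{L^1([0,c])}} = R .
```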

Remark 3.4

For convenience, we denote
$$\begin{aligned}& S(u)=\bigl\{ y\in W_{R}: y\text{ is a mild solution of (1.1) corresponding to the control }u\in U_{ad}\bigr\} , \\& \mathcal{A}_{d}=\bigl\{ (y, u): u\in U_{ad}, y\in S(u)\bigr\} . \end{aligned}$$

4 Lagrange optimal control problems subjected to system (1.1)

In this section, the idea of constructing the minimizing sequences twice will be used to solve the following Lagrange optimal control problem \((P_{1})\):
$$\inf_{(y, u)\in \mathcal{A}_{d}} \mathcal{J}(y, u), $$
where the cost function \(\mathcal{J}(y, u)=\int_{0}^{c} L(t, y(t), u(t)) \,\mathrm{d}t\), and the cost integrand \(L: [0,c]\times X\times Y \rightarrow \mathbb{R}\cup \{+\infty \}\) satisfies the condition (HL).
  (HL)
    (1)

      \((t, y, u)\rightarrow L(t, y, u)\) is measurable;

    (2)

      \(L(t, \cdot, \cdot)\) is sequentially lower semi-continuous on \(X\times Y\) for a.e. \(t\in [0,c]\);

    (3)

      \(L(t, y, \cdot)\) is convex on Y for each \(y\in X\) and a.e. \(t\in [0,c]\);

    (4)

      \(L(t, y, u)\geq \phi (t)+a(\Vert y\Vert ^{p}+\Vert u\Vert ^{p})\) for some \(\phi \in L^{p}([0,c], \mathbb{R})\), \(a\geq 0\) and a.e. \(t\in [0,c]\).


The following lemma will be used in the proof of our main results.

Lemma 4.1

If (HA) holds, then the operator \(F:L^{p}([0,c], X)\rightarrow C([0,c], X)\) given by
$$\begin{aligned} (Fl) (\cdot)= \int_{0}^{\cdot }\mathcal{R}_{1-\gamma }(\cdot -\tau)l( \tau)\,\mathrm{d}\tau \end{aligned}$$
is compact for \(1< p<\infty \). Moreover, the condition \(l_{n}\rightharpoonup l\) in \(L^{p}([0,c], X)\) as \(n\rightarrow \infty \) leads to the fact that \(Fl_{n}\rightarrow Fl\) in \(C([0,c], X)\) as \(n\rightarrow \infty \).


Proof

An argument similar to that in Theorem 3.1 gives the conclusion; it can also be found in [13], so we omit the details. □

Theorem 4.2

Under the conditions of Theorem 3.1 and (HL), the Lagrange optimal problem (\(P_{1}\)) admits at least one optimal pair, that is, there is a pair \((y_{*}, u_{*})\in \mathcal{A}_{d}\) such that
$$\mathcal{J}(y_{*}, u_{*})\leq \mathcal{J}(y, u) $$
holds for all \((y, u)\in \mathcal{A}_{d}\).


Proof

In view of Theorem 3.1, for each \(u\in U_{ad}\) there is at least one mild solution \(y\in W_{R}\) such that \((y, u)\in \mathcal{A}_{d}\), that is, \(S(u)\neq \emptyset \). For clarity, we proceed in the following two steps.

Step 1. For each \(u\in U_{ad}\), set
$$\begin{aligned} \mathcal{F}(u)=\inf_{y\in S(u)} \mathcal{J}(y, u). \end{aligned}$$
The fact \(S(u)\neq \emptyset \) implies that \(\mathcal{F}(u)\) is well defined. We aim to show that \(\mathcal{F}(u)=\mathcal{J}(\hat{y}, u)\) for some \(\hat{y}\in S(u)\).
It is trivial for the cases that \(\inf_{y\in S(u)} \mathcal{J}(y, u)=+\infty \) or that the set \(S(u)\) has finitely many elements. Otherwise, the assumption (HL) implies that \(\mathcal{F}(u)>-\infty \). Moreover, the definition of infimum gives
$$\begin{aligned} \lim_{n\rightarrow \infty }\mathcal{J}(y_{n}, u)= \mathcal{F}(u) \end{aligned}$$
for some \(\{y_{n}\}_{n=1}^{\infty }\subseteq S(u)\). Noting that \((y_{n}, u)\in \mathcal{A}_{d}\), one has
$$\begin{aligned} y_{n}(t) =&y_{0}-m(y_{n})- \int_{0}^{t}\mathcal{R}_{1-\gamma }(t- \tau)y_{n}(\tau)\,\mathrm{d}\tau \\ &{}+ \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau,y_{n}(\tau)\bigr)+B( \tau) u(\tau)\bigr] \,\mathrm{d}\tau \end{aligned}$$
for each \(n\geq 1\). By the compactness of \(\mathcal{R}_{1-\gamma }(t)\), \(t>0\), and of m, an argument similar to the one used in Theorem 3.1 yields the precompactness of \(\{y_{n}\}_{n=1}^{\infty }\). Then a subsequence can be extracted from \(\{y_{n}\}_{n=1}^{\infty }\), still denoted by it, such that
$$\begin{aligned} \lim_{n\rightarrow \infty } y_{n}=\hat{y} \end{aligned}$$
for some \(\hat{y}\in C([0,c], X)\). Now, letting \(n\rightarrow \infty \) on both sides of (4.3) and using the Lebesgue dominated convergence theorem yields
$$\begin{aligned} \hat{y}(t) =&y_{0}-m(\hat{y})- \int_{0}^{t}\mathcal{R}_{1-\gamma }(t- \tau) \hat{y}(\tau)\,\mathrm{d}\tau \\ & {}+ \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau,\hat{y}( \tau)\bigr)+B(\tau) u(\tau)\bigr]\,\mathrm{d}\tau \end{aligned}$$
due to the continuity of \(f(s, \cdot)\), \(s\in [0, t]\), and of \(m(\cdot)\). This means that \(\hat{y}\in S(u)\). We will show that ŷ satisfies \(\mathcal{F}(u)=\mathcal{J}(\hat{y}, u)\). It is worth mentioning that the assumption (HL) fulfills all the conditions of Balder ([29], Theorem 2.1). Hence, applying Balder’s theorem yields
$$\begin{aligned} \mathcal{F}(u) =&\lim_{n\rightarrow \infty }\mathcal{J}(y_{n}, u) \\ =& \lim_{n\rightarrow \infty } \int_{0}^{c} L\bigl(t, y_{n}(t), u(t) \bigr) \,\mathrm{d}t \\ \geq & \int_{0}^{c} L\bigl(t, \hat{y}(t), u(t)\bigr) \,\mathrm{d}t= \mathcal{J}( \hat{y}, u) \\ \geq & \mathcal{F}(u), \end{aligned}$$
which implies that \(\mathcal{F}(u)=\mathcal{J}(\hat{y}, u)\).
Step 2. We verify that
$$\begin{aligned} \mathcal{F}(u_{*})=\inf_{u\in U_{ad}} \mathcal{F}(u) \end{aligned}$$
for some \(u_{*}\in U_{ad}\). It is trivial for the case that \(\inf_{u\in U_{ad}} \mathcal{F}(u)=+\infty \). Otherwise, again from (HL), we have \(\inf_{u\in U_{ad}} \mathcal{F}(u)>- \infty \). Let \(\{u_{n}\}_{n=1}^{\infty }\subseteq U_{ad}\) be a minimizing sequence such that
$$\begin{aligned} \lim_{n\rightarrow \infty }\mathcal{F}(u_{n})= \mathcal{F}(u _{*}). \end{aligned}$$
Obviously, \(\{u_{n}\}_{n=1}^{\infty }\) is bounded in \(L^{p}([0,c], Y)\). The reflexivity of \(L^{p}([0,c], Y)\) gives rise to the fact that a subsequence of \(\{u_{n}\}_{n=1}^{\infty }\) can be extracted, still denoted by it, such that
$$\begin{aligned} u_{n}\rightharpoonup u_{*} \end{aligned}$$
as \(n\rightarrow \infty \) for some \(u_{*}\in L^{p}([0,c], Y)\). Moreover, by virtue of the convexity and closedness of \(U_{ad}\) and Mazur’s theorem, we have that \(u_{*}\in U_{ad}\). Now we only need to show that \(\mathcal{F}(u)\) attains its infimum at \(u_{*}\). To this end, according to Step 1, we may choose \(\hat{y}_{n}\in S(u_{n})\) such that
$$\mathcal{F}(u_{n})=\mathcal{J}(\hat{y}_{n}, u_{n}). $$
The fact that \(\hat{y}_{n}\in S(u_{n})\) implies
$$\begin{aligned} \hat{y}_{n}(t) =&y_{0}-m( \hat{y}_{n})- \int_{0}^{t}\mathcal{R}_{1- \gamma }(t-\tau) \hat{y}_{n}(\tau)\,\mathrm{d}\tau \\ & {}+ \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau,\hat{y}_{n}( \tau)\bigr)+B(\tau) u_{n}( \tau)\bigr]\,\mathrm{d}\tau \end{aligned}$$
for each \(n\geq 1\). Similar to the proof in Theorem 3.1, thanks to the compactness of \(\mathcal{R}_{1-\gamma }(t)\) and m, as well as the uniform boundedness of \(\{u_{n}\}_{n=1}^{\infty }\) in \(L^{1}([0,c], Y)\), one obtains the precompactness of \(\{\hat{y}_{n}\} _{n=1}^{\infty }\) in \(C([0,c],X)\). Then there is a subsequence of \(\{\hat{y}_{n}\}_{n=1}^{\infty }\), still denoted by it, such that
$$\begin{aligned} \hat{y}_{n}\rightarrow y_{*} \end{aligned}$$
as \(n\rightarrow \infty \) for some \(y_{*}\in C([0,c], X)\). On the other hand, with the help of Lemma 4.1, (4.8), and the fact that \(B(\cdot)u_{n}(\cdot)\in L^{p}([0,c], X)\), we obtain that
$$\begin{aligned} \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau)B( \tau) u_{n}(\tau) \,\mathrm{d}\tau \rightarrow \int_{0}^{t}\mathcal{R}_{1-\gamma }(t- \tau)B( \tau) u_{*}(\tau)\,\mathrm{d}\tau \end{aligned}$$
as \(n\rightarrow \infty \). Now, letting \(n\rightarrow \infty \) on both sides of (4.9) yields
$$\begin{aligned} y_{*}(t) =&y_{0}-m(y_{*})- \int_{0}^{t}\mathcal{R}_{1-\gamma }(t- \tau)y_{*}(\tau)\,\mathrm{d}\tau \\ & {}+ \int_{0}^{t}\mathcal{R}_{1-\gamma }(t-\tau) \bigl[f\bigl(\tau,y_{*}(\tau)\bigr)+B( \tau) u_{*}(\tau) \bigr]\,\mathrm{d}\tau. \end{aligned}$$
This means that \(y_{*}\in S(u_{*})\). Moreover, applying Balder’s theorem again gives
$$\begin{aligned} \mathcal{F}(u_{*}) =&\lim_{n\rightarrow \infty }\mathcal{F}(u _{n}) \\ =& \lim_{n\rightarrow \infty } \int_{0}^{c} L\bigl(t, \hat{y} _{n}(t), u_{n}(t)\bigr)\,\mathrm{d}t \\ \geq & \int_{0}^{c} L\bigl(t, y_{*}(t), u_{*}(t)\bigr)\,\mathrm{d}t=\mathcal{J}(y _{*}, u_{*}) \\ \geq & \mathcal{F}(u_{*}). \end{aligned}$$
Combining this with Step 1 yields
$$\mathcal{J}(y_{*}, u_{*})=\mathcal{F}(u_{*})= \inf_{u\in S_{U} ^{p}} \mathcal{F}(u) =\inf_{u\in S_{U}^{p}} \inf _{y \in S(u)} \mathcal{J}(y, u) =\inf_{(y, u)\in \mathcal{A}_{d}} \mathcal{J}(y, u), $$
that is, \((y_{*}, u_{*})\) is a feasible pair at which \(\mathcal{J}\) reaches a minimum. This ends the proof. □

Remark 4.3

The optimal pairs for the Lagrange problem (\(P_{1}\)) are obtained without the Lipschitz continuity of f and m. To compensate, the new idea of setting up minimizing sequences twice is used. Hence, our results generalize the recent results in [12, 16–21], in which Lipschitz assumptions on the nonlinear function and nonlocal item are needed.

5 Time optimal control problems governed by system (1.1)

In this segment, let W be a bounded, closed and convex subset of the Banach space X. Define the subsets as follows:
$$\begin{aligned}& \mathcal{A}_{d}^{W}=\bigl\{ (y, u)\in \mathcal{A}_{d}: y(t)\in W \mbox{ for some } t\in [0,c]\bigr\} ; \\& U_{0}=\bigl\{ u\in U_{ad}: (y, u)\in \mathcal{A}_{d}^{W} \mbox{ for some } y \in S(u)\bigr\} ; \\& S_{u}^{W}=\bigl\{ y\in S(u): u\in U_{0}, (y, u) \in \mathcal{A}_{d}^{W}\bigr\} . \end{aligned}$$
Suppose that \(\mathcal{A}_{d}^{W}\neq \emptyset \). Then, for any \((y, u)\in \mathcal{A}_{d}^{W}\), define the first time \(t_{(y, u)}\) such that \(y(t_{(y, u)})\in W\) as the transition time. Obviously, \(t_{(y, u)}\) is well defined owing to the fact that \(y(\cdot)\in C([0,c],X)\) as well as the closedness and convexity of W. The set W is called the target set.
Now, we ponder the following time optimal control problem (\(P_{2}\)):
$$\inf_{(y, u)\in \mathcal{A}_{d}^{W}} t_{(y, u)}. $$

Theorem 5.1

Let the hypotheses specified in Section  2 hold. Then system (1.1) possesses at least one feasible pair which solves the problem (\(P_{2}\)), that is, there is a pair \((y^{*}, u^{*})\in \mathcal{A}_{d}^{W}\) such that
$$t_{(y^{*}, u^{*})}\leq t_{(y, u)} $$
holds for all \((y, u)\in \mathcal{A}_{d}^{W}\).

Remark 5.2

The control \(u^{*}\), the time \(t_{(y^{*}, u^{*})}\) and the pair \((y^{*}, u^{*})\) in Theorem 5.1 are called the time optimal control, optimal time and time optimal pair, respectively.

Proof of Theorem 5.1

From Theorem 3.1, for each \(u\in U_{ad}\) there exists at least one y such that \((y,u)\in \mathcal{A}_{d}\). We will proceed in two steps to prove the theorem.

Step 1. For any fixed \(u\in U_{0}\), set \(t^{u}=\inf_{y\in S_{u}^{W}} t_{(y,u)}\). It is trivial for the case that the set \(S_{u}^{W}\) has finitely many elements. Otherwise, the definition of infimum gives that there is a monotone decreasing sequence \(\{t_{(y_{n},u)}\}_{n\geq 1}\) such that
$$\begin{aligned} \lim_{n\rightarrow \infty }t_{(y_{n},u)}=t^{u}, \end{aligned}$$
where \((y_{n}, u)\in \mathcal{A}_{d}^{W}\) for each \(n\geq 1\). Moreover, a similar manner utilized in Step 1 of Theorem 4.2 gives the precompactness of \(\{y_{n}\}_{n\geq 1}\). Then there is a subsequence of \(\{y_{n}\}_{n\geq 1}\), still denoted by it, and a function \(\tilde{y} \in S(u)\) such that
$$\begin{aligned} y_{n}\rightarrow \tilde{y} \end{aligned}$$
as \(n\rightarrow \infty \). We will show that \(\tilde{y}(t^{u})\in W\). For each \(n\geq 1\), the definition of \(t_{(y_{n},u)}\) yields
$$\begin{aligned} y_{n}(t_{(y_{n},u)})\in W. \end{aligned}$$
In view of (5.1) and (5.2), one has
$$\begin{aligned} y_{n}(t_{(y_{n},u)})\rightarrow \tilde{y} \bigl(t^{u}\bigr). \end{aligned}$$
This together with (5.3) and the closedness of W yields
$$\begin{aligned} \tilde{y}\bigl(t^{u}\bigr)\in W. \end{aligned}$$
Step 2. Let \(t_{0}=\inf_{u\in U_{0}} t^{u}\). If \(U_{0}\) contains finitely many elements, the proof is obvious. Otherwise, there exists a monotone decreasing sequence \(\{t^{u_{n}}\}_{n\geq 1}\) such that
$$\begin{aligned} \lim_{n\rightarrow \infty }t^{u_{n}}=t_{0}, \end{aligned}$$
where \(u_{n}\in U_{0}\). In the light of Step 1, let \(\tilde{y}_{n}\) be such that \((\tilde{y}_{n},u_{n})\in \mathcal{A}_{d}^{W}\) and \(\tilde{y}_{n}(t^{u_{n}})\in W\). With the same method as that in Step 2 of Theorem 4.2, we can infer that
$$\begin{aligned} u_{n}\rightharpoonup u^{*},\qquad \tilde{y}_{n}\rightarrow y^{*} \end{aligned}$$
as \(n\rightarrow \infty \) for some \(u^{*}\in S_{U}^{p}\) and \(y^{*}\in S(u^{*})\). Now, we show that \(y^{*}(t_{0})\in W\). Exploiting (5.6) and (5.7), we have
$$\begin{aligned} \tilde{y}_{n}\bigl(t^{u_{n}}\bigr)\rightarrow y^{*}(t_{0}). \end{aligned}$$
This together with the closedness of W and the fact that
$$\begin{aligned} \tilde{y}_{n}\bigl(t^{u_{n}}\bigr)\in W \end{aligned}$$
yields
$$\begin{aligned} y^{*}(t_{0})\in W. \end{aligned}$$
The proof is ended. □

Remark 5.3

On the basis of the solution set \(S(u)\) and the target set W, a suitable definition of optimal time is given. Then the idea of constructing time optimal sequences twice and the theory of resolvent are used to derive the existence of time optimal pairs without the Lipschitz assumptions on f and the reflexivity of X. Therefore, our results essentially improve those in [17, 22–24] and the references therein, where the Lipschitz continuity of f and the reflexivity of X are both required.

6 Applications

The following fractional composite relaxation system will be considered:
$$ \textstyle\begin{cases} \frac{\partial x(t, \theta)}{\partial t}=\frac{\partial^{2}}{\partial \theta^{2}}{}^{C} D^{\gamma }x(t, \theta) +t^{2}(1+x(t, \theta)) +Bu(t, \theta),\quad t\in [0, 1], \theta \in [0, 1], \\ x(t,0)=x(t,1)=0, \\ x(0,\theta)=x_{0}(\theta), \end{cases} $$
where \(0<\gamma <1\).
Let \(X=Y=L^{2}([0,1], \mathbb{R})\). Define the operator \(A:D(A)\subseteq X\rightarrow X\) as
$$A\zeta =\zeta '' $$
with the domain
$$D(A)=\bigl\{ \zeta \in X; \zeta, \zeta ' \mbox{ are absolutely continuous}, \zeta ''\in X, \zeta (0)=\zeta (1)=0 \bigr\} . $$
Then A admits the spectral representation
$$A\zeta =\sum_{n=1}^{\infty }\bigl(-n^{2} \pi^{2}\bigr) (\zeta, e_{n})e _{n}, \quad \zeta \in D(A), $$
where \(e_{n}(\theta)=\sqrt{2}\sin (n\pi \theta)\), \(n=1,2,\ldots \) , is an orthonormal basis of X. In view of [30], we infer that A generates a compact and analytic semigroup \(\{T(t)\}_{t>0}\) in X, and
$$T(t)\zeta =\sum_{n=1}^{\infty }e^{-n^{2}\pi^{2}t}( \zeta, e _{n})e_{n},\quad \zeta \in X. $$
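As a quick numerical illustration (not part of the argument), the orthonormality of the eigenfunctions \(e_{n}\) and the contraction property of the truncated semigroup expansion above can be checked by discretizing \([0,1]\); the grid size and sample coefficients below are arbitrary choices.

```python
import numpy as np

# Midpoint-rule discretization of [0,1] for inner products in L^2([0,1]).
M = 2000
theta = (np.arange(M) + 0.5) / M
d_theta = 1.0 / M

def e(n):
    # e_n(theta) = sqrt(2) sin(n pi theta)
    return np.sqrt(2) * np.sin(n * np.pi * theta)

# Orthonormality: (e_m, e_n) should approximate the identity matrix.
G = np.array([[np.sum(e(m) * e(n)) * d_theta for n in range(1, 6)]
              for m in range(1, 6)])
assert np.allclose(G, np.eye(5), atol=1e-6)

# T(t) zeta = sum_n e^{-n^2 pi^2 t} (zeta, e_n) e_n, acting on a
# truncated expansion represented by its coefficient vector.
def T(t, coeffs):
    return np.array([np.exp(-(n + 1)**2 * np.pi**2 * t) * c
                     for n, c in enumerate(coeffs)])

c0 = np.array([1.0, -0.5, 0.25])   # coefficients of a sample zeta
c1 = T(0.1, c0)
# Contraction: ||T(t) zeta|| <= ||zeta||, consistent with ||T(t)|| <= 1.
assert np.linalg.norm(c1) <= np.linalg.norm(c0)
```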
Obviously, \(\Vert T(t)\Vert \leq 1\). Furthermore, by the subordination principle ([31], Theorem 3.1), A also generates a compact analytic fractional resolvent \(\{\mathcal{R}_{1-\gamma }(t)\} _{t\geq 0}\) of order \(1-\gamma \) and analytic type \((\omega_{0}, \theta_{0})\), and
$$\mathcal{R}_{1-\gamma }(t)= \int_{0}^{\infty }\varphi_{t, 1-\gamma }( \tau)T(\tau) \,\mathrm{d}\tau,\quad t>0, $$
where \(\varphi_{t, 1-\gamma }(\tau)=t^{\gamma -1}\Phi_{1-\gamma }( \tau t^{\gamma -1})\) and \(\Phi_{1-\gamma }\) is a probability density function on \((0,\infty)\), so that \(\int_{0}^{\infty }\Phi_{1-\gamma }(t) \,\mathrm{d}t=1\). Taking Lemma 3.8 in [25] into account, we obtain that \(\{\mathcal{R}_{1-\gamma }(t)\}_{t>0}\) is also continuous in the uniform operator topology. We can now conclude that (HA) holds, and
$$\bigl\Vert \mathcal{R}_{1-\gamma }(t)\bigr\Vert =\biggl\Vert \int_{0}^{\infty }\Phi_{1-\gamma }(\tau)T\bigl(\tau t^{1-\gamma }\bigr)\,\mathrm{d}\tau \biggr\Vert \leq 1, $$
that is, \(M=\sup_{t\in [0,1]}\Vert \mathcal{R}_{1-\gamma }(t)\Vert =1\).

Now, for every \(t\in [0, 1]\), \(\theta \in [0, 1]\), let \(y(t)(\theta)=x(t, \theta)\), \(f(t, y(t))(\theta)=t^{2}(1+y(t)(\theta))+y(t)(\theta)\), and \(u\in L^{2}([0,1], Y)\) with \(u(t)( \theta)=u(t,\theta)\).

The cost functional is given by
$$\mathcal{J}(y, u)= \int_{0}^{1} \int_{0}^{1}\bigl(\bigl\vert y(t) (\theta)\bigr\vert ^{2}+\bigl\vert u(t) (\theta)\bigr\vert ^{2} \bigr)\,\mathrm{d}\theta \,\mathrm{d}t. $$
Obviously, the cost integrand
$$L\bigl(t, y(t), u(t)\bigr)= \int_{0}^{1}\bigl(\bigl\vert y(t) (\theta)\bigr\vert ^{2}+\bigl\vert u(t) (\theta)\bigr\vert ^{2} \bigr) \,\mathrm{d}\theta =\bigl\Vert y(t)\bigr\Vert ^{2}_{X}+ \bigl\Vert u(t)\bigr\Vert ^{2}_{Y} $$
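The identity above is just Fubini's theorem: integrating \(\vert y\vert ^{2}+\vert u\vert ^{2}\) over the square equals integrating the squared \(X\)- and \(Y\)-norms in time. A minimal numerical check (illustration only; the sample functions below are arbitrary choices) on a discretized grid:

```python
import numpy as np

# Discretize [0,1] x [0,1] for (t, theta).
M = 400
t = (np.arange(M) + 0.5) / M
theta = (np.arange(M) + 0.5) / M
dt = d_theta = 1.0 / M
T_grid, Th_grid = np.meshgrid(t, theta, indexing="ij")

y = T_grid**2 * np.sin(np.pi * Th_grid)    # hypothetical sample state
u = np.cos(2 * np.pi * T_grid) * Th_grid   # hypothetical sample control

# J(y,u) as a double integral over the square.
J_double = np.sum(y**2 + u**2) * dt * d_theta
# Same quantity via L(t,y,u) = ||y(t)||_X^2 + ||u(t)||_Y^2, row by row in t.
norms = (np.sum(y**2, axis=1) + np.sum(u**2, axis=1)) * d_theta
J_fubini = np.sum(norms) * dt
assert abs(J_double - J_fubini) < 1e-12
```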
satisfies the condition (HL). Now, for \(t\in [0,1]\), define the control multivalued map
$$U(t)=\bigl\{ u(t) (\cdot)\in Y: \bigl\Vert u(t) (\cdot)\bigr\Vert _{Y}\leq N_{1}\bigr\} $$
and the target set
$$W=\bigl\{ y(\cdot)\in X: \bigl\Vert y(\cdot)\bigr\Vert _{X}\leq N_{2}\bigr\} , $$
where \(N_{1}\), \(N_{2}\) are given positive constants.
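The constraint set \(U(t)\) is a closed ball in \(Y=L^{2}([0,1],\mathbb{R})\), which is closed and convex, so any candidate control value can be projected onto it by radial truncation. A minimal sketch under these assumptions (the discretization of the \(L^{2}\) norm is for illustration only):

```python
import numpy as np

def project_to_ball(u, N1):
    """Project a discretized u in Y = L^2([0,1]) onto the closed ball
    of radius N1, by radial truncation (the metric projection for a ball)."""
    norm = np.linalg.norm(u) / np.sqrt(u.size)   # discrete L^2([0,1]) norm
    return u if norm <= N1 else u * (N1 / norm)

u = np.full(100, 3.0)          # constant control with discrete L^2 norm 3
v = project_to_ball(u, 1.0)
# v now lies on the sphere of radius N1 = 1 (Euclidean norm sqrt(100) * 1 = 10).
assert abs(np.linalg.norm(v) - 10.0) < 1e-9
```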
Now, system (6.1) can be reformulated as the abstract fractional relaxation system (1.1), and all the conditions listed in Section 2 are satisfied. In fact,
$$\bigl\Vert f\bigl(t, y(t)\bigr) (\cdot)-y(t) (\cdot)\bigr\Vert _{X}= \biggl( \int_{0}^{1}\bigl\vert t^{2} \bigl(1+y(t) (\theta)\bigr)\bigr\vert ^{2}\,\mathrm{d}\theta \biggr)^{\frac{1}{2}} \leq t^{2}\bigl(1+\bigl\Vert y(t) (\cdot) \bigr\Vert _{X}\bigr), $$
with \(\Vert t^{2}\Vert _{L^{1}([0,1])}=\frac{1}{3}<1\). Then, in light of Theorem 4.2, we obtain that there exists an optimal pair \((y_{*}, u_{*})\) such that the integral cost functional \(\mathcal{J}(y,u)\) attains its minimum. Furthermore, if the set \(\mathcal{A}_{d}^{W}\) defined in Section 5 with \(c=1\) is nonempty, then it follows from Theorem 5.1 that an optimal pair \((y^{*}, u^{*})\) exists at which the transition time \(t_{(y, u)}\) attains its minimum.
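The growth estimate is the triangle inequality in \(L^{2}([0,1])\) together with \(\Vert 1\Vert _{X}=1\), and the constant \(\frac{1}{3}\) is just \(\int_{0}^{1} t^{2}\,\mathrm{d}t\). Both facts can be sanity-checked numerically (illustration only; grid size and random samples are arbitrary choices):

```python
import numpy as np

# ||t^2||_{L^1([0,1])} = int_0^1 t^2 dt = 1/3, via the midpoint rule.
M = 1000
t_nodes = (np.arange(M) + 0.5) / M
assert abs(np.sum(t_nodes**2) / M - 1.0 / 3.0) < 1e-6

# Triangle inequality: ||t^2 (1+y)|| <= t^2 (1 + ||y||) in discretized L^2,
# where the discrete norm sqrt(mean(.^2)) gives ||1|| = 1.
rng = np.random.default_rng(0)
for _ in range(20):
    y = rng.standard_normal(M)                    # sample y(t)(.) on a grid
    t = rng.uniform(0.0, 1.0)
    lhs = t**2 * np.sqrt(np.mean((1.0 + y)**2))   # ||t^2 (1+y)||_{L^2}
    rhs = t**2 * (1.0 + np.sqrt(np.mean(y**2)))   # t^2 (1 + ||y||_{L^2})
    assert lhs <= rhs + 1e-12
```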



The authors would like to express their gratitude to the editor and anonymous reviewers for their valuable comments and suggestions. Moreover, the work was supported by the NSF of China (11571300, 11271316), the Qing Lan Project of Jiangsu Province of China and the High-Level Personnel Support Program of Yangzhou University.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Mathematical Sciences, Yangzhou University
Faculty of Mathematics and Physics, Yancheng Institute of Technology


1. Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. North-Holland Mathematics Studies, vol. 204. Elsevier, Amsterdam (2006)
2. Lakshmikantham, V, Leela, S, Devi, JV: Theory of Fractional Dynamic Systems. Cambridge Scientific Publishers, Cambridge (2009)
3. Miller, KS, Ross, B: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993)
4. Liang, J, Yang, H: Controllability of fractional integro-differential evolution equations with nonlocal conditions. Appl. Math. Comput. 254, 20-29 (2015)
5. Li, F, Liang, J, Xu, HK: Existence of mild solutions for fractional integrodifferential equations of Sobolev type with nonlocal conditions. J. Math. Anal. Appl. 391, 510-525 (2012)
6. Mahmudov, NI, Unul, S: On existence of BVP’s for impulsive fractional differential equations. Adv. Differ. Equ. 2017, 15 (2017)
7. Bahaa, GM: Fractional optimal control problem for differential system with delay argument. Adv. Differ. Equ. 2017, 69 (2017)
8. Wang, JR, Fečkan, M, Zhou, Y: A survey on impulsive fractional differential equations. Fract. Calc. Appl. Anal. 19, 806-831 (2016)
9. Li, M, Wang, JR: Finite time stability of fractional delay differential equations. Appl. Math. Lett. 64, 170-176 (2016)
10. Wang, JR, Fečkan, M, Zhou, Y: Center stable manifold for planar fractional damped equations. Appl. Math. Comput. 296, 257-269 (2017)
11. Gorenflo, R, Mainardi, F: Fractional calculus: integral and differential equations of fractional order. In: Carpinteri, A, Mainardi, F (eds.) Fractals and Fractional Calculus in Continuum Mechanics, pp. 223-276. Springer, Berlin (1997)
12. Fan, Z, Mophou, G: Existence of optimal controls for a semilinear composite fractional relaxation equation. Rep. Math. Phys. 73, 311-323 (2014)
13. Fan, Z, Dong, Q, Li, G: Approximate controllability for semilinear composite fractional relaxation equations. Fract. Calc. Appl. Anal. 19, 267-284 (2016)
14. Lizama, C, N’Guérékata, GM: Bounded mild solutions for semilinear integro differential equations in Banach spaces. Integral Equ. Oper. Theory 68, 207-227 (2010)
15. Lizama, C: An operator theoretical approach to a class of fractional order differential equations. Appl. Math. Lett. 24, 184-190 (2011)
16. Wang, J, Zhou, Y, Medved, M: On the solvability and optimal controls of fractional integrodifferential evolution systems with infinite delay. J. Optim. Theory Appl. 152, 31-50 (2012)
17. Kumar, S: Mild solution and fractional optimal control of semilinear system with fixed delay. J. Optim. Theory Appl. 1-14 (2015)
18. Fan, Z, Mophou, G: Existence and optimal controls for fractional evolution equations. Nonlinear Stud. 20, 163-172 (2013)
19. Meng, Q, Shen, Y: Optimal control for stochastic delay evolution equations. Appl. Math. Optim. 74, 53-89 (2016)
20. Lu, L, Liu, Z, Jiang, W, Luo, J: Solvability and optimal controls for semilinear fractional evolution hemivariational inequalities. Math. Methods Appl. Sci. 39, 5452-5464 (2016)
21. Jiang, Y, Huang, N: Solvability and optimal controls of fractional delay evolution inclusions with Clarke subdifferential. Math. Methods Appl. Sci. (2016)
22. Wang, J, Zhou, Y: Time optimal control problem of a class of fractional distributed systems. Int. J. Dyn. Syst. Differ. Equ. 3, 363-382 (2011)
23. Jeong, JM, Son, SJ: Time optimal control of semilinear control systems involving time delays. J. Optim. Theory Appl. 165, 793-811 (2015)
24. Phung, KD, Wang, G, Zhang, X: On the existence of time optimal controls for linear evolution equations. Discrete Contin. Dyn. Syst., Ser. B 4(4), 925-941 (2007)
25. Fan, Z: Characterization of compactness for resolvents and its applications. Appl. Math. Comput. 232, 60-67 (2014)
26. Zhu, L, Huang, Q: Nonlinear impulsive evolution equations with nonlocal conditions and optimal controls. Adv. Differ. Equ. 2015, 378 (2015)
27. Prüss, J: Evolutionary Integral Equations and Applications. Birkhäuser, Basel (1993)
28. Hu, S, Papageorgiou, NS: Handbook of Multivalued Analysis. Kluwer Academic, Norwell (2000)
29. Balder, EJ: Necessary and sufficient conditions for \(L_{1}\)-strong-weak lower semicontinuity of integral functionals. Nonlinear Anal. 11, 1399-1404 (1987)
30. Pazy, A: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin (1983)
31. Bazhlekova, E: Subordination principle for fractional evolution equations. Fract. Calc. Appl. Anal. 3, 213-230 (2000)


© The Author(s) 2017