# A time optimal control problem of some linear switching controlled ordinary differential equations

## Abstract

This article studies a time optimal control problem $(P)$ for a class of switching controlled systems. We first prove the existence of time optimal controls for the problem $(P)$. Then, we derive the bang-bang property of these time optimal controls by utilizing the Pontryagin maximum principle.

AMS Classification: 49K20; 35J65.

## 1 Introduction

Let $A$ be an $n \times n$ matrix. Let $\vec{b}_1$ and $\vec{b}_2$ be two different vectors given in $\mathbb{R}^n$ with $n \ge 1$, and write $B$ for the $n \times 2$ matrix $(\vec{b}_1, \vec{b}_2)$. We define

$U = \left\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ v_1 \cdot v_2 = 0 \right\}$

and

$\mathcal{U} = \left\{ v(\cdot) : (0, +\infty) \to \mathbb{R}^2 \text{ measurable} ;\ v(t) \in U \text{ for almost every } t \in (0, +\infty) \right\}.$

Consider the following controlled system:

$x'(t) = A x(t) + B u(t), \quad t \ge 0; \qquad x(0) = x_0,$
(1.1)

where the control function $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T \in \mathcal{U}$. In this system, we can rewrite $Bu(t)$ as $u_1(t)\vec{b}_1 + u_2(t)\vec{b}_2$, where $\vec{b}_1$ and $\vec{b}_2$ are treated as two different controllers, and $u_1(\cdot)$ and $u_2(\cdot)$ are treated as controls. The controls $u_1(\cdot)$ and $u_2(\cdot)$ satisfy the following property:

$u_1(t) \, u_2(t) = 0 \quad \text{for almost every } t \in (0, \infty),$
(1.2)

and system (1.1) is called a switching controlled system. Condition (1.2) ensures that, at almost every instant of time, at most one of the controllers $\vec{b}_1$ and $\vec{b}_2$ is active. Switching controlled systems of this kind model a large class of problems in applied science.
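As a quick numerical illustration, the following sketch integrates a switching controlled system of the form (1.1) by forward Euler, enforcing the switching constraint (1.2) at every step. The matrices and the switching control below are illustrative placeholders, not data from the article.

```python
# Minimal forward-Euler sketch of x'(t) = A x(t) + u1(t) b1 + u2(t) b2
# under the switching constraint u1(t) u2(t) = 0.

def simulate(A, b1, b2, x0, u, T, steps=1000):
    """Integrate x' = A x + u1(t) b1 + u2(t) b2 by forward Euler.

    u(t) must return a pair (u1, u2) with u1 * u2 == 0."""
    n = len(x0)
    dt = T / steps
    x = list(x0)
    for k in range(steps):
        u1, u2 = u(k * dt)
        assert u1 * u2 == 0, "switching constraint (1.2) violated"
        x = [x[i] + dt * (sum(A[i][j] * x[j] for j in range(n))
                          + u1 * b1[i] + u2 * b2[i])
             for i in range(n)]
    return x

# Toy data: A = 0, so x(T) = x0 plus the time integral of B u(t).
A = [[0.0, 0.0], [0.0, 0.0]]
b1, b2 = [1.0, 0.0], [0.0, 1.0]
# Controller b1 is active on [0, 1), then the control switches to b2.
u = lambda t: (1.0, 0.0) if t < 1.0 else (0.0, 1.0)
x = simulate(A, b1, b2, [0.0, 0.0], u, T=2.0)
# Each controller acts for one unit of time, so x ends near (1, 1).
```
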

The purpose of this article is to study a time optimal control problem for the switching controlled system (1.1). We begin by introducing the problem to be studied. To this end, we define two sets:

$\tilde{U} = \left\{ v \equiv (v_1, v_2)^T \in \mathbb{R}^2 ;\ v_1 \cdot v_2 = 0 \text{ and } \| v \|_{\mathbb{R}^2} \le 1 \right\}$
(1.3)

and

$U_{ad} = \left\{ u(\cdot) \in L^\infty(0, +\infty; \mathbb{R}^2) ;\ u(t) \in \tilde{U} \text{ for almost every } t \in (0, +\infty) \right\}.$
(1.4)

Here, $\| \cdot \|_{\mathbb{R}^2}$ stands for the Euclidean norm in $\mathbb{R}^2$. (We will utilize the notation $\langle \cdot, \cdot \rangle_{\mathbb{R}^2}$ for the Euclidean inner product in $\mathbb{R}^2$.) Then, the time optimal control problem studied in this article reads:

$(P): \quad \inf_{u(\cdot) \in U_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\}.$

Throughout this article, $x(\cdot; x_0, u)$, with $u(\cdot) = (u_1(\cdot), u_2(\cdot))^T$, denotes the solution of Equation (1.1) corresponding to the initial datum $x_0$ and the control $u(\cdot)$. Consequently, $x(T; x_0, u)$ stands for the value of this solution at time $T$.

In the problem $( P )$ the number

$T^* \equiv \inf_{u(\cdot) \in U_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\}$

is called the optimal time, while a control $u^*(\cdot) \in U_{ad}$ with the property that $x(T^*; x_0, u^*) = 0$ is called an optimal control. The problem asks for a control $u^*(\cdot)$ in the constraint control set $U_{ad}$ that steers the solution $x(\cdot; x_0, u^*)$ from the initial state $x_0$ to the origin of $\mathbb{R}^n$ in the shortest time.
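To fix ideas, here is a toy instance of the problem $(P)$ with illustrative data (not taken from the article), for which the optimal time can be computed by hand:

```latex
% Toy data: n = 1, A = 0, \vec{b}_1 = 1, \vec{b}_2 = 2, x_0 = -2.
% The dynamics and constraints read
x'(t) = u_1(t) + 2 u_2(t), \qquad u_1(t) \, u_2(t) = 0, \qquad
\| u(t) \|_{\mathbb{R}^2} \le 1.
% The largest attainable speed toward the origin is 2, reached at the
% vertex control u(t) \equiv (0, 1)^T, so the optimal time is
T^* = \frac{|x_0|}{2} = 1,
% and the optimal control is bang-bang: it keeps only the stronger
% controller \vec{b}_2 active, with \| u^*(t) \|_{\mathbb{R}^2} = 1.
```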

Next, we present the main results obtained in this study.

Theorem 1.1. The problem $( P )$ has at least one optimal control provided that the Kalman rank condition holds for A and B, and Re λ ≤ 0 for each eigenvalue λ of A.

Theorem 1.2. When the Kalman rank condition holds for A and B, any optimal control u* to $( P )$ has the bang-bang property:

$\| u^*(t) \|_{\mathbb{R}^2} = 1 \quad \text{for almost every } t \in [0, T^*].$
(1.5)

Remark 1.3. (i) The Kalman rank condition is said to hold for A and B if and only if

$\operatorname{rank} \left( B, AB, \ldots, A^{n-1}B \right) = n.$

(ii) Since any optimal control $u^*(\cdot)$ to the problem $(P)$ is a switching control, the statement that the bang-bang property (1.5) holds for $u^*(\cdot)$ is equivalent to the statement that for almost every $t \in [0, T^*]$, $u^*(t)$ is one of the four vertices of the domain $\left\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ |v_1 + v_2| \le 1 \text{ and } |v_1 - v_2| \le 1 \right\}$.

(iii) The condition that $\operatorname{Re} \lambda \le 0$ for each eigenvalue $\lambda$ of $A$ guarantees the existence of a time optimal control under the control constraint $\| u(t) \|_{\mathbb{R}^2} \le 1$ for almost every $t$, even in the case where the switching constraint disappears (see [1, 2]).
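The rank condition in part (i) of Remark 1.3 is straightforward to check numerically. The sketch below builds the matrix $(B, AB, \ldots, A^{n-1}B)$ for a toy pair $(A, B)$ (assumed data, not from the article) and computes its rank by exact Gaussian elimination over rationals, avoiding floating-point pitfalls.

```python
# Hedged illustration of the Kalman rank condition rank(B, AB, ..., A^{n-1}B) = n.
from fractions import Fraction

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M):
    # Gaussian elimination over exact rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def kalman_rank(A, B):
    n = len(A)
    blocks, P = [], B
    for _ in range(n):
        blocks.append(P)
        P = mat_mul(A, P)
    # Concatenate the blocks horizontally into (B, AB, ..., A^{n-1}B).
    K = [sum((blk[i] for blk in blocks), []) for i in range(n)]
    return rank(K)

A = [[0, 1], [0, 0]]      # a 2 x 2 nilpotent toy matrix
B_ok = [[1, 0], [0, 1]]   # b1 = (1,0)^T, b2 = (0,1)^T: condition holds
B_bad = [[1, 2], [0, 0]]  # both controllers parallel: condition fails
rk_ok, rk_bad = kalman_rank(A, B_ok), kalman_rank(A, B_bad)
# rk_ok == 2 (rank condition holds), rk_bad == 1 (it fails).
```
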

In the classical time optimal control problem, the control constraint set is convex, and the existence of an optimal control can be obtained by weak convergence methods. In our article, the control set in the problem $(P)$ loses convexity, so the problem cannot be treated by the methods used in most past studies (see ). Here, we utilize an idea from relaxed control theory to prove the existence theorem. Finally, we make use of the Pontryagin maximum principle and a unique continuation property to obtain the bang-bang property of optimal controls for this problem.

With regard to time optimal control problems governed by ordinary or partial differential equations without the switching constraint on controls, there is a large body of literature. We would like to quote the related articles [2, 6-9].

The rest of the article is structured as follows: Section 2 presents the proof of Theorem 1.1; Section 3 provides the proof of Theorem 1.2.

## 2 The existence of time optimal controls

We prove the existence result, namely, Theorem 1.1, as follows.

Proof. Write $\operatorname{co} \tilde{U}$ for the convex hull of the set $\tilde{U}$. Then, it is clear that

$\operatorname{co} \tilde{U} = \left\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ |v_1 + v_2| \le 1 \text{ and } |v_1 - v_2| \le 1 \right\}.$
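For completeness, this identity can be checked directly; a brief sketch:

```latex
% \tilde{U} is the union of the two segments [-1,1] \times \{0\} and
% \{0\} \times [-1,1], so its convex hull is the diamond spanned by the
% endpoints (\pm 1, 0) and (0, \pm 1):
\operatorname{co} \tilde{U}
  = \left\{ (v_1, v_2)^T \in \mathbb{R}^2 ;\ |v_1| + |v_2| \le 1 \right\}.
% This coincides with the stated region because of the elementary identity
\max \left\{ |v_1 + v_2|,\ |v_1 - v_2| \right\} = |v_1| + |v_2|.
```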

Now, we define another constraint control set as:

$\tilde{U}_{ad} = \left\{ u(\cdot) \in L^\infty(0, +\infty; \mathbb{R}^2) ;\ u(t) \in \operatorname{co} \tilde{U} \text{ for almost every } t \in (0, +\infty) \right\}.$

Since the Kalman rank condition holds for $A$ and $B$, and $\operatorname{Re} \lambda \le 0$ for each eigenvalue $\lambda$ of $A$, we can utilize the same argument (see [, Theorem 2.6]) to get that system (1.1) is exactly null controllable with the control constraint set $\tilde{U}_{ad}$.

Next, we consider a new time optimal control problem:

$(\tilde{P}): \quad \inf_{u(\cdot) \in \tilde{U}_{ad}} \left\{ T ;\ x(T; x_0, u) = 0 \right\}.$

We denote by $\tilde{T}^*$ the optimal time for this problem.

Because the control set $\operatorname{co} \tilde{U}$ is convex, we can utilize the classical weak convergence method to prove that the problem $(\tilde{P})$ has at least one solution (see, for instance, [, Theorem 3.1]). Namely, there exists at least one control $\tilde{u}(\cdot) \in \tilde{U}_{ad}$ such that the corresponding solution $x(\cdot; x_0, \tilde{u})$ of Equation (1.1) satisfies $x(\tilde{T}^*; x_0, \tilde{u}) = 0$.

Let

$K = \left\{ v(\cdot) \in \tilde{U}_{ad} ;\ x(\tilde{T}^*; x_0, v) = 0 \right\}.$

Then, one can easily check that $K$ is a convex, nonempty subset of $L^\infty(0, +\infty; \mathbb{R}^2)$; moreover, it is compact in the weak* topology. Therefore, we can apply the Krein-Milman theorem to get an extreme point $\tilde{u}^*(\cdot)$ of the set $K$.

Now, we claim that for almost every $t \in [0, \tilde{T}^*]$, $\tilde{u}^*(t) \equiv (\tilde{u}_1^*(t), \tilde{u}_2^*(t))^T$ belongs to $\tilde{U}$. Here is the argument: in order to prove that $\tilde{u}^*(t) \in \tilde{U}$, it suffices to show that for almost every $t \in [0, \tilde{T}^*]$, the following two equalities hold:

$\left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) \right| = 1,$

and

$\left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| = 1.$

(Together, these two equalities force one component of $\tilde{u}^*(t)$ to vanish and the other to have absolute value one, so that $\tilde{u}^*(t) \in \tilde{U}$.)

Seeking a contradiction, we suppose that the above statement were not true. Then there would exist a number $\varepsilon$ with $0 < \varepsilon < 1$ and a measurable subset $F \subset [0, \tilde{T}^*]$ of positive measure such that one of the following two statements holds:

$\left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) \right| \le 1 - \varepsilon \quad \text{for each } t \in F,$
(2.1)

or

$\left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| \le 1 - \varepsilon \quad \text{for each } t \in F.$
(2.2)

In the case where (2.1) holds, we define a functional $I_F : L^\infty(F) \to \mathbb{R}^n$ by setting

$I_F(\alpha(\cdot)) = \int_F e^{A(\tilde{T}^* - s)} B \vec{\alpha}(s) \, ds,$

where $\vec{\alpha}(\cdot)$ is the vector-valued function over $F$ defined by $\vec{\alpha}(s) = (\alpha(s), \alpha(s))^T$ for almost every $s \in F$. It is clear that $I_F$ is a bounded linear operator from $L^\infty(F)$ to $\mathbb{R}^n$. Since $L^\infty(F)$ is an infinite dimensional space and $\mathbb{R}^n$ is a finite dimensional space, the kernel of $I_F$ is not trivial. Namely, there exists a function $\beta(\cdot)$ with the following properties: it belongs to $L^\infty(F)$; it is nontrivial; it satisfies $\| \beta(\cdot) \|_{L^\infty(F)} \le 1$; and $I_F(\beta(\cdot)) = 0$. Let $\vec{\beta}(s) = (\beta(s), \beta(s))^T$ over $F$. We extend $\vec{\beta}(\cdot)$ to $[0, +\infty)$ by setting it equal to $(0, 0)^T$ over $[0, +\infty) \setminus F$, and still denote the extension by $\vec{\beta}(\cdot)$. Then, we construct two control functions as follows:

$v(t) = \tilde{u}^*(t) + \frac{\varepsilon}{2} \vec{\beta}(t), \qquad w(t) = \tilde{u}^*(t) - \frac{\varepsilon}{2} \vec{\beta}(t).$

We will prove that both $v(\cdot)$ and $w(\cdot)$ belong to $K$. Since

$\int_0^{\tilde{T}^*} e^{A(\tilde{T}^* - s)} B \vec{\beta}(s) \, ds = 0,$

and $x(\tilde{T}^*; x_0, \tilde{u}^*) = 0$, it follows at once from the variation of constants formula that

$x(\tilde{T}^*; x_0, v) = x(\tilde{T}^*; x_0, w) = 0.$
(2.3)

Thus, it remains to show that $v(\cdot)$ and $w(\cdot)$ belong to $\tilde{U}_{ad}$, namely, that for almost every $t \in [0, +\infty)$, $v(t)$ and $w(t)$ are in the set $\operatorname{co} \tilde{U}$. For $t \in [0, +\infty)$, there are only two possibilities: $t$ belongs either to $[0, +\infty) \setminus F$ or to $F$.

When $t \in [0, +\infty) \setminus F$, we have $\vec{\beta}(t) = 0$, and consequently $v(t) = w(t) = \tilde{u}^*(t)$. Along with the fact that $\tilde{u}^*(\cdot)$ belongs to $\tilde{U}_{ad}$, this indicates that $v(t) = w(t) \in \operatorname{co} \tilde{U}$ for almost all $t \in [0, +\infty) \setminus F$.

When $t \in F$, we observe that

$v(t) = (v_1(t), v_2(t))^T \equiv \left( \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t),\ \tilde{u}_2^*(t) + \frac{\varepsilon}{2} \beta(t) \right)^T$
(2.4)

and

$w(t) \equiv (w_1(t), w_2(t))^T = \left( \tilde{u}_1^*(t) - \frac{\varepsilon}{2} \beta(t),\ \tilde{u}_2^*(t) - \frac{\varepsilon}{2} \beta(t) \right)^T.$

On the other hand, one can easily check that

$| v_1(t) + v_2(t) | = \left| \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t) + \tilde{u}_2^*(t) + \frac{\varepsilon}{2} \beta(t) \right| \le \left| \tilde{u}_1^*(t) + \tilde{u}_2^*(t) \right| + \varepsilon | \beta(t) | \le 1$

and

$| v_1(t) - v_2(t) | = \left| \tilde{u}_1^*(t) + \frac{\varepsilon}{2} \beta(t) - \tilde{u}_2^*(t) - \frac{\varepsilon}{2} \beta(t) \right| = \left| \tilde{u}_1^*(t) - \tilde{u}_2^*(t) \right| \le 1.$

These, together with (2.4), yield that $v(t) \in \operatorname{co} \tilde{U}$ for almost every $t \in F$. Similarly, we can derive that $w(t) \in \operatorname{co} \tilde{U}$ for almost every $t \in F$.

Therefore, we have proved that for almost every $t \in [0, +\infty)$, both $v(t)$ and $w(t)$ belong to the set $\operatorname{co} \tilde{U}$. Combined with (2.3), this shows that

$\text{both } v(\cdot) \text{ and } w(\cdot) \text{ belong to } K.$
(2.5)

However, it is obvious that $\tilde{u}^*(t) = \frac{1}{2} v(t) + \frac{1}{2} w(t)$. Along with (2.5), this contradicts the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $K$.

In the case where (2.2) holds, we can utilize the same arguments as above to get a contradiction with the fact that $\tilde{u}^*(\cdot)$ is an extreme point of $K$.

Thus, we have proved that $\tilde{u}^*(t) \in \tilde{U}$ for almost every $t \in [0, \tilde{T}^*]$. In summary, we conclude that the above-mentioned claim stands.

Next, we define another control function $ū ( ⋅ )$ by setting

$\bar{u}(t) = \begin{cases} \tilde{u}^*(t), & t \in [0, \tilde{T}^*], \\ (0, 0)^T, & t \in (\tilde{T}^*, +\infty). \end{cases}$

By the above-mentioned claim, we can easily find that $\bar{u}(\cdot)$ belongs to $U_{ad}$ and is an optimal control for problem $(\tilde{P})$. Since $T^*$ is the optimal time for the problem $(P)$, from the facts that $x(\tilde{T}^*; x_0, \bar{u}) = 0$ and $\bar{u}(\cdot) \in U_{ad}$, we deduce that

$T^* \le \tilde{T}^*.$

However, it is clear that $U ad ⊂ U ̃ ad$. Thus, we necessarily have

$\tilde{T}^* \le T^*.$

Therefore, it holds that

$\tilde{T}^* = T^*.$

This indicates that $\bar{u}(\cdot)$ is a time optimal control for the problem $(P)$. Hence, we have completed the proof of Theorem 1.1.

## 3 The bang-bang property

This section is devoted to proving Theorem 1.2.

Proof. Let $u^*(\cdot) = (u_1^*(\cdot), u_2^*(\cdot))^T \in U_{ad}$ be a time optimal control for problem $(P)$. We aim to show that $u^*(\cdot)$ has the bang-bang property (1.5). By the classical argument, we can obtain the Pontryagin maximum principle for the problem $(P)$ (see [10, 11]). Namely, there exists a multiplier $\xi_0 \in \mathbb{R}^n$ with $\| \xi_0 \|_{\mathbb{R}^n} = 1$ such that the following maximum principle holds:

$\left\langle B^T \psi(t), u^*(t) \right\rangle_{\mathbb{R}^2} = \max_{v \in \tilde{U}} \left\langle B^T \psi(t), v \right\rangle_{\mathbb{R}^2} \quad \text{for almost every } t \in [0, T^*],$
(3.1)

where ψ(t) is the solution of the following adjoint equation:

$\psi'(t) = -A^T \psi(t), \quad t \in [0, T^*]; \qquad \psi(T^*) = \xi_0.$
(3.2)

Then, by the Kalman rank condition, we obtain that

$B^T \psi(t) \ne 0 \quad \text{for almost every } t \in [0, T^*].$
(3.3)
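The passage from the Kalman rank condition to (3.3) rests on a standard analyticity (unique continuation) argument, which can be sketched as follows:

```latex
% Solving the adjoint equation (3.2) backward in time gives
\psi(t) = e^{A^{T}(T^{*}-t)} \, \xi_{0}, \qquad t \in [0, T^{*}],
% so every component of B^{T}\psi(\cdot) is real analytic on [0, T^{*}].
% If B^{T}\psi vanished on a set of positive measure, analyticity would
% force B^{T}\psi \equiv 0; differentiating repeatedly at t = T^{*} then yields
\big\langle A^{k} \vec{b}_{i}, \xi_{0} \big\rangle_{\mathbb{R}^{n}} = 0,
\qquad i = 1, 2, \quad k = 0, 1, \ldots, n-1,
% i.e. \xi_{0} is orthogonal to all columns of (B, AB, \ldots, A^{n-1}B).
% Since this matrix has rank n, we would get \xi_{0} = 0, contradicting
% \|\xi_{0}\|_{\mathbb{R}^{n}} = 1. Hence (3.3) holds.
```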

Besides, it follows from (1.3), namely, the definition of $\tilde{U}$, that $v \in \tilde{U}$ if and only if $-v \in \tilde{U}$. This, together with (3.3) and (3.1), immediately gives the inequality:

$\left\langle B^T \psi(t), u^*(t) \right\rangle_{\mathbb{R}^2} > 0 \quad \text{for almost every } t \in [0, T^*].$
(3.4)

Next, we define subsets $E_k$, $k = 1, 2, \ldots$, by setting

$E_k = \left\{ t \in [0, T^*] ;\ \| u^*(t) \|_{\mathbb{R}^2} \le 1 - \frac{1}{k} \right\}.$

By contradiction, we suppose that $u^*(\cdot)$ did not have the bang-bang property, namely, that (1.5) did not hold for $u^*(\cdot)$. Then there would exist a natural number $k$ such that $m(E_k) > 0$, where $m$ denotes the Lebesgue measure. Therefore, we could find a number $C > 1$ such that

$C \| u^*(t) \|_{\mathbb{R}^2} \le 1 \quad \text{for each } t \in E_k.$

Now, we construct another control $ū ( ⋅ )$ in the following manner:

$\bar{u}(t) = \begin{cases} u^*(t), & \text{for almost every } t \in [0, T^*] \setminus E_k, \\ C u^*(t), & \text{for each } t \in E_k. \end{cases}$

It is obvious that $ū ( ⋅ ) ∈ U ad$. However, by the construction of $ū ( ⋅ )$ and by (3.4), we can easily obtain the inequality:

$\left\langle B^T \psi(t), u^*(t) \right\rangle_{\mathbb{R}^2} < \left\langle B^T \psi(t), \bar{u}(t) \right\rangle_{\mathbb{R}^2} \quad \text{for almost every } t \in E_k.$

Since $\bar{u}(t) \in \tilde{U}$ for almost every $t \in E_k$ and $m(E_k) > 0$, this strict inequality contradicts the maximum principle (3.1). Hence (1.5) holds, and the proof of Theorem 1.2 is complete.

## References

1. Evans LC: An Introduction to Mathematical Optimal Control Theory. [http://math.berkeley.edu/evans/]

2. Phung KD, Wang G, Zhang X: On the existence of time optimal controls for linear evolution equations. Discret Contin Dyn Syst Ser B 2007, 8: 925-941.

3. Barbu V: Volume 190. Academic Press, Inc; 1993.

4. Barbu V, Precupanu T: Convexity and Optimization in Banach Spaces. D Reidel, Dordrecht; 1986.

5. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV: The Mathematical Theory of Optimal Processes. Wiley, New York; 1962.

6. Fattorini HO: Time optimal control of solutions of operational differential equations. J SIAM Control 1964, 2: 54-59.

7. LaSalle JP: The Time Optimal Control Problem. In Contributions to the Theory of Nonlinear Oscillations, Volume 5. Princeton University Press, Princeton; 1960: 1-24.

8. Wang G, Wang L: The bang-bang principle of time optimal controls for the heat equation with internal controls. Syst Control Lett 2007, 56: 709-713. doi:10.1016/j.sysconle.2007.06.001

9. Wang G: $L^\infty$-null controllability for the heat equation and its consequences for the time optimal control problem. SIAM J Control Optim 2008, 47: 1701-1720. doi:10.1137/060678191

10. Barbu V: Mathematical Methods in Optimization of Differential Systems. Kluwer Academic Publishers, Dordrecht; 1994.

11. Li X, Yong J: Optimal Control Theory for Infinite Dimensional Systems. Birkhäuser, Boston; 1995.

## Acknowledgements

The authors would like to thank professor Gengsheng Wang for his valuable suggestions on this article. This work was partially supported by the National Natural Science Foundation of China under Grant No. 10971158 and the Natural Science Foundation of Ningbo under Grant No. 2010A610096.

## Author information

### Corresponding author

Correspondence to Guojie Zheng.

### Competing interests

The authors declare that they have no competing interests.

### Authors' contributions

GZ provided the questions and solved the existence Theorem for the optimal control. BM gave the proof for the bang-bang principle of the optimal control. All authors read and approved the final manuscript.


Zheng, G., Ma, B. A time optimal control problem of some linear switching controlled ordinary differential equations. Adv Differ Equ 2012, 52 (2012). https://doi.org/10.1186/1687-1847-2012-52 