Point to point control of fractional differential linear control systems

Advances in Difference Equations 2011, 2011:13

https://doi.org/10.1186/1687-1847-2011-13

  • Received: 9 December 2010
  • Accepted: 22 June 2011
  • Published:

Abstract

In the article, an alternative elementary method for steering a controllable fractional linear control system with open-loop control is presented. It takes a system from an initial point to a final point in a state space, in a given finite time interval.

Keywords

  • fractional control systems
  • fractional calculus
  • point to point control

1 Introduction

Fractional integration and differentiation are generalizations of the notions of integer-order integration and differentiation. It turns out that in many real-life cases, models described by fractional differential equations reflect the behavior of phenomena much better than models expressed by means of the classical calculus (see, e.g., [1, 2]). This idea has been used successfully in various fields of science and engineering for modeling numerous processes [3]. Mathematical fundamentals of fractional calculus are given in the monographs [4-9]. Some fractional-order controllers were developed in, e.g., [10, 11]. It is also worth mentioning that there are interesting results in optimal control of fractional-order systems, e.g., [12-14].

In this article, it will be shown how to steer a controllable single-input fractional linear control system from a given initial state to a given final point of the state space, in a given time interval. It is also shown how candidate open-loop control functions can be derived, and some of them are presented. This method of control is an alternative to, e.g., the one introduced in [15], in which the derived open-loop control is based on the controllability Gramian matrix defined in [16], which seems to be much more complex to calculate than in our approach.

The article is divided into two main parts: in Sect. 2 we study control systems described by the Riemann-Liouville derivative, and in Sect. 3, systems expressed by means of the Caputo derivative. In each of these sections, we consider three cases of linear control systems: in the form of an integrator of fractional order α, in the form of a sequential nα-integrator, and finally, in a general (controllable) vector state-space form. In Sect. 3.3, an illustrative example is given. Conclusions are given in Sect. 4.

2 Fractional control systems with Riemann-Liouville derivative

Let $I_{t_s+}^{\alpha}$ and $D_{t_s+}^{\alpha}$ denote the Riemann-Liouville fractional left-sided integral and fractional derivative, respectively, of order α, on a finite interval of the real line [4, 9]:

$$\big(I_{t_s+}^{\alpha} f\big)(t) = \frac{1}{\Gamma(\alpha)} \int_{t_s}^{t} \frac{f(\tau)}{(t-\tau)^{1-\alpha}}\, d\tau, \qquad \big(D_{t_s+}^{\alpha} f\big)(t) = \left(\frac{d}{dt}\right)^{n} \big(I_{t_s+}^{n-\alpha} f\big)(t),$$

where n = [ℜ(α)] + 1, and [ℜ(α)] denotes the integer part of ℜ(α).
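The Riemann-Liouville integral above can also be checked numerically. The following is a minimal Python sketch (the function name, the quadrature scheme, and the numerical values are our own illustrative choices): it approximates $I_{t_s+}^{\alpha} f(t)$ with a product-rectangle rule that integrates the weakly singular kernel exactly on each subinterval, and compares the result with the closed-form power rule $I_{t_s+}^{\alpha}(t - t_s)^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1+\alpha)}(t - t_s)^{\beta+\alpha}$.

```python
import numpy as np
from math import gamma

def rl_integral(f, ts, t, alpha, N=2000):
    """Riemann-Liouville left-sided integral of order alpha:
    I_{ts+}^alpha f(t) = 1/Gamma(alpha) * int_ts^t (t - tau)^(alpha - 1) f(tau) dtau.
    Product-rectangle rule: f is frozen at the midpoint of each subinterval,
    while the weakly singular kernel is integrated exactly on the subinterval."""
    edges = np.linspace(ts, t, N + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    w = ((t - edges[:-1]) ** alpha - (t - edges[1:]) ** alpha) / alpha
    return np.sum(w * f(mid)) / gamma(alpha)

if __name__ == "__main__":
    ts, t, alpha, beta = 0.0, 2.0, 0.5, 1.0
    num = rl_integral(lambda x: (x - ts) ** beta, ts, t, alpha)
    exact = gamma(beta + 1) / gamma(beta + 1 + alpha) * (t - ts) ** (beta + alpha)
    print(num, exact)   # the two values should agree to several digits
```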

Let us consider a fractional-order (α ∈ ℝ and α > 0) differential equation of the form:
$$\big(D_{t_s+}^{\alpha} z\big)(t) = f\big(t, z(t)\big), \qquad (2.1)$$
with the initial conditions
$$\big(D_{t_s+}^{\alpha-k} z\big)(t_s+) = w_k, \quad k = 1, \ldots, n, \qquad (2.2)$$
where n = [α] + 1 for α ∉ ℕ, and n = α for α ∈ ℕ. By $(D_{t_s+}^{\alpha-k} z)(t_s+)$, we mean the following limit
$$\big(D_{t_s+}^{\alpha-k} z\big)(t_s+) = \lim_{t \to t_s+} \big(D_{t_s+}^{\alpha-k} z\big)(t),$$
i.e., the limit taken in ]t_s, t_s + ε[ for ε > 0.

The existence and uniqueness of solutions of (2.1) and (2.2) were considered by numerous authors, e.g., [4, 8].

2.1 Linear control system in the form of α-integrator

Consider a control system of the form
$$\big(D_{t_s+}^{\alpha} z\big)(t) = v(t), \qquad (2.3)$$

where 0 < α < 1, z(t) is a scalar solution of (2.3), and v(t) is a scalar control function.

The aim of the control is to bring system (2.3), i.e., the state trajectory z(t), from the start point
$$z(t_s+) = z_s, \qquad (2.4)$$
i.e., from the point z(t) = z(t_s+) for t → t_s+, to the final point
$$z(t_f) = z_f \qquad (2.5)$$

in a finite time interval t_f - t_s. In other words, we are looking for such an open-loop control function v = v(t) which will achieve this in the finite time interval t_f - t_s. The start and final points will also be called the terminal points.

In order to solve Equation (2.3), we need to use an initial condition of the form
$$\big(D_{t_s+}^{\alpha-1} z\big)(t_s+) = w_1, \qquad (2.6)$$
that will correspond to condition (2.4), i.e., we have to find an appropriate value w_1 corresponding to (2.4). To this end, initial condition (2.6) can be rewritten (see [4]) as
from which
(2.7)
Proposition 1. A control v(t) that steers system (2.3) from the start point (2.4) to the final point (2.5) is of the form
$$v(t) = \big(D_{t_s+}^{\alpha} \varphi\big)(t), \qquad (2.8)$$
where φ(t) is an arbitrary C^1-function satisfying
$$\varphi(t_s) = z_s, \qquad \varphi(t_f) = z_f. \qquad (2.9)$$
Proof. Take (2.8) as the control applied to (2.3), i.e.,
$$\big(D_{t_s+}^{\alpha} z\big)(t) = \big(D_{t_s+}^{\alpha} \varphi\big)(t). \qquad (2.10)$$
Integrating both sides of (2.10) by means of $I_{t_s+}^{\alpha}$, i.e.,
we get (using the rule of integration given, e.g., in [4])
(2.11)
Since φ(t_s) = z_s, and the system starts from z(t_s) = z_s, we get

which finally yields z(t) = φ(t). In particular, z(t_f) = φ(t_f) = z_f. □

Example 2. We want to steer system (2.3) from the start point (2.4) to the final point (2.5) by means of the control given by (2.8), where
(2.12)
The values of the coefficients a_0 and a_1 have to be chosen such that conditions (2.9) hold, i.e., from
we calculate, for t_f > t_s,
(2.13)
Thus, polynomial (2.12) takes the form
and then Equation (2.3), with this control, is the following
(2.14)
In order to show that the above-calculated control v(t) is correct, we integrate (2.14) by means of $I_{t_s+}^{\alpha}$, giving
Since the value of the initial condition corresponding to the start point z_s is given by (2.6) and (2.7), substituting the already calculated coefficients a_0 and a_1 given by (2.13), we get
(2.15)

Since, for α < 1, evaluating (2.15) at t = t_s yields z(t_s) = z_s, and at t = t_f gives z(t_f) = z_f.
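The calculation of Example 2 can be sketched in a few lines of Python. The sketch below is our own illustration: it assumes that (2.12) is the first-degree polynomial φ(t) = a_0 + a_1 t, so that conditions (2.9) give a_1 = (z_f - z_s)/(t_f - t_s) and a_0 = z_s - a_1 t_s, and it forms the control from the Riemann-Liouville power rule $D_{t_s+}^{\alpha}(t - t_s)^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta+1-\alpha)}(t - t_s)^{\beta-\alpha}$; all numerical values are illustrative.

```python
from math import gamma

def linear_phi_coeffs(ts, tf, zs, zf):
    """Solve phi(ts) = zs, phi(tf) = zf for phi(t) = a0 + a1*t (conditions (2.9))."""
    a1 = (zf - zs) / (tf - ts)
    a0 = zs - a1 * ts
    return a0, a1

def rl_control(t, ts, a0, a1, alpha):
    """v(t) = D^alpha phi(t) for phi(t) = a0 + a1*t, via the Riemann-Liouville
    power rule. Writing phi(t) = c + a1*(t - ts) with c = phi(ts), the constant
    part contributes a term that is singular at t = ts (unlike the Caputo case)."""
    c = a0 + a1 * ts
    return (c * (t - ts) ** (-alpha) / gamma(1 - alpha)
            + a1 * (t - ts) ** (1 - alpha) / gamma(2 - alpha))

if __name__ == "__main__":
    ts, tf, zs, zf, alpha = 1.0, 5.0, 0.0, 2.0, 0.5
    a0, a1 = linear_phi_coeffs(ts, tf, zs, zf)
    print(a0 + a1 * ts, a0 + a1 * tf)   # reproduces zs and zf
    print(rl_control(3.0, ts, a0, a1, alpha))
```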

2.2 Linear control system in the form of nα-integrator

Consider a control system of order nα, for 0 < α < 1 and n ∈ ℕ_+, given by
$$\big(\mathcal{D}_{t_s+}^{n\alpha} z\big)(t) = v(t), \qquad (2.16)$$
with the initial conditions
(2.17)
where z(t) is a scalar solution of (2.16), (2.17), and v(t) is a scalar control function. By $\mathcal{D}_{t_s+}^{n\alpha}$ we mean the sequential derivative
$$\mathcal{D}_{t_s+}^{n\alpha} = \underbrace{D_{t_s+}^{\alpha} D_{t_s+}^{\alpha} \cdots D_{t_s+}^{\alpha}}_{n \text{ times}}. \qquad (2.18)$$
We introduce this notation (see Property 2.4 in [4]) because, in general, $D_{t_s+}^{n\alpha} \neq \mathcal{D}_{t_s+}^{n\alpha}$.
Initial conditions (2.17) are equivalent (see [4]) to
(2.19)
The aim of the control is to bring system (2.16) from the start point
(2.20)
at time t_s, to the final point
(2.21)

at time t_f, in the finite time interval t_f - t_s.

For initial conditions (2.17) to correspond to the start point Z_s, we calculate (from (2.19))
Proposition 3. A control v(t) that steers system (2.16) from the start point (2.20) to the final point (2.21) is of the form
$$v(t) = \big(\mathcal{D}_{t_s+}^{n\alpha} \varphi\big)(t),$$
where φ(t) is an arbitrary C^n-function satisfying
(2.22)
i.e.,
For conditions (2.22) defined in this way, the initial conditions are
(2.23)
Proof. Apply the control
to (2.16), and we obtain
(2.24)
Next, integrating (2.24) by means of
we get
(2.25)
Since the system starts from (2.20), and (2.22) holds, i.e., , we get
which yields
(2.26)
In particular, for t = t f we obtain
Analogously, consecutive integrations of (2.26) by means of $I_{t_s+}^{\alpha}$, together for all n integrations, yield
and

One of the possible choices of function φ (t) is
(2.27)
where
(2.28)

satisfying (2.22).

For a function of the type (t - t_s)^{iα}, the following holds
which is always satisfied, since i = 0, ..., 2n - 1 and α > 0 (0 < α < 1). It follows that for the function given by (2.28), we have
Thus, for the function φ(t) given by (2.27), we have , and then
Example 4. Consider control system (2.16) of order 2α (n = 2), i.e.,
which we want to bring from the start point
to the final point

in the finite time interval t f - t s.

We take function φ (t) in the form
for which
According to (2.22), the following must be satisfied
or, in the matrix form
(2.29)

from which we can calculate the coefficients a_i, 0 ≤ i ≤ 3, assuming that t_f > t_s.

Therefore, a control function steering the system from the start point Z_s to the final point Z_f is

where a_i, 0 ≤ i ≤ 3, are already calculated from (2.29).

2.3 Linear control system in the general state space form

Consider a linear fractional control system of the form
$$\Lambda: \quad \big(D_{t_s+}^{\alpha} x\big)(t) = A x(t) + b u(t), \qquad (2.30)$$
where x(t) = (x_1(t), ..., x_n(t))^T ∈ ℝ^n is a state space vector, A ∈ ℝ^{n×n}, u(t) ∈ ℝ, b ∈ ℝ^{n×1}, and 0 < α < 1. The initial conditions are
or, in the equivalent form
The aim of the control is to bring the control system Λ from the start point
(2.31)
to the final point
(2.32)
in the finite time interval t_f - t_s. To this end, since Λ is assumed to be controllable [15, 16], i.e.,
$$\operatorname{rank} R(A, b) = \operatorname{rank}\,[b, Ab, \ldots, A^{n-1}b] = n,$$
we can change the state coordinates x to new coordinates $\tilde{x}$, in the following linear way
$$\tilde{x} = T x,$$
such that Λ expressed in the new coordinates will be in the Frobenius form, i.e.,
In order to find a linear transformation T, we take a row vector t_1 ∈ ℝ^{1×n} such that
$$t_1 b = 0, \quad t_1 A b = 0, \quad \ldots, \quad t_1 A^{n-2} b = 0, \quad t_1 A^{n-1} b = 1, \qquad (2.33)$$
which yields
$$T = \begin{pmatrix} t_1 \\ t_1 A \\ \vdots \\ t_1 A^{n-1} \end{pmatrix}.$$
Indeed, if we take $\tilde{x} = T x$, where the first coordinate function is given by $\tilde{x}_1 = t_1 x$, and t_1 satisfies (2.33), then, using the linearity of the Riemann-Liouville derivative, we have
getting the Frobenius form. Condition (2.33) can also be rewritten in the matrix form
$$t_1 R(A, b) = (0, \ldots, 0, 1),$$
which gives rise to
$$t_1 = \big(R^{-1}(A, b)\big)_n,$$
where $\big(R^{-1}(A, b)\big)_n$ is the n-th row of the matrix R^{-1}(A, b).
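The vector t_1 and the transformation T can be computed directly from the controllability matrix. The following Python sketch (our own illustration, with a hypothetical controllable pair A, b rather than one taken from the article) builds R(A, b) = [b, Ab, ..., A^{n-1}b], takes t_1 as the last row of R^{-1}(A, b), and stacks T = (t_1; t_1 A; ...; t_1 A^{n-1}); the construction is purely algebraic and therefore does not depend on the order α.

```python
import numpy as np

def frobenius_transform(A, b):
    """Return T such that x_tilde = T x puts the controllable pair (A, b)
    into the Frobenius (controller canonical) form.
    t1 is the last row of the inverse controllability matrix R(A, b)."""
    n = A.shape[0]
    R = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(n)])
    if np.linalg.matrix_rank(R) < n:
        raise ValueError("(A, b) is not controllable")
    t1 = np.linalg.inv(R)[-1, :]                      # condition (2.33)
    T = np.vstack([t1 @ np.linalg.matrix_power(A, k) for k in range(n)])
    return T

if __name__ == "__main__":
    # an illustrative (hypothetical) controllable pair, not the one from the article
    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = np.array([1.0, 0.0])
    T = frobenius_transform(A, b)
    print(T @ A @ np.linalg.inv(T))   # companion (Frobenius) form [[0, 1], [2, 5]]
    print(T @ b)                      # [0, 1]
```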

Next, applying to the system a feedback of the form
(2.34)
where and v(t) , we get
Denoting , and using notation (2.18), we get
then
(2.35)
Since the transformation is already known, for the given start point (2.31) and final point (2.32) we can calculate corresponding terminal points expressed in the new coordinates , i.e.,
and
Then, for system (2.35) the terminal points are the following
(2.36)
and
(2.37)

In this way, we have transformed the problem of finding a control u(t) for system (2.30), steering it from the start point (2.31) to the final point (2.32), into an equivalent problem of finding a control v(t) for system (2.35), steering it from the start point (2.36) to the final point (2.37), which has already been explained in Sect. 2.2.

To this end, we take a C n -function φ (t) satisfying (2.22) for given (2.36) and (2.37). For such a function φ (t), the control is
Finally, using (2.34), the desired control u(t) taking system Λ from x s to x f is the following

3 Fractional control systems with Caputo derivative

We will use the following definition of the Caputo derivative. Let α ∈ ℂ and ℜ(α) ≥ 0. If α ∉ ℕ_0 and n = [ℜ(α)] + 1, then
$$\big({}^{C}D_{t_s+}^{\alpha} f\big)(t) = \left(D_{t_s+}^{\alpha}\left[f(\tau) - \sum_{k=0}^{n-1} \frac{f^{(k)}(t_s)}{k!}(\tau - t_s)^{k}\right]\right)(t).$$
If α = n ∈ ℕ_0, then
$$\big({}^{C}D_{t_s+}^{n} f\big)(t) = f^{(n)}(t).$$
Consider a differential equation, for α ∈ ℝ and α > 0,
$$\big({}^{C}D_{t_s+}^{\alpha} z\big)(t) = f\big(t, z(t)\big), \qquad (3.1)$$
with the initial conditions
$$z^{(k)}(t_s) = z_k, \quad k = 0, 1, \ldots, n-1. \qquad (3.2)$$

It has already been shown, e.g., in [4], that a solution of (3.1) and (3.2) exists.

3.1 Linear control system in the form of α-integrator

Consider a linear fractional differential equation
$$\big({}^{C}D_{t_s+}^{\alpha} z\big)(t) = v(t), \qquad (3.3)$$
with the initial conditions
(3.4)

where z(t) is a scalar solution and v(t) is a scalar control function.

The aim of the control is to steer system (3.3) from the start point
(3.5)
to the final point
(3.6)
in a finite time interval t_f - t_s. In contrast to the equation defined by means of the Riemann-Liouville derivative, initial conditions (3.4) coincide with start point (3.5), i.e.,
Proposition 5. A control v(t) that steers system (3.3) from the start point (3.5) to the final point (3.6) is of the form
$$v(t) = \big({}^{C}D_{t_s+}^{\alpha} \varphi\big)(t), \qquad (3.7)$$
where φ(t) is an arbitrary C^n-function satisfying
(3.8)
i.e.,
Proof. As the control applied to (3.3), take (3.7); then
$$\big({}^{C}D_{t_s+}^{\alpha} z\big)(t) = \big({}^{C}D_{t_s+}^{\alpha} \varphi\big)(t). \qquad (3.9)$$
Integrating (3.9) (according to the rule given by Lemma 2.22 in [4]) by means of $I_{t_s+}^{\alpha}$, i.e.,
we get
(3.10)
If the system starts from Z(t_s) = Z_s, and Φ(t_s) = Z_s, then from (3.10) we get
which implies

A possible choice of the function φ(t) is to take a (2n - 1)-degree polynomial of the form
(3.11)
satisfying (3.8). A control function v(t) for the function φ(t) given by (3.11) is
and thus,
(3.12)

Example 6. Consider control system (3.3), for 0 < α < 1, where n = [α] + 1 = 1. We want to find a control function v(t), which steers (3.3) from the given start point z(t s) = z s0 to the given final point z(t f ) = z f0.

To this end, take φ(t) of the form
(3.13)
where a 0 and a 1 are such that conditions (3.8) are met, i.e.,
A solution of the above system of equations, for t f > t s, is
Therefore, polynomial (3.13) is of the form
and the control given by (3.12) is as follows
(3.14)
So, system (3.3), with calculated control, is of the form
(3.15)
To be sure that this control is correct, let us integrate (3.15) by means of $I_{t_s+}^{\alpha}$, obtaining
Since
we get
(3.16)

Evaluating (3.16) at t = t s gives z(t s) = z s0 and for t = t f yields z(t f ) = z f0, which means that control (3.14) correctly steers the system from z s0 to z f0.
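The result of Example 6 can also be verified numerically. In the sketch below (our own illustration, with arbitrary numerical data) we take, as in Example 6, φ(t) = a_0 + a_1 t; since the Caputo derivative of a constant vanishes, the control reduces to v(t) = a_1 (t - t_s)^{1-α}/Γ(2 - α) with a_1 = (z_{f0} - z_{s0})/(t_f - t_s), and applying $I_{t_s+}^{\alpha}$ to it recovers z(t) = z_{s0} + a_1(t - t_s), so that z(t_f) = z_{f0}.

```python
import numpy as np
from math import gamma

def control_v(t, ts, tf, zs0, zf0, alpha):
    """Caputo-based control for phi(t) = a0 + a1*t: the Caputo derivative of a
    constant is 0, and C-D^alpha (t - ts) = (t - ts)^(1 - alpha)/Gamma(2 - alpha)."""
    a1 = (zf0 - zs0) / (tf - ts)
    return a1 * (t - ts) ** (1.0 - alpha) / gamma(2.0 - alpha)

def frac_integral(f, ts, t, alpha, N=4000):
    """I^alpha f(t) via a product-rectangle rule (the kernel is integrated exactly)."""
    edges = np.linspace(ts, t, N + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    w = ((t - edges[:-1]) ** alpha - (t - edges[1:]) ** alpha) / alpha
    return np.sum(w * f(mid)) / gamma(alpha)

if __name__ == "__main__":
    ts, tf, zs0, zf0, alpha = 1.0, 5.0, 0.0, 2.0, 0.7
    # z(t) = zs0 + I^alpha v(t); it should hit zf0 at t = tf
    z_tf = zs0 + frac_integral(lambda tau: control_v(tau, ts, tf, zs0, zf0, alpha),
                               ts, tf, alpha)
    print(z_tf, zf0)
```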

Remark 7. For 0 < α < 1, the problem of steering system (3.3) from the start point (initial condition) (3.5) to the final point (3.6) can also be solved using the known relation between the Caputo and Riemann-Liouville derivatives, i.e.,
Therefore, system (3.3) together with terminal points (3.5) and (3.6) can be transformed to the following form
(3.17)
where
(3.18)
Indeed, control v(t) steering system (3.17) from the given point y(t s+) to the given point y(t f ), steers system (3.3) from the given point z(t s) to the given final point z(t f ), which follows from the inverse transformation of (3.18), i.e.,

3.2 Linear control system in the form of nα-integrator

Consider a control system of order nα, where α ∈ ℝ, 0 < α ≤ 1, and n ∈ ℕ_+, given by
(3.19)
with initial conditions

where z(t) is a scalar solution, v(t) is a scalar control function, and the sequential derivative $\mathcal{D}_{t_s+}^{n\alpha}$ is defined as in (2.18), but with the Caputo derivative.

The aim of the control is to steer system (3.19) from the start point
(3.20)
to the final point
(3.21)
in a finite time interval t f - t s. Obviously, we have
Proposition 8. A control v(t) that steers system (3.19) from start point (3.20) to final point (3.21) is of the form
(3.22)
where φ(t) is an arbitrary C n -function satisfying
(3.23)
i.e.,
Proof. Apply to (3.19) control (3.22) obtaining
(3.24)
Next, integrating both sides of (3.24) by means of , i.e.,
we get
Since system (3.19) starts from (3.20), and (3.23) holds, we get
(3.25)
Analogously, consecutive integrations of (3.25) by means of $I_{t_s+}^{\alpha}$ yield (after all n integrations)
and then

A possible choice of the function φ(t) is
where the basis functions are given by (2.28) and satisfy (3.23). Since, for a function of the type (t - t_s)^{iα},
it follows that for the function given by (2.28) we have
(3.26)
Therefore, it follows that , and after applying (3.26), we get
(3.27)
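For n = 2, the coefficients of such a φ(t) can be obtained from a 4 × 4 linear system in the same spirit as (2.29). The Python sketch below is our own reading of this construction: it assumes the basis functions of (2.28) are the fractional powers (t - t_s)^{iα}, i = 0, ..., 3, and that conditions (3.23) prescribe φ and its sequential Caputo derivative $\mathcal{D}^{\alpha}\varphi$ at t_s and t_f; the terminal data and all numbers are hypothetical.

```python
import numpy as np
from math import gamma

ALPHA = 0.6
TS, TF = 1.0, 5.0

def basis(t, i, ts=TS, alpha=ALPHA):
    """psi_i(t) = (t - ts)^(i*alpha)."""
    return (t - ts) ** (i * alpha)

def caputo_d_basis(t, i, ts=TS, alpha=ALPHA):
    """Caputo D^alpha of psi_i: 0 for i = 0 (derivative of a constant), otherwise
    Gamma(i*alpha + 1)/Gamma((i - 1)*alpha + 1) * (t - ts)^((i - 1)*alpha)."""
    if i == 0:
        return 0.0
    return gamma(i * alpha + 1) / gamma((i - 1) * alpha + 1) * (t - ts) ** ((i - 1) * alpha)

# hypothetical terminal data: (z(ts), D^alpha z(ts)) and (z(tf), D^alpha z(tf))
zs = np.array([0.0, 0.0])
zf = np.array([2.0, 0.0])

# rows: phi(ts), D^alpha phi(ts), phi(tf), D^alpha phi(tf); columns: a_0, ..., a_3
M = np.array([[basis(TS, i) for i in range(4)],
              [caputo_d_basis(TS, i) for i in range(4)],
              [basis(TF, i) for i in range(4)],
              [caputo_d_basis(TF, i) for i in range(4)]])
a = np.linalg.solve(M, np.concatenate([zs, zf]))
print(a)   # coefficients of phi(t) = sum_i a_i (t - ts)^(i*alpha)
```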

3.3 Linear control system in the general state space form

Consider a controllable linear fractional control system of the form
where x(t) = (x_1(t), ..., x_n(t))^T ∈ ℝ^n is the state space vector, A ∈ ℝ^{n×n}, u(t) ∈ ℝ, b ∈ ℝ^{n×1}, and 0 < α ≤ 1. The initial conditions are
The aim of the control is to bring the control system Λ from the start point
to the final point
in a finite time interval t f - t s. Then, obviously, the initial conditions have to be set to
The subsequent procedure is analogous to that already presented in Sect. 2.3, but with the Caputo derivative, arriving at a system of the form

for which we apply the theory presented in Sect. 3.2.

Example 9. For a linear control system in the form
calculate a control function u(t) taking system Λ from the start (initial) point (at time t s = 1 s)
to the final point (at time t f = 5 s)

in the finite time interval t f - t s = 4 s.

Transform Λ by means of the transformation given by
into the system in the form
Apply to it the control
(3.28)
resulting in
Denoting (and as a consequence ), we get
(3.29)
Now, we want to find a control v(t), which takes (3.29) from the start point
to the final point
As a control function, of the form (3.27), we take
where
and the coefficients a i , 0 ≤ i ≤ 3, are such that
or, in the matrix form
The calculated coefficients are
and according to (3.27), we have
Finally, complete control (3.28) applied to Λ, achieving the task, is

4 Conclusions

In this article, a method for steering a control system from one point to another in the state space was presented. For systems described by the Riemann-Liouville derivative as well as by the Caputo derivative, three forms of control systems were studied. In both cases, the nα-integrator form was introduced as a scalar representation of a control system in a controllable state-space form. Because of the specific nature of the initial conditions for systems defined by means of the Riemann-Liouville derivative, a numerical example was given only for the systems with the Caputo derivative. The candidate control functions presented in the article are not the only possible choice; other functions achieving the task can also be found. Since in our approach no restrictions are imposed on the trajectory joining the two given points, the family of such trajectories, and thereby of "base functions", can be relatively wide, and the authors have proposed some selected examples of such functions (e.g., (2.27), (3.11)). If one additionally wishes to steer a system from a given point to another in an optimal way, i.e., minimizing some cost function, a specific trajectory is implied. In such a case it is still possible to look for other types of functions (satisfying one of Propositions 1, 3, 5, and 8) restricted additionally by these optimality constraints. In other words, it may be possible to find other types of functions, perhaps different from those selected by the authors, that achieve the desired task. Interesting results in optimal control of fractional systems can be found, e.g., in [12-14].

Declarations

Authors’ Affiliations

(1) Institute of Control and Industrial Electronics, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland

References

  1. Dzieliński A, Sarwas G, Sierociuk D: Ultracapacitor parameters identification based on fractional order model. Proceedings of the European Control Conference, Budapest, Hungary; 2009.
  2. Dzieliński A, Sarwas G, Sierociuk D: Time domain validation of ultracapacitor fractional order model. Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, GA, USA; 2010.
  3. Vinagre M, Monje C, Calderon A: Fractional order systems and fractional order control actions. Lecture 3 of the IEEE CDC02: Fractional Calculus Applications in Automatic Control and Robotics; 2002.
  4. Kilbas A, Srivastava H, Trujillo J: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam; 2006.
  5. Miller KS, Ross B: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York; 1993.
  6. Oldham KB, Spanier J: The Fractional Calculus. Academic Press, New York; 1974.
  7. Oustaloup A: La dérivation non entière. Hermès, Paris; 1995.
  8. Podlubny I: Fractional Differential Equations. Academic Press, San Diego; 1999.
  9. Samko S, Kilbas A, Marichev O: Fractional Integrals and Derivatives. Gordon and Breach, Amsterdam; 1993.
  10. Oustaloup A: Commande CRONE. Hermès, Paris; 1993.
  11. Podlubny I, Dorcak L, Kostial I: On fractional derivatives, fractional order systems and PI^λD^μ controllers. In Proceedings of the 36th IEEE Conference on Decision and Control. San Diego, CA, USA; 1997.
  12. Agrawal OP, Baleanu D: A Hamiltonian formulation and a direct numerical scheme for fractional optimal control problems. J Vibr Control 2007, 13(9-10):1269-1281. doi:10.1177/1077546307077467
  13. Agrawal OP, Defterli O, Baleanu D: Fractional optimal control problems with several state and control variables. J Vibr Control 2010, 16(13):1967-1976. doi:10.1177/1077546309353361
  14. Baleanu D, Defterli O, Agrawal OP: A central difference numerical scheme for fractional optimal control problems. J Vibr Control 2009, 15(4):583-597. doi:10.1177/1077546308088565
  15. Djennoune S, Bettayeb M: New results on controllability and observability of fractional order systems. J Vibr Control 2008, 14(9-10):1531-1541. doi:10.1177/1077546307087432
  16. Matignon D, d'Andréa-Novel B: Some results on controllability and observability of finite-dimensional fractional differential systems. Comput Eng Syst Appl 1996, 2:952-956.

Copyright

© Dzieliński and Malesza; licensee Springer. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
