# Time-Delay and Fractional Derivatives

- J.A. Tenreiro Machado

**2011**:934094

https://doi.org/10.1155/2011/934094

© J. A. Tenreiro Machado. 2011

**Received: **7 January 2011

**Accepted: **4 February 2011

**Published: **13 March 2011

## Abstract

This paper proposes the calculation of fractional algorithms based on time-delay systems. The study starts by analyzing the memory properties of fractional operators and their relation with time delay. Based on the Fourier analysis an approximation of fractional derivatives through time-delayed samples is developed. Furthermore, the parameters of the proposed approximation are estimated by means of genetic algorithms. The results demonstrate the feasibility of the new perspective.

## 1. Introduction

Fractional calculus (FC) deals with the generalization of integrals and derivatives to noninteger orders [1–7]. In recent decades the application of FC has developed considerably in physics and engineering, and a large body of research has emerged on a multitude of applications such as viscoelasticity, signal processing, diffusion, modeling, and control [8–17]. The area of dynamical systems and control has received considerable attention, and several recent papers addressing evolutionary concepts and fractional algorithms can be mentioned [18, 19]. Nevertheless, the algorithms involved in the calculation of fractional derivatives require the adoption of numerical approximations [20–26], and new research directions are clearly needed.

Bearing these ideas in mind, this paper addresses the optimal system control using fractional order algorithms and is organized as follows. Section 2 introduces the calculation of fractional derivatives and formulates the problem of optimization through genetic algorithms (GAs). Section 3 presents a set of experiments that demonstrate the effectiveness of the proposed optimization strategy. Finally, Section 4 outlines the main conclusions.

## 2. Problem Formulation and Adopted Techniques

where Γ(·) is Euler's gamma function, [·] means the integer part of its argument, and h is the time step increment.

where ω and F represent the Fourier variable and operator, respectively, and j = √−1.

These expressions demonstrate that fractional derivatives have memory, in contrast with integer derivatives, which are local operators. There is a long-standing, still ongoing discussion about the pros and cons of the different definitions. These debates are outside the scope of this paper but, in short, while the Riemann-Liouville definition involves an initialization of fractional order, the Caputo counterpart requires integer-order initial conditions, which are easier to apply (the Caputo initial conditions are often loosely said to have "physical meaning"). The Grünwald-Letnikov formulation is frequently adopted in numerical algorithms because it leads directly to a discrete-time calculation scheme, obtained by approximating the time increment through the sampling period.
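As a minimal illustration of this discrete-time scheme, the Grünwald-Letnikov sum can be sketched as follows (the function name and sample layout are ours, not the paper's; the recursion for the signed binomial coefficients is the standard one):

```python
def gl_derivative(samples, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative.

    samples[k] holds x(t - k*h), so samples[0] is the newest value.
    The signed binomial coefficients (-1)**k * C(alpha, k) are built
    recursively: c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k.
    """
    c = 1.0
    total = samples[0]
    for k in range(1, len(samples)):
        c *= (k - 1 - alpha) / k
        total += c * samples[k]
    return total / h ** alpha
```

For alpha = 1 the coefficients vanish after the second term and the sum collapses to the first-order backward difference, illustrating how the memory of the operator disappears at integer orders.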

where the a_k are weight coefficients, the τ_k are the corresponding delays, and n is the order of the approximation.

Before continuing we must mention that, although based on distinct premises, expression (2.3), inspired by the interpretation of fractional derivatives proposed in [27], is somehow a subset of the interesting multiscaling functional equation proposed by Nigmatullin in [28]. Besides, while in [28] the parameters can take complex values, in the present case they are restricted to real values. In fact, expression (2.3) adopts the well-known time-delay operator, usual in control system theory, following the Laplace property L{x(t − τ)} = e^{−sτ}X(s), where s and L represent the Laplace variable and operator, respectively.

Another aspect that deserves attention is that, while stability and causality may impose restrictions on the parameters in (2.3), it was decided not to impose *a priori* any restriction on the numerical values in the optimization procedure developed in the sequel. For example, concerning the delays, while it is not feasible to "guess" future values of the signal (only past samples are available for signal processing), it is important to analyze the values that emerge without establishing any *a priori* limitation. Nevertheless, in a second phase, stability and causality are addressed.

where i represents an index over the sampling frequencies within the bandwidth and m denotes the total number of sampling frequencies. Therefore, the quality of the approximation depends not only on the order of the approximation and the fractional order, but also on the bandwidth.
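To make the criterion concrete, a sketch of such a fitness evaluation in Python (a squared-error form is assumed here, and the names `fitness`, `a`, and `tau` are ours, not the paper's):

```python
import cmath

def fitness(a, tau, alpha, freqs):
    """Sum of squared errors between the time-delay approximation
    sum_k a[k] * exp(-j*w*tau[k]) and the ideal fractional
    differentiator (j*w)**alpha at the sampling frequencies freqs."""
    err = 0.0
    for w in freqs:
        approx = sum(ak * cmath.exp(-1j * w * tk)
                     for ak, tk in zip(a, tau))
        err += abs(approx - (1j * w) ** alpha) ** 2
    return err
```

For example, the backward difference (a = [1/h, −1/h], tau = [0, h]) yields a small error for α = 1 at frequencies well below the sampling rate, as expected.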

For the optimization of the fitness function in (2.5), a genetic algorithm (GA) is adopted. GAs are a class of computational techniques for finding approximate solutions to optimization and search problems [29, 30]. A GA operates on a population of candidate solutions that evolves computationally towards better solutions. Once the genetic representation and the fitness function are defined, the GA proceeds to initialize a population randomly and then to improve it through the repetitive application of mutation, crossover, and selection operators. During the successive iterations, a part or the totality of the population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by the fitness function) are usually more likely to be selected. The GA terminates when either the maximum number of generations is reached or a satisfactory fitness level has been attained.

The pseudocode of a standard GA is as follows.

- (1) Choose the initial population
- (2) Evaluate the fitness of each individual in the population
- (3) Repeat
    - (a) Select best-ranking individuals to reproduce
    - (b) Breed a new generation through crossover and mutation, giving birth to offspring
    - (c) Evaluate the fitness of the offspring individuals
    - (d) Replace the worst-ranked part of the population with the offspring
- (4) Until termination.

A common complementary technique, often adopted to speed up convergence and denoted as elitism, is the process of selecting the best individuals to serve as parents of the offspring generation.
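The loop above, combined with elitism, can be sketched as follows for the problem at hand; the population size, rates, the parameter layout `[a_1..a_n, tau_1..tau_n]`, and the function names are illustrative assumptions of ours, not the settings reported in Section 3:

```python
import cmath
import random

def fitness(params, alpha, freqs):
    # params = [a_1..a_n, tau_1..tau_n]; squared error against (j*w)**alpha.
    n = len(params) // 2
    err = 0.0
    for w in freqs:
        approx = sum(params[k] * cmath.exp(-1j * w * params[n + k])
                     for k in range(n))
        err += abs(approx - (1j * w) ** alpha) ** 2
    return err

def run_ga(alpha, freqs, n=2, pop_size=40, generations=60,
           p_mut=0.1, seed=0):
    rng = random.Random(seed)
    dim = 2 * n
    # No a priori sign restrictions: weights and delays range freely.
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, alpha, freqs))
        elite = pop[:pop_size // 4]            # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)
            child = p1[:cut] + p2[cut:]        # one-point crossover
            if rng.random() < p_mut:           # mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0.0, 0.3)
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda p: fitness(p, alpha, freqs))
```

A call such as `run_ga(0.5, [0.1 * i for i in range(1, 11)])` returns the weights and delays of a two-term approximation fitted over the bandwidth 0.1–1.0 rad/s; because the elite is carried forward unchanged, the best fitness is nonincreasing across generations.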

We observe that we have not introduced *a priori* any restriction on the numerical values of the parameters that result during the optimization procedure. It is well known that one of the advantages of GAs over classical optimization techniques is precisely their ability to handle such situations easily. One technique is simply to substitute "unsuitable" elements of the GA population with new ones generated randomly. Furthermore, during the generation of the GA elements it is straightforward to impose restrictions. As mentioned previously, in a first phase no limitation is considered, in order to reveal more clearly the pattern that emerges freely from the time-delay algorithm. After obtaining the preliminary results, in a second phase, several restrictions are considered and the optimization GA is executed again.

## 3. Numerical Experiments and Results

In this section we develop a set of experiments for the analysis of the proposed concepts. We study several approximation orders and several fractional orders of the derivative. Concerning the bandwidth, several ranges (rad/s) are considered. In all cases the same number of sampling frequencies is adopted, with identical distances between consecutive measuring points along the *locus* of the frequency response.

Experiments demonstrated some difficulties in the GA acquiring the optimal values, the problem being harder the higher the order of the approximation, that is, the larger the number of parameters to be estimated. Consequently, several measures to overcome that problem were envisaged, namely, a large GA population, the crossover of all population elements together with the adoption of elitism, a mutation probability of 10%, and an evolution over a large number of iterations. Even so, it was observed that the GA tended to stabilize in suboptimal solutions, and other values for the GA parameters had no significant impact. Therefore, a complementary strategy was adopted to prevent such behavior, namely restarting the base GA population and executing new trials until a good solution was obtained.

It was also observed that all GA executions led to positive values for some of the parameters, while most experiments led to negative values for the others; nevertheless, in some cases, particularly for near-integer fractional orders, where the GA had more convergence difficulties, some positive values occasionally occurred. Several experiments restricting the GA to negative values proved that the fitting was possible with good accuracy and, therefore, to avoid scattered results with unclear meaning, those restrictions were included in the optimization algorithm.

It is clear that the higher the order of the approximation, the better the approximation, that is, the smaller the fitness value. When the bandwidth increases we observe larger values of the weighting factors, but the delays remain in a limited range of small values, lying closer to zero for fractional orders near unity and farther from zero for orders away from it. Moreover, for larger bandwidths the GA has more difficulties in estimating the parameters of the approximation.

It is clear that expression (2.4) leads to a superior approximation. Furthermore, although not particularly important with present-day computational resources, expression (2.4) poses a calculation load inferior to that of (3.3). In fact, since in real time the delay consists simply in a memory shift, (2.4) requires fewer sums and multiplications than (3.3).

In conclusion, while the aim of this paper was to explore the relationship between the fractional operator and the time delay, it was verified that the proposed algorithm can be applied to successfully approximate fractional expressions.

## 4. Conclusions

The recent advances in FC point towards important developments in the application of this mathematical concept. During the last years several algorithms for the calculation of fractional derivatives were proposed, but the results are still far from desirable. In this paper a new method was introduced, based on the intrinsic properties of fractional systems, that is, inspired by the memory effect of the fractional operator. The optimization scheme for the calculation of the fractional approximation adopted a genetic algorithm, leading to near-optimal solutions and meaningful results. The results demonstrate not only the effectiveness of the proposed strategy, but also point towards further studies on its generalization to other classes of fractional dynamical systems and on the evaluation of time-based techniques.

## Declarations

### Acknowledgment

The author would like to acknowledge FCT, FEDER, POCTI, POSI, POCI, POSC, POTDC, and COMPETE for their support to R&D Projects and GECAD Unit.

## References

1. Oldham KB, Spanier J: *The Fractional Calculus: Theory and Applications of Differentiation and Integration to Arbitrary Order*. Academic Press, London, UK; 1974.
2. Miller KS, Ross B: *An Introduction to the Fractional Calculus and Fractional Differential Equations*. John Wiley & Sons, New York, NY, USA; 1993.
3. Samko SG, Kilbas AA, Marichev OI: *Fractional Integrals and Derivatives: Theory and Applications*. Gordon and Breach Science Publishers, Yverdon, Switzerland; 1993.
4. Podlubny I: *Fractional Differential Equations*. Mathematics in Science and Engineering, Volume 198. Academic Press, San Diego, Calif, USA; 1999.
5. Kilbas AA, Srivastava HM, Trujillo JJ: *Theory and Applications of Fractional Differential Equations*. North-Holland Mathematics Studies, Volume 204. Elsevier Science, Amsterdam, The Netherlands; 2006.
6. Klimek M: *On Solutions of Linear Fractional Differential Equations of a Variational Type*. Czestochowa University of Technology; 2009.
7. Diethelm K: *The Analysis of Fractional Differential Equations*. Lecture Notes in Mathematics, Volume 2004. Springer, Berlin, Germany; 2010.
8. Oustaloup A: *La commande CRONE: commande robuste d'ordre non entier*. Hermes; 1991.
9. Nigmatullin RR: A fractional integral and its physical interpretation. *Teoreticheskaya i Matematicheskaya Fizika* 1992, **90**(3):354-368.
10. Podlubny I: Fractional-order systems and PI^{λ}D^{μ}-controllers. *IEEE Transactions on Automatic Control* 1999, **44**(1):208-214.
11. Tenreiro Machado JA: Discrete-time fractional-order controllers. *Fractional Calculus & Applied Analysis* 2001, **4**(1):47-66.
12. Magin RL: *Fractional Calculus in Bioengineering*. Begell House Publishers; 2006.
13. Sabatier J, Agrawal OP, Tenreiro Machado JA (Eds): *Advances in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering*. Springer, Dordrecht, The Netherlands; 2007.
14. Baleanu D: About fractional quantization and fractional variational principles. *Communications in Nonlinear Science and Numerical Simulation* 2009, **14**(6):2520-2523. doi:10.1016/j.cnsns.2008.10.002
15. Mainardi F: *Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models*. Imperial College Press, London, UK; 2010.
16. Caponetto R, Dongola G, Fortuna L, Petráš I: *Fractional Order Systems: Modeling and Control Applications*. World Scientific; 2010.
17. Monje CA, Chen YQ, Vinagre BM, Xue D, Feliu V: *Fractional Order Systems and Controls: Fundamentals and Applications*. Advances in Industrial Control. Springer, Berlin, Germany; 2010.
18. Tenreiro Machado JA, Galhano AM, Oliveira AM, Tar JK: Optimal approximation of fractional derivatives through discrete-time fractions using genetic algorithms. *Communications in Nonlinear Science and Numerical Simulation* 2010, **15**(3):482-490. doi:10.1016/j.cnsns.2009.04.030
19. Tenreiro Machado JA: Optimal tuning of fractional controllers using genetic algorithms. *Nonlinear Dynamics* 2010, **62**(1-2):447-452. doi:10.1007/s11071-010-9731-5
20. Al-Alaoui MA: Novel digital integrator and differentiator. *Electronics Letters* 1993, **29**(4):376-378. doi:10.1049/el:19930253
21. Tenreiro Machado JA: Analysis and design of fractional-order digital control systems. *Systems Analysis Modelling Simulation* 1997, **27**(2-3):107-122.
22. Tseng C-C: Design of fractional order digital FIR differentiators. *IEEE Signal Processing Letters* 2001, **8**(3):77-79. doi:10.1109/97.905945
23. Chen YQ, Moore KL: Discretization schemes for fractional-order differentiators and integrators. *IEEE Transactions on Circuits and Systems I* 2002, **49**(3):363-367. doi:10.1109/81.989172
24. Vinagre BM, Chen YQ, Petráš I: Two direct Tustin discretization methods for fractional-order differentiator/integrator. *Journal of the Franklin Institute* 2003, **340**(5):349-362. doi:10.1016/j.jfranklin.2003.08.001
25. Chen Y, Vinagre BM: A new IIR-type digital fractional order differentiator. *Signal Processing* 2003, **83**(11):2359-2365. doi:10.1016/S0165-1684(03)00188-9
26. Barbosa RS, Tenreiro Machado JA, Silva MF: Time domain design of fractional differintegrators using least-squares. *Signal Processing* 2006, **86**(10):2567-2581. doi:10.1016/j.sigpro.2006.02.005
27. Tenreiro Machado JA: Fractional derivatives: probability interpretation and frequency response of rational approximations. *Communications in Nonlinear Science and Numerical Simulation* 2009, **14**(9-10):3492-3497. doi:10.1016/j.cnsns.2009.02.004
28. Nigmatullin RR: Strongly correlated variables and existence of the universal distribution function for relative fluctuations. *Physics of Wave Phenomena* 2008, **16**(2):119-145. doi:10.3103/S1541308X08020064
29. Holland JH: *Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence*. University of Michigan Press, Ann Arbor, Mich, USA; 1975.
30. Goldberg DE: *Genetic Algorithms in Search, Optimization, and Machine Learning*. Addison-Wesley, Reading, Mass, USA; 1989.
31. Al-Alaoui MA: Novel digital integrator and differentiator. *Electronics Letters* 1993, **29**(4):376-378. doi:10.1049/el:19930253
32. Al-Alaoui MA: Filling the gap between the bilinear and the backward-difference transforms: an interactive design approach. *International Journal of Electrical Engineering Education* 1997, **34**(4):331-337.
33. Smith JM: *Mathematical Modeling and Digital Simulation for Engineers and Scientists*. Wiley-Interscience, New York, NY, USA; 1977.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.