Variational Optimal-Control Problems with Delayed Arguments on Time Scales

This article deals with variational optimal-control problems on time scales in the presence of delay in the state variables. The problem is considered on a time scale unifying the discrete, the continuous and the quantum cases. Two examples in the discrete and quantum cases are analyzed to illustrate our results.


Introduction
The calculus of variations interacts deeply with several branches of science and engineering, e.g. geometry, economics and electrical engineering [16]. Optimal control problems appear in various disciplines of science and engineering as well [19].
Time scale calculus was initiated by Hilger (see Ref. [13] and the references therein) with the aim of unifying the two existing approaches to dynamic models, difference equations and differential equations, into a general framework. This kind of calculus can be used to model dynamic processes whose time domains are more complex than the set of integers or real numbers [10]. Several potential applications of this new theory have been reported (see, for example, Refs. [10], [4], [12] and the references therein). Many researchers have studied the calculus of variations on time scales; some of them followed the delta approach and others the nabla approach (see, for example, Refs. [17], [15], [8], [18], [2] and [20]).
It is well known that the presence of delay is of great importance in applications. For example, its appearance in dynamic equations, variational problems and optimal control problems may affect the stability of solutions. Very recently, some authors have paid attention to the importance of imposing delay in fractional variational problems [6]. The non-locality of the fractional operators, together with the presence of delay, may give better results for problems involving the dynamics of complex systems. To the best of our knowledge, there is no work in the direction of variational optimal-control problems with delayed arguments on time scales.
Our aim in this article is to obtain the Euler-Lagrange equations for a functional whose Lagrangian has state variables defined on a time scale whose backward jump operator is ρ(t) = qt − h, q > 0, h ≥ 0. This time scale, of course, includes the discrete, the continuous and the quantum cases. The state variables of this Lagrangian allow the presence of delay as well. We then generalize the results to the n-dimensional case. Dealing with such a general problem enables us to recover many previously obtained results [3, 7, 11, 14].
The structure of the article is as follows. In Section 2 basic definitions and preliminary concepts about time scales are presented; the nabla time scale derivative approach is followed there. In Section 3 the Euler-Lagrange equations in one unknown function, and then in the n-dimensional case, are obtained. In Section 4 the variational optimal-control problem is proposed and solved. In Section 5 the results obtained in the previous sections are particularized to the discrete and quantum cases, where two examples are analyzed in detail. Finally, Section 6 contains our conclusions.

Preliminaries
A time scale is an arbitrary nonempty closed subset of the real line R. Thus the real numbers R and the natural numbers N are examples of time scales. Throughout this article, and following [10], the time scale will be denoted by T. The forward jump operator σ : T → T is defined by σ(t) := inf{s ∈ T : s > t}, while the backward jump operator ρ : T → T is defined by ρ(t) := sup{s ∈ T : s < t}. In connection we define the backward graininess function ν : T → [0, ∞) by ν(t) := t − ρ(t). In order to write the backward time scale derivative down, we need the set T_κ, which is derived from the time scale T as follows: if T has a right-scattered minimum m, then T_κ = T − {m}; otherwise, T_κ = T.
Definition 2.1. [5] Assume f : T → R is a function and t ∈ T_κ. Then the backward (nabla) time-scale derivative f^∇(t) is the number (provided it exists) with the property that, given any ε > 0, there exists a neighborhood U of t (i.e., U = (t − δ, t + δ) ∩ T for some δ > 0) such that |f(ρ(t)) − f(s) − f^∇(t)(ρ(t) − s)| ≤ ε|ρ(t) − s| for all s ∈ U. Moreover, we say that f is (nabla) differentiable on T_κ provided that f^∇(t) exists for all t ∈ T_κ.
The following theorem is Theorem 3.2 in [9] and an analogue to Theorem 1.16 in [10].
Theorem 2.2. [5] Assume f : T → R is a function and t ∈ T_κ. Then we have the following:
(i) If f is nabla differentiable at t, then f is continuous at t.
(ii) If f is continuous at t and t is left-scattered, then f is nabla differentiable at t with f^∇(t) = (f(t) − f(ρ(t)))/ν(t).
(iii) If t is left-dense, then f is nabla differentiable at t if and only if the limit lim_{s→t} (f(t) − f(s))/(t − s) exists as a finite number. In this case f^∇(t) equals this limit.
(iv) If f is nabla differentiable at t, then f(ρ(t)) = f(t) − ν(t) f^∇(t).

Example 2.3. (i) T = R or any closed interval (the continuous case): σ(t) = ρ(t) = t, ν(t) = 0 and f^∇(t) = f′(t).
(ii) T = hZ, h > 0, or any subset of it (the difference calculus, a discrete case): ρ(t) = t − h, ν(t) = h and f^∇(t) = (f(t) − f(t − h))/h.
(iii) The time scale T_q^h, whose backward jump operator is ρ(t) = qt − h (for h = 0 and 0 < q < 1 this is the quantum case). Note that in these examples the backward jump operator is of the form ρ(t) = ct + d, and hence T_q^h is an element of the class H of time scales that contains the discrete, the usual and the quantum calculus (see [14]).
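To make the examples above concrete, the following minimal Python sketch (ours, not part of the paper) evaluates the nabla derivative at a left-scattered point of an H-time scale with ρ(t) = qt − h, using the standard backward difference quotient f^∇(t) = (f(t) − f(ρ(t)))/ν(t):

```python
# Illustrative sketch (ours, not from the paper): nabla derivative on an
# H-time scale whose backward jump operator is rho(t) = q*t - h.

def rho(t, q=1.0, h=0.0):
    """Backward jump operator rho(t) = q*t - h of an H-time scale."""
    return q * t - h

def nabla(f, t, q=1.0, h=0.0):
    """Nabla derivative f^nabla(t) = (f(t) - f(rho(t))) / nu(t) at a
    left-scattered point t, where nu(t) = t - rho(t) is the backward
    graininess."""
    nu = t - rho(t, q, h)
    if nu == 0:  # t is left-dense: the continuous case, use f'(t) instead
        raise ValueError("t is left-dense; the ordinary derivative applies")
    return (f(t) - f(rho(t, q, h))) / nu

f = lambda t: t ** 2

# Discrete case (q = 1, h = 0.5): (t^2 - (t - h)^2) / h = 2t - h
print(nabla(f, 3.0, q=1.0, h=0.5))   # 5.5

# Quantum case (q = 0.5, h = 0): (t^2 - (q t)^2) / ((1 - q) t) = (1 + q) t
print(nabla(f, 3.0, q=0.5, h=0.0))   # 4.5
```

As q → 1 and h → 0 the quotient approaches the ordinary derivative f′(t) = 2t, in agreement with the continuous case of Example 2.3.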
Theorem 2.4. Assume f, g : T → R are nabla differentiable at t ∈ T_κ. Then:
1. the sum f + g : T → R is nabla differentiable at t and (f + g)^∇(t) = f^∇(t) + g^∇(t);
2. for any λ ∈ R, the function λf : T → R is nabla differentiable at t and (λf)^∇(t) = λ f^∇(t);
3. the product fg : T → R is nabla differentiable at t and (fg)^∇(t) = f^∇(t) g(t) + f(ρ(t)) g^∇(t).

For the proof of the following lemma (Lemma 2.5) we refer to [1]. Throughout this article we use for the time scale derivatives and integrals the symbol ∇_q^h, which is inherited from the time scale T_q^h. However, our results also hold for the H-time scales (those time scales whose jump operators have the form ρ(t) = at + b); the time scale T_q^h is a natural example of an H-time scale.
The following lemma, which extends the fundamental lemma of variational analysis on time scales with the nabla derivative, is crucial in proving the main results.
Lemma 2.7. Let g be a continuous function on [a, b]. Then the equality ∫_a^b g(t) η(ρ(t)) ∇_q^h t = 0 for all admissible variations η vanishing at the endpoints holds if and only if g(t) = 0 for all t ∈ (a, b]. The proof can be achieved by following the proof of Lemma 4.1 in [8] (see also [14]).

First order Euler-Lagrange equation with delay
We consider the T_q^h-integral functional J : S → R, where the Lagrangian L depends on x, on the state variable y and its ∇_q^h-derivative, and on their delayed values evaluated at ρ^{α_0}(x). We shall shortly write y^ρ(x) for y(ρ(x)). We calculate the first variation of the functional J on the linear manifold S by perturbing y along admissible variations η; in the computation, Lemma 2.5 and the identity ∇_q^h ρ^{α_0}(t) = q^{α_0} are used. If we use the change of variable u = ρ^{α_0}(x), which is a linear function, and make use of Theorem 1.98 in [10] and Lemma 2.5, we then obtain the first variation in the form (6), where we have used the fact that η ≡ 0 on [ρ^{α_0}(a), a].
Splitting the first integral in (6) and rearranging leads to (8); if we make use of part 3 of Theorem 2.4, then we reach (9). In equations (8) and (9), first choose η such that η(a) = 0 and η ≡ 0 on [q^{α_0} b, b], and in another case choose η such that η(b) = 0 and η ≡ 0 on [a, q^{α_0} b]; then make use of Lemma 2.7 to arrive at the following theorem.

Theorem 3.1. Let J : S → R, where S = {y : y is specified on the initial interval [ρ^{α_0}(a), a] and y(b) = c_0}.
Then a necessary condition for J(y) to possess an extremum at a given function y(x) is that y(x) satisfies the Euler-Lagrange equations (11) and (12). Furthermore, equation (13) holds along y(x). The necessary condition represented by (13) is obtained by applying integration by parts in (7) and then substituting equations (11) and (12) into the resulting integrals.

The above theorem can be generalized as follows.

Theorem 3.2. Let J : S_m → R, where S_m = {y = (y_1, y_2, ..., y_m) : each y_i is specified on [ρ^{α_0}(a), a] and y_i(b) = c_i, i = 1, 2, ..., m}.
Then a necessary condition for J(y) to possess an extremum at a given function y(x) = (y_1(x), y_2(x), ..., y_m(x)) is that y(x) satisfies, for each component, Euler-Lagrange equations of the form (11) and (12). Furthermore, the analogues of equation (13) hold along y(x) for all admissible variations η_i(x) satisfying the corresponding vanishing conditions.

The optimal-control problem
Our aim in this section is to find the optimal control variable u(x), defined on the H-time scale, which minimizes the performance index subject to the constraint ∇_q^h y(x) = G(x, y^ρ(x), u^ρ(x)) such that y(b) = c, where c is a constant and L and G are functions with continuous first and second partial derivatives with respect to all of their arguments. To find the optimal control, we define a modified performance index by adjoining the constraint to the original index, where λ is a Lagrange multiplier or an adjoint variable.
Using equations (11), (12) and (13) of Theorem 3.2 with m = 3 (y_1 = y, y_2 = u, y_3 = λ), the necessary conditions for our optimal control are obtained (we remark that, since no time scale derivative of u(x) appears, no boundary constraints are needed for it), together with the state equation ∇_q^h y(x) = G(x, y^ρ(x), u^ρ(x)). Note that the condition (25) disappears when the Lagrangian L is free of the delayed time scale derivative of y.

The discrete and quantum cases
We recall that the results in the previous sections are valid for time scales whose backward jump operator ρ has the form ρ(x) = qx − h, in particular for the time scale T_q^h.

(i) The discrete case: If q = 1 and h > 0 (of special interest is the case h = 1), then our work concerns the discrete time scale hZ = {hn : n ∈ Z}. In this case the functional under optimization takes the form J_h(y). A necessary condition for J_h(y) to possess an extremum at a given function y : {ih : i = a − d, a − d + 1, ..., a, a + 1, ..., b} → R^n is that y(x) satisfies the corresponding h-Euler-Lagrange equations, together with the analogue of equation (13).

In this case the h-optimal-control problem reads as follows: find the optimal control variable u(x), defined on the time scale hZ, which minimizes the h-performance index subject to the constraint

∇_h y(ih) = G(ih, y((i − 1)h), u((i − 1)h)), i = a + 1, a + 2, ..., b,

and the given boundary data. The necessary conditions for this h-optimal control then follow as in Section 4, together with the above state equation. Note that the condition (35) disappears when the Lagrangian L is independent of the delayed ∇_h derivative of y.
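The discrete constraint above can be stepped forward explicitly: since ∇_h y(ih) = (y(ih) − y((i − 1)h))/h, the dynamics G determines y(ih) from y((i − 1)h) and u((i − 1)h). The following Python sketch (ours; G, y0 and u are illustrative placeholders, not quantities from the paper) propagates such a constraint:

```python
# Minimal sketch (ours): forward propagation of the h-difference constraint
#   nabla_h y(ih) = G(ih, y((i-1)h), u((i-1)h)),  i = a+1, ..., b,
# which, since nabla_h y(t) = (y(t) - y(t - h)) / h, is equivalent to
#   y(ih) = y((i-1)h) + h * G(ih, y((i-1)h), u((i-1)h)).

def propagate(G, y0, u, h, a, b):
    """Return {i: y(ih)} for i = a, ..., b, given y(ah) = y0 and control u."""
    y = {a: y0}
    for i in range(a + 1, b + 1):
        y[i] = y[i - 1] + h * G(i * h, y[i - 1], u((i - 1) * h))
    return y

# Illustration with G(t, y, u) = -y + u and zero control: the recursion
# collapses to y_i = (1 - h) * y_{i-1}.
y = propagate(lambda t, yv, uv: -yv + uv, 1.0, lambda t: 0.0, 0.1, 0, 10)
print(y[10])   # (1 - 0.1)**10 ≈ 0.3487
```

This is only the state recursion; an actual optimal-control computation would couple it with the adjoint and stationarity conditions of Section 4.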
Example 5.1. In order to illustrate our results we analyze an example of physical interest. Namely, let us consider the discrete action above, subject to the stated boundary conditions. The corresponding h-Euler-Lagrange equations are (36) and its companion equation. We observe that when the delay is removed, that is, when d = 0, the classical discrete Euler-Lagrange equations are reobtained.
(ii) The quantum case: If h = 0 and 0 < q < 1, then our work concerns the quantum time scale T_q. In this case the q-optimal-control problem reads as follows: find the optimal control variable u(x), defined on the T_q time scale, which minimizes the performance index subject to the constraint ∇_q y(x) = G(x, y(qx), u(qx)) such that y(b) = c, where c is a constant and L and G are functions with continuous first and second partial derivatives with respect to all of their arguments. The necessary conditions for this q-optimal control follow as in Section 4, together with the state equation ∇_q y(x) = G(x, y(qx), u(qx)). Note that the condition (48) disappears when the Lagrangian L is independent of the delayed ∇_q derivative of y.
Example 5.2. Suppose that the problem is that of finding a control function u(x), defined on the time scale T_q, such that the corresponding solution of the controlled system

∇_q y(x) = −r y(qx) + u(qx), r > 0,

satisfying the given initial and boundary conditions, is an extremum of the q-integral functional (a q-quadratic delay cost functional). According to (47) and (48), the solution of the problem satisfies the resulting necessary conditions and, of course, ∇_q y(x) = −r y(qx) + u(qx). When the delay is absent (i.e. α_0 = 0), it can be shown that the above system reduces to a second order q-difference equation, namely

∇_q^2 y(x) + rq (∇_q y)(qx) = q(r^2 + 1) y(qx) + qr ∇_q y(x).

If we solve this equation recursively in terms of an integer power series using the initial data, then the resulting solution tends to the solutions of the second order linear differential equation y′′ − (r^2 + 1)y = 0.
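The controlled q-difference system of this example can also be solved numerically by marching up the grid points x_0 q^n. The sketch below (ours; the values of r, q, x_0, the horizon N and the zero control are illustrative assumptions) uses ∇_q y(x) = (y(x) − y(qx))/((1 − q)x), so that y(x) = y(qx) + (1 − q)x(−r y(qx) + u(qx)):

```python
# Minimal sketch (ours): solve nabla_q y(x) = -r*y(qx) + u(qx) on the grid
# {x0 * q**n : n = 0, ..., N}, 0 < q < 1, marching upward from the smallest
# point x0*q**N, since
#   nabla_q y(x) = (y(x) - y(q*x)) / ((1 - q) * x)
# gives y(x) = y(q*x) + (1 - q) * x * (-r * y(q*x) + u(q*x)).

def q_propagate(y_small, u, r, q, x0, N):
    """Return {x: y(x)} on the grid, given y at the smallest point x0*q**N."""
    grid = [x0 * q ** n for n in range(N, -1, -1)]   # increasing order
    y = {grid[0]: y_small}
    for k in range(1, len(grid)):
        x, xq = grid[k], grid[k - 1]                 # xq plays the role of q*x
        y[x] = y[xq] + (1 - q) * x * (-r * y[xq] + u(xq))
    return y

# Uncontrolled case u = 0: each step multiplies by (1 - r*(1 - q)*x),
# a q-analogue of exponential decay toward x = 0.
sol = q_propagate(y_small=1.0, u=lambda x: 0.0, r=0.5, q=0.9, x0=1.0, N=20)
print(sol[1.0])
```

A candidate optimal control from the necessary conditions could be substituted for the zero control u to evaluate the associated state trajectory.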

Conclusion
In this manuscript we have developed an optimal variational problem in the presence of delay on time scales whose backward jump operators are of the form ρ(t) = qt − h, q > 0, h ≥ 0, called H-time scales. Such time scales unify the discrete, the quantum and the continuous cases, and hence the obtained results generalize many previously obtained results, either in the presence of delay or without. To formulate the necessary conditions for this optimal control problem, we first obtained the Euler-Lagrange equations for one unknown function and then generalized them to the n-dimensional case. The state variables of the Lagrangian in this case are defined on the H-time scale and contain delays. When q = 1 and h = 0 and delay is present, some of the results in [3] are recovered. When 0 < q < 1 and h = 0 and the delay is absent, most of the results in [7] can be reobtained. When q = 1 and the delay is absent, some of the results in [11] are reobtained. When the delay is absent and the time scale is otherwise arbitrary, some of the results in [14] can be recovered as well.
Finally, we would like to mention that we have followed the nabla time scale derivative approach in this article; analogous results can be obtained if the delta time scale derivative approach is followed instead.