On the theory of necessary optimality conditions in discrete systems
- Misir J Mardanov^{1},
- Samin T Malik^{2} and
- Nazim I Mahmudov^{3}
https://doi.org/10.1186/s13662-015-0363-4
© Mardanov et al.; licensee Springer. 2015
- Received: 7 October 2014
- Accepted: 7 January 2015
- Published: 31 January 2015
Abstract
In the present paper, a discrete optimization problem with rather general input data (without assumptions of convexity and smoothness) is considered. Taking into account the specific character of the discrete system, a necessary optimality condition which is not formulated in terms of the Hamilton-Pontryagin function is obtained.
Keywords
- optimality conditions
- discrete systems
- maximum principle
1 Introduction
Problems of optimal control of discrete systems arise in economic planning, in the optimization of complex technological systems, in various questions of the organization of production and operations research, and also in the control of continuous processes by means of up-to-date computer technology. The Bellman dynamic programming method [1] is universal for optimal processes in discrete systems, but its application to concrete problems may be less effective [2] than the use of necessary optimality conditions such as Pontryagin’s maximum principle [3]. Historically, the theory of discrete processes developed in the wake of Pontryagin’s maximum principle for continuous optimal control problems, which led to attempts to formulate a discrete analog of the maximum principle. Butkovskii [4] was the first to show that, unlike the continuous case, a direct extension of Pontryagin’s maximum principle to discrete systems is in general impossible. Naturally, this feature of discrete systems is of theoretical interest to researchers. In this connection, a number of researchers ([2, 5–13] and others), imposing extra conditions (for example, convexity of the set of admissible velocities of the system, directional convexity, and so on), established that under such conditions the maximum principle is valid for discrete control systems. Following [4], many researchers [4, 10–21] proved the maximum principle in weakened forms (local maximum principle, stationarity conditions) and obtained higher-order optimality conditions.
Although many varied and important results exist for optimization problems of discrete systems ([1, 2, 4–45] and others), the theory of necessary optimality conditions is still far from complete. This concerns, first of all, the generality of the assumptions on the input data of the problem under investigation. There is therefore a need for a thorough study of the optimization of discrete control problems.
In the present paper, we consider a discrete optimization problem with rather general input data (without assumptions of convexity and smoothness) and, taking into account the specific character of the discrete system (see Lemma 1), we obtain a necessary optimality condition that is not formulated in terms of the Hamilton-Pontryagin function. This optimality condition has three distinctive features.
First, this optimality condition permits one to replace the many-dimensional minimization problem by a sequence of problems of lower dimension: at each discrete time, the optimization problem under investigation is reduced to the minimization of a function of r variables, where r is the dimension of the control vector.
Second, for some special optimization problems (including problems with quadratic and certain non-smooth quality criteria), it is proved that this condition contains both known and new effective necessary optimality conditions, some of which are various strengthenings of the discrete maximum principle.
Third, this condition has a wide scope: it is stronger than the discrete maximum principle, the linearized maximum principle, the Euler-type optimality condition, and local optimality conditions formulated in terms of various types of star-shaped neighborhoods (see Examples 1-3), and it is very useful for narrowing the set of controls suspected of optimality, singled out by means of more constructive local conditions. In conclusion, we give examples illustrating the richness of content of the obtained results.
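The first of these features, replacing one many-dimensional minimization by a sequence of small problems in the r control variables, can be illustrated by the following sketch. It is only a simplified single-point-variation check (a special case of the paper's condition (9) with no second variation point); the function names, the toy dynamics, and the criterion are hypothetical, not from the paper, and re-simulation of the trajectory replaces the paper's z-system.

```python
def passes_single_point_test(f, phi, x0, u0, control_sets, t0=0):
    """Check a candidate control u0 by single-point variations: for each
    discrete time theta, replacing u0(theta) by any v in U(theta) must not
    decrease the criterion Phi. Each theta thus requires only a search over
    the r control variables (hypothetical helper, illustrative only)."""
    def value(u_seq):
        x = x0
        for k, u in enumerate(u_seq):
            x = f(x, u, t0 + k)
        return phi(x)

    base = value(list(u0))
    for theta in reversed(range(len(u0))):  # theta = t1-1, t1-2, ..., t0
        for v in control_sets[theta]:
            trial = list(u0)
            trial[theta] = v
            if value(trial) < base:
                return False  # a decreasing variation exists: u0 not optimal
    return True

# Toy scalar problem (illustration only): x(t+1) = x(t) + u(t), minimize x(2)^2.
f = lambda x, u, t: x + u
phi = lambda x: x * x
U = [[-2, -1, 0, 1, 2]] * 2
rejected = passes_single_point_test(f, phi, 3, [0, 0], U)    # False: improvable
survives = passes_single_point_test(f, phi, 3, [-2, -1], U)  # True: passes test
```

Note that, as for any necessary condition, a control that survives this test is only a candidate for optimality.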
2 Problem statement
We consider the problem
$$ \Phi\bigl(x(t_{1})\bigr)\rightarrow\min $$
(1)
subject to the discrete control system
$$ x(t+1)=f\bigl(x(t),u(t),t\bigr),\quad t\in T,\qquad x(t_{0})=x^{*}, $$
(2)
and the control constraint
$$ u(t)\in U(t),\quad t\in T. $$
(3)
Here \(x=(x_{1},\ldots,x_{n})^{\prime}\) is a state vector (the prime denotes transposition), \(u=(u_{1},\ldots,u_{r})^{\prime}\) is a control vector, t is (discrete) time, \(x^{*}\) is a given vector, \(T=\{t_{0},t_{0}+1,\ldots,t_{1}-1\}\); \(U(t)\), \(t\in T\), are subsets of the r-dimensional Euclidean space \(E^{r}\); \(\Phi(x)\), \(x\in E^{n}\), and \(f(x,u,t)\), \((x,u,t)\in E^{n}\times E^{r}\times[t_{0},t_{1}]\), are continuous functions.
A control \(u(t)\), \(t\in T\), satisfying the constraint (3) is said to be admissible; an admissible control \(u(t)\), \(t\in T\), and the corresponding solution \(x(t)\), \(t\in T\cup\{t_{1}\}\), of the system (2) on which the functional (1) attains its least value are called optimal. In this case the pair \((u(t),x(t))\) is called an optimal process.
Note that if in problem (1)-(3) the sets \(U(t)\), \(t\in T\), are bounded, closed, and non-empty, then an optimal control \(u(t)\) exists for any initial state \(x^{*}\) [13, p.31].
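When every \(U(t)\) is a finite set, the existence result above can be made concrete by exhaustive search over all admissible control sequences. The sketch below is illustrative only (the dynamics `f`, criterion `phi`, and helper names are hypothetical, not from the paper); for compact infinite \(U(t)\) it would serve merely as a discretized approximation.

```python
from itertools import product

def simulate(f, x0, controls, t0=0):
    """Roll the discrete system x(t+1) = f(x(t), u(t), t) forward from x0."""
    x = x0
    for k, u in enumerate(controls):
        x = f(x, u, t0 + k)
    return x

def brute_force_optimum(f, phi, x0, control_sets, t0=0):
    """Exhaustive search over all admissible sequences u(t) in U(t).
    Feasible only when every U(t) is a small finite set."""
    best_u, best_val = None, float("inf")
    for seq in product(*control_sets):
        val = phi(simulate(f, x0, seq, t0))
        if val < best_val:
            best_u, best_val = seq, val
    return best_u, best_val

# Toy illustration (not from the paper): scalar system x(t+1) = x(t) + u(t),
# minimize x(2)^2 with x(0) = 3 and U(t) = {-2, -1, 0, 1, 2}.
f = lambda x, u, t: x + u
phi = lambda x: x * x
u_opt, val = brute_force_optimum(f, phi, 3, [{-2, -1, 0, 1, 2}] * 2)
```

Here any optimal sequence drives the state to zero, i.e. the chosen controls sum to −3.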
2.1 The increment formula for the quality functional
As is seen, the solution \(z(t;u^{0},x^{0},\alpha,\alpha_{k})\), \(t\in\{\theta_{1},\theta_{2},\ldots,t_{1}\}\), of the system (6) depends at the points \(\theta_{1},\theta_{2},\ldots,\theta_{k}\in T\) only on the parameter α, and therefore, for that part of the system (6), the notation \(z(t;u^{0},x^{0},\alpha)\) is used. Since the system (6) is recurrent, it is not difficult to find its solution for concrete θ, \(\theta_{k}\), v, \(\tilde{v}\). It should be noted that the system (6) plays an important role in the investigation of problem (1)-(3).
Lemma 1
- (a)
\(\Delta ^{\ast}x ( t ) =z ( t;u^{0},x^{0},\alpha ) \), if \(t\in \{ \theta_{1},\ldots,\theta_{k} \} \),
- (b)
\(\Delta ^{\ast}x ( t ) =z ( t;u^{0},x^{0},\alpha ,\alpha_{k} ) \), if \(t\in \{ \theta_{k+1},\theta _{k+2},\ldots \} \cap \{ t_{0},\ldots,t_{1} \} \).
To prove Lemma 1, it suffices to solve the system (5) by the method of steps, taking into account (4) and (6).
Note that the sets \(U[x^{0}(\cdot)](t)\), \(t\in T\), are non-empty, and if even one set \(U[x^{0}(\cdot)](\theta)\) contains at least two elements, this permits one to obtain additional information on the optimality of the control \(u^{0}(t)\), \(t\in T\) (see Example 1). We also stress that in most cases it is not difficult to find the elements of the set \(U[x^{0}(\cdot)](\theta)\), \(\theta\in T\). For example, if in problem (1)-(3) \(f(x(t),u(t),t)=g(x(t))+A(x(t),t)u(t)\), \(t\in T\), then finding the elements of the set \(U[x^{0}(\cdot)](\theta)\) reduces to solving a system of linear algebraic equations.
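The remark above can be sketched numerically. The paper defines \(U[x^{0}(\cdot)](\theta)\) by Eq. (7), which is not reproduced in this excerpt; the sketch below assumes it consists of the admissible controls v that generate the same next state as \(u^{0}(\theta)\) along the optimal trajectory (consistent with Lemma 2, where each such control keeps the trajectory \(x^{0}\)). In the control-affine case \(f=g(x)+A(x,t)u\) this amounts to the linear system \(A(x^{0}(\theta),\theta)(v-u^{0}(\theta))=0\); here we simply test candidates directly. All names are illustrative.

```python
def equivalent_controls(f, x_theta, u_theta, theta, candidates, tol=1e-12):
    """Select the candidate controls v producing the same next state as
    u0(theta): f(x0(theta), v, theta) == f(x0(theta), u0(theta), theta).
    (Assumed characterization of U[x0(.)](theta); illustrative only.)"""
    target = f(x_theta, u_theta, theta)
    return [v for v in candidates
            if all(abs(a - b) <= tol
                   for a, b in zip(target, f(x_theta, v, theta)))]

# Hypothetical dynamics in which the control enters through |u|, so several
# controls generate the same trajectory point.
f = lambda x, u, t: (abs(u), x[0])
same = equivalent_controls(f, (0.0, 0.0), 1.0, 0,
                           [-1.0, -0.5, 0.0, 0.5, 1.0])
# both -1.0 and 1.0 reproduce the next state (1.0, 0.0)
```

Whenever `same` contains more than one element, each member supplies an extra admissible direction for testing the candidate control, exactly the situation exploited in Example 1.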
Lemma 2
If the control \(u^{0} ( t ) \), \(t\in T\), is optimal, then any control \(\hat{u} ( t ) \in U[x^{0} ( \cdot ) ] ( t ) \), \(t\in T\), is optimal, and the pair \((\hat{u} ( t ) ,x^{0} ( t ) )\) is an optimal process.
The proof of Lemma 2 easily follows from the definition of the set (7).
3 Optimality conditions
Theorem 3
- (a) For the optimality of the control \(u^{0}(t)\), \(t\in T\), it is necessary that for any \(\hat{u}(t)\in U[x^{0}(\cdot)](t)\), \(t\in T\), and for all \(\alpha=(\theta,v)\in T\backslash\{t_{1}-1\}\times U(\theta)\), \(\alpha_{k}=(\theta_{k},\tilde{v})\in\{\theta_{1},\theta_{2},\ldots\}\cap T\times U(\theta_{k})\), the following inequality be fulfilled:
$$ \Phi\bigl(x^{0}(t_{1})+z\bigl(t_{1};\hat{u},x^{0},\alpha,\alpha_{k}\bigr)\bigr)-\Phi\bigl(x^{0}(t_{1})\bigr)\geq0, $$
(9)
where \(z(t;\hat{u},x^{0},\alpha,\alpha_{k})\) is the solution of the system (6) corresponding to the process \((\hat{u}(t),x^{0}(t))\).
- (b)
If: (1) in addition to the assumptions of (a), the set T consists of at most two points (\(T=\{t_{0},t_{0}+1\}\), \(t_{1}=t_{0}+2\)), and (2) along at least one control \(\hat{u}(t)\in U[x^{0}(\cdot)](t)\), \(t\in T\), for instance along the control \(u^{0}(t)\), \(t\in T\), the inequality (9) is fulfilled for all \(\alpha=(t_{0},v)\), \(v\in U(t_{0})\), \(\alpha_{1}=(t_{0}+1,\tilde{v})\), \(\tilde{v}\in U(t_{0}+1)\), then the control \(u^{0}(t)\), \(t\in T\), is optimal.
Proof
Remark 4
Below we show that the theorem just proved contains as particular cases a number of known, and also new, efficient (both for verification and for computation) optimality conditions.
Corollary 5
Proof
Let \(\alpha= ( \theta, \hat{u} ( \theta ) ) \), \(\alpha_{k}= ( t_{1}-1,\tilde{v} ) \), where \(\theta\in T\backslash \{ t_{1}-1 \} \), \(\hat{u} ( \theta ) \in U[x^{0} ( \cdot ) ] ( \theta ) \), \(\tilde{v}\in U ( t_{1}-1 ) \). Then taking into account (7) for the solution of the system (6) corresponding to the process \(( \hat{u} ( t ) ,x^{0} ( t ) ) \), we have \(z ( t_{1};\hat{u},x^{0},\alpha,\alpha_{k} ) =\Delta _{\tilde{v}}f ( x^{0} ( t_{1}-1 ) ,\hat{u} ( t_{1}-1 ) ,t_{1}-1 ) \).
The validity of the condition (10) follows from (9). Further, if \(\alpha= ( \theta,v ) \in T\backslash \{ t_{1}-1 \} \times U ( \theta ) \), \(\alpha_{k}= ( \theta_{k}, \hat{u} ( \theta_{k} ) ) \in \{ \theta _{1},\theta_{2},\ldots \} \cap T\times U[x^{0} ( \cdot )] ( \theta_{k} ) \), then by (6), (7), from (9) we get condition (11). Thus, Corollary 5 is proved. □
Corollary 6
Let: (1) the vector function \(f(x,u,t)\) be linear with respect to x, i.e. \(f(x,u,t)=A(u,t)x+b(u,t)\), where \(A(u,t)\) is an \(n\times n\) matrix function and \(b(u,t)\) is an n-dimensional vector function, and (2) the quality criterion have the form \(\Phi(x)=c^{\prime}x+x^{\prime}Dx\), where c is an n-dimensional vector and D is an \(n\times n\) matrix. Then for the optimality of the control \(u^{0}(t)\), \(t\in T\), it is necessary that for any \(\hat{u}(t)\in U[x^{0}(\cdot)](t)\), \(t\in T\), the following inequalities be fulfilled:
Proof
Corollary 7
In problem (1)-(3) let \(f(x,u,t)=A(u,t)x+b(u,t)\), \(\Phi(x)=c^{\prime}x\), where \(A(u,t)\) is an \(n\times n\) matrix function, \(b(u,t)\) is an n-dimensional vector function, and c is an n-dimensional vector. Then:
Proof
Setting \(\tilde{v}=\hat{u}(\theta_{k})\) and taking into account \(\hat{u}(t)\in U[x^{0}(\cdot)](t)\), \(t\in T\), we obtain from (25), as a particular case, the optimality condition (24) for all \(\theta\in T\backslash\{t_{1}-1\}\); to prove the condition (24) at the point \(\theta_{k}=t_{1}-1\), it suffices to take \(v=u^{0}(\theta)\) in (25). The validity of the last statement of Corollary 7 follows directly from statement (b) of the theorem, by the scheme of the proof of the optimality condition (25). Consequently, Corollary 7 is proved. □
We stress that the optimality condition (24) for \(\hat{u}(t)=u^{0}(t)\), \(t\in T\), was obtained in [15].
Corollary 8
Proof
Corollary 9
Proof
Remark 10
Under the conditions of Corollary 9, the optimality condition (24) was first proved in [47] as a necessary and sufficient optimality condition; taking into account the scheme of the proof of Corollary 7, it is easy to show that the discrete maximum principle from [47] and the optimality conditions (36) are equivalent.
Finally we prove the following statement.
Corollary 11
In problem (1)-(3) let us additionally suppose that the function \(\Phi(x)\) is continuously differentiable and that \(f(x,u,t)\) is continuously differentiable with respect to x and u. Then:
Proof
Remark 12
Optimality conditions of type (41), (42) were obtained in [19]; for \(\hat{u}(t)=u^{0}(t)\), \(t\in T\), they were obtained, for example, in [4, 12, 13, 15, 16, 26, 34].
Remark 13
If \(f ( x, u,t ) \), \(\Phi ( x ) \) are continuously differentiable functions with respect to x, and the sets \(\{ f ( x, u,t ) :u\in U ( t ) \} \) are convex for any \(x\in E^{n}\), \(t\in T\), then similar to the scheme of the proof of Corollary 11, the discrete maximum principle follows as a particular case from the statement of Corollary 5 [7, 8, 13].
4 Discussion of the theorem and its corollaries. Examples
It should be noted that the necessary optimality condition (9) is rather general. Its merit compared with the discrete maximum principle [7, 8, 13, 15] is that it imposes no convexity or smoothness conditions on the input data of problem (1)-(3). Therefore the optimality condition (9) has a wider range of applications than the earlier known necessary optimality conditions (for example, [4, 9, 12–16, 18, 20, 21, 26, 34]). Note that the optimality condition (9) allows one to replace the many-dimensional minimization problem by a sequence of problems of lower dimension, and its application is convenient when the test is performed sequentially with respect to θ: \(\theta=t_{1}-1,t_{1}-2,\ldots,t_{0}\) (see Example 1). Corollaries 5-11 and Remark 13 show that the optimality condition (9) contains as particular cases both new and known optimality conditions. The necessary optimality conditions given in Corollaries 6-8 show how important it is to take into account the specific character of the concrete optimization problem (1)-(3).
Corollary 5 was obtained as an independent result in [36]. Taking into account the method of proof of Corollaries 6 and 7, we find that the optimality condition (13) contains as particular cases the optimality conditions (14), (24), (25). The optimality conditions (24), (41), (42), by virtue of the sets \(U[x^{0}(\cdot)](t)\), \(t\in T\), are direct strengthenings of the discrete maximum principle, of the discrete analog of the linearized maximum principle, and of the discrete analog of the Euler equation [4, 7, 8, 13, 15, 26], respectively (see Example 2). As is seen, the optimality conditions (41), (42) are corollaries of the conditions (10), (11), though, unlike the continuous case [3], it is well known [16, 34] that in the general case they are not corollaries of the discrete maximum principle. Note that under the conditions of Corollaries 6 and 7 the discrete maximum principle admits various strengthenings in the form of the optimality conditions (14) and (25). An optimality condition of type (14) was obtained in another way in [16], and, as far as we know, the optimality conditions (25), (31) have no analogs. It is interesting to note that verification of the optimality condition (25) at any two neighboring points θ, \(\theta_{1}\) is more convenient, but not sufficient for its complete application (see Example 3). From the optimality condition (25) and Example 3 we conclude that in the discrete case, unlike the continuous one, the discrete maximum principle (as a first-order necessary optimality condition) is strengthened by means of multipoint variations of the control as well.
The above is confirmed by the following examples.
Example 1
\(x_{1}(t+1)=u_{1}(t)\), \(x_{2}(t+1)=\vert x_{1}(t)\vert-\vert u_{1}(t)\vert\), \(x_{3}(t+1)=x_{2}(t)u_{2}(t)\), \(x_{i}(0)=0\), \(i=1,2,3\); \(T=\{0,1,2\}\), \(u=(u_{1},u_{2})^{\prime}\in U=U^{\ast}\times U^{\ast}\), \(U^{\ast}=[-1,-\frac{1}{2}]\cup\{0\}\cup[\frac{1}{2},1]\), \(\Phi(x(3))=x_{3}(3)\rightarrow\min\).
Let us apply Corollary 5. It is fulfilled for \(\hat{u}(t)=u^{0}(t)=(0,0)^{\prime}\), \(t\in T\), and for all \(\alpha=(\theta,v)\), \(\theta\in T\), \(v=(v_{1},v_{2})^{\prime}\in U\): \(0\geq0\).
Without continuing the calculations further, we see that the condition (11) for \(\hat{u}(t)\equiv(0,1)^{\prime}\), \(t\in T\), \(\theta=1\in T\), is no longer fulfilled: the required inequality \(z(3;\hat{u},x^{0},\alpha)=-\vert v_{1}\vert\geq0\) fails for \(v_{1}\in U^{\ast}\), \(v_{1}\neq0\).
Consequently, the control \(u^{0}(t)=(0,0)^{\prime}\), \(t\in T\), is not optimal by virtue of the optimality condition (11). Note that since the right-hand side of the system is not differentiable with respect to x and u, we cannot use the necessary optimality conditions from [1, 2, 4–6, 9, 12–18, 20, 21, 23–26].
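The non-optimality of \(u^{0}(t)=(0,0)^{\prime}\) in Example 1 can also be confirmed by a brute-force check over a finite grid contained in \(U^{\ast}\). This is an illustrative numerical verification, not part of the paper's argument; the grid points and helper names are our own choices.

```python
from itertools import product

def step(x, u, t):
    """Example 1 dynamics: x1+ = u1, x2+ = |x1| - |u1|, x3+ = x2 * u2."""
    x1, x2, x3 = x
    u1, u2 = u
    return (u1, abs(x1) - abs(u1), x2 * u2)

def cost(controls):
    """Phi(x(3)) = x3(3) with x(0) = 0."""
    x = (0.0, 0.0, 0.0)
    for t, u in enumerate(controls):
        x = step(x, u, t)
    return x[2]

# Finite grid inside U* = [-1, -1/2] U {0} U [1/2, 1].
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
U = [(u1, u2) for u1 in grid for u2 in grid]
best = min(product(U, repeat=3), key=cost)
# cost(best) = -1 < 0 = cost of the zero control, so u0 = (0,0)' is not optimal
```

One minimizing sequence takes \(u_{1}(0)=0\), \(u_{1}(1)=1\), \(u_{2}(2)=1\), giving \(x_{3}(3)=-1\), in agreement with the conclusion drawn from condition (11).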
Example 2
Here we have taken into account that \(\hat{\psi} (0 )=(0,-1,0)^{\prime}\) and \(H (\hat{\psi} (0 ),x^{0} (0 ),v,0 )=-v_{1}\).
Therefore, the considered control \(u^{0}(t)=(0,0)^{\prime}\), \(t\in T\), cannot be optimal by virtue of the optimality condition (24). Note that using this example it is easy to show the richness of content of the optimality conditions (43), (44).
Example 3
\(x_{1}(t+1)=u(t)\), \(x_{2}(t+1)=x_{1}(t)\), \(x_{3}(t+1)=\vert u(t)\vert x_{2}(t)\), \(x_{i}(0)=0\), \(i=1,2,3\), \(T=\{0,1,2\}\), \(U=[-1,1]\), \(\Phi(x(3))=x_{3}(3)\rightarrow\min\).
Taking into account these calculations, it is easy to verify that the optimality conditions (10), (11), (15), (24) leave the control \(u^{0}(t)=0\), \(t\in T\), among the candidates for optimality. It is easy to see that the optimality condition (25) is fulfilled at all possible pairs of neighboring points. However, it is not fulfilled at \(\theta=0\), \(\theta_{k}=2\) (non-neighboring points): \(-\vert\tilde{v}\vert v\leq0\) for \(v,\tilde{v}\in U=[-1,1]\); i.e. the control \(u^{0}(t)=0\), \(t\in T\), is not optimal.
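The role of the non-neighboring variation points in Example 3 can be seen directly by simulation: here \(x_{3}(3)=\vert u(2)\vert\,u(0)\), so varying \(u^{0}=0\) at any single point, or at two neighboring points, leaves the cost at zero, while varying it at \(t=0\) and \(t=2\) simultaneously produces a negative cost. The sketch below uses a finite grid in \(U=[-1,1]\); names and grid are illustrative.

```python
from itertools import product

def cost(u):
    """Example 3: x1+ = u, x2+ = x1, x3+ = |u| * x2, x(0) = 0; returns x3(3)."""
    x1 = x2 = x3 = 0.0
    for t in range(3):
        # simultaneous update: old x1, x2 are used on the right-hand side
        x1, x2, x3 = u[t], x1, abs(u[t]) * x2
    return x3

grid = [-1.0, -0.5, 0.0, 0.5, 1.0]

def best_over(points):
    """Minimal cost when u0 = 0 is varied only at the given time points."""
    best = 0.0
    for vals in product(grid, repeat=len(points)):
        u = [0.0, 0.0, 0.0]
        for p, v in zip(points, vals):
            u[p] = v
        best = min(best, cost(u))
    return best

# Single-point and neighboring two-point variations detect no improvement,
# but the non-neighboring pair (0, 2) does.
single_and_neighboring = (best_over([0]), best_over([0, 1]), best_over([1, 2]))
non_neighboring = best_over([0, 2])
```

This matches the conclusion above: only the multipoint variation at \(\theta=0\), \(\theta_{k}=2\) reveals that \(u^{0}(t)=0\) is not optimal.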
Declarations
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
References
- Bellman, R: Dynamic Programming. Inostr. Lit., Moscow (1960) (in Russian)
- Fan, L-C, Wan, C-S: Discrete Maximum Principle. Mir, Moscow (1967) (in Russian)
- Pontryagin, LS, Boltyanskii, VG, Gamkrelidze, RV, Mishchenko, EF: The Mathematical Theory of Optimal Processes. Wiley, New York (1962)
- Butkovskii, AG: On necessary and sufficient optimality conditions for impulse control systems. Avtom. Telemeh. 24, 1056-1064 (1963) (in Russian)
- Dolezal, J: Optimal Control Discrete-Time Systems. Lecture Notes in Control and Information Sciences, vol. 174. Springer, New York (1988)
- Dubovitskiy, AY: Discrete maximum principle. Avtom. Telemeh. 10, 55-71 (1978) (in Russian)
- Halkin, H: Optimal control for systems described by difference equations. In: Advances in Control Systems, vol. 2. Academic Press, New York (1964)
- Halkin, H, Jordan, BW, Polak, E, Rosen, JB: Theory of optimum discrete time systems. In: Proc. Third Congr. Internat. Federation Automat. Control (IFAC), London. Inst. Mech. Engrs., London (1966)
- Holtzman, JM, Halkin, H: Directional convexity and the maximum principle for discrete systems. SIAM J. Control 4(2), 213-275 (1966)
- Holtzman, JM: On the maximum principle for nonlinear discrete-time systems. IEEE Trans. Autom. Control 11(2), 273-274 (1966)
- Jakson, R, Horn, F: On discrete analogues of Pontryagin’s maximum principle. Int. J. Control 1(4), 389-395 (1965)
- Jordan, BW, Polak, E: Theory of a class of discrete optimal control systems. J. Electron. Control 17(6), 697-711 (1964)
- Propoi, AI: Elements of the Theory of Optimal Discrete Processes. Nauka, Moscow (1973) (in Russian)
- Ashchepkov, LT: To necessary optimality conditions of higher order for singular controls in discrete systems. Differ. Uravn. 8, 1857-1867 (1972) (in Russian)
- Gabasov, RK: To theory of optimal discrete processes. Zh. Vychisl. Mat. Mat. Fiz. 4(8), 780-796 (1968) (in Russian)
- Gabasov, R, Kirillova, FM: On the theory of necessary optimality conditions for discrete systems. Autom. Remote Control 30, 39-47 (1969)
- Gabasov, R, Kirillova, FM, Mordukhovich, B: Discrete maximum principle. Dokl. Akad. Nauk SSSR 213, 19-22 (1973) (in Russian)
- Gabasov, R, Kirillova, FM: On the extension of the maximum principle by L.S. Pontryagin to discrete systems. Autom. Remote Control 27, 1878-1882 (1966)
- Mardanov, MJ, Melikov, TK: On necessary optimality conditions for discrete control systems. In: Proceedings of the International Conference Devoted to 55 Years of the Institute of Mathematics and Mechanics, Baku, pp. 242-245 (2014) (in Russian)
- Minchenko, LI: On necessary optimality conditions for some classes of discrete control systems. Differ. Uravn. 12, 7 (1976) (in Russian)
- Pearson, JD Jr., Sridhar, R: A discrete optimal control problem. IEEE Trans. Autom. Control 11(2), 171-174 (1966)
- Ahlbrandt, CD: Equivalence of discrete Euler equations and discrete Hamiltonian systems. J. Math. Anal. Appl. 180, 498-517 (1993)
- Arutyunov, AV, Marinkovic, B: Necessary conditions for optimality in discrete optimal control problems. Vestnik MGU. Ser. 15 1, 43-48 (2005) (in Russian)
- Boltyanskii, VG: Optimal Control of Discrete Systems. Nauka, Moscow (1973) (in Russian)
- Ferreira, JAS, Vidal, RVV: On the connections between mathematical programming and discrete optimal control. In: System Modelling and Optimization. Lecture Notes in Control and Information Sciences, vol. 84, pp. 234-243. Springer, New York (1986)
- Gabasov, R, Kirillova, FM: Quality Theory of Optimal Processes. Nauka, Moscow (1971) (in Russian)
- Gabasov, R, Kirillova, FM, Mordukhovich, BS: The ε-maximum principle for suboptimal controls. Sov. Math. Dokl. 27, 95-99 (1983)
- Gaishun, IV: Systems with Discrete Time. National Academy of Sciences of Belarus, Institute of Mathematics, Minsk (2001) (in Russian)
- Gaishun, IV: Multi-Parametric Control Systems. Mir, Moscow (1996) (in Russian)
- Gorokhovik, VV, Gorokhovik, SA, Marinkovic, B: Necessary optimality conditions for a smooth discrete-time optimal control problem with vector-valued objective function. In: Proceedings of Institute of Mathematics, Minsk, Belarus, vol. 17, pp. 27-40 (2009) (in Russian)
- Gorokhovik, VV, Gorokhovik, SY, Marinković, B: First and second order necessary optimality conditions for a discrete-time optimal control problem with a vector-valued objective function. Positivity 17, 483-500 (2013)
- Hilscher, R, Zeidan, V: Discrete optimal control: second order optimality conditions. J. Differ. Equ. Appl. 8, 875-896 (2002)
- Mansimov, KB: Optimization of a class of discrete two-parameter systems. Differ. Uravn. 27, 2 (1991) (in Russian)
- Mansimov, KB: Discrete Systems. Baku (2013) (in Russian)
- Malanowski, K: Stability and sensitivity analysis of discrete optimal control problems. Probl. Control Inf. Theory 28, 187-200 (1991)
- Malik, ST: To optimization of discrete systems. In: Proceedings of the International Conference Devoted to 55 Years of the Institute of Mathematics and Mechanics, Baku, pp. 222-224 (2014) (in Russian)
- Marinkovic, B: Sensitivity analysis for discrete optimal control problems. Math. Methods Oper. Res. 63, 513-524 (2006)
- Marinkovic, B: Optimality conditions for discrete optimal control problems. Optim. Methods Softw. 22, 959-969 (2007)
- Marinkovic, B: Optimality conditions in discrete optimal control problems with state constraints. Numer. Funct. Anal. Optim. 28, 945-955 (2008)
- Mordukhovich, BS: On optimal control of discrete systems. Differ. Uravn. 9(4), 727-734 (1973) (in Russian)
- Mordukhovich, BS: Approximate maximum principle for finite difference control systems. USSR Comput. Math. Math. Phys. 28, 106-114 (1988)
- Mordukhovich, BS: Approximation Methods in Problems of Optimization and Control. Nauka, Moscow (1988) (in Russian)
- Mordukhovich, BS: Variational Analysis and Generalized Differentiation. I: General Theory. Springer, Berlin (2005)
- Mordukhovich, BS: Variational Analysis and Generalized Differentiation. II: Applications. Springer, Berlin (2005)
- Nahorski, Z, Ravn, HF, Vidal, RVV: A discrete-time maximum principle: a survey and some new results. Int. J. Control 40, 533-554 (1984)
- Melikov, TK: To necessary optimality conditions for distributed parameters systems. Dep. in VINITI, AS of the USSR, 2637-79, 31
- Rozonoer, LI: L.S. Pontryagin’s maximum principle in theory of optimal systems III. Avtom. Telemeh. 20, 1561-1578 (1959) (in Russian)