Switching Controller Design for a Class of Markovian Jump Nonlinear Systems Using Stochastic Small-Gain Theorem
© Jin Zhu et al. 2009
Received: 2 October 2008
Accepted: 16 January 2009
Published: 23 February 2009
Switching controller design for a class of Markovian jump nonlinear systems with unmodeled dynamics is considered in this paper. Based on the differential equation and infinitesimal generator of jump systems, the concept of Jump Input-to-State practical Stability (JISpS) in probability and a stochastic Lyapunov stability criterion are put forward. By using backstepping techniques and the stochastic small-gain theorem, a switching controller is proposed which ensures JISpS in probability for the jump nonlinear system. A simulation example illustrates the validity of this design.
Stability of dynamic systems has been a primary topic of system analysis. Since Lyapunov's second method was created in 1892, it has been developed and applied by many researchers over the past century, with fruitful classical stability results achieved. Among these important developments is the Input-to-State Stability (ISS) property, which was first formulated by Sontag and has found wide use in engineering by incorporating the idea of the nonlinear small-gain theorem [2, 3]. The ISS-based small-gain theorem has some advantages over the earlier passivity-based small-gain theorem and is currently becoming a desirable tool for nonlinear stability analysis, especially in the case of nonlinear robust stabilization for systems with nonlinear uncertainties and unmodeled dynamics [4, 5]. Among practical nonlinear systems with uncertainties, systems of lower-triangular form are of great importance; such systems possess several special properties and have therefore recently attracted great attention. First, the lower-triangular form has a close connection with the feedback linearization method, which provides convenience to designers. Second, many real-world dynamic systems are of lower-triangular form [6, 7], and some general systems can be transformed to lower-triangular form via mathematical transformation. For this reason, lower-triangular nonlinear systems find wide application in many practical dynamic systems: turbine generators and water turbine generators, intelligent robots, missiles, and so forth. However, dynamic processes in this field are very difficult to describe exactly and depend on many factors. For example, consider an attacking missile tracking a moving target: this dynamic process is a classical problem of model following and tracking, and to date different control algorithms have been put forward under ideal assumptions [12, 13].
However, the missile itself may undergo structural variations due to random changes and/or failures of its components or environment during flight; such problems also occur in the motion of robots or the operation of generators. Therefore, there is an urgent need to remodel such dynamic processes to meet requirements of accuracy and precision.
On the other hand, Markovian jump systems, first put forward by Kac and Krasovskii, have become convenient tools for representing many real-world systems [15, 16] and have therefore attracted much research attention in recent years. In fault detection, fault-tolerant control, and multimodal control, discrete jumps in the continuous dynamics are used to model component failures and sudden switches of system dynamics. With further study of Markovian jump systems, many achievements have been made in the last decade, among which Shaikhet and Mao performed foundational work on stochastic stability for jump systems [17–19] and jump systems with time delays [20–23]. Building on these stochastic stability results, further efforts have been devoted to applications of the jump system model: system state estimation [24, 25], controller design [26–28], and hierarchical reinforcement learning for model-free jump systems [29, 30]. However, in the works cited above that address controller design, it is first assumed that the system models contain only static uncertainty. This is an idealized approximation that seldom holds in practice. As we know, Markovian jump systems represent a class of systems that typically experience sudden changes of working environment or system dynamics. For this reason, practical jump systems are usually accompanied by uncertainties that are hard to describe with a precise mathematical model. Thus, how to stabilize Markovian jump systems with unmodeled dynamic uncertainties is, in our view, an important problem.
The stochastic differential equation for the Markovian jump system is given according to the generalized Itô formula; a similar result was obtained by Yuan and Mao with a different method. Based on this differential equation, the martingale process caused by the Markovian process is incorporated into the controller design procedure by applying a mathematical transformation.
We introduce the concept of Jump Input-to-State practical Stability (JISpS) and give a stochastic Lyapunov stability criterion.
By combining backstepping techniques with the stochastic small-gain theorem, a switching controller is proposed. It is shown that all signals of the closed-loop system are globally uniformly ultimately bounded and that the closed-loop system is JISpS in probability.
The rest of this paper is organized as follows. Section 2 begins with some mathematical notions and Markovian jump system model along with its differential equation. In Section 3, we introduce the notion of JISpS and stochastic Lyapunov stability criterion. Section 4 presents the problem description. In Section 5, a switching controller is given based on backstepping technology and stochastic small-gain theorem. In Section 6, an example is shown to illustrate the validity of the design. Finally, conclusions are drawn in Section 7.
2. Stochastic Differential Equation of Markovian Jump System
Throughout the paper, unless otherwise specified, we denote by $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \ge 0}, P)$ a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., it is right continuous, and $\mathcal{F}_0$ contains all $P$-null sets). Let $|x|$ stand for the usual Euclidean norm of a vector $x$ and $\|x\|_{[t_0, t]}$ stand for the supremum of $|x(s)|$ over the time period $[t_0, t]$, that is, $\|x\|_{[t_0, t]} = \sup_{t_0 \le s \le t} |x(s)|$. The superscript $T$ denotes the transpose, and $\operatorname{tr}(A)$ denotes the trace of a matrix $A$. In addition, we use $\mathcal{L}_2$ to denote the space of Lebesgue square-integrable vector functions.
Let $C^{2,1}$ denote the family of all functions $V(x, t, r)$ which are continuously twice differentiable in $x$ and once in $t$. Furthermore, we will give the stochastic differential equation of the Markovian jump system.
The differential equation of the Markovian jump system (2.1) is given above, and a similar result was obtained by Yuan and Mao. Compared with the differential equation of general nonjump systems, two differences emerge: the transition rates and the martingale process, both caused by the properties of the Markov chain (see (2.4)). To date, switching controller designs for jump systems have, in most cases, accounted only for the transition rates. In this paper, the controller design takes the martingale process into account as well, since the jump systems considered here are in lower-triangular form. A detailed description will be given in Section 4.
3. JISpS and Stochastic Small-Gain Theorem
The concept of Input-to-State Stability (ISS) is a well-known classical tool for designing nonlinear systems: for a bounded control input, the trajectories remain in a ball of a certain radius, and furthermore, as time increases, all trajectories approach a smaller ball. However, for general nonlinear systems disturbed by noise and/or unmodeled dynamics, such a strong conclusion cannot be obtained; therefore some generalized results have been put forward: Noise-to-State Stability (NSS) and Input-to-State practical Stability (ISpS). In the definition of ISpS, the trajectories remain in a ball whose radius is enlarged by an offset term. Similar to ISS, as time increases, all trajectories approach the smaller ball, and the system remains bounded-input bounded-output (BIBO) stable. As can be seen in the following analysis of this paper, the bound can be made as small as desired by choosing appropriate control parameters. In the special case where the offset term vanishes, ISpS reduces to ISS.
The definition of Input-to-State practical Stability (ISpS) in probability for nonjump stochastic systems was put forward by Wu et al., and the difference between JISpS in probability and ISpS in probability lies in the expressions of the system state and the control signal. For a nonjump system, the system state and control signal depend only on the continuous time, while for a jump system they depend on both the continuous time and the discrete regime. For different system regimes, the control signal differs even at the same time instant, which is why it is called a switching controller. Based on the idea of switching control, the corresponding stability is called "Jump ISpS," and it is a more general extension of ISpS. When the regime set reduces to a single regime, the definition of JISpS degenerates to ISpS.
This paper introduces two kinds of JISpS in the sense of stochastic stability: pth-moment JISpS and JISpS in probability. From stochastic process theory, if a system is pth-moment stochastically stable, it must be stochastically stable in probability, by the martingale inequality. Here only sufficient conditions for pth-moment stochastic stability are considered, and we now introduce the following stochastic Lyapunov stability criterion.
for all , then jump system (2.1) is pth moment JISpS and JISpS in probability as well.
Clearly the conclusion holds if . So we only need the proof for . Fixing such arbitrarily, we write as .
This completes the proof.
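The step from pth-moment bounds to bounds in probability invoked above rests on a Markov/Chebyshev-type inequality, $P(|X| \ge \varepsilon) \le E|X|^p / \varepsilon^p$. The following is an illustrative numeric sanity check only (standard normal samples, not part of the proof):

```python
import random

# Illustrative check of the Markov-type inequality linking p-th moment
# bounds to bounds in probability: P(|X| >= eps) <= E[|X|^p] / eps^p.
random.seed(7)
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]
p, eps = 4, 2.0
moment = sum(abs(x) ** p for x in xs) / len(xs)       # approx E[|X|^4] = 3
tail = sum(1 for x in xs if abs(x) >= eps) / len(xs)  # approx P(|X| >= 2)
assert tail <= moment / eps ** p                      # tail bounded by 3/16
```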
Lemma 3.7 (Stochastic small-gain theorem).
The small-gain theorem for nonlinear systems was first provided by Mareels and Hill and was extended to the stochastic case by Wu et al. The above stochastic small-gain theorem for jump systems is an extension of the nonjump case. This extension can be achieved without any mathematical difficulty, and the proof proceeds as in . The reason is that Lemma 3.7 only takes into account the interconnection relationships between the interconnected system and its subsystems, regardless of whether the subsystems are of the jump or nonjump type. If both subsystems are nonjump and ISpS in probability, the interconnected system is ISpS in probability; likewise, if both subsystems are jump and JISpS in probability, the interconnected system is JISpS in probability, and so on.
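The contraction condition behind small-gain theorems of this type can be sketched numerically. The gain functions `gamma1` and `gamma2` below are hypothetical class-K examples (not the paper's actual gains); the check verifies that their composition stays below the identity:

```python
# Hypothetical class-K gain functions for two interconnected subsystems;
# the small-gain condition requires gamma1(gamma2(s)) < s for all s > 0.
def gamma1(s: float) -> float:
    return 0.5 * s            # linear gain of subsystem 1

def gamma2(s: float) -> float:
    return s / (1.0 + s)      # saturating gain of subsystem 2

def small_gain_holds(g1, g2, samples) -> bool:
    """Check the composed gain against the identity on sample points."""
    return all(g1(g2(s)) < s for s in samples)

# probe over several orders of magnitude
assert small_gain_holds(gamma1, gamma2, [10.0 ** k for k in range(-6, 7)])
```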
4. Problem Description
where is the system state vector, is the system input signal, is the unmeasured state vector, and is the output signal. The Markov chain is as defined in Section 2. is a smooth function, and denotes the unmodeled dynamic uncertainty, which may differ with the system regime . Both and are locally Lipschitz, as described in Section 2.
Our design purpose is to find a switching controller of the form such that the closed-loop jump system is JISpS in probability and the system output lies within an attractive region around the equilibrium point whose radius is as small as possible. In this paper, the following assumptions are made for jump system (4.1):
where is function, is positive integer, and are constants.
where is a known constant and are known nonnegative smooth functions for any given .
For the design of switching controller, we introduce the following lemmas.
Lemma 4.1 (Young's inequality).
where and the constants satisfy .
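Young's inequality in its parameterized form, $ab \le (\varepsilon^p/p)a^p + (1/(q\varepsilon^q))b^q$ with $1/p + 1/q = 1$, is used repeatedly in the backstepping derivation. A quick randomized numeric check (illustrative only):

```python
import random

def young_bound(a: float, b: float, p: float, eps: float = 1.0) -> float:
    """Right-hand side of the parameterized Young inequality:
    a*b <= (eps**p / p) * a**p + (1 / (q * eps**q)) * b**q, 1/p + 1/q = 1."""
    q = p / (p - 1.0)                      # conjugate exponent
    return (eps ** p / p) * a ** p + (1.0 / (q * eps ** q)) * b ** q

# randomized check over nonnegative a, b and admissible p, eps
random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    p = random.uniform(1.1, 5.0)
    eps = random.uniform(0.1, 3.0)
    assert a * b <= young_bound(a, b, p, eps) + 1e-9
```

The tuning parameter `eps` is what allows a designer to trade how much of the cross term $ab$ is absorbed into the $a$-term versus the $b$-term.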
Lemma 4.2 (Martingale representation ).
5. Controller Design and Stability Analysis
For simplicity, we just denote , by , , , , where , and the new coordinate is .
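The error-coordinate construction $z_i = x_i - \alpha_{i-1}$ underlying this change of coordinates can be illustrated on a deterministic, non-jump toy system. The dynamics $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ and the parameters `c1`, `c2` below are hypothetical, not the paper's model:

```python
# Deterministic, non-jump backstepping sketch for the hypothetical
# strict-feedback system x1' = x2, x2' = u (c1, c2 are design parameters).
def backstepping_u(x1: float, x2: float, c1: float = 2.0, c2: float = 2.0) -> float:
    z1 = x1                    # first error coordinate
    alpha1 = -c1 * x1          # stabilizing (virtual control) function
    z2 = x2 - alpha1           # second error coordinate z2 = x2 - alpha1
    dalpha1 = -c1 * x2         # time derivative of alpha1 along trajectories
    return -z1 - c2 * z2 + dalpha1

# forward-Euler simulation of the closed loop
x1, x2, dt = 1.0, -1.0, 0.001
for _ in range(10000):         # simulate 10 seconds
    u = backstepping_u(x1, x2)
    x1, x2 = x1 + x2 * dt, x2 + u * dt
assert abs(x1) < 1e-2 and abs(x2) < 1e-2   # states driven near the origin
```

In this deterministic sketch the quadratic function $V = (z_1^2 + z_2^2)/2$ gives $\dot V = -c_1 z_1^2 - c_2 z_2^2$ along the closed loop; the stochastic jump setting of this section requires a different Lyapunov choice.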
Inequality (5.4) can easily be deduced by using Lemma 4.1.
The differential equation of the new coordinate is deduced as above. The martingale process resulting from the Markov process is transformed into Wiener noise by using the martingale representation theorem, and this affects the construction of the Lyapunov function as well as the remainder of the control design process. For nonjump systems with uncertainty, a quadratic Lyapunov function is chosen to meet the control performance in most cases [32, 35, 36]. However, for jump systems this choice fails because of the presence of the martingale process (or Wiener noise). To solve this problem, we suggest using a quartic Lyapunov function instead of a quadratic one, which considerably increases the difficulty of the design.
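The role of the quartic function can be seen on a hypothetical scalar example (not the paper's system): for $dx = -2x\,dt + 0.5x\,dW$, Itô's formula applied to $V(x) = x^4$ gives $\mathcal{L}V = 4x^3(-2x) + 6x^2(0.5x)^2 = -6.5x^4$, so $E[x(t)^4]$ decays roughly like $e^{-6.5t}$. A Monte-Carlo sketch under these assumed dynamics:

```python
import math
import random

# Monte-Carlo sketch for the hypothetical scalar SDE dx = -2x dt + 0.5x dW:
# Ito's formula on the quartic V(x) = x^4 gives LV = -6.5 x^4, so the fourth
# moment E[x(t)^4] should decay roughly like exp(-6.5 t) from x(0) = 1.
random.seed(42)
n_paths, dt, steps = 2000, 0.01, 100          # simulate up to t = 1
x = [1.0] * n_paths
for _ in range(steps):                        # Euler-Maruyama step
    x = [xi + (-2.0 * xi) * dt + 0.5 * xi * random.gauss(0.0, math.sqrt(dt))
         for xi in x]
fourth_moment = sum(xi ** 4 for xi in x) / n_paths
assert fourth_moment < 0.1                    # exact value exp(-6.5) ~ 0.0015
```

The point of the quartic choice is degree matching: the Itô correction term $6x^2 g^2(x)$ produced by the noise is of the same polynomial degree as the drift term and can therefore be dominated by it.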
where , and are design parameters.
Here , are design parameters.
where , , and function is chosen to satisfy .
If Assumptions (A1) and (A2) hold and the switching control law (5.11) is adopted, then the interconnected Markovian jump system (4.1) is JISpS in probability, and all solutions of the closed-loop system are ultimately bounded. Furthermore, the system output can be regulated, in probability, to a small neighborhood of the equilibrium point with preset precision within finite time.
In (5.15), appropriate parameter can be chosen to satisfy .
Here parameter can be chosen to satisfy .
where and is given as in . From (5.20) it can be seen that all solutions of the closed-loop system are ultimately bounded in probability.
According to (5.20) and the property of the function, for any given , there exists . If , then . At the same time, by choosing appropriate parameters, it can be guaranteed that .
Meanwhile, can be made as small as desired by choosing appropriate parameters . The proof is completed.
Theorem 5.2 shows that if both the subsystem and the subsystem are JISpS in probability, then the jump system (4.1) is JISpS in probability with appropriately chosen control parameters. Meanwhile, the system output can be regulated in probability to a small region with preset precision within finite time. In the following simulation, we show how different parameters affect the system states and output.
Consider a second-order Markovian jump nonlinear system with regime transition space , where the transition rate matrix is .
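A regime path for such a two-regime chain can be sampled directly from its generator matrix. The rates in `Q` below are hypothetical placeholders (the example's actual transition rate matrix is not reproduced here):

```python
import random

# Hypothetical generator (transition rate) matrix for a two-regime chain;
# each row sums to zero, and -Q[i][i] is the rate of leaving regime i.
Q = [[-1.0, 1.0],
     [2.0, -2.0]]

def simulate_regime(q, r0, t_end, rng):
    """Sample one path of the continuous-time Markov chain r(t): the holding
    time in regime i is exponential with rate -q[i][i], after which the chain
    jumps to regime j != i with probability q[i][j] / (-q[i][i])."""
    t, r, path = 0.0, r0, [(0.0, r0)]
    while True:
        rate = -q[r][r]
        t += rng.expovariate(rate)            # exponential holding time
        if t >= t_end:
            return path
        targets = [j for j in range(len(q)) if j != r]
        weights = [q[r][j] for j in targets]  # off-diagonal jump rates
        r = rng.choices(targets, weights=weights)[0]
        path.append((t, r))

path = simulate_regime(Q, 0, 10.0, random.Random(1))
```

Each entry of `path` records a jump instant and the regime entered, i.e. the piecewise-constant signal that drives the switching controller.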
where , , . Thus the control law is taken as follows (here ).
The system regime is :
The system regime is :
Comparing the results of the two simulations, all signals of the closed-loop system are globally uniformly ultimately bounded, and the system output can be regulated to a neighborhood of the equilibrium point despite the different experiment samples. As can be seen from the figures, larger values of help to increase the convergence speed of the system states, while a larger value of and smaller values of help to increase the precision. If one wants the system states to converge to the neighborhood of the equilibrium point quickly and with acceptable precision, one should increase the value of and decrease , though this choice increases the cost of the control signals.
Much research has been devoted to the study of nonlinear systems using the small-gain theorem [4, 32, 33]. In contrast to those contributions, this paper focuses on switching controller design for Markovian jump nonlinear systems, which generalize nonjump systems. For each regime , the actual controller is different, and it contains two clearly distinguishable parts (see (5.11)): the coupling of regime and , both caused by the Markovian jumps (see (2.4)). When the regime set reduces to a single regime, the above two terms vanish. Thus this switching controller design is capable of stabilizing general nonjump systems as well.
In this paper, the authors consider switching controller design for a class of Markovian jump nonlinear systems with unmodeled dynamics. Based on the differential equation and infinitesimal generator of jump systems, the concept of Jump Input-to-State practical Stability (JISpS) and a stochastic Lyapunov stability criterion are put forward. Moreover, the martingale process caused by the stochastic Markovian jumps is converted into Wiener noise. By using backstepping techniques and the stochastic small-gain theorem, a switching controller is proposed which ensures JISpS in probability for the jump nonlinear system, and the system output can be regulated in probability to a small neighborhood of the equilibrium point with preset precision. The results presented in this paper also hold for general nonjump systems.
This work has been funded by BK21 research project: Switching Control of Systems with Structure Uncertainty and Noise.
- Sontag ED: Smooth stabilization implies coprime factorization. IEEE Transactions on Automatic Control 1989, 34(4):435-443. 10.1109/9.28018
- Khalil HK: Nonlinear Systems. 2nd edition. Prentice-Hall, Englewood Cliffs, NJ, USA; 1996.
- Kokotović PV, Arcak M: Constructive nonlinear control: a historical perspective. Automatica 2001, 37(5):637-662.
- Jiang Z-P: A combined backstepping and small-gain approach to adaptive output feedback control. Automatica 1999, 35(6):1131-1139. 10.1016/S0005-1098(99)00015-1
- Jiang Z-P, Mareels I, Hill D: Robust control of uncertain nonlinear systems via measurement feedback. IEEE Transactions on Automatic Control 1999, 44(4):807-812. 10.1109/9.754823
- Krstić M, Kanellakopoulos I, Kokotović PV: Nonlinear and Adaptive Control Design. John Wiley & Sons, New York, NY, USA; 1995.
- Seto D, Annaswamy AM, Baillieul J: Adaptive control of nonlinear systems with a triangular structure. IEEE Transactions on Automatic Control 1994, 39(7):1411-1428. 10.1109/9.299624
- Čelikovský S, Nijmeijer H: Equivalence of nonlinear systems to triangular form: the singular case. Systems & Control Letters 1996, 27(3):135-144. 10.1016/0167-6911(95)00059-3
- Chen H, Ji H-B, Wang B, Xi H-S: Coordinated passivation techniques for the dual-excited and steam-valving control of synchronous generators. IEE Proceedings: Control Theory and Applications 2006, 153(1):69-73. 10.1049/ip-cta:20045016
- Jiang Z-P, Nijmeijer H: Tracking control of mobile robots: a case study in backstepping. Automatica 1997, 33(7):1393-1399. 10.1016/S0005-1098(97)00055-1
- Do KD, Jiang ZP, Pan J: On global tracking control of a VTOL aircraft without velocity measurements. IEEE Transactions on Automatic Control 2003, 48(12):2212-2217. 10.1109/TAC.2003.820148
- Chang Y-C, Yen H-M: Adaptive output feedback tracking control for a class of uncertain nonlinear systems using neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2005, 35(6):1311-1316. 10.1109/TSMCB.2005.850158
- Cai Z, de Queiroz MS, Dawson DM: Robust adaptive asymptotic tracking of nonlinear systems with additive disturbance. IEEE Transactions on Automatic Control 2006, 51(3):524-529. 10.1109/TAC.2005.864204
- Kac IYa, Krasovskii NN: About stability of systems with stochastic parameters. Prikladnaya Matematika i Mekhanika 1960, 24(5):809-823.
- Mariton M: Jump Linear Systems in Automatic Control. Marcel-Dekker, New York, NY, USA; 1990.
- Kac IYa: Method of Lyapunov functions in problems of stability and stabilization of systems of stochastic structure. Ekaterinburg, Russia, 1998.
- Shaikhet L: Stability of stochastic hereditary systems with Markovian switching. Theory of Stochastic Processes 1996, 2(18):180-184.
- Mao X: Stability of stochastic differential equations with Markovian switching. Stochastic Processes and Their Applications 1999, 79(1):45-67. 10.1016/S0304-4149(98)00070-2
- Yuan C, Mao X: Asymptotic stability in distribution of stochastic differential equations with Markovian switching. Stochastic Processes and Their Applications 2003, 103(2):277-291. 10.1016/S0304-4149(02)00230-2
- Mao X, Shaikhet L: Delay-dependent stability criteria for stochastic differential delay equations with Markovian switching. Stability and Control: Theory and Applications 2000, 3(2):88-102.
- Mao X: Robustness of stability of stochastic differential delay equations with Markovian switching. Stability and Control: Theory and Applications 2000, 3(1):48-61.
- Mao X: Asymptotic stability for stochastic differential delay equations with Markovian switching. Functional Differential Equations 2002, 9(1-2):201-220.
- Yuan C, Mao X: Robust stability and controllability of stochastic differential delay equations with Markovian switching. Automatica 2004, 40(3):343-354. 10.1016/j.automatica.2003.10.012
- de Souza CE, Trofino A, Barbosa KA: Mode-independent filters for Markovian jump linear systems. IEEE Transactions on Automatic Control 2006, 51(11):1837-1841.
- Zhu J, Park J, Lee K-S, Spiryagin M: Robust extended Kalman filter of discrete-time Markovian jump nonlinear system under uncertain noise. Journal of Mechanical Science and Technology 2008, 22(6):1132-1139. 10.1007/s12206-007-1048-z
- Nguang SK, Shi P: Robust output feedback control design for Takagi-Sugeno systems with Markovian jumps: a linear matrix inequality approach. Journal of Dynamic Systems, Measurement and Control 2006, 128(3):617-625. 10.1115/1.2232686
- Zhu J, Xi H-S, Ji H-B, Wang B: Robust adaptive tracking for Markovian jump nonlinear systems with unknown nonlinearities. Discrete Dynamics in Nature and Society 2006, 2006:1-18.
- Jin Z, Hongsheng X, Xiaobo X, Haibo J: Guaranteed control performance robust LQG regulator for discrete-time Markovian jump systems with uncertain noise. Journal of Systems Engineering and Electronics 2007, 18(4):885-891. 10.1016/S1004-4132(08)60036-5
- Chen C, Li H-X, Dong D: Hybrid control for robot navigation: a hierarchical Q-learning algorithm. IEEE Robotics and Automation Magazine 2008, 15(2):37-47.
- Dong D, Chen C, Li H, Tarn T-J: Quantum reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B 2008, 38(5):1207-1220.
- Deng H, Krstić M, Williams RJ: Stabilization of stochastic nonlinear systems driven by noise of unknown covariance. IEEE Transactions on Automatic Control 2001, 46(8):1237-1253. 10.1109/9.940927
- Wu Z-J, Xie X-J, Zhang S-Y: Adaptive backstepping controller design using stochastic small-gain theorem. Automatica 2007, 43(4):608-620. 10.1016/j.automatica.2006.10.020
- Mareels IMY, Hill DJ: Monotone stability of nonlinear feedback systems. Journal of Mathematical Systems, Estimation, and Control 1992, 2(3):275-291.
- Øksendal B: Stochastic Differential Equations. Springer, New York, NY, USA; 2000.
- Polycarpou MM, Ioannou PA: A robust adaptive nonlinear control design. Automatica 1996, 32(3):423-427. 10.1016/0005-1098(95)00147-6
- Wang B, Ji H, Zhu J, Xiao X: Robust adaptive control of polynomial lower-triangular systems with dynamic uncertainties. Proceedings of the 6th World Congress on Intelligent Control and Automation (WCICA '06), June 2006, Dalian, China 1: 815-819.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.