Inverse optimal control for continuous-time nonlinear systems

Carlos Vega Pérez
Ricardo Alzate Castaño
Abstract

Optimization theory applied to automatic control yields governing actions that reach desired conditions while minimizing a given performance index. Such optimization tasks involve solving complicated mathematical expressions. Inverse optimal control appears as an alternative for finding the optimal control law without explicitly solving the Hamilton-Jacobi-Bellman equation. The aim of this work is to show the potential of inverse optimal control for solving complex optimization problems in control theory. A general description of the optimal control problem is given, followed by the justification of an inverse optimal approach, with properly selected illustrative examples. The mathematical formulations are applied to solve analytically the cases of a linear quadratic regulator (LQR) and a nonlinear inverse optimal control problem based on a control Lyapunov function (CLF). The results show that it is possible to solve optimal control problems for nonlinear systems, without explicitly facing the Hamilton-Jacobi-Bellman equation, by means of the inverse optimal control approach.
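For the linear quadratic case mentioned in the abstract, the Hamilton-Jacobi-Bellman equation reduces to the algebraic Riccati equation, which can be solved numerically. A minimal sketch of this LQR case, assuming a double-integrator plant and identity weighting matrices (illustrative choices, not taken from the article):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed double-integrator plant: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state penalty (assumed)
R = np.array([[1.0]])  # control penalty (assumed)

# Solve the algebraic Riccati equation A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state feedback u = -Kx, with K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - BK is Hurwitz: every eigenvalue
# has negative real part, so the regulator stabilizes the plant.
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))  # prints True
```

For this plant the Riccati equation admits the closed-form solution K = [1, √3], which the numerical result reproduces; the inverse optimal approach discussed in the paper avoids this HJB/Riccati step for nonlinear systems by starting from a control Lyapunov function instead.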

Author Biographies

Carlos Vega Pérez, Universidad Industrial de Santander

Electronic Engineer

Bucaramanga, Colombia

Ricardo Alzate Castaño, Universidad Industrial de Santander

PhD in Automatic Control

Bucaramanga, Colombia
