Abstract
This entry illustrates the application of Bellman’s dynamic programming principle to optimal control problems for continuous-time dynamical systems. The approach characterizes the optimal value of the cost functional, taken over all admissible trajectories from a given initial condition, as the solution of a partial differential equation called the Hamilton–Jacobi–Bellman equation. Importantly, this characterization can be used to synthesize the corresponding optimal control input as a state-feedback law.
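To make the abstract concrete, here is a minimal worked example (an illustration chosen here, not taken from the entry): for the scalar system x' = u with infinite-horizon cost J = ∫ (x² + u²) dt, the HJB equation 0 = minᵤ [x² + u² + V'(x)·u] yields the minimizer u* = -V'(x)/2, hence 0 = x² - V'(x)²/4, so V(x) = x² and the optimal state-feedback law is u*(x) = -x. The sketch below checks numerically that the cost accumulated along the closed-loop trajectory matches the value function V(x₀) = x₀².

```python
# Hedged illustration of the HJB idea on a hand-picked scalar problem:
# dynamics x' = u, running cost x^2 + u^2, infinite horizon.
# Solving the HJB equation by hand gives V(x) = x^2 and u*(x) = -x.

def closed_loop_cost(x0, dt=1e-4, t_final=20.0):
    """Accumulate the running cost along the closed-loop trajectory u = -x
    using forward Euler integration (t_final large enough that the state
    has essentially decayed to zero)."""
    x, cost = x0, 0.0
    for _ in range(int(t_final / dt)):
        u = -x                       # state-feedback law from the HJB equation
        cost += (x**2 + u**2) * dt   # running cost x^2 + u^2
        x += u * dt                  # Euler step of x' = u
    return cost

x0 = 1.5
print(closed_loop_cost(x0))  # close to V(x0) = x0**2 = 2.25
```

The agreement between the simulated cost and V(x₀) is exactly the content of the dynamic programming characterization: the value function evaluated at the initial state equals the optimal cost-to-go, and its gradient determines the feedback law.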
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this entry
Falcone, M. (2021). Optimal Control and the Dynamic Programming Principle. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, Cham. https://doi.org/10.1007/978-3-030-44184-5_209
Print ISBN: 978-3-030-44183-8
Online ISBN: 978-3-030-44184-5