Overview
- Convergence proofs of the algorithms presented teach readers how to derive necessary stability and convergence criteria for their own systems
- Establishes the fundamentals of ADP theory so that student readers can extrapolate their learning into control, operations research and related fields
- Application examples show how the theory can be put to work in real systems
- Includes supplementary material: sn.pub/extras
Part of the book series: Communications and Control Engineering (CCE)
About this book
• infinite-horizon control, for which the difficulty of directly solving the Hamilton–Jacobi–Bellman partial differential equation is overcome, with proof that the iterative value-function sequence converges to the infimum of all value functions obtained from admissible control law sequences;
• finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually obtained from infinite-horizon control;
• nonlinear games, for which a pair of mixed optimal policies is derived that solves the game when a saddle point does not exist and, when one does, avoids having to verify its existence conditions.
Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability and minimize each player's individual performance function, yielding a Nash equilibrium.
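The iterative value-function updating described above can be illustrated with a minimal sketch of value iteration on a discretized state space, V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))]; the dynamics f, utility U, grids, and tolerance below are illustrative assumptions, not taken from the book:

```python
# Minimal sketch (illustrative, not the book's algorithm) of the ADP
# value-iteration update V_{i+1}(x) = min_u [ U(x,u) + V_i(f(x,u)) ]
# for a discrete-time nonlinear system, on a 1-D state grid.
import numpy as np

xs = np.linspace(-1.0, 1.0, 101)   # state grid (assumed)
us = np.linspace(-1.0, 1.0, 41)    # control grid (assumed)

def f(x, u):
    # Example nonlinear dynamics (an assumption for illustration).
    return 0.8 * np.sin(x) + 0.5 * u

def U(x, u):
    # Quadratic utility (stage cost), positive definite in (x, u).
    return x**2 + u**2

V = np.zeros_like(xs)              # V_0 = 0 starts the iteration
X, Uc = np.meshgrid(xs, us, indexing="ij")
for i in range(200):
    # Bellman backup: for every grid state, minimize over controls,
    # evaluating V_i at the successor state by linear interpolation
    # (np.interp clamps successors falling outside the grid).
    Vnext = np.interp(f(X, Uc), xs, V)
    Vnew = np.min(U(X, Uc) + Vnext, axis=1)
    if np.max(np.abs(Vnew - V)) < 1e-6:   # sup-norm convergence check
        break
    V = Vnew

print(i, V[0], V[50])   # iterations used, V at x=-1 and at x=0
```

Starting from V_0 = 0, the sequence V_i is monotonically nondecreasing and bounded above by the cost of any admissible policy, which is the mechanism behind the convergence proofs the book develops in much greater generality.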
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming in Discrete Time:
• establishes the fundamental theory involved clearly, with each chapter devoted to a clearly identifiable control paradigm;
• presents convergence proofs for the ADP algorithms to deepen understanding of how stability and convergence are derived for the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.
Table of contents (10 chapters)
Reviews
From the book reviews:
“This book provides a self-contained treatment of adaptive dynamic programming with applications in feedback control and game theory. … This book … will appeal to graduate students, practitioners, and researchers seeking an up-to-date and consolidated treatment of the field.” (IEEE Control Systems Magazine, October, 2013)
Bibliographic Information
Book Title: Adaptive Dynamic Programming for Control
Book Subtitle: Algorithms and Stability
Authors: Huaguang Zhang, Derong Liu, Yanhong Luo, Ding Wang
Series Title: Communications and Control Engineering
DOI: https://doi.org/10.1007/978-1-4471-4757-2
Publisher: Springer London
eBook Packages: Engineering, Engineering (R0)
Copyright Information: Springer-Verlag London 2013
Hardcover ISBN: 978-1-4471-4756-5, Published: 14 December 2012
Softcover ISBN: 978-1-4471-5881-3, Published: 28 January 2015
eBook ISBN: 978-1-4471-4757-2, Published: 14 December 2012
Series ISSN: 0178-5354
Series E-ISSN: 2197-7119
Edition Number: 1
Number of Pages: XVI, 424
Topics: Control and Systems Theory, Optimization, Artificial Intelligence, Computational Intelligence, Systems Theory, Control