Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61233001, 61273140, 61304018, 61304086, 61533017, U1501251), the Beijing Natural Science Foundation (Grant No. 4162065), the Tianjin Natural Science Foundation (Grant No. 14JCQNJC05400), the Early Career Development Award of SKLMCCS, and the Research Fund of the Tianjin Key Laboratory of Process Measurement and Control (Grant No. TKLPMC-201612).
Additional information
The authors declare that they have no conflict of interest.
About this article
Cite this article
Wang, D., Mu, C. Developing nonlinear adaptive optimal regulators through an improved neural learning mechanism. Sci. China Inf. Sci. 60, 058201 (2017). https://doi.org/10.1007/s11432-016-9022-1