Abstract
Ecological cruising control of vehicles has been extensively studied as a way to further reduce energy consumption by optimizing vehicle speed profiles. However, most controllers cannot be deployed in practice because they require future terrain data and impose excessive computational demands. In this paper, an eco-cruising strategy with real-time capability, based on deep reinforcement learning, is proposed for electric vehicles (EVs) propelled by in-wheel motors. The deep deterministic policy gradient (DDPG) algorithm is leveraged to continuously regulate the motor torque in response to road elevation changes. The learning ability, optimality, and generalization performance of the proposed strategy are verified by comparing it with the energy-economy benchmark obtained by dynamic programming (DP) and with the traditional constant-speed (CS) strategy. Simulation results show that, without a priori knowledge of the future trip, the proposed strategy saves 3.8% energy relative to the CS strategy and leaves only a small gap to the globally optimal DP solution. When tested on other driving cycles, the trained strategy shows good generalization and high computational efficiency (about 2 ms per simulation step), making it practical and implementable. Additionally, its model-free character makes the proposed strategy applicable to EVs with different powertrain topologies.
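To make the control problem concrete, the sketch below shows the kind of longitudinal-dynamics step an eco-cruising agent would interact with: the action is the total in-wheel-motor torque, the slope enters through the grade resistance, and the per-step energy forms the basis of the reward. All parameter values (mass, drag, rolling resistance, wheel radius) are illustrative assumptions, not values from the paper, and battery/motor efficiency losses are omitted.

```python
import math

# Illustrative EV parameters (assumed, not from the paper)
M = 1500.0      # vehicle mass, kg
G = 9.81        # gravitational acceleration, m/s^2
CD_A = 0.6      # lumped drag coefficient * frontal area, m^2
RHO = 1.2       # air density, kg/m^3
CR = 0.01       # rolling resistance coefficient
R_WHEEL = 0.3   # wheel radius, m
DT = 1.0        # simulation step, s

def step(v, slope_rad, torque_total):
    """One simulation step: total in-wheel-motor torque -> traction force,
    minus aerodynamic, rolling, and grade resistances, integrated over DT.
    Returns (next speed in m/s, mechanical energy at the wheels in J)."""
    f_trac = torque_total / R_WHEEL
    f_aero = 0.5 * RHO * CD_A * v * v
    f_roll = CR * M * G * math.cos(slope_rad)
    f_grade = M * G * math.sin(slope_rad)
    a = (f_trac - f_aero - f_roll - f_grade) / M
    v_next = max(0.0, v + a * DT)
    # Wheel-level energy only; battery energy would also include
    # motor/inverter efficiency maps, omitted in this sketch.
    energy = f_trac * v * DT
    return v_next, energy
```

In a DDPG setting, the state would typically bundle the current speed and local road grade, the actor network would map that state to `torque_total`, and the reward would penalize `energy` while keeping the speed near the cruising target.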
This work was supported by the Graduate Student Innovation Project of Jiangsu Province, China (Grant No. KYCX20_0258).
Wang, Q., Ju, F., Zhuang, W. et al. Ecological cruising control of connected electric vehicle: a deep reinforcement learning approach. Sci. China Technol. Sci. 65, 529–540 (2022). https://doi.org/10.1007/s11431-021-1994-7