
Application of Evolutionary Reinforcement Learning (ERL) Approach in Control Domain: A Review

  • Conference paper
  • First Online:
Smart Innovations in Communication and Computational Sciences

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 670))

Abstract

Evolutionary algorithms have come to take centre stage across diverse application areas. Reinforcement learning is a paradigm that has recently emerged as a major control technique. This paper presents a concise review of applying reinforcement learning combined with evolutionary algorithms, e.g. the genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO), to several benchmark control problems, e.g. the inverted pendulum, the cart-pole problem, and mobile robots. Some techniques have combined Q-learning with evolutionary approaches to improve their performance. Others have used knowledge acquisition to obtain an optimal fuzzy rule set, or genetic reinforcement learning (GRL) to design the consequent parts of fuzzy systems. We also propose a Q-value-based GRL for fuzzy controllers (QGRF), in which evolution is performed after each trial, in contrast to the GA, where many trials must be performed before evolution.
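
To make the QGRF idea concrete, the following is a minimal, illustrative sketch rather than the authors' implementation: a zero-order TSK fuzzy controller whose rule consequents are evolved by a GA, with the accumulated reward of a single balancing trial used as a Q-value-like fitness so that evolution can take place after every round of trials. The simplified pendulum dynamics, membership-function parameters, GA settings, and all names (gaussian_mf, control_force, run_trial) are assumptions made for this sketch.

```python
# Illustrative sketch only: a zero-order TSK fuzzy controller for a simplified
# inverted pendulum, with GA-evolved rule consequents and trial reward used as
# a Q-value-like fitness. Names, constants, and dynamics are assumptions for
# this sketch, not the implementation described in the paper.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mf(x, centre, sigma):
    """Gaussian membership degree of x in the fuzzy set (centre, sigma)."""
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

# Fixed antecedents: 3 fuzzy sets per input (pole angle [rad], angular velocity [rad/s]).
ANGLE_SETS = [(-0.2, 0.1), (0.0, 0.1), (0.2, 0.1)]
VEL_SETS = [(-1.0, 0.5), (0.0, 0.5), (1.0, 0.5)]
N_RULES = len(ANGLE_SETS) * len(VEL_SETS)

def control_force(state, consequents):
    """Weighted-average defuzzification over the 3 x 3 rule base."""
    theta, theta_dot = state
    num, den, k = 0.0, 1e-9, 0
    for ca, sa in ANGLE_SETS:
        for cv, sv in VEL_SETS:
            w = gaussian_mf(theta, ca, sa) * gaussian_mf(theta_dot, cv, sv)
            num += w * consequents[k]
            den += w
            k += 1
    return num / den

def run_trial(consequents, steps=500, dt=0.02):
    """One balancing trial; steps survived act as the Q-value-like fitness."""
    g, length = 9.8, 1.0
    theta, theta_dot = rng.uniform(-0.05, 0.05), 0.0
    for t in range(steps):
        force = np.clip(control_force((theta, theta_dot), consequents), -10.0, 10.0)
        theta_ddot = (g * np.sin(theta) - force * np.cos(theta)) / length
        theta_dot += theta_ddot * dt
        theta += theta_dot * dt
        if abs(theta) > 0.21:  # pole has fallen, end of trial
            return t
    return steps

# GA over consequent vectors; in the QGRF spirit, selection, crossover, and
# mutation are applied after every round of trials rather than after a long
# batch of episodes.
POP, GENERATIONS = 20, 30
population = rng.uniform(-10.0, 10.0, size=(POP, N_RULES))
for gen in range(GENERATIONS):
    fitness = np.array([run_trial(ind) for ind in population])
    order = np.argsort(fitness)[::-1]
    elite = population[order[: POP // 2]]                     # keep the better half
    children = []
    for _ in range(POP - len(elite)):
        p1, p2 = elite[rng.integers(len(elite), size=2)]
        child = np.where(rng.random(N_RULES) < 0.5, p1, p2)   # uniform crossover
        child += rng.normal(0.0, 0.5, N_RULES)                # Gaussian mutation
        children.append(child)
    population = np.vstack([elite, np.array(children)])
    print(f"generation {gen:2d}: best fitness = {fitness.max()}")
```

Under these assumed settings, fitness typically rises over generations as the consequents take on signs that oppose the pole's deflection, illustrating how a trial-level reward can drive evolution without maintaining an explicit Q-table.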



Author information


Corresponding author

Correspondence to Parul Goyal.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Goyal, P., Malik, H., Sharma, R. (2019). Application of Evolutionary Reinforcement Learning (ERL) Approach in Control Domain: A Review. In: Panigrahi, B., Trivedi, M., Mishra, K., Tiwari, S., Singh, P. (eds) Smart Innovations in Communication and Computational Sciences. Advances in Intelligent Systems and Computing, vol 670. Springer, Singapore. https://doi.org/10.1007/978-981-10-8971-8_25

