Abstract
Since the end of the last century, reinforcement learning has achieved success in computer games, from the simplest titles to modern ones, albeit under special training conditions. This article describes work on transferring real strategic tasks into a game environment in order to approximate their solution function using reinforcement learning methods. The authors address the task of a robot battle, whose successful solution requires strategic decision-making. The relevance of the problem lies in its resistance to formalization and in the possibility of transferring it to other domains of society where methods based on linear logic are insufficient. The task is formalized for model-free and model-based methods and for off-policy and on-policy algorithms. Particular attention is paid to the design and interpretation of the reward function and the neural network model. The article also compares simulation results under different conditions and solution methods that affect the quality of the approximated policy. In conclusion, the authors analyze the achieved results and outline further directions for developing the solution.
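The chapter's actual reward design is not reproduced in this abstract. As a purely illustrative sketch of the kind of dense reward shaping such a robot-battle task typically involves (all function names, arguments, and weights here are hypothetical assumptions, not the authors' formulation):

```python
import math

def shaped_reward(agent_pos, enemy_pos, damage_dealt, damage_taken,
                  dist_weight=0.01):
    """Dense per-step reward for a hypothetical robot-duel environment.

    Rewards damage dealt, penalizes damage taken, and adds a small
    shaping term that encourages closing the distance to the enemy.
    """
    dist = math.dist(agent_pos, enemy_pos)  # Euclidean distance on the plane
    return damage_dealt - damage_taken - dist_weight * dist
```

A shaping term like the distance penalty trades the clarity of a sparse win/lose signal for faster learning, at the risk of biasing the approximated policy toward the shaping heuristic.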
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Parkhomenko, V., Gayda, T., Medvedev, M. (2023). Intelligent System for Countering Groups of Robots Based on Reinforcement Learning Technologies. In: Ronzhin, A., Pshikhopov, V. (eds) Frontiers in Robotics and Electromechanics. Smart Innovation, Systems and Technologies, vol 329. Springer, Singapore. https://doi.org/10.1007/978-981-19-7685-8_9
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7684-1
Online ISBN: 978-981-19-7685-8
eBook Packages: Intelligent Technologies and Robotics (R0)