Abstract
Training robot or agent behaviors by example is an attractive alternative to coding them directly. However, training complex behaviors can be challenging, particularly interactive behaviors involving multiple agents. We present a novel hierarchical learning-from-demonstration system that can train both single-agent and scalable cooperative multiagent behaviors. The methodology applies manual task decomposition to break a complex training problem into simpler parts, then solves the problem by iteratively training each part. We discuss our application of this method to multiagent problems in the humanoid RoboCup competition, and apply the technique to the keepaway soccer problem in the RoboCup Soccer Simulator.
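To make the decomposition idea concrete, the following is a minimal sketch (not the authors' code; all class and behavior names are illustrative assumptions). The task is manually decomposed into a hierarchy in which leaves are basic actions and interior nodes are trained from demonstration examples to select among their sub-behaviors; here a 1-nearest-neighbor rule stands in for whatever classifier a real system might learn.

```python
# Sketch of hierarchical learning from demonstration (illustrative only).
# Leaves are hand-coded basic actions; interior nodes are trained from
# (features, chosen sub-behavior) demonstration pairs.

class LeafBehavior:
    def __init__(self, name):
        self.name = name

    def act(self, features):
        # A basic action, e.g. "kick" or "servo-to-ball".
        return self.name


class LearnedBehavior:
    """An interior node trained by 1-nearest-neighbor over demonstrated
    examples, standing in for a learned classifier."""

    def __init__(self, children):
        self.children = children   # maps sub-behavior name -> behavior
        self.examples = []         # list of (features, child-name) pairs

    def demonstrate(self, features, child_name):
        self.examples.append((features, child_name))

    def act(self, features):
        # Delegate to the sub-behavior whose demonstrated features
        # are closest to the current observation.
        _, child = min(self.examples,
                       key=lambda ex: sum((a - b) ** 2
                                          for a, b in zip(ex[0], features)))
        return self.children[child].act(features)


# Train bottom-up: leaves exist first, then the node above them is
# demonstrated. Features here are just (distance-to-ball,).
servo = LeafBehavior("servo-to-ball")
kick = LeafBehavior("kick")
attack = LearnedBehavior({"servo-to-ball": servo, "kick": kick})
attack.demonstrate((2.0,), "servo-to-ball")   # far from ball: approach it
attack.demonstrate((0.1,), "kick")            # at the ball: kick it
```

Once a node like `attack` is trained, it can itself serve as a child of a still higher-level learned behavior, which is what allows the iterative, part-by-part training the abstract describes.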
© 2013 Springer-Verlag Berlin Heidelberg
Sullivan, K., Luke, S. (2013). Real-Time Training of Team Soccer Behaviors. In: Chen, X., Stone, P., Sucar, L.E., van der Zant, T. (eds) RoboCup 2012: Robot Soccer World Cup XVI. RoboCup 2012. Lecture Notes in Computer Science(), vol 7500. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39250-4_32
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-39249-8
Online ISBN: 978-3-642-39250-4