Abstract
The goal of this paper is to accomplish the double passing technical challenge for humanoid soccer robots in the RoboCup competition. Using only a vision sensor, the control strategies for the technical challenges of the RoboCup humanoid league are designed and presented. The vision system includes the color space setting, object recognition, a simplified mean shift algorithm, and target position derivation. It recognizes the relevant objects, including the goal, the landmark poles, and the interval between two black poles. The mean shift algorithm greatly reduces the computational time, which can then be spent on the control strategies. With the proposed control strategies, humanoid robots can successfully complete the RoboCup double passing task. The successful experimental results demonstrate the feasibility and effectiveness of the proposed foot–eye coordination control scheme.
1 Introduction
Robotics research has been developing worldwide for decades, and different kinds of robots are designed for specific uses. The development of robots has mainly focused on interacting with humans and helping them with their needs, so many methods have been proposed for human–robot interaction and communication [1–4]. Since hardware computational ability improves quickly while devices become smaller and smaller, building one's own robot is no longer difficult. The goal of this paper is to design and implement a team (aiRobots-V) of humanoid robots for robot soccer games. A humanoid robot is a highly integrated system that includes mechanism design, passive sensors, a vision system, a power supply system, a communication system, and software algorithms. The robot soccer game presents a dynamic and complex environment, which makes it a challenging platform for multi-agent research involving topics such as motion generation [5–9], motion control, image processing, localization, and collaboration strategy planning, with the aim of promoting artificial intelligence (AI), robotics, and related fields. The ultimate goal of the RoboCup [10] project is as follows: by the year 2050, develop a team of fully autonomous humanoid robots that can play against the human world champion soccer team and win the game. We attend these competitions to enhance and stimulate our techniques and the development of the humanoid robot, and at the same time to learn and make strides by competing with teams from different countries. We also attended the challenge games and the 3-on-3 soccer games of the RoboCup 2010 humanoid league.
In addition, the robots need to be autonomous and intelligent during the soccer games, so strategies for many situations have to be taken into consideration. The strategies are programmed in the embedded computer, and the robots communicate with each other via a wireless network to carry out collaboration strategies. The role of each robot switches dynamically to reach the best efficiency for offense and defense. The aim of this paper is to develop a fast vision system and a robust strategy system not only for the humanoid robot soccer competition but also for the double passing challenge game.
2 Vision System
The vision system used in this paper includes two essential parts: object recognition and a simplified mean shift algorithm, which are described in the following subsections.
2.1 Object Recognition
In the soccer playing field, there are five important objects: ball, landmark pole, goal, black pole, and interval space between the black poles. These need to be recognized for the technical challenge since the goals and landmark poles have been changed in RoboCup 2010, and the interval between the black poles is important for the dribbling event. The goal, landmark poles, and interval detections are presented as follows.
2.1.1 Goal Recognition
After objects with the goal color are segmented by the YUV lookup table, the following steps are applied to check whether a candidate has the characteristics of a goal.
Step 1: The width and the height of the goal must be wider and taller than their set thresholds.
Step 2: The goal must be situated on the game field, so the color under the goal should belong to the field or the lines, and this condition has to be checked. Two places are checked: the lower right corner and the lower left corner, each with a 3 × 3 window. Every pixel in the window is scanned and checked to see whether it is the color of the game field, a white line, or something else. An illustration of the goal-checking windows is shown in Fig. 1a. However, the view may be occluded by other robots during the game, as shown in Fig. 2. Since the robots must be mostly black according to the RoboCup rules, black also has to be accepted in those two windows.
Step 3: Because of the shape of the goal, there should not be any goal color in the middle of it. A 5 × 5 window in the middle of the goal is scanned, and the colors in that window are checked to make sure they do not belong to the goal, as illustrated in Fig. 1b.
In steps 2 and 3, every pixel in the window is scanned and a reward function is adopted. If the scanned pixel meets the expectation, the score goes up; otherwise, the score goes down. The reward function can be described as follows:

$$ G_{\text{score}} = \sum\limits_{i} {s_{i} } ,\quad s_{i} = \left\{ {\begin{array}{*{20}l} { + \alpha_{G} ,} & {\text{pixel }i\text{ meets the expectation}} \\ { - \alpha_{G} ,} & {\text{otherwise}} \\ \end{array} } \right. $$(1)

where \( G_{\text{score}} \) is the total reward and \( \alpha_{G} \) is the reward value; the recognition succeeds if \( G_{\text{score}} > \beta \), where \( \beta \) is a threshold defined by the user. Otherwise, the recognition fails. The results of the goal recognition are shown in Figs. 3 and 4. Nevertheless, when the robot stands very close to the goal, it can only see part of it; Fig. 5 gives an example of such a partial view.
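The window scoring in steps 2 and 3 can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the pixel-classification callbacks, the function names, and the default reward value stand in for the YUV lookup-table tests.

```python
def window_score(window, is_expected, alpha_g=1.0):
    """Scan every pixel of a window: add the reward alpha_G when a pixel
    meets the expectation and subtract it otherwise (the paper's G_score)."""
    score = 0.0
    for row in window:
        for pixel in row:
            score += alpha_g if is_expected(pixel) else -alpha_g
    return score

def goal_windows_pass(corner_windows, middle_window, beta,
                      is_field_line_or_black, is_goal_color):
    """Apply steps 2 and 3: the two 3x3 corner windows should contain
    field, line, or black pixels, and the 5x5 middle window should contain
    no goal-colored pixel; recognition succeeds if the total reward
    exceeds the user-defined threshold beta."""
    total = sum(window_score(w, is_field_line_or_black) for w in corner_windows)
    total += window_score(middle_window, lambda p: not is_goal_color(p))
    return total > beta
```

The reward formulation degrades gracefully under occlusion: a few unexpected pixels lower the score without immediately rejecting the goal candidate.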
In this case, the robot is asked to raise its head so that the view becomes clearer. Which side of the goal the robot sees can be determined by sampling the goal and calculating a vector, as shown in Fig. 6. After the goal color is segmented out, a vertical sampling from the left to the right of the image is performed, as Fig. 6a–e show. The vector is then calculated by:

$$ V_{g} = (x_{g} ,y_{g} ) = (x_{R} - x_{L} ,y_{R} - y_{L} ) $$(2)
where \( (x_{R} ,y_{R} ) \) is the coordinate of the rightmost sampled point, \( (x_{L} ,y_{L} ) \) is the coordinate of the leftmost sampled point, and \( V_{g} (x_{g} ,y_{g} ) \) is the result vector. If the vector points to the upper right corner, this means that the robot sees the left side of the goal. If the vector points to the lower right corner, this means that the robot sees the right side of the goal.
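The goal-side decision described above can be sketched as follows. This is an illustrative Python sketch under the common image convention that y grows downward, so a vector toward the upper right has a negative y component; the function name and return labels are assumptions.

```python
def goal_side_from_samples(samples):
    """Decide which side of the goal is visible from goal-colored points
    sampled vertically and ordered left to right. Image y is assumed to
    grow downward, so a vector toward the upper right (negative y) means
    the left side of the goal is seen, and a vector toward the lower
    right (positive y) means the right side is seen."""
    x_l, y_l = samples[0]                 # leftmost sampled point
    x_r, y_r = samples[-1]                # rightmost sampled point
    x_g, y_g = x_r - x_l, y_r - y_l       # the result vector V_g
    if y_g < 0:
        return "left side"
    if y_g > 0:
        return "right side"
    return "frontal view"
```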
2.1.2 Interval Detection
In the double passing event, there are two additional black poles, 30 cm in radius and 90 cm in height. One pass must go through the interval between the two black poles, which is about 120 cm wide. Figure 7a shows the two black poles. The interval is detected via a sampling method, which is divided into several steps.
Step 1: Sample vertically at an interval of 10 pixels so that a group of sample points is collected. Figure 7b demonstrates the sampling.
Step 2: Check from the first point. If the distance between a point and the next is long enough and the vector between them points to the bottom, the edge of a black pole is found; scanning continues until a vector oriented the opposite way appears, which completes the detection of that pole. All the points are checked in this step. The distance is given by

$$ D_{i} = \left| {y_{i} - y_{i - 1} } \right|,\quad i = 1\sim 32 $$(3)

where \( D_{i} \) is the distance between the ith point and the (i − 1)th point and \( y_{i} \) is the y-coordinate of the ith point. Since the sampling interval is fixed at 10 pixels, the difference in the x-axis from every point to the next is constant, so the x-axis need not be taken into consideration; the computation is reduced in this way. The vector is given by

$$ V_{i} = y_{i} - y_{i - 1} ,\quad i = 1\sim 32 $$(4)

where \( V_{i} \) is the vector from the (i − 1)th point to the ith point. As denoted in (4), the x-axis is not taken into consideration. The pseudo-code of step 2 is shown in Fig. 8.
Step 3: If two black poles are found in step 2, the width of their interval can be calculated and the center of the interval pointed out. The interval detection is then complete, and the result is shown in Fig. 9.
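Steps 1–3 can be sketched as follows. This is an illustrative Python sketch, not the pseudo-code of Fig. 8: the jump threshold, the input representation (one boundary y per sampled column), and the assumption that entering a pole produces a downward vector are illustrative choices.

```python
def detect_interval(edge_ys, sample_step=10, jump_thresh=30):
    """Detect the interval between two black poles from a left-to-right list
    of sampled boundary heights (one y per sampled column; image y grows
    downward). Following step 2, a long-enough jump whose vector points down
    marks the start of a pole, and the opposite jump marks its end. Returns
    (center_x, width) of the interval in pixels, or None if fewer than two
    poles are found."""
    pole_edges = []                 # x positions of pole starts and ends
    inside_pole = False
    for i in range(1, len(edge_ys)):
        v = edge_ys[i] - edge_ys[i - 1]        # V_i as in (4)
        if abs(v) < jump_thresh:               # D_i as in (3) is too small
            continue
        if v > 0 and not inside_pole:          # vector points to the bottom
            inside_pole = True
            pole_edges.append(i * sample_step)
        elif v < 0 and inside_pole:            # vector oriented the other way
            inside_pole = False
            pole_edges.append(i * sample_step)
    if len(pole_edges) < 4:                    # need two complete poles
        return None
    left_inner, right_inner = pole_edges[1], pole_edges[2]
    return (left_inner + right_inner) // 2, right_inner - left_inner
```

Only the y-coordinates are compared, mirroring how (3) and (4) drop the constant x spacing to reduce computation.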
2.2 Simple Mean Shift Algorithm
After the recognitions are done, the next step is to keep tracking the target. Many papers discuss various tracking methods and algorithms [11–15]. In practice, however, the target is small, so it is unnecessary to search the whole image in every frame; doing so wastes time and increases the computational load of the system. A mean shift algorithm is therefore adopted to solve this problem. The algorithm searches only a region of interest, and the size of the region adapts to the variation in the size of the target. This section shows how the algorithm works.
2.2.1 Principal Concept of Mean Shift
The main idea and an intuitive description of the mean shift algorithm are illustrated in Fig. 10. The red dots in Fig. 10 represent the global data distribution, and the algorithm aims to find the maximum of that distribution. In the target tracking case of this research, the red dots are pixels whose YUV color is similar to the color of the target. The main difference is that the data in our case are concentrated rather than scattered, because every target has its own distinguishing color organization; for example, if there is only one orange ball on the game field, it is impossible to see scattered orange color distributed over the field. The data distribution is shown in Fig. 11. Another difference is that, unlike in most cases, the region in this research is square, which is convenient for searching digital image data.
2.2.2 Simplified Mean Shift Algorithm
The first step is to initialize the region of interest. In the beginning, scanning all the pixels of an image is needed because the data gather together somewhere rather than being scattered over the whole image; the initial region of interest is therefore the whole image. When the target is found, the region is determined and keeps shifting with the target until the target is lost. If the target disappears from the region of interest, the procedure goes back to searching the entire image. A diagram of the simplified mean shift algorithm is shown in Fig. 12. The center of mass \( C\left( {x_{c} ,y_{c} } \right) \) is computed by

$$ x_{c} = \frac{1}{n}\sum\limits_{i = 1}^{n} {x_{i} } ,\quad y_{c} = \frac{1}{n}\sum\limits_{i = 1}^{n} {y_{i} } $$(5)
where \( x_{i} \) and \( y_{i} \) are the coordinates of the target pixels on the image plane and n is the total number of target pixels. Then, the mean shift vector \( V_{m} (x_{m} ,y_{m} ) \) is given by

$$ x_{m} = x_{c} - X_{\text{region}} ,\quad y_{m} = y_{c} - Y_{\text{region}} $$(6)
where \( \left( {X_{\text{region}} ,Y_{\text{region}} } \right) \) is the center of the region of interest. So the new center of the region \( \left( {X_{\text{region}}^{{\prime }} ,Y_{\text{region}}^{{\prime }} } \right) \) is as follows:

$$ X_{\text{region}}^{{\prime }} = X_{\text{region}} + x_{m} ,\quad Y_{\text{region}}^{{\prime }} = Y_{\text{region}} + y_{m} $$(7)
The size of a target shown in the captured image varies as the distance between the robot and the target changes, so a fixed-size search region is not reasonable. The size of the region, or more specifically the side length of the square, is therefore adapted to the size of the target. The adaptive mechanism is given by

$$ L_{\text{region}} = k \cdot S_{\text{target}} $$(8)
where \( L_{\text{region}} \) is the side length of the region, \( S_{\text{target}} \) is the size information of the target, and k is a related coefficient. \( S_{\text{target}} \) differs among targets: for instance, it is the radius in the case of the ball and the width in the case of the pole and the goal. With the adaptive region, the algorithm is more efficient and time-saving. The results are shown in Fig. 13, where the tracked target is a ball. There are two squares in every picture in Fig. 13.
The outer square is the region of interest, and the inner square is the boundary of the target after the recognition is successful. It is obvious that the side length of the region of interest varies according to the area of the target. When the ball is far from the robot, the region becomes smaller; otherwise, it becomes bigger. The efficiency is raised in this way.
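One iteration of the simplified mean shift described above might look like the following sketch. The size measure (the target's pixel width) and the proportional side-length rule with coefficient k are assumptions consistent with the text, not the authors' exact implementation.

```python
def mean_shift_step(pixels_in_region, region_center, k=3.0):
    """One iteration of the simplified mean shift: move the square search
    region to the center of mass of the target-colored pixels found inside
    it, and adapt the region's side length to the target size. Returns
    (new_center, side_length), or None when no target pixel is found so
    the caller can fall back to a full-image search."""
    if not pixels_in_region:
        return None                                   # target lost
    n = len(pixels_in_region)
    x_c = sum(x for x, _ in pixels_in_region) / n     # center of mass C
    y_c = sum(y for _, y in pixels_in_region) / n
    x_reg, y_reg = region_center
    x_m, y_m = x_c - x_reg, y_c - y_reg               # mean shift vector V_m
    new_center = (x_reg + x_m, y_reg + y_m)           # shifted region center
    # Adaptive side length: proportional to a size measure of the target
    # (here its pixel width, an assumed choice), L_region = k * S_target.
    xs = [x for x, _ in pixels_in_region]
    s_target = max(xs) - min(xs) + 1
    return new_center, k * s_target
```

Because only the pixels inside the current region are scanned, each frame after the first costs far less than a full-image search.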
3 Control Strategy
The control strategy is divided into two independent processes because there are two players whose tasks differ slightly. Many procedures in the strategy need to track the target, walk toward it, and calculate the distance to it, so target tracking by the robot's head is required: once the current target is found, it must be kept in the image at all times. Here, the fuzzy logic controller (FLC) implemented in [16] is adopted to solve this problem by controlling the two pan and tilt motors on the head, so that the following strategies can run smoothly. The strategy flowchart of robot Player A is shown in Fig. 14a; as it shows, the control strategy of Player A is partitioned into three processes. Process 1 is to make the first pass to Player B, walk toward the left pole until the distance to it is less than a certain length, and then turn back to face the game field and wait for the ball to be passed back. Process 2 is to pass the ball through the interval between the two black poles, walk toward the goal until the distance to the goal is less than a certain length, and then wait for the ball to be passed back again; one step in process 2 is to dribble the ball in the direction of the goal, which, as repeated experiments showed, makes the second pass of Player A easier. The third process is to score a goal when the ball is passed back from Player B, just as in a normal soccer game.
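The three processes of Player A can be sketched as a small state machine. The state names, events, and transitions below are paraphrased assumptions based on the description of Fig. 14a, not the actual flowchart.

```python
# Player A's three-process strategy as a minimal state machine. Each state
# maps to (expected event, next state); any other event leaves the robot
# in its current state.
PLAYER_A_FLOW = {
    # Process 1: first pass, reposition near the left pole, turn back.
    "pass_to_B":             ("ball_kicked",   "walk_to_left_pole"),
    "walk_to_left_pole":     ("near_pole",     "face_field_and_wait"),
    "face_field_and_wait":   ("ball_received", "pass_through_interval"),
    # Process 2: pass through the poles' interval, approach the goal.
    "pass_through_interval": ("ball_kicked",   "walk_toward_goal"),
    "walk_toward_goal":      ("near_goal",     "wait_for_return_pass"),
    # Process 3: shoot at the goal when the ball is passed back.
    "wait_for_return_pass":  ("ball_received", "shoot_at_goal"),
}

def advance(state, event):
    """Advance the strategy only when the expected event occurs."""
    expected, next_state = PLAYER_A_FLOW.get(state, (None, None))
    return next_state if event == expected else state
```

Keeping the strategy as explicit states makes it easy to resume the correct behavior after a lost ball or an interrupted walk.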
4 Experiment Results
The full procedure of the double passing is demonstrated in Figs. 15, 16, 17, 18, and 19. Figure 15 shows the first pass from robot Player A to robot Player B; this first kick is very light because the two players are very close to each other. Figure 16 shows the second pass, from Player B to A, with a stronger kick than the first pass, though not as strong as a shot on goal because the ball must be kept under control inside the field. Figures 17 and 18 demonstrate the third pass, from Player A to B; the strength of this kick is like that of the second pass, to keep the ball inside the field. Figure 19 shows the last task of the double passing: Player A shoots the ball into the goal. After the ball is in the goal, the two robots stay at their positions, and the double passing is completed after this last shot.
5 Conclusions
This paper has proposed a complete and systematic vision system and a foot–eye coordination control scheme for humanoid soccer robots. The major objects on the field can all be recognized by the proposed methods. A simplified mean shift algorithm is proposed for target tracking; it searches a region of interest instead of always searching the entire image, and the area of the region adapts to changes in the size of the target, which saves computation time compared with the original mean shift algorithm. Finally, foot–eye coordination control strategies are presented for the double passing challenge. The control strategies allow the robots to successfully complete the technical challenge of the RoboCup humanoid league. For the dribbling challenge, the proposed control scheme likewise provides a complete and effective strategy for the robots to accomplish the task. The proposed foot–eye coordination control scheme was used in the humanoid soccer robots that entered the humanoid league of the RoboCup in 2010 and won second place in the technical challenge of RoboCup 2010.
References
1. Stiefelhagen, R., Ekenel, H.K., Fugen, C., Gieselmann, P., Holzapfel, H., Kraft, F., Nickel, K., Voit, M., Waibel, A.: Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot. IEEE Trans. Robot. 23, 840–851 (2007)
2. Spexard, T.P., Hanheide, M., Sagerer, G.: Human-oriented interaction with an anthropomorphic robot. IEEE Trans. Robot. 23, 852–862 (2007)
3. Lee, J.M., Choi, J.S., Lim, Y.S., Kim, H.S., Park, M.: Intelligent and active system for human-robot interaction based on sound source localization. In: Proceedings of International Conference on Control, Automation and Systems, ICCAS, pp. 2738–2741 (2008)
4. Rivera-Bautista, J.A., Ramirez-Hernandez, A.C., Garcia-Vega, V.A., Marin-Hernandez, A.: Modular control for human motion analysis and classification in human-robot interaction. In: Proceedings of 5th ACM/IEEE International Conference on Human-Robot Interaction, HRI, pp. 169–170 (2010)
5. Su, Y.-T., Chong, K.-Y., Li, T.-H.S.: Design and implementation of fuzzy policy gradient gait learning method for walking pattern generation of humanoid robots. Int. J. Fuzzy Syst. 13(4), 369–382 (2011)
6. Li, T.-H.S., Su, Y.-T., Lai, S.-W., Hu, J.-J.: Walking motion generation, synthesis, and control for biped robot by using PGRL, LPI and fuzzy logic. IEEE Trans. Syst. Man, Cybern. B 41(3), 736–748 (2011)
7. Kuo, P.-H., Ho, Y.-F., Lee, K.-F., Tai, L.-H., Li, T.-H.S.: Development of humanoid robot simulator for gait learning by using particle swarm optimization. In: Proceedings of 2013 IEEE International Conference on System, Man, Cybernetics, pp. 2683–2688 (2013)
8. Hong, Y.-D., Park, C.-S., Kim, J.-H.: Stable bipedal walking with a vertical center-of-mass motion by an evolutionary optimized central pattern generator. IEEE Trans. Ind. Electron. 61(5), 2346–2355 (2014)
9. Li, T.-H.S., Kuo, P.-H., Ho, Y.-F., Kao, M.-C., Tai, L.-H.: A biped gait learning algorithm for humanoid robots based on environmental impact assessed artificial bee colony. IEEE Access 3, 13–26 (2015)
10. RoboCup. http://www.robocup.org/
11. Pooransingh, A., Radix, C.A., Kokaram, A.: The path assigned mean shift algorithm: a new fast mean shift implementation for colour image segmentation. In: Proceedings of 15th IEEE International Conference on Image Processing, ICIP, pp. 597–600 (2008)
12. Wang, H., Yang, B., Tian, G., Men, A.: Object tracking by applying mean-shift algorithm into particle filtering. In: Proceedings of 2nd IEEE International Conference on Broadband Network and Multimedia Technology, IC-BNMT, pp. 550–554 (2009)
13. Li, Y.H., Pang, Y.G., Li, Z.X., Liu, Y.L.: An intelligent tracking technology based on Kalman and mean shift algorithm. In: Proceedings of Second International Conference on Computer Modeling and Simulation, ICCMS, pp. 107–109 (2010)
14. Yafeng, Y., Hong, M.: Adaptive mean shift for target-tracking in FLIR imagery. In: Proceedings of Wireless and Optical Communications Conference, WOCC, pp. 1–3 (2009)
15. Liu, Y., Peng, S.: A new motion detection algorithm based on snake and mean shift. In: Proceedings of Congress on Image and Signal Processing, CISP, pp. 140–144 (2008)
16. Su, Y.T., Hu, C.Y., Li, T.H.S.: FPGA-based fuzzy PK controller and image processing system for small-sized humanoid robot. In: Proceedings of IEEE International Conference on Systems, Man and Cybernetics, SMC, pp. 1039–1044 (2009)
Acknowledgment
This work was supported by the Ministry of Science and Technology of Taiwan, R.O.C., under Grants MOST 103-2221-E-006-252 and MOST 104-2221-E-006-228-MY2, and by the Aim for the Top University Project of National Cheng Kung University (NCKU); the support is greatly appreciated.
© 2017 Springer International Publishing Switzerland
Kuo, PH., Ho, YF., Wang, TK., Li, TH.S. (2017). Design and Implementation of Double Passing Strategy for Humanoid Robot Soccer Game. In: Kim, JH., Karray, F., Jo, J., Sincak, P., Myung, H. (eds) Robot Intelligence Technology and Applications 4. Advances in Intelligent Systems and Computing, vol 447. Springer, Cham. https://doi.org/10.1007/978-3-319-31293-4_29
Print ISBN: 978-3-319-31291-0
Online ISBN: 978-3-319-31293-4