Abstract
Navigation in an unknown dynamic environment is one of the key challenges in mobile robotics. This paper proposes a scheme, inspired by human pedestrian behavior, for navigation of a mobile robot in an a priori unknown dynamic environment. An occupancy grid map has been built from successive readings of onboard sonar sensors. Inspired by the human pedestrian behavior of maintaining a safe direction and distance to avoid collisions with obstacles, the proposed navigation scheme traces a path for the robot following a forbidden region map concept, with a velocity proportional to the distance and rate at which the obstacles are approaching or receding from the robot. The reachable region of the robot's navigation horizon is based on the predictability of the obstacles' motion models. The navigation scheme is deployed on a Fire Bird V mobile robot. The experimental results show that the robot is able to follow a smooth and time-efficient path while avoiding collisions with both mobile and stationary obstacles.
1 Introduction
The need for mobile robots sharing a common work cell with humans is continuously increasing in many practical contexts. This demands smooth and time-efficient navigation by the robots in an unknown dynamic environment. There have been various approaches to environment mapping and navigation in the field of mobile robotics. However, most of the methods suffer from path irregularity and low throughput, leading to collisions in unknown dynamic environments. In societal and industrial settings, a mobile robot is expected to possess navigation capabilities that meet both engineering and societal objectives [1]. Therefore, mapping and navigation in an unknown dynamic environment remain largely open research problems [2].
Based on neuro-fuzzy concepts aiming at collision-free motion with minimum trajectory error, a number of motion control methodologies such as the adaptive neural network method, sliding mode control, and the back-stepping method [3] have been proposed. The simultaneous localization and mapping (SLAM) technique has been in use for building a metric map of an unknown environment [4]. Following SLAM, an adaptive fault tracking control method addressing modeling uncertainties, unexpected disturbances, and actuator failures has been proposed by Song et al. [5]. In the area of robot localization and navigation research, a number of technologies have been utilized, from mapping [4] to navigation [6]. Although the aforementioned methods can realize map-building-based navigation, human effort or exteroceptive sensor information has to be integrated for occupancy grid map building, limiting smooth and time-efficient navigation of the mobile robots. Obstacle detection through infrared images [7] has been studied to facilitate the deployment of autonomous robots. Prediction of trajectories comprising discrete decisions for interacting agents [8], and the use of visual and embodied data association to build a local map [9], have been described for autonomous mobile robots. A fully distributed algorithm for robot navigation, implementing mutual avoidance as adopted by humans, has been proposed by Guzzi et al. [1]. But to address both the engineering and societal aspects of navigation in an unknown dynamic environment, a robot should be equipped with locomotion capabilities similar to those of a pedestrian.
We present a navigation scheme for a mobile robot in an unknown dynamic environment. Using successive onboard sensor information, a real-time local map has been built. Inspired by human pedestrian behavior, the navigation scheme maintains a safe distance and direction from the obstacles by controlling the robot's speed and heading direction. A velocity constraint multiplier, proportional to the velocity of the surrounding obstacles, has been introduced to maintain a safe distance between the robot and the obstacles. Based on the predictability of the obstacles' motion models, the navigation scheme plans a shorter or longer reachable region for the robot, which helps to avoid the obstacles with a simple navigation scheme. The proposed scheme has been deployed on a Fire Bird V mobile robot and ensures smooth and time-efficient navigation in terms of path irregularity and relative throughput under no obstacle, static, and dynamic environments.
2 Real-Time Local Map Building
In this work, we used the Fire Bird V mobile robot customized with ultrasonic range sensors (shown in Fig. 1a) as an experimental testbed. Successive sensor readings were superimposed for real-time map building of a local area, scanning a 360\(^{\circ }\) view around the robot with a radius of 180 cm.
A typical ultrasonic sensor returns a radial measure of distance to the nearest obstacle within its conical field of view, which lies between 10\(^{\circ }\) and 30\(^{\circ }\). To avoid the possibility of crosstalk, we used four Ultrasonic Ranging Module HC-SR04 sensors equally spaced 90\(^{\circ }\) apart in a circular ring on the robot at a height of 15 cm from the ground. The circular ring is mounted on a servo mechanism in order to obtain a full 360\(^{\circ }\) scan around the robot. The ultrasonic sensors were set to detect an obstacle up to 180 cm with an accuracy of ±1 cm.
Figure 1c shows a schematic of the four sensors S1, S2, S3, and S4, wherein each sensor’s beam of acoustic energy spreads in a cone of 30\(^{\circ }\). The typical scan time of a sensor ranges from 60 to 500 ms. The servo mechanism rotates the circular ring twice by an angle of 30\(^{\circ }\) so that each sensor completes a scan of a 90\(^{\circ }\) cone. The cones of the acoustic energy beam during successive rotations of sensor S3, completing a scan of 90\(^{\circ }\), are shown as S3, \(S3^{\prime }\) and \(S3^{\prime \prime }\) in Fig. 1c. If the sensor returns a distance value between the specified minimum and maximum range (10–180 cm), then the returned measurement is proportional to the distance of the nearest obstacle within the range of the sensor.
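The scan schedule above can be sketched as follows. This is a hypothetical helper (the function and constant names are ours, not the robot's firmware): four sensors mounted 90\(^{\circ }\) apart, each with a 30\(^{\circ }\) beam, plus two 30\(^{\circ }\) servo rotations of the ring, yield twelve beam centers covering the full 360\(^{\circ }\).

```python
# Sketch of the 360-degree scan schedule: four sensors 90 degrees apart,
# servo rotates the ring twice by 30 degrees, so each sensor sweeps a
# 90-degree sector in three steps of its 30-degree beam.

SENSOR_OFFSETS = [0, 90, 180, 270]  # mounting angles of S1..S4 (degrees)
SERVO_STEPS = [0, 30, 60]           # ring orientations per full scan (degrees)

def scan_beam_centers():
    """Return the 12 beam-center angles covering the full 360 degrees."""
    angles = []
    for step in SERVO_STEPS:            # one servo position at a time
        for offset in SENSOR_OFFSETS:   # all four sensors fire at each step
            angles.append((offset + step) % 360)
    return sorted(angles)

print(scan_beam_centers())
# 12 beam centers spaced 30 degrees apart -> full 360-degree coverage
```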
Map building requires localizing the obstacles in the area with reference to the robot. To ease this, the sensing range in front of each sensor is categorized into three zones (viz., \(Z_1\), \(Z_2\), and \(Z_3\)), shown for sensor S1 in Fig. 1c. The zones \(Z_1\), \(Z_2\) and \(Z_3\) range up to distances of 30, 90, and 180 cm, respectively. The incoming readings of the four sonar sensors (\(S_1, S_2, S_3\), and \(S_4\)) for three successive trials (like \(S_3\), \(S_3^\prime \), \(S_3^{\prime \prime }\) for sensor \(S_3\)) are interpreted and converted to local occupancy values for map building. The grids allow the efficient accumulation of small amounts of information from individual sensor readings into increasingly accurate maps of the robot’s surroundings. An occupancy grid map has been built based on the sensor information, wherein a sequence of continuously changing readings indicates an obstacle; a new continuous sequence after a discontinuity indicates a new obstacle; and a single measurement not related to its neighbors is considered noise. Figure 1b shows a typical map built with the robot shown as a blue circle and the obstacles as black squares. Map building through sensor data interpretation is the first phase supporting robot navigation in an unknown dynamic environment.
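The interpretation rule above can be sketched as follows; this is our reading of the rule with hypothetical names, and the discontinuity threshold is an assumption (the paper does not give a numeric value). A run of neighboring in-range readings is one obstacle, a new run after a discontinuity is a new obstacle, and an isolated reading unrelated to its neighbors is dropped as noise.

```python
# Group consecutive sonar readings (cm) into obstacles; drop singletons as noise.

MAX_RANGE = 180  # cm, sensor maximum range
JUMP = 20        # cm, assumed discontinuity threshold (illustrative value)

def segment_obstacles(readings):
    """Group consecutive related readings into obstacle runs; drop singletons."""
    runs, current = [], []
    for r in readings:
        if r is None or r > MAX_RANGE:       # no echo: breaks any ongoing run
            if current: runs.append(current); current = []
            continue
        if current and abs(r - current[-1]) > JUMP:  # discontinuity: new obstacle
            runs.append(current); current = []
        current.append(r)
    if current:
        runs.append(current)
    return [run for run in runs if len(run) > 1]     # a singleton is noise

print(segment_obstacles([None, 55, 57, 56, None, 120, None, 80, 82, 85]))
# -> [[55, 57, 56], [80, 82, 85]]  (the isolated 120 cm reading is discarded)
```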
3 Navigation Scheme
The proposed scheme emulates human pedestrian behavior and is based on a cognitive science approach to determining pedestrian behavior [10].
3.1 Human Pedestrian Behavior
A pedestrian usually selects the most convenient and efficient path for reaching the destination. Visual information is the prime source for deciding the motion strategy by the pedestrian [11]. Using the neural interface between the retina and brain, a pedestrian can estimate the time to collision with the obstacles [12]. Accordingly, the pedestrian chooses the direction that leads to the destination through the shortest path while maintaining a safe distance from the obstacles along the line of heading [10].
3.2 Navigation Approach
The navigation approach plans to scan an area of 180 cm radius with the robot’s position as the center and repeats the plan after traversing the area toward its goal. At each step, the robot estimates the relative obstacle position and motion to choose a way to avoid collisions. We model the velocity vector of the robot (\(V_{ROB}\)) as the velocity due to its own actuation (\(V_{Act}\)) scaled by a multiplier proportional to the distance between the robot and the obstacle. We introduced this multiplier as a velocity constraint, inspired by the human pedestrian behavior of maintaining a safe distance from obstacles:
\(V_{ROB} = V_{Act}(1 + k)\)    (1)
where k = velocity constraint proportional to the distance between the robot and the obstacle, quantized as k = −1 for \(10 \le \varDelta D_{Rob - Obs} \le 30\) cm, k = 0 for \(30 < \varDelta D_{Rob - Obs} \le 90\) cm, and k = 1 for \(90 < \varDelta D_{Rob - Obs} \le 180\) cm; here \(\varDelta D_{Rob - Obs}\) denotes the distance between the robot and the obstacle.
The navigation scheme generates the actuation command to the robot for modifying its speed in order to avoid collision with the obstacles in its sensing range. The actuation of the robot is kept at \(V_{Act} = 12\) cm/s with k = 0 initially, and the robot can reach a maximum speed of 24 cm/s. To avoid collision, k = −1 at \(\varDelta D_{Rob - Obs}\) = 10–30 cm (i.e., when the obstacle is in region \(Z_1\)) makes the robot stop and allows a change in its heading direction. The robot moves at 12 cm/s toward its goal point with k = 0 at \(\varDelta D_{Rob - Obs}\) = 31–90 cm (i.e., when the obstacle is in region \(Z_2\)). The robot doubles its speed toward the goal point with k = 1 at \(\varDelta D_{Rob - Obs}\) = 91–180 cm (i.e., when the obstacle is in region \(Z_3\)). The robot modifies its motion strategy every 4 s, of which 3 s is required to scan the 360\(^{\circ }\) around it and 1 s for commanding the actuation, including the computation for map building. The determination of the directional heading and velocity of the robot is illustrated for static and dynamic environments in the following sections.
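The speed rule above can be sketched as follows, assuming the velocity update takes the form \(V_{Act}(1 + k)\), which reproduces the 0, 12, and 24 cm/s speeds stated in the text (the helper names are ours):

```python
def velocity_constraint(d):
    """Quantized velocity constraint k from robot-obstacle distance d (cm)."""
    if d <= 30:
        return -1   # zone Z1: stop and allow a heading change
    if d <= 90:
        return 0    # zone Z2: nominal speed toward the goal
    return 1        # zone Z3: double speed toward the goal

V_ACT = 12  # cm/s, nominal actuation speed

def robot_speed(d):
    """Commanded speed (cm/s) given the nearest obstacle distance d (cm)."""
    return V_ACT * (1 + velocity_constraint(d))

print(robot_speed(20), robot_speed(56), robot_speed(150))  # -> 0 12 24
```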
3.2.1 Static Environment
We plan the robot navigation in position space. The position space for a short duration \(\varDelta T\) becomes the reachable region \(R\varDelta T\), which is the set of all positions that the robot can reach in time \(\varDelta T\). Each obstacle corresponds to a set of directions, termed forbidden headings, that need to be avoided. We denote the forbidden heading for a given obstacle as \(H_{Obs}\).
In Fig. 2a, \(R\varDelta T\) shows the reachable region with the robot velocity vector \(V_{ROB}\) toward the goal position. An obstacle between the start and goal point is represented by the occlusion points \(O_s\) and \(O_e\) in Fig. 2b. To accommodate the size of the robot of radius \(R_{ROB}\), we extend the obstacle by this measure at the occlusion points. The heading to be avoided by the robot to prevent collision is shown as \(H_{Obs}\), marked in the reachable region with a red arc, and is determined as the forbidden region. The directional heading decision for passing by \(O_s\) or \(O_e\) is made by choosing the shortest path to the goal. The velocity vector \(V_{ROB}\) is determined following Eq. 1. For multiple obstacles, multiple forbidden regions are introduced following the same approach.
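A minimal sketch of the forbidden-region construction, under our assumptions: the occlusion points are given in the robot's frame and ordered counterclockwise, and extending the obstacle by the robot radius is approximated by widening each occlusion bearing by \(\arcsin(R_{ROB}/d)\). The function name and geometry are illustrative, not the authors' exact implementation.

```python
import math

def forbidden_arc(robot, o_s, o_e, r_rob):
    """Return the forbidden-heading interval [lo, hi] (radians) for one
    obstacle, given its occlusion points inflated by the robot radius."""
    def inflated_bearing(p, sign):
        dx, dy = p[0] - robot[0], p[1] - robot[1]
        dist = math.hypot(dx, dy)
        # widen the arc by asin(r/d): grows the obstacle by the robot radius
        return math.atan2(dy, dx) + sign * math.asin(min(1.0, r_rob / dist))
    return inflated_bearing(o_s, -1), inflated_bearing(o_e, +1)

# Obstacle edges at (40, 30) and (30, 40) cm, robot radius 7.5 cm:
lo, hi = forbidden_arc((0.0, 0.0), (40.0, 30.0), (30.0, 40.0), 7.5)
heading_to_goal = math.atan2(1.0, 1.0)  # goal along the arena diagonal
print(lo < heading_to_goal < hi)        # -> True: the direct heading is forbidden
```

When the goal heading falls inside the arc, the robot would instead head just outside `lo` or `hi`, whichever yields the shorter path to the goal.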
3.2.2 Dynamic Environment
The velocities of the dynamic obstacles are unknown a priori and have to be superimposed with the robot velocity in order to maintain a safe distance from the obstacles. Accordingly, Eq. 1 for determining the robot velocity is modified as follows:
\(V_{ROB} = V_{Act}(1 + k) + V_{OBS}\)    (2)
where \(V_{OBS}\) = obstacle velocity measured using the onboard sensors; it is negative if the obstacle is approaching the robot and positive if receding from it.
The choice of the reachable region \(R\varDelta T\) depends on the distance to the nearest obstacle. If the predicted motion is considered reliable, the robot can plan for a longer reachable horizon. On the other hand, if the obstacle’s motion model is highly unpredictable, the robot plans for a shorter reachable horizon. If the predicted distance to the obstacle lies in the \(Z_1\) or \(Z_2\) zone, the reachable region is planned with a shorter time step, as shown in Fig. 3a. On the other hand, if the predicted distance to the obstacle lies in the \(Z_3\) zone, the reachable region is planned with a longer time step, as in Fig. 3b. With this reachable region, the directional heading is decided as in the static environment.
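The dynamic-environment planning step can be sketched as follows, under two stated assumptions: the obstacle velocity is superimposed additively (our reading of the modified velocity equation), and the 4 s / 2 s horizon values are illustrative, since the paper only distinguishes "longer" and "shorter" time steps.

```python
V_ACT = 12  # cm/s, nominal actuation speed

def plan_step(d_pred, k, v_obs):
    """Pick the reachable-region time step (s) from the predicted obstacle
    distance d_pred (cm), and the robot speed (cm/s) with the obstacle
    velocity superimposed. v_obs < 0 for an approaching obstacle,
    v_obs > 0 for a receding one."""
    dt = 4.0 if d_pred > 90 else 2.0   # longer horizon only in zone Z3 (assumed values)
    v = max(0.0, V_ACT * (1 + k) + v_obs)  # clamp: the robot cannot move backward
    return dt, v

print(plan_step(150, 1, -5.0))  # -> (4.0, 19.0): far obstacle, long horizon
print(plan_step(50, 0, -20.0))  # -> (2.0, 0.0): fast approacher forces a stop
```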
4 Experiments and Results
4.1 Experimental Setup
Experiments are performed with the Fire Bird V mobile robot on a rectangular arena of 5.76 m\(^2\) containing 36 squares of 0.16 m\(^2\) each. At first, the robot is entrusted to navigate from the start point (i.e., the robot's origin position) to the goal point (i.e., the robot's final position) on the arena. The data from the sonar sensors were fed to MATLAB through an UNO 328P controller. The path planned in MATLAB following the navigation approach illustrated in Sect. 3 is fed to the robot's ATmega2560 controller for navigation. The experiments have been performed in the following three environments:
No Obstacles Environment: The arena is free of any obstacles and the robot travels toward the goal point from the start point.
Static Environment: Obstacles were initially placed at regular intervals along the vertices of the squares in the arena, and the robot was entrusted to navigate from the start to the goal point.
Dynamic Environment: Remotely controlled dynamic obstacles travel across the arena, obstructing the path of the robot from the start to the goal point. This creates a crossroad, and the robot frequently needs to adjust its trajectory in order to avoid collisions.
4.2 Performance Metrics
The following performance metrics have been computed for each environment:
Throughput: It indicates the robot’s time efficiency in navigating toward the goal. This measure is defined as the minimal time that the robot would take to reach the goal without any obstacles while traveling in a straight line divided by the actual time it takes while traveling from the start to goal point avoiding any collisions in the presence of obstacles.
Path Irregularity: It is defined as the amount of unnecessary turning per unit path length performed by a robot, where unnecessary turning corresponds to the total amount of robot rotation minus the minimum amount of rotation which would be needed to reach the same goal point with the most direct path. Path irregularity is measured in radian per meter and indicates the smoothness of the navigating path.
Number of Collisions: It indicates the performance of the navigation algorithm in terms of safety, measuring the number of collisions occurring under the three experimental environments. It is measured in collisions per minute.
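The first two metrics can be computed as sketched below; these are hypothetical helpers following the definitions above (the heading trace, path length, and direct-path rotation are inputs the experimenter would log).

```python
def throughput(straight_dist, v_max, actual_time):
    """Minimal obstacle-free straight-line travel time divided by the
    actual time taken (dimensionless, <= 1 in practice)."""
    return (straight_dist / v_max) / actual_time

def path_irregularity(headings, path_len, direct_turn=0.0):
    """Unnecessary turning per unit path length (rad/m): total rotation
    along the sampled heading trace minus the rotation the most direct
    path to the goal would need."""
    total_turn = sum(abs(b - a) for a, b in zip(headings, headings[1:]))
    return (total_turn - direct_turn) / path_len

# e.g. a 2.5 m diagonal covered in 11 s against a 0.24 m/s speed ceiling:
print(round(throughput(2.5, 0.24, 11.0), 2))          # -> 0.95
# 1.0 rad of turning over a 2 m path with no turning strictly required:
print(path_irregularity([0.0, 0.5, 0.5, 1.0], 2.0))   # -> 0.5
```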
4.3 Results and Discussions
This section reports the characteristics of the proposed navigation scheme in terms of throughput and path irregularity in three different environments: no obstacles, static, and dynamic. Figures 4 through 6 show the navigation path of the robot in the no obstacle, static, and dynamic environments.
Initially, the robot was kept stationary at the start point and entrusted to travel to the goal point located at a distance of \(\approx \)250 cm along the diagonal of the arena as shown in Fig. 4. Under the no obstacle environment, Fig. 4 shows the robot following the shortest path from the start to the goal point. Initially, the robot starts with a speed of 12 cm/s. After a period of 4 s, the robot updates its speed to 24 cm/s following Eq. 1, because the sensors could not detect any obstacles along its path toward the goal point (i.e., k = 1). The robot continues at the maximum speed of 24 cm/s and completes the path in \(\approx \)11 s.

In the static environment, the robot initially heads straight toward the goal at a speed of 12 cm/s. On detecting an obstacle at a distance of \(\approx \)56 cm, the robot's motion is modified and the velocity constraint value is updated to k = −1 following Eq. 1. Following the navigation approach illustrated in Sect. 3, the robot avoids the forbidden region and moves to one of the edges of the obstacle, leading to the shortest path to the goal point as shown in Fig. 5a. Likewise, the collision avoidance with the second obstacle is shown in Fig. 5b. The second obstacle is located at a distance of \(\approx \)65 cm from the first obstacle's edge, and the robot avoids the forbidden region with the velocity constraint value k = −1 following Eq. 1. Figure 5c shows the navigation path of the robot from the start to the goal point in the static environment.

In the dynamic environment, the robot initially heads straight toward the goal at a speed of 12 cm/s, as in the static environment. The first dynamic obstacle obstructs the robot from its left side. The collision with the first obstacle is avoided as in Fig. 6a, with the velocity constraint value updated to k = −1.
The robot avoids the forbidden region following the navigation approach illustrated in Sect. 3 with a velocity according to Eq. 2, wherein the robot’s velocity increases by a magnitude equal to the obstacle’s velocity. Similarly, collision avoidance with the second obstacle, approaching the path of the robot from right to left, is shown in Fig. 6b. In this case, the robot’s velocity decreases by a magnitude equal to the velocity of the obstacle. Figure 6c shows the navigation path of the robot from the start to the goal point in the dynamic environment, with the reachable and forbidden regions as planned by the navigation scheme. Such dynamic obstacle avoidance requires a simple and fast online navigation scheme such as the one proposed.
The throughput and path irregularity of the robot in the static and dynamic environments are shown in Figs. 7 and 8. It can be observed that the throughput decreases and the path irregularity increases with the number of obstacles, because the robot must follow a longer and more curved path as the number of obstacles increases. The throughput of the robot is higher in the static environment than in the dynamic one, as the robot follows a longer path to avoid dynamic obstacles than static ones. Further, the path irregularity of the robot is higher in the dynamic environment than in the static environment, as the robot needs to make a greater number of changes in heading direction to avoid collisions with dynamic obstacles.
5 Conclusions
A navigation scheme inspired by human pedestrian behavior for a mobile robot in an a priori unknown dynamic environment is proposed. The proposed scheme is evaluated under no obstacle, static, and dynamic environments, and the robot was able to avoid both stationary and dynamic obstacles. The experimental results show that the introduction of the velocity constraint and reachable regions holds promise for the navigation of mobile robots. It has been observed that the throughput and path irregularity of the robot decrease and increase, respectively, with the increase in the number of obstacles. One opportunity to improve this is the detection of the obstacle's orientation within the sensing cone, which is part of ongoing research.
References
J. Guzzi, A. Giusti, L. M. Gambardella, G. Theraulaz, G. A. D. Caro, Human-friendly robot navigation in dynamic environments, in: IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 2013, pp. 423–430.
D. Tuvshinjargal, B. A. Dorj, D. J. Lee, Hybrid motion planning method for autonomous robots using kinect based sensor fusion and virtual plane approach in dynamic environments, Journal of Sensors 2015 (Article ID 471052) (2015) 1–13.
T. Fukao, H. Nakagawa, N. Adachi, Adaptive tracking control of a nonholonomic mobile robot, IEEE Trans Robotics and Automation 16 (5) (2000) 609–615.
P. Henry, M. Krainin, E. Herbst, X. Ren, D. Fox, RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments, International Journal of Robotics Research 31 (5) (2012) 647–663.
Y. D. Song, H. N. Chen, D. Y. Li, Virtual-point-based fault-tolerant lateral and longitudinal control of 4W-steering vehicles, IEEE Trans. Intell. Transp. Syst. 12 (4) (2011) 1343–1351.
A. Vieira, P. Drews, M. Campos, Spatial density patterns for efficient change detection in 3d environment for autonomous surveillance robots, IEEE Trans. Autom. Sci. Eng. 11 (3) (2014) 766–774.
M. Yasuno, S. Ryousuke, N. Yasuda, M. Aoki, Pedestrian detection in far infrared images, Integrated Computer-Aided Engineering 20 (4) (2013) 347–360.
H. Kretzschmar, M. Kuderer, W. Burgard, Learning to predict trajectories of cooperatively navigating agents, in: IEEE International Conference on Robotics and Automation, Hong Kong, 2014, pp. 4015–4020.
J. Schwendner, S. Joyeux, F. Kirchner, Using embodied data for localization and mapping, Journal of Field Robotics 31 (2) (2014) 163–295.
M. Moussaïd, D. Helbing, G. Theraulaz, How simple rules determine pedestrian behavior and crowd disasters, Proceedings of the National Academy of Sciences of the United States of America 108 (17) (2011) 6884–6888.
M. Batty, Predicting where we walk, Nature 388 (6637) (1997) 19–20.
P. Schrater, D. Knill, E. Simoncelli, Mechanisms of visual motion detection, Nature Neuroscience 3 (1) (2000) 64–68.
Acknowledgment
This work was supported by the Centre of Excellence in Machine Learning and Big Data Analysis, Tezpur University, funded by the Ministry of HRD, Govt. of India.
© 2019 Springer Nature Singapore Pte Ltd.
Kakoty, N.M., Mazumdar, M., Sonowal, D. (2019). Mobile Robot Navigation in Unknown Dynamic Environment Inspired by Human Pedestrian Behavior. In: Panigrahi, C., Pujari, A., Misra, S., Pati, B., Li, KC. (eds) Progress in Advanced Computing and Intelligent Engineering. Advances in Intelligent Systems and Computing, vol 714. Springer, Singapore. https://doi.org/10.1007/978-981-13-0224-4_40