
1 Introduction

An autonomous mobile robot (AMR) is a robot that can navigate an indoor or outdoor environment without direct supervision by an operator. This autonomy is achieved through a set of on-board sensors and maps that allow the robot to understand, interpret and react to its surroundings. Autonomous mobile robots are extensively used in many sectors, such as material handling in light and heavy industries, household tasks, and underwater and outer-space applications.

Robot environment mapping is the process of generating a 2D or 3D model of the robot's real working environment from data obtained by its on-board sensors [1]. Environment mapping is one of the essential capabilities in mobile robotics for localization, positioning, and collision-free autonomous navigation [2]. Most mobile robots are equipped with an Inertial Measurement Unit (IMU) containing gyroscopes, accelerometers and a magnetometer to measure the orientation, angular velocity and acceleration of the robot, which are used for localization. Localization based only on such inertial navigation systems results in navigation errors because they suffer from integration drift; small errors in the measured acceleration and angular velocity accumulate into position errors during localization [3].
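To make the effect of integration drift concrete, the short Python sketch below double-integrates a small constant accelerometer bias; the bias value and sample rate are illustrative assumptions, not measurements from this work.

import numpy as np

# Illustrative only: a constant accelerometer bias of 0.05 m/s^2 sampled at
# 100 Hz for 60 s, double-integrated to show how position error grows.
dt = 0.01                      # sample period (s), assumed
bias = 0.05                    # accelerometer bias (m/s^2), assumed
t = np.arange(0.0, 60.0, dt)

velocity_error = np.cumsum(bias * dt * np.ones_like(t))   # first integration
position_error = np.cumsum(velocity_error * dt)           # second integration

print("Position error after 60 s: %.1f m" % position_error[-1])
# The error grows roughly as 0.5 * bias * t^2, i.e. about 90 m after one minute.

Because the error grows quadratically with time, inertial sensing alone is insufficient for localization, which motivates mapping with exteroceptive sensors.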

LiDAR sensors have traditionally been widely used for mobile robot mapping and navigation [4, 5]. A 2D LiDAR scans and detects only objects that intersect its single horizontal scanning line above floor level [4]. Structures such as holes or staircases therefore cannot be detected by LiDAR. This limitation can be overcome by using a Kinect-style depth camera, which can detect objects both above and below the axis of the scanning line. The objective of the current work is to scan an unknown indoor environment using an Orbbec Astra camera (a Kinect-style depth camera) and ROS in order to generate a map of the environment and navigate the mobile robot from a starting location to target locations without colliding with obstacles. Experimental results show that the generated map of the indoor environment is closer to the real environment than a map generated using LiDAR, and that the mobile robot reaches the target from the start location without colliding with obstacles.

2 System Overview and Implementation

2.1 Mobile Robot Setup and Block Diagram

The Turtlebot2 mobile robot, which has a Kobuki base with a maximum speed of 0.65 m/s, is interfaced with a laptop running the Robot Operating System (ROS) and connected to an Orbbec Astra camera. The Orbbec Astra (Kinect-style) camera is mounted on the mobile robot for obstacle detection and mapping. The experimental setup, consisting of the Turtlebot mobile robot, the Astra camera and the laptop with ROS, is shown in Fig. 1.

Fig. 1 Mobile robot and Astra camera setup for mapping and navigation

The Orbbec Astra camera is mounted on the Turtlebot2 mobile robot, which is teleoperated around the environment to be mapped. The camera detects the objects around the robot and sends the obstacle information to ROS. The SLAM Gmapping technique is used to map the environment, and the map can be visualized using the RViz tool in ROS. The generated map is then used to navigate the mobile robot from a start point to a goal point without colliding with obstacles. Figure 2 shows the block diagram of the methodology used for mapping and navigation of the mobile robot.

Fig. 2 Block diagram of unknown indoor environment mapping using the Orbbec Astra camera

2.1.1 Orbbec Astra Camera

The Orbbec Astra camera is a 3D depth camera with VGA color and long-range depth sensing up to 8 m [4]. The device contains an IR sensor, an RGB sensor, microphones, an advanced eye-protection mechanism and a coded-pattern projector. It is widely used for mapping and navigation applications in mobile robots running ROS (Fig. 3).

Fig. 3 Orbbec Astra camera (image source: Astra Series – Orbbec, orbbec3d.com)
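As a brief illustration of how depth data from such a camera can be consumed in ROS, the following rospy sketch subscribes to a depth image topic and converts it with cv_bridge. The topic name /camera/depth/image_raw is an assumption based on typical astra_camera launch files and may differ in a given setup.

#!/usr/bin/env python
# Minimal sketch: read depth frames from the Astra camera in ROS.
# The topic name below is an assumption; check `rostopic list` on your setup.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def depth_callback(msg):
    # Convert the ROS Image message to a NumPy array of depth values.
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    rospy.loginfo('Depth at image centre: %s',
                  depth[depth.shape[0] // 2, depth.shape[1] // 2])

rospy.init_node('astra_depth_listener')
rospy.Subscriber('/camera/depth/image_raw', Image, depth_callback)
rospy.spin()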

2.1.2 Robot Operating System

The Robot Operating System (ROS) is open-source robotic middleware for the development of robotic systems. The main function of ROS is to provide communication between the user, the robot and equipment external to the computer, such as sensors and cameras. A ROS system comprises a number of small, independent programs called ROS nodes that run concurrently, each of which communicates with the other nodes by sending or receiving messages. The messages can carry data, commands or other information required by the application. Nodes exchange this information with one another through ROS topics. The ROS master is responsible for establishing communication between the nodes and provides naming and registration services to the nodes in the ROS system. It also tracks the publisher and subscriber nodes of each topic [5, 6].
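To illustrate the publish/subscribe model described above, the following minimal rospy sketch combines a publisher and a subscriber on a hypothetical topic named /chatter in one script; in a real system each would normally run as its own node.

#!/usr/bin/env python
# Minimal publish/subscribe sketch (the topic name '/chatter' is hypothetical).
# In a real system the publisher and subscriber would be separate nodes.
import rospy
from std_msgs.msg import String

def on_message(msg):
    rospy.loginfo('Received: %s', msg.data)

rospy.init_node('chatter_demo')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rospy.Subscriber('/chatter', String, on_message)

rate = rospy.Rate(1)           # publish once per second
while not rospy.is_shutdown():
    pub.publish(String(data='hello from ROS'))
    rate.sleep()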

2.1.3 Gmapping Technique

In the present work, the Gmapping technique is used to map the environment. Gmapping is a simultaneous localization and mapping (SLAM) technique that runs in an unknown environment. It uses a Rao-Blackwellized Particle Filter (RBPF) and combines sensor data with the robot's pose to generate a 2D grid map of the environment without IMU information [7]. In Gmapping, the robot constantly updates the pose of each processed particle using the odometry estimate. During mapping, the first scan received is registered directly in the map. Afterwards, a scan is registered only if the linear or angular distance traversed by the robot since the last registration exceeds specific thresholds. Once a scan match is obtained, the pose estimate of each particle is corrected for each laser scan to build the map of the environment [8, 9].
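The scan-registration rule described above can be sketched conceptually as follows. This is not the gmapping source code but a simplified Python illustration; the threshold names mirror the package's linearUpdate and angularUpdate parameters, and the values shown are assumptions to be checked against the installed package.

import math

# Simplified sketch of Gmapping's scan-registration rule (not the actual
# gmapping implementation). Thresholds mirror the linearUpdate / angularUpdate
# parameters; the values here are illustrative assumptions.
LINEAR_UPDATE = 1.0    # metres travelled before a new scan is registered
ANGULAR_UPDATE = 0.5   # radians turned before a new scan is registered

last_pose = None       # (x, y, theta) of the last registered scan

def should_register(pose):
    """Return True if the new scan at `pose` should be added to the map."""
    global last_pose
    if last_pose is None:          # the first scan is always registered
        last_pose = pose
        return True
    dx, dy = pose[0] - last_pose[0], pose[1] - last_pose[1]
    dtheta = abs(pose[2] - last_pose[2])
    if math.hypot(dx, dy) > LINEAR_UPDATE or dtheta > ANGULAR_UPDATE:
        last_pose = pose
        return True
    return False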

2.1.4 RViz Tool

RViz is a visualization tool for ROS applications. It captures information from the laser scanners and replays the captured data in visual form. In the current work, RViz is used to display the generated map of the environment.

3 Flow Chart

3.1 Mapping Process

The Orbbec Astra camera mounted on the mobile robot detects obstacles in the environment and sends the obstacle information to ROS. This obstacle information, together with the odometry of the mobile robot, is used to generate the map of the environment using the SLAM Gmapping technique. The mapping process is as follows. First, roscore and the Gmapping node are started in ROS terminals; roscore establishes the connections between the Gmapping node, the Astra camera and the mobile robot [9, 10]. The Gmapping node receives obstacle information from the Astra camera as messages on the '/scan' topic and odometry information from the mobile robot in order to create the 2D grid map, as shown in Fig. 4. The mobile robot is teleoperated with the keyboard around the indoor environment to be mapped, and RViz is run in another terminal to visualize the mapping process.

Fig. 4 Gmapping
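Teleoperation during mapping amounts to streaming velocity commands while Gmapping listens on '/scan' and the odometry topic. The sketch below publishes a slow forward velocity on a Turtlebot2-style velocity topic; the topic name and speed are assumptions, and in the actual experiment the robot was driven with keyboard teleoperation instead.

#!/usr/bin/env python
# Sketch: drive the robot slowly forward while Gmapping builds the map.
# The velocity topic name is an assumption for a Turtlebot2/Kobuki setup;
# in the experiment the robot was teleoperated with the keyboard instead.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('simple_forward_drive')
pub = rospy.Publisher('/cmd_vel_mux/input/teleop', Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2     # m/s, well below the Kobuki maximum of 0.65 m/s

rate = rospy.Rate(10)  # 10 Hz command stream
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()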

The accuracy of the map is evaluated once the mobile robot completes the mapping process.

$$\text{Accuracy}\,(\%) = (a/b) \times 100$$
(1)

Equation (1) is used to compute the accuracy of the generated map with respect to the dimensions of the actual layout of the indoor environment, where 'a' is the total length of the map generated by the robot and 'b' is the total length of the actual environment. If the accuracy of the generated map is 80% or higher, the map is saved using the map saver command; if the accuracy is less than 80%, the robot is driven through the environment again to cover the unmapped regions. The detailed flow chart of the mapping process is shown in Fig. 5.

Fig. 5 Flow chart of mapping process
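As a small worked example of Eq. (1), the helper below computes the map accuracy and applies the 80% save threshold used in the flow chart. The lengths in the usage lines are illustrative placeholders, not measurements from this work.

def map_accuracy(mapped_length_m, actual_length_m):
    """Accuracy (%) of the generated map, per Eq. (1): (a / b) * 100."""
    return (mapped_length_m / actual_length_m) * 100.0

# Illustrative values only (not measurements from this experiment):
acc = map_accuracy(mapped_length_m=9.2, actual_length_m=10.0)
if acc >= 80.0:
    print('Accuracy %.1f%% - save the map (e.g. with map_server map_saver)' % acc)
else:
    print('Accuracy %.1f%% - continue mapping the unmapped regions' % acc)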

3.2 Navigation Process

The generated map is used to navigate the mobile robot from the start location to the target location without colliding with obstacles. The generated map and the robot position topics are loaded into the global planner. The robot's position on the map is set using the 2D Pose Estimate button, and the target location to be reached is indicated using the 2D Nav Goal button on the generated map. The global path planner then estimates a collision-free path from the start location to the target location. The detailed flow chart of the navigation process is shown in Fig. 6.

Fig. 6 Flow chart of navigation process
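In a typical ROS navigation stack the 2D Nav Goal button simply publishes a goal pose that the move_base action server plans towards; the sketch below sends an equivalent goal programmatically. It assumes the standard move_base action interface is running, and the goal coordinates are placeholders.

#!/usr/bin/env python
# Sketch: send a navigation goal programmatically (equivalent to the
# 2D Nav Goal button in RViz). Assumes the standard move_base action
# server is running; the goal coordinates below are placeholders.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0      # placeholder target (metres)
goal.target_pose.pose.position.y = 1.0
goal.target_pose.pose.orientation.w = 1.0   # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Navigation result state: %s', client.get_state())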

4 Experimental Results and Discussion

The mapping of the indoor environment was performed using the Orbbec Astra camera mounted on a Turtlebot2 mobile robot and ROS. The Orbbec Astra camera scans the environment and detects obstacles to collect information about the obstacles present in the indoor environment. Initially, the mobile robot with the Astra camera and ROS is teleoperated manually with the keyboard (over an 'ssh' session) through the environment to be mapped. While the mobile robot moves around, the Orbbec Astra camera collects information about the obstacles in the environment, which is sent to the Gmapping node through the ROS topic '/scan'. The Gmapping node combines the obstacle information received from the Astra camera with the mobile robot's odometry and generates the map of the indoor environment using the Gmapping technique. The map-generation process can be visualized using the RViz tool in ROS, as shown in Fig. 8. The map of the indoor environment generated using the Orbbec Astra camera, shown in Fig. 8, matches the real indoor environment shown in Fig. 7.

Fig. 7 Actual layout of environment

Fig. 8 Map generation visualized using RViz

The generated map of the environment is used to navigate the mobile robot safely from the start location to the target location without colliding with obstacles. The starting position of the robot is set on the generated map using the 2D Pose Estimate button, and the target is set using the 2D Nav Goal button. The mobile robot plans the route from start to target using the path-planning algorithm and navigates without colliding with obstacles. The green line in Fig. 9 indicates the navigation path of the mobile robot from the start to the target location; the path was also verified experimentally by navigating the Turtlebot mobile robot, as shown in Fig. 10.

Fig. 9 Navigation of mobile robot seen in RViz

Fig. 10 Navigation of Turtlebot robot

5 Conclusion

Navigation of a mobile robot in an unknown indoor environment is difficult without knowledge of the structure of the environment. To support collision-free navigation in an unknown indoor environment, the environment must first be mapped to obtain information about the obstacles present in it. In this work, the Orbbec Astra 3D depth camera is used to map the indoor environment by detecting objects both below and above the axis of the scanning line, whereas obstacles below the axis of the scanning line are missed in a normal LiDAR laser scan. Mapping with the Orbbec Astra camera therefore gives better accuracy than mapping with LiDAR, since the Astra camera also scans objects below the axis at which the sensor is mounted. The map generated using the Orbbec Astra camera and ROS has been used to navigate the mobile robot successfully from the start to the target location without colliding with obstacles.