Introduction

Robots are becoming an integral part of our daily lives, from autonomous cars to drones and service robots. An intelligent robot possesses the ability to plan, organize, and adapt to complex environments without human intervention (Li et al. 2019). To achieve autonomous navigation, a robot requires a reliable estimate of its own position and pose, which serves as the foundation for exploring its surroundings and navigating them effectively. One of the essential requirements for autonomous robots is the ability to navigate through an unknown environment while avoiding obstacles and reaching their destination safely. Localization and mapping are crucial aspects of robot navigation, allowing robots to understand their surroundings and estimate their position accurately. The Simultaneous Localization and Mapping (SLAM) algorithm is a popular approach for robot localization and mapping in unknown environments (Wu et al. 2021).

In pursuit of greater precision and effectiveness, SLAM research now focuses on improving computational efficiency, data fusion, and loop closure. Despite setbacks in early investigations, important advances have been made in collaborative mapping, map building, localization, navigation, and related fields. Published work has investigated SLAM methods that combine acoustic and visual information, leading to increased localization and mapping precision and robustness. An inertial navigation system (INS) determines a carrier's position by double integration of its acceleration over time, providing considerable benefits in autonomy and concealment. As robotics technology progresses, so do the supporting algorithms and hardware, providing the groundwork for intelligent indoor robot mobility and localization (Yu et al. 2019). Nevertheless, technological constraints have restricted the broad adoption of indoor robots for tracking, routing, and localization in unfamiliar areas. Hence, precise real-time pose data are essential for fully autonomous navigation and mapping in any indoor environment (Dai 2022).

SLAM-based robot localization and navigation algorithms have been a topic of active research in robotics for several decades, and many techniques and approaches have been proposed and tested in various scenarios. SLAM algorithms typically consist of two components: mapping and localization. The mapping component involves building a map of the environment using sensor data, such as laser range finders, cameras, or sonar sensors. The localization component involves estimating the robot’s position within the mapped environment, often using probabilistic techniques such as particle filters.

The SLAM algorithm has been successfully applied in various scenarios, such as autonomous driving, indoor navigation, and exploration missions. For example, the SLAM algorithm has been used in autonomous vehicles for mapping the environment and estimating the vehicle’s position and orientation, allowing the vehicle to navigate safely and reach its destination. Similarly, in indoor environments, SLAM-based algorithms have been used for robot navigation, enabling the robot to move through dynamic and unstructured environments, avoiding obstacles and reaching its goal.

Recent advances in deep learning-based SLAM algorithms have also shown promising results, using neural networks to learn the mapping and localization tasks. For example, deep learning-based SLAM algorithms have been used in indoor navigation scenarios with RGB-D cameras, allowing the robot to navigate through an environment with varying lighting conditions (Thrun and Montemerlo 2006).

The aim of this research paper is to present a comprehensive study of the SLAM algorithm and its use in robot localization and navigation. This paper will discuss various SLAM techniques, including the extended Kalman filter (EKF) and GraphSLAM algorithms, and their applications in probabilistic estimation of the robot’s position and orientation. The paper will also explore different path-planning techniques that can be used with the map created by the SLAM algorithm to generate collision-free paths for the robot to navigate toward its goal. Additionally, the paper will discuss recent advances in deep learning-based SLAM algorithms and their potential applications in robot navigation.

According to Thrun and Montemerlo (2006), SLAM is a fundamental problem in robotics, and its solution is essential for robot navigation in unknown environments. This research paper aims to contribute to the existing body of research on SLAM-based robot localization and navigation algorithms, providing insights into different SLAM techniques and their applications in various scenarios (Wang et al. 2003).

Simultaneous Localization and Mapping (SLAM) is a popular technique used in robotics to address the problem of robot navigation and localization. The technique allows a robot to build a map of its environment and simultaneously locate itself within that map, without relying on external sensors or prior knowledge of the environment. The SLAM problem is complex, as it requires the robot to integrate data from multiple sensors and make inferences about the environment while simultaneously updating its own position estimate.

SLAM-based robot localization and navigation algorithms have been an active area of research in recent years, driven by the increasing demand for autonomous robots in applications such as search and rescue, agriculture, and transportation. The key objective of this research is to develop algorithms that are robust, accurate, and efficient in real-world environments (Wang et al. 2007).

There are several different approaches to SLAM-based localization and navigation, including graph-based approaches, filter-based approaches, and particle filters. Graph-based approaches use graph theory to model the environment, while filter-based approaches use Bayesian filters to estimate the robot's position. Particle filters represent the posterior over the robot's pose with a set of weighted samples and use that representation to estimate the robot's position.
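As a concrete illustration of the filter-based family, the following minimal sketch implements a one-dimensional histogram (discrete Bayes) filter for a robot on a cyclic corridor of door/wall cells; the world layout and the hit/miss probabilities are illustrative assumptions, not values from the works cited here.

import numpy as np

def predict(belief, step):
    # Motion update: shift the belief by `step` cells (cyclic world).
    return np.roll(belief, step)

def update(belief, world, measurement, p_hit=0.8, p_miss=0.2):
    # Measurement update: reweight cells by the observation likelihood.
    likelihood = np.where(world == measurement, p_hit, p_miss)
    posterior = belief * likelihood
    return posterior / posterior.sum()

world = np.array([1, 0, 0, 1, 0])   # door (1) / wall (0) pattern
belief = np.full(5, 0.2)            # uniform prior over five cells
belief = update(belief, world, 1)   # robot senses a door
belief = predict(belief, 1)         # robot moves one cell to the right
belief = update(belief, world, 0)   # robot senses a wall
print(belief)                       # mass concentrates on the cells just past a door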

Recent research has focused on developing SLAM algorithms that can handle large-scale environments, operate in real-time, and deal with dynamic obstacles. Other areas of research include exploring the use of machine learning techniques to improve SLAM performance and developing SLAM algorithms that are robust to sensor noise and other sources of uncertainty.

Current SLAM research focuses on speeding up computation, improving data association, and detecting loop closures (Kun et al. 2019). Although initial exploration in this area was slow, advances have been made in mapping, localization, and navigation. Some studies combine sonar and visual information in SLAM, improving the accuracy of positioning and mapping (Dai 2022). Another method, inertial navigation, integrates measured motion to determine position; it is attractive because it is self-contained and difficult to detect. As robotic technology evolves, so do the control systems that help robots navigate indoors. However, challenges remain for indoor mobile robots, such as mapping and moving around unfamiliar areas. Accurate and fast localization of the robot indoors is essential for unassisted movement (Alhmiedat et al. 2023).

An important contribution of this research is its detailed analysis of simultaneous localization and mapping (SLAM) algorithms and their applications in robot localization and navigation. Traditional methods such as the extended Kalman filter (EKF) and GraphSLAM are discussed alongside recent advances such as deep learning-based approaches, providing valuable insights into the state of the art in robot navigation technology. Furthermore, by addressing the latest challenges and developments in SLAM research, the study contributes to ongoing efforts to improve the ability of autonomous robots to navigate safely and efficiently in unknown areas.

Overall, the research on SLAM-based robot localization and navigation algorithms is critical for advancing the capabilities of autonomous robots and enabling them to operate in a wide range of environments and applications.

Literature review

Robot navigation and positioning is a critical task that involves identifying the robot's location and determining its movement in a given environment. Many methods have been implemented for precise robot navigation and localization, notably visual navigation and localization, electromagnetic guidance, GPS-based global positioning, and laser-based navigation and tracking. These strategies use several sensors to help robots navigate correctly in various settings.

Recent research has proposed several algorithms to achieve high-precision robot navigation and positioning. One such algorithm is the square-root unscented Kalman filter proposed in the literature (Wang et al. 2003). This method broadens the limited applicability of the unscented Kalman filter (UKF) and applies to Gaussian regression models. The unscented particle filter merges the standard UKF with a sampling technique, the particle filter, to produce a higher-accuracy estimator. Furthermore, this work indicates that the UKF can be used to solve the localization problem under nonlinear transitions (Wen et al. 2019).
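The core of any UKF variant is the unscented transform, which propagates deterministically chosen sigma points through a nonlinearity instead of linearizing it. The sketch below is a minimal textbook version; the scaling parameter kappa and the polar-to-Cartesian example are illustrative assumptions, not the configuration used in the cited works.

import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    # Sigma points: the mean plus/minus scaled columns of the covariance square root.
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    # Weights: kappa trades off the center point against the spread points.
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Propagate through the nonlinearity and re-estimate the first two moments.
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: push a range-bearing Gaussian through a polar-to-Cartesian conversion.
f = lambda s: np.array([s[0] * np.cos(s[1]), s[0] * np.sin(s[1])])
m, P = unscented_transform(np.array([1.0, 0.5]), np.diag([0.01, 0.04]), f)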

A different strategy employed in robot localization uses particle filtering coupled with the robot's odometry and sensor models. In this method, the pose likelihood is represented by the spatial dispersion of a particle set, and the robot's location and the distribution of obstacles can be estimated from the resulting probability densities. Over several iterations, the pose distribution is estimated accurately. The sampling procedure has been optimized in previous research by reducing the number of computational samples and by introducing a Monte Carlo technique based on Gaussian mixture models. Existing research also suggests improving the particle filter by merging a fuzzy association mechanism with the particle filter's resampling procedure, an approach capable of lowering the impact of numerous unknown factors (Liu et al. 2021).
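A minimal sketch of one such particle-filter localization step is given below, assuming a planar pose (x, y, theta), a simple additive-noise odometry model, and a caller-supplied sensor_likelihood function; the noise scales and resampling threshold are illustrative assumptions.

import numpy as np

def mcl_step(particles, weights, odom, measurement, sensor_likelihood, rng):
    # Motion update: apply odometry (dx, dy, dtheta) with added Gaussian noise.
    noise = rng.normal(scale=[0.05, 0.05, 0.01], size=particles.shape)
    particles = particles + odom + noise
    # Measurement update: reweight each particle by the observation likelihood.
    weights = weights * np.array([sensor_likelihood(p, measurement) for p in particles])
    weights = weights / weights.sum()
    # Resample when the effective sample size drops below half the population.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights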

Despite the challenges, several robots have achieved autonomous positioning and navigation using techniques like SLAM, computer vision-based positioning, and FastSLAM. SLAM (simultaneous localization and mapping) technology is widely used to achieve autonomous navigation tasks for robots in unknown environments. For instance, the literature uses SLAM technology to provide map updates for autonomous cars and enable them to navigate freely in any part of the world. In a comparable vein, other research investigates SLAM in indoor spaces using laser ranging and monocular vision to perform autonomous navigation missions for robots (Li et al. 2019).

A different strategy used for robot positioning and localization is FastSLAM. This method builds maps and determines the robot's position using recursive estimation to compute the robot's pose and the full probability distribution over map features. An optimized resampling approach has been proposed in the literature to split particles into higher-quality offspring, preserving the population's diversity and reducing degeneracy (Wang et al. 2019).
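The resampling step is where FastSLAM implementations typically fight degeneracy. The following is a generic low-variance (systematic) resampling sketch of the kind commonly used; it is a textbook variant, not the specific optimized procedure proposed in the cited work.

import numpy as np

def low_variance_resample(particles, weights, rng):
    n = len(particles)
    # One random draw yields n evenly spaced pointers into the weight CDF,
    # which keeps more distinct particles alive than independent draws.
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    idx = np.searchsorted(cumulative, positions)
    return particles[idx], np.full(n, 1.0 / n)   # survivors, uniform weights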

Another prominent option is computer vision-based positioning, which uses image frames captured by a camera to determine the robot's pose. This approach extracts interest points and features from every image frame using detection algorithms, from which the translation and rotation matrices between the object's coordinate frame and the camera's coordinate frame can be computed. Finally, the target's location can be identified by a dedicated procedure that exploits the correspondence between the locations of those interest points and features across images.

Robot navigation and positioning is a challenging task that requires the use of different sensors and algorithms to achieve high-precision localization and navigation. Techniques like SLAM, computer vision-based positioning, and Fast SLAM have enabled robots to navigate autonomously in unknown environments, providing a promising future for robotics (Dai 2022).

The researchers developed a navigation system based on SLAM. This system allows robots to locate themselves and move around large indoor spaces. Using SLAM techniques, social robots can explore unfamiliar indoor environments, mapping on the go and tracking their location (Alhmiedat et al. 2023).

Many SLAM algorithms share similar steps in their design. Each time the robot takes a step, it gathers data from its onboard sensors and extracts features from that data. These features are compared with features previously detected by the robot, which are stored in its internal map. To improve this matching process, the robot uses its predicted position from the previous step. Once correspondences between features are identified, they are integrated into the map: matched features refine the robot's current position estimate, and new, previously unseen features are added to the map based on that estimate. This iterative process is continuous, helping the robot to localize and map its surroundings at the same time, as sketched below (Yarovoi and Cho 2024).
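A hedged Python sketch of this generic iteration follows; every function name is an illustrative placeholder to be filled in by a concrete system, not an existing library API.

def slam_iteration(pose, world_map, measurement, extract, predict, associate, correct):
    features = extract(measurement)
    predicted_pose = predict(pose)                    # motion prediction from the last step
    matched, unmatched = associate(features, world_map, predicted_pose)
    pose = correct(predicted_pose, matched)           # matched features refine the pose
    # Previously unseen features are anchored at the corrected pose and added.
    world_map.extend((pose, f) for f in unmatched)
    return pose, world_map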

Case study—different techniques and algorithms

Graph SLAM

Over the past few decades there have been various efforts to record real environments using mobile sensing systems. These comprise traditional mapping by air, land, and sea, covering indoor, outdoor, and underground environments. Numerous purposes drove the gathering of those maps, including artistic visualization, monitoring, academic evaluation, and robot navigation. Owing to these diverse aims, the fundamental research on this topic has come from several scientific areas, such as photogrammetry, artificial intelligence, computer graphics, and automation.

The concept behind GraphSLAM is straightforward: it uses a sparse graph to represent soft constraints derived from data and then resolves these constraints into a consistent global estimate of the map and robot path. Although the constraints are generally nonlinear, they are linearized during resolution and then solved using standard optimization techniques. GraphSLAM builds sparse graphs of nonlinear constraints and populates sparse information matrices of linearized constraints. It can handle an extensive number of features and incorporates GPS data into large-scale modeling problems. The algorithms were evaluated on data acquired by a robotic vehicle built to construct 3D renderings of towns and cities. Although it has not yet been released in its present form, GraphSLAM builds on Lu and Milios' foundational method from 1997 and is strongly connected to Folkesson and Christensen's work on Graphical SLAM. Because graphs of constraints are central to the whole SLAM problem, the name GraphSLAM was proposed (Thrun and Montemerlo 2006).
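To make the constraint-graph idea concrete, the toy sketch below solves a one-dimensional pose graph with two odometry edges and one loop-closure edge; in 1D the constraints are already linear, so the normal equations can be assembled and solved directly. All numbers are invented for illustration.

import numpy as np

# Edges (i, j, z, w): "pose j minus pose i should equal z" with weight w.
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 1.9, 2.0)]  # last edge: loop closure
n = 3
H = np.zeros((n, n))   # information matrix (sparse in real systems)
b = np.zeros(n)        # information vector
for i, j, z, w in edges:
    # Each edge contributes to the normal equations of the least-squares problem.
    H[i, i] += w
    H[j, j] += w
    H[i, j] -= w
    H[j, i] -= w
    b[i] -= w * z
    b[j] += w * z
H[0, 0] += 1e6         # gauge fix: anchor the first pose at zero
x = np.linalg.solve(H, b)
print(x)               # ~[0.0, 0.96, 1.92]: the loop closure pulls the poses back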

Graph-based representations have been used in SLAM filtering algorithms as well. In 1997, Csorba created an information filter that maintained relative information among triples of landmarks, paving the way for techniques with logarithmic processing requirements. Jeong and Lee (2005) created a comparable information filter but did not clarify exactly how landmark-landmark information links are established. This technique was subsequently improved by Leonard and Newman with an efficient alignment methodology, which was implemented and applied on an autonomous underwater vehicle using synthetic-aperture sonar. The SLAM posterior is represented in a sparse hierarchy by the thin junction tree filter (Chen 2021), and Wang et al. (2019) created an analogous tree decomposition of the information matrices for efficient estimation. Covariance intersection is a scalable fusion approach that has been employed on NASA's Mars rover fleet. The sparse extended information filter (SEIF) method (Thrun and Montemerlo 2006) has been demonstrated to be fast enough to run online on small data sets.

Neither of those methods, however, addressed how to integrate sporadic GPS readings into SLAM. Figure 1 depicts the GraphSLAM method with an illustration of four robot poses (x1,…,x4) and a pair of map features (m1, m2). The graph has two kinds of edges: motion edges, which connect successive robot poses, and measurement edges, which link poses to the features observed from them. Each edge is a nonlinear constraint corresponding to the negative log-likelihood of the measurement and motion models. These are information constraints that can be added to the graph rapidly without significant processing. As the figure illustrates, the sum of all constraints yields a nonlinear least-squares problem (Thrun and Montemerlo 2006).

Fig. 1 Graph SLAM

GraphSLAM linearizes the collection of constraints to compute the map posterior, resulting in a sparse information matrix and an information vector. Because the matrix is sparse, GraphSLAM can apply a variable elimination technique to reduce the graph to a simpler one defined only over robot poses. The posterior over the robot's path is then computed using standard inference strategies. GraphSLAM also determines marginal posteriors over the map features; however, the full map posterior is seldom recovered because of its quadratic size.
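The variable-elimination step can be illustrated with a small dense sketch: marginalizing the map block out of the information form via the Schur complement, which is the standard algebraic device behind this reduction. The partitioning convention (pose variables first, feature variables after) is an assumption made for the example; real systems exploit sparsity rather than dense inverses.

import numpy as np

def marginalize_features(H, b, n_pose):
    # Eliminate all variables after index n_pose from the system H x = b.
    Hpp, Hpm = H[:n_pose, :n_pose], H[:n_pose, n_pose:]
    Hmp, Hmm = H[n_pose:, :n_pose], H[n_pose:, n_pose:]
    bp, bm = b[:n_pose], b[n_pose:]
    Hmm_inv = np.linalg.inv(Hmm)
    # Schur complement: condition the pose block on the eliminated map block.
    H_red = Hpp - Hpm @ Hmm_inv @ Hmp
    b_red = bp - Hpm @ Hmm_inv @ bm
    return H_red, b_red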

Robot positioning and navigation method using SLAM

Kalman filtering-based attitude fusion (AF) algorithm

ORB-SLAM is a flexible SLAM system that offers monocular, stereo, and RGB-D modes and can be used across a variety of situations, both large- and small-scale, indoors and out. It employs a multi-threaded strategy, similar to the classic PTAM technique, dividing tracking, local mapping, and loop detection across three parallel threads, resulting in improved effectiveness and throughput (Yang et al. 2022).

The loop-closure detection process begins by comparing the current frame against all images in the list of candidate loop-closure keyframes and scoring them according to the procedure's parameters. When the score exceeds a threshold, the loop closure is accepted and the back-end optimization process starts. This proceeds with a correction phase, which involves halting the local mapping thread and performing global bundle adjustment to complete the loop closure and optimize the keyframe map.

The updated ORB-SLAM method using an RGB-D camera includes key features such as real-time map construction, 3D dense point-cloud map creation, loosely coupled pose estimation with an IMU, map saving and loading, and visual relocalization after map loading.

The visual SLAM procedure provides six-degree-of-freedom (position and orientation) data about the machine at any moment via its vision sensor. Nevertheless, the camera has a low update rate, is sensitive to lighting, and easily loses track of features, leading to large drift error. An IMU offers a higher update rate and better short-term orientation accuracy than visual SLAM, although its position error accumulates over time. The Kalman-filter-based visual-inertial attitude fusion estimation technique is explained in depth in the following part, as illustrated in Fig. 2.

Fig. 2 AF-based visual-inertial flowchart
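A minimal sketch of this fusion idea, reduced to a single orientation angle, is given below: the gyro propagates the estimate at high rate but drifts, while the occasional visual SLAM orientation corrects it through a scalar Kalman update. The noise constants are illustrative assumptions, not values from the cited system.

class AttitudeFuser:
    def __init__(self, q_gyro=1e-4, r_vision=1e-2):
        self.theta, self.P = 0.0, 1.0        # angle estimate and its variance
        self.q, self.r = q_gyro, r_vision    # process and measurement noise

    def predict(self, gyro_rate, dt):
        # High-rate IMU propagation: integrate angular velocity, grow variance.
        self.theta += gyro_rate * dt
        self.P += self.q * dt

    def correct(self, vision_theta):
        # Low-rate visual update: standard scalar Kalman gain and correction.
        k = self.P / (self.P + self.r)
        self.theta += k * (vision_theta - self.theta)
        self.P *= (1.0 - k)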

Visual odometry-based robot localization

To provide an accurate solution to the localization problem, the robot's motion must be estimated. This part describes the construction of a stereo camera-based visual motion estimation method. The principal sensor for this purpose is the Visual-Inertial (VI-) Sensor, which provides fully calibrated and time-synchronized IMU and stereo camera data streams. The approach's key processing modules are depicted in Fig. 3 (Perez-Grau et al. 2016).

Fig. 3 Primary processing modules of the visual odometry algorithm

Each stereo pair is first corrected for lens distortion in every frame, after which a collection of interest points is detected in both the left and right pictures. Those interest points are produced using the features from accelerated segment test (FAST) method, which recognizes corners quickly. However, the scene may sometimes be highly uniform, or a large number of features may cluster in one highly detailed region of the image, which can cause inaccurate measurements during subsequent stages of the algorithm. To address this, a limited number of features is extracted from evenly divided regions of the image, often called buckets. Furthermore, a fixed number of features is chosen from every bucket according to the FAST corner response, so that only the most robust elements are retained. The combination of these two steps yields a fair distribution of the strongest selected features across both images. Our technique divides the picture into six buckets (two rows and three columns). Figure 4 depicts an illustration of the detected features and those retained by the bucketing approach (Rosten and Drummond 2006).

Fig. 4 Visual odometry feature detection
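A hedged OpenCV sketch of this detection-plus-bucketing stage follows; the 2 x 3 grid mirrors the description above, while the function name and the per-bucket quota are assumptions made for illustration.

import cv2

def detect_bucketed_fast(gray, rows=2, cols=3, per_bucket=50):
    # Detect FAST corners over the whole single-channel image.
    detector = cv2.FastFeatureDetector_create()
    keypoints = detector.detect(gray)
    h, w = gray.shape
    buckets = {}
    for kp in keypoints:
        # Assign each keypoint to its bucket by integer cell coordinates.
        cell = (int(kp.pt[1] * rows / h), int(kp.pt[0] * cols / w))
        buckets.setdefault(cell, []).append(kp)
    selected = []
    for kps in buckets.values():
        # Keep only the strongest corner responses in each bucket.
        kps.sort(key=lambda k: k.response, reverse=True)
        selected.extend(kps[:per_bucket])
    return selected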

To solve the localization challenge, the motion of the robot must be accurately estimated. This is possible thanks to a vision-based tracking system that employs a stereo camera pair and a VI sensor. Using features from the accelerated segment test (FAST) technique, the procedure begins by collecting interest points in the left and right pictures of the stereo pair. The image is partitioned into six buckets to prevent biased feature distributions, and a bounded number of features is retrieved from every bucket. The Binary Robust Independent Elementary Features (BRIEF) descriptor, considered highly efficient and successful for feature tracking under modest visual changes, is then employed to find correspondences among each set of features. A strict matcher is used to reject asymmetrical matches, and epipolar geometry is exploited to reduce the number of matching candidates. To discover additional strong features, bilateral feature pairing is carried out across subsequent pictures. The EPnP methodology is then employed to solve a Perspective-n-Point (PnP) problem and compute the pose change between the two frames. To reduce the impact of accumulated odometry error, a keyframing strategy is used. Based on the integrated IMU data, the odometry subsystem calculates the camera's roll and pitch angles, which are merged with the visual prediction in a loosely coupled way to generate up-to-date position and orientation estimates relative to the starting pose (Perez-Grau et al. 2016).
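The matching and pose-recovery steps can be sketched with OpenCV as follows. Cross-check matching plays the role of the strict mutual-consistency test, and cv2.solvePnPRansac with the EPnP flag recovers the pose change while rejecting outliers; the argument layout (binary descriptors aligned row-for-row with the previous frame's triangulated 3D points and the current frame's 2D points) is an assumption made for illustration.

import cv2
import numpy as np

def estimate_pose_change(desc_prev, desc_cur, pts3d_prev, pts2d_cur, K):
    # Cross-check matching keeps only mutually best binary-descriptor pairs.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_prev, desc_cur)
    obj = np.float32([pts3d_prev[m.queryIdx] for m in matches])
    img = np.float32([pts2d_cur[m.trainIdx] for m in matches])
    # EPnP estimates the camera pose from 3D-2D correspondences;
    # RANSAC additionally rejects outlier matches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, flags=cv2.SOLVEPNP_EPNP)
    return ok, rvec, tvec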

Discussion

Simultaneous localization and mapping (SLAM) is a well-known problem in robotics, which involves constructing a map of an unknown environment while simultaneously estimating the robot’s position within that environment. The goal of SLAM is to allow a robot to navigate in an unknown environment and accomplish tasks while avoiding obstacles. SLAM-based localization and navigation algorithms have been widely researched in the field of robotics, and many different approaches have been proposed.

One popular approach to SLAM is the use of visual odometry, which estimates the robot’s motion by analyzing the motion of visual features in a sequence of images. This approach can be used with a single camera or stereo cameras, and it has been shown to be effective in various indoor and outdoor environments. Visual odometry is often combined with other sensors, such as inertial measurement units (IMUs), to improve the accuracy and robustness of the system.

Another popular approach is to use laser range finders or 3D cameras to construct a map of the environment and then use that map to estimate the robot's position. This approach is known as mapping-based localization. It has been shown to be effective in environments where visual odometry is not reliable, such as in low-light conditions or environments with few distinctive visual features.

One of the best algorithms for addressing the navigation problem is GraphSLAM. GraphSLAM tackles the offline (full) SLAM problem by gathering all information acquired during mapping and resolving it into a map once the robot's run is finished. GraphSLAM does this by first mapping the input onto a sparse graph of constraints and then linearizing it using a Taylor expansion to obtain an information-form description. Exact transformations are used to simplify the resulting information matrix, eliminating the map variables from the optimization problem. A common optimization approach, such as conjugate gradient, is then used to solve the optimization problem, as sketched below.
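As a tiny illustration of that final step, the sketch below solves the damped information system from the earlier 1D pose-graph example with SciPy's conjugate gradient solver; the diagonal damping stands in for an anchoring prior, and all numbers are illustrative.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# The 3-pose toy information matrix; small damping makes it positive definite,
# as an anchoring prior would in practice.
H = csr_matrix(np.array([[ 3.0, -1.0, -2.0],
                         [-1.0,  2.0, -1.0],
                         [-2.0, -1.0,  3.0]]) + 1e-3 * np.eye(3))
b = np.array([-4.8, 0.0, 4.8])
x, info = cg(H, b)   # info == 0 signals convergence
print(x)             # ~[-0.96, 0.00, 0.96]: the zero-mean toy solution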

GraphSLAM reconstructs the map from the path estimate by solving a series of small-scale optimization problems, one for each feature. Consistent maps with 10^8 or more features can be constructed by repeating the linearization and optimization steps. This strategy builds on a long heritage of offline SLAM computations, all of which are founded upon the realization that the full SLAM problem corresponds to a sparse spring-mass system.

SLAM-based localization and navigation algorithms often use a probabilistic framework, such as a Kalman filter or a particle filter, to estimate the robot's position and uncertainty. These algorithms are capable of handling noisy sensor measurements and can update the robot's position estimate in real time as new sensor measurements are acquired.

One important aspect of SLAM-based localization and navigation algorithms is the ability to handle loop closures. Loop closures occur when the robot revisits a previously visited location, and the algorithm must be able to detect this and update the map and position estimates accordingly. This is a challenging problem, as loop closures may occur under different lighting and environmental conditions.

In recent years, deep learning approaches have also been applied to SLAM-based localization and navigation algorithms. These approaches use neural networks to learn features from sensor data and can improve the accuracy and robustness of the system. However, these approaches require large amounts of training data and computational resources, which may not be feasible in some applications.

Finally, SLAM-based localization and navigation algorithms are an active area of research in robotics, with many different approaches proposed. These algorithms have the potential to enable robots to navigate in unknown environments and perform complex tasks, and advances in this field are likely to have significant impacts on robotics and related fields.

Conclusion

SLAM-based robot localization and navigation algorithms are critical in enabling autonomous robots to navigate and map unknown environments in real time. The algorithms are designed to process sensor data and determine the robot's position and orientation in real time, allowing it to avoid obstacles, plan optimal paths, and accomplish complex tasks.

The research on SLAM-based algorithms has shown that there are various approaches to solving the localization and navigation problem, each with its strengths and limitations. Some of the commonly used techniques include visual SLAM, graph-based SLAM, and particle filters. Visual SLAM, in particular, has gained significant attention in recent years due to advancements in computer vision and machine learning.

Overall, SLAM-based algorithms have demonstrated impressive results in real-world applications, such as autonomous vehicles, drones, and mobile robots. However, there is still much research to be done in developing more accurate and robust algorithms that can handle complex environments and large-scale maps. Additionally, there is a need to optimize the computational efficiency of the algorithms to enable real-time operation on low-power devices.

The research on SLAM-based robot localization and navigation algorithms is an active field with significant potential for real-world applications. Continued research and development of these algorithms will enable the creation of more capable and intelligent robots that can operate autonomously in a variety of environments.

While this paper provides a comprehensive overview of SLAM algorithms and their applications, it is important to acknowledge a few limitations. First, the paper mainly focuses on existing methods and their applications, and may overlook emerging methods that could lead to new developments in robotic navigation. Furthermore, the discussion revolves mainly around technical aspects, with little attention to the security, privacy, and social implications of autonomous navigation. Future research could address these limitations by exploring alternative SLAM methods beyond traditional systems and by investigating how SLAM can be combined with other emerging technologies, such as artificial intelligence, edge computing, and the Internet of Things (IoT), to build flexible navigation systems that operate in a variety of dynamic environments. Finally, strategies for improving the real-time performance and robustness of SLAM algorithms can significantly improve the reliability of autonomous robot navigation systems in real-world situations.