
1 Introduction

According to the World Health Organization (WHO) in 2022, approximately 1 billion people are blind or have moderate-to-severe vision impairment, the majority of which (826 million cases) is attributed to unaddressed presbyopia, which causes near-vision impairment (NVI), and to uncorrected refractive errors. Other causes include conditions such as glaucoma, cataract, and diabetic retinopathy [1]. In low-income countries, congenital cataract is generally a primary cause of vision loss in children, while in middle-income countries retinopathy of prematurity is a more likely cause of vision impairment [2]. Robotics and artificial intelligence have advanced rapidly in recent years and underpin many breakthroughs in various fields, including medical research.

Azeta et al. developed an obstacle-avoidance robot vehicle that uses ultrasonic sensors to guide its movement. The robot is built around an Arduino UNO. The study describes the design and implementation of an autonomous obstacle-avoiding robot that can redirect itself whenever it detects an obstacle in its path [3]. Ultrasonic sensors are used to detect obstacles: when a sensor detects an obstacle in the robot's path, a command is sent to the microcontroller, which redirects the robot along a different path by actuating the motors according to the signal received. The robot showed decent performance under different lighting conditions. Kulkarni et al. proposed a system to assist visually impaired individuals with indoor navigation. The study highlights the importance of designing robots that are capable of engaging in interactions with human users and focuses on enhancing the user's experience while interacting with the robot [4]. An autonomous biped humanoid robot that detects obstacles to assist visually impaired individuals while traveling outdoors was developed by Ganguly & Paul [5]. An Arduino Nano is used as the processor, and an ultrasonic sensor is used to detect obstacles. They built a cost-effective model that can efficiently guide the visually impaired. Noise-filtering techniques can also be used to remove signal noise for efficient signal prediction [6, 7], and mathematical modeling and simulation can be performed on the signals received from sensors [8]. Such a robot can also serve as a source of entertainment and provide security. The development and testing of independently navigating robots for the blind has also been investigated by several research groups [9,10,11].

Finding a feasible, accessible, and general solution is therefore paramount, because blindness and visual impairment remain widespread and largely unaddressed problems. The current solutions available to people with visual impairment, such as sighted guides or guide dogs, are subject to limitations imposed by biological variables. Alternative technologies that are currently available do not possess the versatility needed for a wide range of general applications, and certain technical solutions rely on sensors that are inefficient in their functionality. Although the number of people affected by visual impairment continues to increase, technological advances are also accelerating quickly. In the proposed work, we design and develop an assistive robot whose sensor array, including ultrasonic, LIDAR, and infrared sensors alongside a camera system, enables it to perceive and analyze its dynamic surroundings. The proposed robot enables blind individuals to confidently navigate unfamiliar terrain and embark on independent journeys.

2 Design and Development of Methodology

The first step in the design and development of the robot is creating its circuit diagram, which is designed in the Proteus software. Once the circuit is designed, the next phase involves writing the embedded C code for the robot in the Arduino IDE. If errors are found while simulating the circuit in Proteus, the code is modified accordingly, and the final code is uploaded to the Arduino microcontroller once it is error-free. The next step involves building an application for transmitting voice commands and input signals to the microcontroller via the Bluetooth module; the robot starts or stops functioning according to the input signal given by the user through the app, which is transmitted to the robot over Bluetooth. After building the app, the DC motors attached to the wheels are integrated with the robot chassis, and the motor driver is connected to the microcontroller. The servo motor and DC motors are then connected to the motor driver. Once all these connections are made, the IR sensors and Bluetooth module are arranged in their respective positions and the ultrasonic sensor is placed on top of the servo motor. Finally, the power source and switch are connected to the driver shield mounted on the Arduino. Table 1 lists the components used to construct the robot.

Table 1 Components used to build the assistive robot

3 Circuit Diagram

The components of the robot include an Arduino Mega board, three IR sensors, an ultrasonic sensor, an L293D motor driver shield, four DC motors, and an HC-05 Bluetooth module, all mounted on an acrylic-sheet chassis. The Arduino provides the control and processing power for the robot. The two types of sensors provide the input and feedback to the system, and the four DC motors together drive the robot on rubber wheels. A GPS module is also used to provide the user with precise location information. To facilitate communication between the prototype and the app, an HC-05 Bluetooth module is integrated with the robot to relay voice commands from the user. The circuit diagram of the proposed assistive robot is given in Fig. 1, and an illustrative pin-level declaration of this hardware is sketched after the figure.

Fig. 1
A circuit diagram consisting of an Arduino Mega connected to three IR sensors, a servo motor, a Bluetooth module, an ultrasonic sensor, a motor driver, a GPS module, and four DC motors.

Circuit diagram of the proposed assistive robot
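As a concrete illustration of Fig. 1, the listing below shows one plausible way to declare this hardware in the Arduino sketch. The pin numbers, the use of the Mega's Serial1 port for the HC-05, and the AFMotor and Servo libraries are assumptions made for this sketch and may differ from the actual firmware.

```cpp
// Illustrative hardware declaration for the circuit in Fig. 1.
// Pin numbers and library choices are assumptions, not the exact firmware.
#include <AFMotor.h>   // driver library commonly used with the L293D motor shield
#include <Servo.h>

const int IR_LEFT  = 22;   // digital outputs of the three IR obstacle sensors
const int IR_FRONT = 24;
const int IR_RIGHT = 26;
const int TRIG_PIN = 30;   // HC-SR04 ultrasonic sensor (trigger/echo)
const int ECHO_PIN = 31;

AF_DCMotor motorFL(1), motorFR(2), motorRL(3), motorRR(4);  // four DC motors on the shield
Servo scanServo;           // servo that carries the ultrasonic sensor

void setup() {
  pinMode(IR_LEFT, INPUT);
  pinMode(IR_FRONT, INPUT);
  pinMode(IR_RIGHT, INPUT);
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  scanServo.attach(10);                          // servo header of the motor shield (assumed)
  Serial1.begin(9600);                           // HC-05 Bluetooth module on the Mega's Serial1
  motorFL.setSpeed(200); motorFR.setSpeed(200);  // assumed PWM duty cycle (0-255), see Sect. 6
  motorRL.setSpeed(200); motorRR.setSpeed(200);
}

void moveForward() { motorFL.run(FORWARD);  motorFR.run(FORWARD);
                     motorRL.run(FORWARD);  motorRR.run(FORWARD);  }
void turnRight()   { motorFL.run(FORWARD);  motorRL.run(FORWARD);
                     motorFR.run(BACKWARD); motorRR.run(BACKWARD); }
void turnLeft()    { motorFL.run(BACKWARD); motorRL.run(BACKWARD);
                     motorFR.run(FORWARD);  motorRR.run(FORWARD);  }
void stopMotors()  { motorFL.run(RELEASE);  motorFR.run(RELEASE);
                     motorRL.run(RELEASE);  motorRR.run(RELEASE);  }

void loop() { /* behaviour outlined in Sect. 4 */ }
```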

4 Prototype Development

First, a power supply or battery is connected, and the robot is switched on manually using the power switch. After turning on the switch, the user opens the custom-designed Android application and connects it to the robot through the HC-05 Bluetooth module. A voice command is then given through the Android application for the robot to start moving. When the user passes "START" as the input voice command through the app, the Bluetooth module receives this input and passes the signal to the microcontroller (Arduino Mega). When the microcontroller successfully detects the signal, the robot starts functioning. The HC-SR04 ultrasonic sensor installed at the back of the robot searches for the user throughout its motion and continuously checks the distance between the user and the robot so that a safe distance is maintained throughout the journey. Once the robot detects the user, it starts moving while relaying information about its surroundings to the user through the app; while in motion, the robot keeps searching for obstacles. Once it detects an object through the IR sensors, the robot changes its direction, this information is passed to the user through the app, and the robot continues searching for obstacles in its path. It moves in a linear path until it finds an obstacle. Finally, once the user gives the "STOP" command, the Bluetooth module sends a signal to the microcontroller to stop the motion of the robot. Aluminum alloy AA6061/AA7075 and nanocomposites can be utilized for the mechanical structure of the robot [12,13,14]. The flowchart for the construction of the robot is shown in Fig. 2, and Fig. 3 shows the flowchart for the working of the assistive robot. Figure 4 depicts the assistive robot's connections and internal state, whereas Fig. 5 provides a visual representation of the completed prototype.
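The loop below sketches this behaviour in Arduino code. It builds on the declarations and motion helpers from the listing in Sect. 3, treats the IR modules as active-low, and uses an arbitrary 150 cm following threshold; the command and notification helpers are outlined in Sect. 5. It is an illustrative outline under these assumptions, not the exact firmware.

```cpp
// Illustrative main loop: obey START/STOP, keep the user in range, avoid obstacles.
// Builds on the Sect. 3 listing; thresholds and helper names are assumptions.
bool running = false;   // toggled by the "START"/"STOP" voice commands (Sect. 5)

long readUserDistanceCm() {                        // HC-SR04 mounted at the back of the robot
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000UL);  // echo time in microseconds, 0 on timeout
  return duration / 58;                               // approximate conversion to centimetres
}

void loop() {
  handleBluetoothCommand();                        // parses "START"/"STOP" (see Sect. 5)
  if (!running) { stopMotors(); return; }

  long userDist = readUserDistanceCm();
  if (userDist == 0 || userDist > 150) {           // user not detected or too far behind
    stopMotors();                                  // wait for the user to catch up
    return;
  }

  // Many IR obstacle modules pull their output LOW when an object is detected.
  bool front = (digitalRead(IR_FRONT) == LOW);
  bool left  = (digitalRead(IR_LEFT)  == LOW);
  bool right = (digitalRead(IR_RIGHT) == LOW);

  if (front) {
    notifyUser("Obstacle ahead");                  // relayed to the app (see Sect. 5)
    if (!right)     turnRight();
    else if (!left) turnLeft();
    else            stopMotors();
  } else {
    moveForward();                                 // continue along a linear path
  }
  delay(50);                                       // modest loop pacing
}
```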

Fig. 2
A flow chart of the construction of the robot consists of, 1. Start, 2. Create the circuit diagram of the robot with the Proteus software, 3. Write or modify the embedded C code for the robot in an IDE, 4. If errors are found in the code during simulation of the circuit in Proteus, go to 3, else go to 5, 5. Upload the final code to the Arduino, 6. Build the app for transmitting voice, 7. Integrate the DC motors attached, 8. Connect the motor driver shield, 9. Connect the servo and DC motors to the driver, 10. Arrange the IR sensors and Bluetooth module, 11. Place the ultrasonic sensor on top of the servo motor, 12. Connect the power source and switch to the driver shield, and 13. End.

Flowchart for construction of the robot

Fig. 3
A flow chart consists of, 1. Start, 2. Connect the power supply, 3. Switch on the robot using the power switch, 4. Input the voice command start through the app, 5. If the signal start is detected successfully, go to 6, else go to 4, 6. Detect or search for the human using the sensor, 7. Detect obstacles using the IR sensors, 8. If an obstacle is detected, change the direction of motion and go to 6, else go to 9, 9. Movement in linear motion, 10. If the stop command is given, stop movement, else go to 7.

Flowchart for working of the robot

Fig. 4
A photograph of a robot which consists of a circuit board with interconnected wires, mounted over four wheels.

Connections and internal state of the assistive robot

Fig. 5
Three photographs of the final prototype. 1. A photograph of the top view of a rectangular box mounted over four wheels, 2. A photograph of the bottom view of a rectangular box with a battery compartment, 3. A photograph of the side view of a rectangular box, mounted with an ultrasonic sensor at the front and an IR sensor between two wheels.

Visualization of the final prototype

5 Construction and Working of Custom Designed Android Application

The Android application used in this work is designed using App Inventor. Designing the application in App Inventor can be divided into two main phases: (a) designing and (b) programming.

  (a) Designing: The first phase takes place in the App Inventor Designer and involves selecting the components that will be used in the app, such as buttons, labels, textboxes, and images. In this phase, the basic layout and structure of the Android application are designed.

  (b) Programming: The second phase takes place in the App Inventor Blocks Editor, where the user assembles blocks of code that specify how the components should behave. For example, the user might specify that when a button is pressed, a certain action should occur, such as displaying a message or changing the background color of the screen. This phase is where the app is brought to life, as its functionality and behavior are defined.

Once the designing and programming phases are complete, the application is ready to run on an Android phone by connecting the phone to a computer, or on an Android emulator if the user does not have a physical Android device. After testing, the user can download the app's .apk file and install it directly onto an Android device, allowing the app to be used like any other installed application. The app uses Google's speech recognition service to detect the user's speech and convert it into text, which is then sent to the microcontroller in the form of a signal. The speech-to-text module takes the user's commands and converts them into a digital signal that can be transmitted to the Arduino board. When the user speaks one of a specific set of commands, the speech-to-text module converts the speech into text using Google's online services, and the converted text is sent to the Arduino microcontroller over the Bluetooth connection. The microcontroller processes the input signal, which enables the robot to act according to the user's commands [15,16,17]. The user interface of the custom-made Android application is shown in Fig. 6, and a minimal sketch of the microcontroller-side command parsing is given after the figure.

Fig. 6
Two screenshots of the user interface of the app called Vector. 1. A mic icon is present and the Bluetooth status displays, Not connected. 2. Some lines of code are present.

Image of user interface of the custom-made Android application
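On the Arduino side, the recognized text can be read from the Bluetooth serial link and matched against the expected commands. The helper below is a minimal sketch of this step; it assumes the HC-05 is attached to the Mega's Serial1 and that the app sends each recognized phrase as one newline-terminated line, and it sets the running flag used in the Sect. 4 listing.

```cpp
// Minimal command parser for text arriving from the app over the HC-05 (assumed on Serial1).
// Assumes each recognized phrase is sent as one newline-terminated line.
void handleBluetoothCommand() {
  if (!Serial1.available()) return;            // nothing received yet
  String cmd = Serial1.readStringUntil('\n');  // one recognized phrase per line
  cmd.trim();                                  // drop any trailing '\r' and spaces
  if (cmd.equalsIgnoreCase("START")) {
    running = true;                            // begin guiding the user
  } else if (cmd.equalsIgnoreCase("STOP")) {
    running = false;                           // halt the robot
    stopMotors();
  }
}
```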

The system is designed to be interactive, allowing real-time communication between the robot and the user. The app also uses a text-to-speech module that speaks messages aloud to the user. For example, when the robot detects an obstacle, the microcontroller sends a signal to the app via the Bluetooth module, and the app speaks the message aloud so that the user is informed of any obstacle in their path.
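In the reverse direction, the microcontroller only needs to write a line of text to the Bluetooth serial port; the app then reads it and passes it to the text-to-speech module. A minimal sketch of such a helper, again assuming the HC-05 on Serial1 and a newline-delimited message format, is shown below.

```cpp
// Minimal notification helper: send a status message to the app for text-to-speech output.
// Assumes the HC-05 is on Serial1 and that the app speaks each newline-terminated line.
void notifyUser(const char *message) {
  Serial1.println(message);   // e.g. notifyUser("Obstacle ahead, turning right");
}
```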

6 Results and Discussion

The results are quantified by considering the following factors: (a) number of times tested vs. number of times an obstacle was successfully detected; (b) time taken by the Bluetooth app to recognize the user's voice, i.e., to receive the signal (time taken vs. test number, over ten tests); (c) number of times the blind person was not detected by the robot (tested in different positions); (d) number of times the obstacle was avoided completely; and (e) time taken to cover certain distance milestones.

From Fig. 7, it can be inferred that the robot is mostly successful in detecting obstacles and responding with the corresponding movements. Around 15 tests were conducted with obstacles in different positions (left, front, and right) relative to the robot. The success rate of obstacle detection and the corresponding movements appears to be highest when the obstacle is directly in front of the robot, which may indicate that the robot's sensors are more effective in this orientation. Lighting conditions may also affect the performance of the IR sensors. Despite some variability in the success rate across obstacle positions, the overall trend is that the robot successfully detects and responds to obstacles in most cases. This is a positive result, suggesting that the robot is well suited to tasks that require obstacle avoidance. It is important to note that the graph only reflects the robot's performance in the specific testing scenarios that were carried out.

Fig. 7
A bar graph of probability in percentage for successful and unsuccessful detection of 80 and 20, respectively.

Probability of successful obstacle detection

Figure 8 shows the time taken for the Bluetooth module to register the user's voice commands, which is on average 1–2 s. This time may depend on a variety of factors, such as the distance between the user and the robot, the amount of background noise in the environment, or the specific words or phrases the user is speaking. The voice commands were tested in a fairly noisy environment.

Fig. 8
A line graph of time in milliseconds with respect to test number. The times for test numbers 1 to 10 are 500, 1100, 1000, 2000, 1000, 1500, 500, 1300, 2300, and 1500, respectively. Values are estimated.

Time taken by the Bluetooth app to recognize the user's voice, i.e., to receive the signal

From Fig. 9, it can be inferred that the person was detected in around 70% of the tests conducted, and the failure rate is fairly low. The user was placed in different orientations with respect to the robot. The detection rate appears to be higher when the person is facing the robot directly than when they are turned to the side. This test was conducted to track the user's walking speed so that the robot's speed can be adjusted to match it.

Fig. 9
A bar graph of percentage of tests conducted for person detected and person not detected of 70 and 30, respectively.

No. of times the blind person was not detected by the robot (tested in different directions)

Figure 10 shows that there were some tests in which the robot struggled to maneuver around the objects. Specifically, there were four tests in which the robot was only partially successful in maneuvering around the objects and one test in which it failed completely. The robot's ability to avoid objects depends on their parameters, especially size, shape, and width. The performance of the IR sensors used to detect the objects can also be affected by factors such as the color and reflectivity of the objects.

Fig. 10
A bar graph of percentage of tests conducted for avoided and not avoided or partially avoided of 75 and 25, respectively.

No. of times the obstacle was avoided completely (same as first result bar graph)

Figure 11 shows the time taken to cover certain distance milestones. The graph helps us understand the speed of the robot, which is an important aspect to measure because it indicates how fast the robot should travel to guide the person and how quickly it can avoid obstacles. On average, the robot travels around 33.7 cm per second. For this test, the robot was allowed to run freely on uniform terrain with no obstacles in its path; it may move more slowly on uneven or rough terrain or when it encounters obstacles that require it to slow down or change direction. There is some variation in speed between different distance milestones, which is to be expected, since uniform speed is hard to maintain due to factors such as weight and battery power. The speed of the DC motors was set in the embedded C code; an illustrative sketch of this is given after Fig. 11.

Fig. 11
A line graph of time in seconds versus distance in centimeters. The line begins at (50, 2) and increases gradually to reach (1000, 35). Values are estimated.

Time taken to cover certain distance milestones
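The firmware sets the DC motor speed as a PWM duty cycle; with the AFMotor library assumed in the earlier listings, this is done with setSpeed(). The duty cycle of 200 below, and the roughly linear scaling between duty cycle and ground speed, are assumptions for illustration only; the measured average of about 33.7 cm/s comes from the tests above, and any such mapping must be calibrated empirically on the robot.

```cpp
// Illustrative speed helper, assuming the AFMotor setSpeed() mechanism from the earlier listings.
// Calibration point: an assumed duty cycle of 200/255 giving the measured average of ~33.7 cm/s.
// The linear scaling below is a rough assumption; real motors need empirical calibration.
const float CAL_PWM   = 200.0;   // assumed duty cycle used during the distance tests
const float CAL_SPEED = 33.7;    // measured average ground speed in cm/s (Fig. 11)

void setGroundSpeed(float cmPerSec) {
  int pwm = constrain((int)(cmPerSec * CAL_PWM / CAL_SPEED), 0, 255);
  motorFL.setSpeed(pwm);
  motorFR.setSpeed(pwm);
  motorRL.setSpeed(pwm);
  motorRR.setSpeed(pwm);
}
// e.g. setGroundSpeed(30);   // request roughly 30 cm/s on flat, uniform terrain
```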

The robot, Vector, was tested in the lab and on an open ground, where a few obstacles were placed and the robot was allowed to move along a path. The robot changed its direction whenever it detected an obstacle, and a message was sent to the Android app to notify the user about the obstacle.

7 Conclusion

The advancement of technology has led to the creation of many innovative developments in the field of robotics, which have transformed our lives in many ways. The assistive robot and mobile application presented here use speech recognition to accurately transcribe and interpret commands, allowing intuitive control and seamless navigation toward desired destinations. The fusion of these technologies culminates in real-time auditory feedback that provides vital information about obstacles, navigation directions, environmental cues, and points of interest. Through the integration of sensor technology, computer vision, and natural language processing, the robot empowers visually impaired individuals, offering them independence and the confidence to navigate unfamiliar environments and reach their desired destinations with ease.