1 Introduction

Visually impaired people have a hard time interacting with the environment, largely because they cannot judge their position relative to reference points in an area. They struggle to find employment and must rely heavily on friends and family for even the smallest tasks.

India is home to more than half of the world's blind population [1], and the number keeps rising. Science has made a great deal of progress but has yet to produce a satisfactory and feasible solution that caters to the needs of blind people. IoT has brought about many devices that seek to bridge the gap between blind and sighted people. Some of this research led to smart sticks that help users negotiate obstacles, while other work used beacons to tell a blind person what they are touching or what is around them.

Korial and Abdullah [2] proposed a beacon system whose signals a smartphone can capture to tell the blind person what is around them, installing low-power beacons in an indoor environment. This method has a major flaw: as time passes, the person becomes acquainted with the objects in the room and navigates with far less difficulty, so the benefit fades. Moreover, the time needed to configure the beacons and their poor scalability would make anyone think twice.

Shoval et al. [3] developed the NavBelt, a portable wearable computer designed to help the blind avoid obstacles indoors, conveying information through different audio sounds. Although otherwise a sound approach to indoor navigation, the method neither conveyed the person's momentary position in the room nor made the sounds easily distinguishable.

A smart stick using infrared sensors for indoor navigation was proposed by Innet and Ritnoom [4]. Upon detecting an obstacle, the stick produced different vibrations; the vibration patterns were hard to tell apart, and holding the stick left the person unable to use both hands freely.

The recent developments were thus either restricted to indoor use or, where suitable for outdoors, relied on probes that kept one of the blind person's hands occupied. Our project aims to implement an inexpensive navigation tool that is easy to scale and works both indoors and outdoors. We aim to exploit the smartphone and a microcontroller to their fullest to provide GPS navigation and obstacle detection and to let the blind person receive, through a smartphone that today even the poorest can afford, a description of the environment or of whatever is in their way.

2 System Design

The system consists of an ultrasonic sensor attached to the blind person's feet about 25 cm above the ground. The sensor has a physical connection to a sling bag coated with reflective tape, which makes it easier for vehicles or other people to notice the blind person. The sling bag houses an Arduino that communicates with the person's Android phone over Bluetooth. Upon detection of an obstacle, the Arduino sends an alarm to the Android device, and the blind person hears a voice message that danger lies ahead. The blind person can trigger the phone camera with a click of the earphone button to take a picture of the surroundings; the system then returns a plain-language description as an audio message. The person can likewise use the phone to hear audio directions to a chosen destination. The phone also broadcasts the person's real-time location to a server, where it is visible to friends and family, and the app broadcasts an SOS message to everyone on the person's SOS list upon a triple click of the earphone button.

2.1 System Configuration

The proposed system follows the architecture as shown in Fig. 1.

Fig. 1 Proposed model

2.2 Microcontroller

Arduino Uno is based on the ATmega328P. The board has 14 digital input/output pins, 6 analog inputs, and a 16 MHz quartz crystal [5]. The microcontroller can be connected directly to a computer, operates at a low 5 V, and provides 32 KB of flash memory. Arduino interfaces easily with Android phones (Figs. 2, 3).

Fig. 2 Arduino Uno R3

Fig. 3 Logical diagram of the obstacle detection

2.3 Obstacle Detection Unit

Ultrasonic Sensor: The module emits 40 kHz pulses and listens for the signal reflected back. The module used, the HC-SR04, measures distances of 2–400 cm with an accuracy of 3 mm. It is attached to the person's leg about 25 cm above the ground, where it continuously scans for obstacles and sends a trigger message as soon as one is detected, following the flowchart shown in Fig. 4. The distance to an obstacle is computed as \( \text{distance} = \frac{t_{\text{high}} \times v_{\text{sound}}}{2} \), where \( t_{\text{high}} \) is the duration of the echo pulse and \( v_{\text{sound}} = 340\,\text{m/s} \) is the speed of sound; the division by 2 accounts for the pulse's round trip.
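As an illustrative check (our numbers, not a measurement reported here), an echo pulse lasting 1.47 ms would indicate an obstacle roughly 25 cm away, matching the sensor's mounting height:

\[ \text{distance} = \frac{1.47\,\text{ms} \times 340\,\text{m/s}}{2} = \frac{0.00147\,\text{s} \times 340\,\text{m/s}}{2} \approx 0.25\,\text{m} = 25\,\text{cm}. \]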

Fig. 4 HC-05 Bluetooth module

2.4 Warning Unit

Bluetooth Module: The HC-05 module provides a transparent wireless serial connection. It is a fully qualified Bluetooth V2.0 + EDR module with 3 Mbps modulation and a 2.4 GHz radio transceiver and baseband. The module stays in constant communication with the Android phone and forwards a warning signal to it as soon as the ultrasonic sensor detects an obstacle (Fig. 5).
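The paper does not specify the wire format of the warning signal. The following is a minimal sketch of the phone-side listener, assuming the Arduino writes an `OBSTACLE` text line to the HC-05 (which exposes the standard Serial Port Profile); the `WarningReceiver` class name, the message format, and the `hc05Address` parameter are our assumptions for illustration.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;
import android.content.Context;
import android.speech.tts.TextToSpeech;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.UUID;

/** Listens on the HC-05's serial link and speaks a warning for each alert line. */
public class WarningReceiver {
    // Standard Serial Port Profile UUID used by HC-05 modules.
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    private final TextToSpeech tts;

    public WarningReceiver(Context context) {
        tts = new TextToSpeech(context, status -> { /* engine ready */ });
    }

    /** Connects to the paired HC-05 and blocks, reading warning lines. */
    public void listen(String hc05Address) throws Exception {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        BluetoothDevice device = adapter.getRemoteDevice(hc05Address);
        BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();                       // must run off the UI thread
        BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            if (line.startsWith("OBSTACLE")) {  // message format assumed, see above
                tts.speak("Obstacle detected",
                        TextToSpeech.QUEUE_FLUSH, null, "warning");
            }
        }
    }
}
```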

Fig. 5 Android device running the app

Android Application: The Android application stays in constant communication with the Bluetooth device. Upon detection of an obstacle, the message is delivered over Bluetooth, and a warning saying “obstacle detected” is played over the earphone. On a click of the earphone button, the app captures an image and converts it into an input stream, which is sent to the API discussed in Sect. 2.5; the API returns a string in plain English that is then played over the earphone. A background service watches for triple taps of the earphone button and, when triggered, immediately sends an SOS message to the person's loved ones.
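The triple-tap detection could look like the sketch below, which counts headset-hook key events inside a short time window. The 1.5 s window, the receiver-based design, and the `sendSos` placeholder are illustrative assumptions; the paper only states that a background service detects triple taps.

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.view.KeyEvent;

/** Counts quick presses of the earphone button; three within a short window fire the SOS. */
public class SosButtonReceiver extends BroadcastReceiver {
    private static final long WINDOW_MS = 1500; // assumed triple-tap window
    private long firstTapTime = 0;
    private int taps = 0;

    @Override
    public void onReceive(Context context, Intent intent) {
        KeyEvent event = intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
        if (event == null || event.getAction() != KeyEvent.ACTION_DOWN
                || event.getKeyCode() != KeyEvent.KEYCODE_HEADSETHOOK) {
            return;
        }
        long now = event.getEventTime();
        if (now - firstTapTime > WINDOW_MS) {   // window expired: restart the count
            firstTapTime = now;
            taps = 0;
        }
        if (++taps == 3) {
            taps = 0;
            sendSos(context);
        }
    }

    private void sendSos(Context context) {
        // Placeholder: the paper broadcasts an SOS message to the trusted list;
        // the transport (SMS, push, real-time database event) is not specified.
    }
}
```

The receiver would be registered for `Intent.ACTION_MEDIA_BUTTON` (or attached to a media session on newer Android versions) so that it keeps receiving earphone button presses while the app runs in the background.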

2.5 APIs and Database

Object Detection API: The Microsoft Vision API [6] takes an input stream from the Android device and returns a description and tags for the recognized image. The blind person clicks the earphone button to trigger the camera and take a photo; the app then sends the bitmap to the API, which returns a description of the image (Figs. 6, 7).
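A minimal sketch of such a call using the Computer Vision `describe` REST operation is shown below; the `westus` region, the API version, the key placeholder, and the `VisionClient` class name are assumptions, since the paper does not give the exact endpoint it used.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

/** Sends a captured JPEG to the Computer Vision "describe" endpoint and returns the JSON reply. */
public class VisionClient {
    // Region and API version are assumptions; substitute your own endpoint and key.
    private static final String ENDPOINT =
            "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe";
    private static final String KEY = "<subscription-key>";

    public static String describe(byte[] jpegBytes) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", KEY);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jpegBytes);               // raw image bytes, as the app sends its bitmap
        }
        try (InputStream in = conn.getInputStream();
             Scanner s = new Scanner(in, StandardCharsets.UTF_8.name())
                     .useDelimiter("\\A")) {
            // JSON reply carries the caption under description.captions[0].text
            return s.hasNext() ? s.next() : "";
        }
    }
}
```

The returned caption text is what the app plays back over the earphone as audio.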

Fig. 6 Microsoft Vision API [6]

Fig. 7 Screenshot of the Indoor Atlas console [7]

Navigation API: Indoor Atlas [7] uses radio signals, the geomagnetic field, inertial sensor data, barometric pressure, camera data, and other readings from the smartphone's embedded sensors to locate the user precisely where normal positioning systems fail. The API uses the floor plan of the area and must be trained with waypoints before detection starts working. It makes the system highly scalable, with much greater accuracy than traditional GPS and no added hardware cost (Fig. 8).
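A sketch of how the app might subscribe to Indoor Atlas position fixes with the IndoorAtlas Android SDK follows. Class and method names reflect the public SDK as we understand it and may differ across SDK versions; the forwarding comment is our assumption.

```java
import android.content.Context;
import android.os.Bundle;

import com.indooratlas.android.sdk.IALocation;
import com.indooratlas.android.sdk.IALocationListener;
import com.indooratlas.android.sdk.IALocationManager;
import com.indooratlas.android.sdk.IALocationRequest;

/** Streams indoor position fixes; the venue's floor plan must already be mapped in the console. */
public class IndoorPositioning implements IALocationListener {
    private final IALocationManager manager;

    public IndoorPositioning(Context context) {
        manager = IALocationManager.create(context);   // API key configured in the manifest
        manager.requestLocationUpdates(IALocationRequest.create(), this);
    }

    @Override
    public void onLocationChanged(IALocation location) {
        double lat = location.getLatitude();           // ~1 m accuracy per the paper
        double lon = location.getLongitude();
        // forward (lat, lon) to the navigation logic and the real-time database
    }

    @Override
    public void onStatusChanged(String provider, int status, Bundle extras) { }

    public void stop() {
        manager.destroy();
    }
}
```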

Fig. 8 Screenshot of DeepStreamHub [8]

Database: DeepStreamHub [8] is a real-time database with a response time of around 3 ms. The app stays in constant communication with DeepStreamHub to push the person's real-time coordinates to their loved ones. Anyone on the person's trusted list can view the location and receive SOS messages.
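A hedged sketch of publishing fixes with the deepstream Java client (`io.deepstream`) is given below; the `location/` record path, the anonymous login, and the field layout are our assumptions, not details given in the paper.

```java
import com.google.gson.JsonObject;

import io.deepstream.DeepstreamClient;
import io.deepstream.Record;

/** Pushes the latest fix into a shared record that trusted contacts subscribe to. */
public class LocationPublisher {
    private final Record record;

    public LocationPublisher(String serverUrl, String userId) throws Exception {
        DeepstreamClient client = new DeepstreamClient(serverUrl);
        client.login();                                 // anonymous login for brevity
        record = client.record.getRecord("location/" + userId);
    }

    public void publish(double lat, double lon) {
        JsonObject fix = new JsonObject();
        fix.addProperty("lat", lat);
        fix.addProperty("lon", lon);
        fix.addProperty("ts", System.currentTimeMillis());
        record.set(fix);                                // subscribers receive the update in real time
    }
}
```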

3 Results

The ultrasonic sensors are attached to the blind person's left and right feet, about 25 cm above the ground. Using sensors on both feet reduces the chance of colliding with an obstacle when one foot is behind the other, and mounting the sensors on the feet leaves the person's hands free while traveling. The Arduino, buzzer, and Bluetooth module are kept in a sling bag whose straps are covered with reflective tape. The reflective tape makes the blind person visible to cars or people from far away, thus reducing the chance of collisions (Fig. 9).

Fig. 9 Connections

The blind person starts to walk while the ultrasonic sensor looks out for obstacles. The setup successfully detects chairs, trees, etc. above ground level. A buzzer is triggered along with a voice message announcing that the ultrasonic sensor has detected an obstacle. The blind person can then take out the phone and, with a click of the earphone button, take a picture that returns a description of the scene.

To navigate through the app, the destination needs to be fed to the Android application. The application uses a pretrained floor plan to increase the accuracy of its location prediction; the location is predicted with an accuracy of 1 m. The blind person then puts the phone in a pocket and listens to the navigation commands sent by the API, while the ultrasonic sensor watches for obstacles along the way. The blind person is thus able to move about and reach the desired destination.

4 Conclusion and Future Work

The suggested method shows a clear improvement over previous ones. It helped the blind go from one location to another without an external probe such as a stick. A separate application for accessing the person's GPS location was implemented, showing the person's position indoors or outdoors with an accuracy of 1 m. Navigation indoors and within a small area showed little to no error and proved very helpful to the visually impaired. The idea can further be ported to other platforms such as iOS and Windows, and destination entry can be extended to work through voice commands.