
1 Introduction

Sight dominates our mental life more than any other sense. Even when we are merely thinking about the world, we end up imagining what it looks like. This rich visual experience is an integral part of our lives. People need vision for two complementary reasons. First, vision gives us the knowledge to recognize things in real time. Second, vision provides the control people need to move around and interact with objects [1].

The World Health Organization estimates that about 39 million people are legally blind and 246 million have low vision worldwide. Most of them are aged 50 and above and live in low-income settings [2].

Visually impaired people face numerous difficulties in performing their daily tasks and usually depend on someone or something for help. When a blind person is in an unknown environment, they lack knowledge about the features and obstacles of the surrounding physical space. Devices such as the white cane or Braille compasses help the visually impaired navigate the world [3]. These artifacts have improved their lifestyle significantly, but each has its limitations. For example, the white cane has a maximum range of about 1.5 m and cannot detect objects above waist level. Braille compasses can only indicate geographic North, telling the user which direction they are facing.

This paper presents Blind Guide, a system based on the Internet of Things (IoT) that can work alongside existing solutions to help blind people navigate their environment. IoT rests on wireless networks of sensors that communicate with each other to understand the environment and act when something changes in their territory.

The system is composed of wireless sensors that can be attached to any part of the body and a central device that has a camera and a speaker. When a wireless sensor detects an obstacle, it sends a signal to the central device. The central device takes a picture, uploads the photo to an image recognition service, and then responds with an audible warning to the user. This warning contains the distance to the obstacle and the probable name of the object. Blind Guide has been tested with different sets of people and situations; all the results and conclusions of the tests are included in this paper.

Several other sensor-based solutions have already been developed, and a selection of them is presented in Sect. 2. The rest of this article is structured as follows. In Sect. 3, the technologies used in the proposed system are discussed. Section 4 describes the Blind Guide implementation. In Sect. 5, the results of tests performed with visually impaired people using Blind Guide are presented. Finally, in Sect. 6 some final considerations are given, as well as directions for future work.

2 Related Work

Many teams from different parts of the world have been developing systems to help the visually impaired with their daily lives. One of these solutions is BlindeDroid, an information system based on location data used to guide blind people. BlindeDroid is a Java software library for Android mobile devices. The team developed a prototype based on this library and tested it in a shopping center with two blind people. The prototype gathers the required information by importing XML files from beacons placed at strategic locations inside a building. With this information, the prototype helps the user with guided navigation, answering questions and providing objective information about places, products, and services near the user. Although this prototype requires beacons placed inside the indoor scenario, BlindeDroid was able to help the testers reach their desired location [4].

In [5] the authors propose a smart walker for visually impaired people with walking disabilities. The prototype not only provides walking assistance but also helps the user avoid obstacles. The system detects obstacles such as curbs, staircases, and holes in the ground and transmits obstacle proximity information through haptic feedback. The authors tested a prototype of the smart walker at SightCity 2014 in Germany and received positive feedback: the testers said the smart walker made them feel safe, and the prototype successfully detected all the obstacles in the test.

In [6] the authors sought to design specifications that detail how a building service robot could interact with and guide a visually impaired person through a building in an efficient and socially acceptable way. The authors note that robots are becoming more common in large buildings such as hotels and stores. They conducted participatory design sessions with three designers, two of them visually impaired, and five visually impaired non-designers. The authors argue that using a robot for indoor navigation has two main advantages: interacting with robots will require minimal training, and, since robots will be designed for multiple activities, they will require minimal changes to assist blind people. At the end of this study, the authors developed design recommendations for a robot that guides blind people, and they hope their study will inspire robotics researchers to build a prototype.

Another indoor navigation system for sightless people uses visible light communication [7]. In this system, LED lights, a smartphone with integrated Bluetooth, and headphones were used for the implementation. The system employs LED lights and a geomagnetic correction method [7]. The LED light ID is sent via a light communication channel to a receiver, which transmits the ID to the phone via Bluetooth. Using the LED light ID, the phone retrieves positional data from a cloud service via Wi-Fi. The positional information is combined with the guidance content into audio files and transmitted to the user's headphones [7].

The authors of [8] developed a camera-based assistive text reading framework to help visually impaired people read text labels and product packaging on hand-held objects in their daily lives. The recognition software locates the text region and distinguishes the words. The recognized text is output to blind users as audio [8]. Experiments performed on the recognition software show that it achieves state-of-the-art performance. The authors found interface issues and some problems in extracting and reading text from objects with complex backgrounds [8].

Virtual Eye uses GPS to help sightless people [3]. The prototype consists of a GPS receiver, a GSM module, a microcontroller (ATMEGA 328), an ultrasonic sensor, a speech IC, and headphones [3]. When the ultrasonic sensor finds an obstacle, it sends the information to the microcontroller. The microcontroller processes the data and, when an obstacle is found, forwards a signal to the speech IC, which informs the user of the detected obstacle with a sound. The GPS information is continuously sent to the microcontroller, which transmits the location via the GSM modem as SMS messages to all saved numbers.

The present work continues the theme proposed in a previous paper entitled Blind Guide: anytime, anywhere [9]. Some of the authors of this article were also authors of that previous work, which presented the architecture, materials, and operation of the Blind Guide prototype and reported preliminary tests of the prototype with sighted people [9].

There are many systems related to the works presented in this paper. We believe that our Blind Guide solution adds value beyond the rest because it uses image recognition software that helps sightless people understand their surroundings. We also intend to give the system social features by creating a database that stores the locations of obstacles and makes them available to the other users of our system.

3 Implemented Technologies

The goal of the present project is to create a system that is accessible to everyone. The electronic parts used in the Blind Guide prototype are affordable off-the-shelf components available worldwide. The system is formed by two parts. The peripheral sensors search for obstacles in real time; Blind Guide can use as many peripheral sensors as needed. The other part of the system is the central device, which gathers the information sent wirelessly by the peripheral sensors and performs all the object recognition. This section lists all the electronic parts and technologies used in the Blind Guide prototype.

3.1 Peripheral Sensor

The primary function of the peripheral sensor is to find nearby obstacles and alert the central device. The sensor is formed by an ultrasound detector and a Wi-Fi microcontroller. Each peripheral sensor has a 500 mAh LiPo (Lithium Polymer) battery. Used ten hours per day, the battery can last one week. A prototype of this sensor is illustrated in Fig. 1.

Fig. 1.
figure 1

Peripheral sensor prototype.

3.1.1 Ultrasonic Sensor

To detect obstacles, an ultrasonic sensor is used. The ultrasound transmitter emits a 40 kHz ultrasonic wave that travels through the air and is reflected back if an obstacle is in the range of 2 cm–400 cm [10]. This effect is illustrated in Fig. 2. The ultrasound system calculates the distance between the sensor and the obstacle and sends the data to the microcontroller. The sensor operates with a current of 15 mA, and its acquisition cost is very low [10].

Fig. 2.
figure 2

This illustration shows how the ultrasound sensor works [11].
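The distance computation performed by the ultrasound system follows from the speed of sound: the pulse's round-trip time is halved and scaled. A minimal sketch in Python (the function name and the assumption of ~343 m/s at room temperature are ours, not taken from the sensor datasheet):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at 20 degrees C, in cm per microsecond

def echo_to_distance_cm(echo_duration_us: float) -> float:
    # The ultrasonic pulse travels to the obstacle and back,
    # so the one-way distance is half the round trip.
    return (echo_duration_us * SPEED_OF_SOUND_CM_PER_US) / 2.0
```

For example, an echo of about 5830 microseconds corresponds to an obstacle roughly 100 cm away, which is within the 2 cm–400 cm range of the sensor.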

3.1.2 Wi-Fi Microcontroller

To send information wirelessly to the central device when an obstacle is detected, a microcontroller with a Wi-Fi module was used. This microcontroller uses an 80 MHz ESP8266 processor with a Wi-Fi front end that supports TCP/IP and DNS [12]. The microcontroller has a 3.3 V output and a 500 mA regulator [12].

The ultrasound sensor continuously sends data to the microcontroller. The microcontroller only alerts the central device when the distance to the obstacle is less than the warning range configured in the microcontroller. The warning distance varies depending on which part of the body the peripheral sensor is placed on. The reason for this configuration is that we want to limit power consumption as much as possible.
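The per-position filtering described above can be sketched as follows. The threshold values and names here are hypothetical illustrations, not the values configured in the prototype:

```python
# Hypothetical warning distances (cm) per mounting position;
# the thresholds actually configured in the prototype may differ.
WARNING_DISTANCE_CM = {"head": 150, "chest": 120, "leg": 80}

def should_alert(position: str, distance_cm: float) -> bool:
    # Only message the central device (waking the Wi-Fi radio) when the
    # obstacle is inside the warning range for this body position.
    return distance_cm < WARNING_DISTANCE_CM[position]
```

Filtering on the microcontroller, instead of forwarding every reading, keeps the Wi-Fi radio idle most of the time, which is where the power savings come from.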

3.2 Central Device

The central device is the heart of the Blind Guide solution. It receives the information from the peripheral devices wirelessly. When an obstacle is encountered, it takes a picture of the object. If the central device has internet access, it sends the picture to an object recognition web service. The results and the distance to the obstacle are transmitted via the speaker to the sightless person. The electronic parts that form the central device are a single-board computer, a camera module, and a speaker. This device has a 2500 mAh portable battery; with a single charge, the battery can last two days running ten hours each day. The prototype of the central device is illustrated in Fig. 3.

Fig. 3.
figure 3

Central device prototype.

3.2.1 Camera Module

The central device has a camera module to take a picture of the obstacle when one is detected by a peripheral device. This camera module has a Sony IMX219 8-megapixel sensor. The camera can take high-definition videos or photos and supports 1080p30, 720p60, and VGA90 video modes [13].

The picture taken by the camera is sent to the cloud image recognition service. Poor-quality photos can degrade the recognition, which is why a good-quality camera is needed.

3.2.2 Single-Board Computer

The central device needs a computer to process the messages from the peripheral devices and to communicate with the cloud image recognition service. The central device must be portable and as fast as possible, so we decided to use the credit-card-sized single-board computer called Raspberry Pi 3. This portable computer has a 1.2 GHz 64-bit quad-core ARMv8 CPU, 802.11n wireless LAN, Bluetooth Low Energy (BLE), and 1 GB of RAM [14]. The Raspberry Pi can be used with a variety of operating systems; for the Blind Guide prototype we chose Raspbian OS.

Raspbian OS is the most widely used operating system for the Raspberry Pi and has a large community all over the world. It is a free operating system based on Debian, optimized for the Raspberry Pi hardware.

3.2.3 Message Queue Telemetry Transport

To implement the communication between the peripheral devices and the central device, a low-power protocol was needed. Message Queue Telemetry Transport (MQTT) is an application-layer protocol designed for resource-constrained devices [15]. This protocol runs on top of the Transmission Control Protocol (TCP) and IP. MQTT uses a topic-based publish-subscribe architecture [15]: a client sends messages to an individual topic, and all the clients subscribed to that topic receive them. The server that manages the communication is called the broker. In Blind Guide, the central device is both the broker and a client. To communicate, the peripheral devices publish on the topic to which the central device is subscribed.
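To illustrate how topic-based routing delivers a peripheral's message to the subscribed central device, the sketch below reimplements MQTT's topic-filter matching in simplified form (`+` matches exactly one topic level, `#` matches the remainder). The topic names are hypothetical; a real deployment would rely on an MQTT library rather than this illustration:

```python
def topic_matches(subscription: str, topic: str) -> bool:
    # Simplified MQTT topic matching: '+' matches exactly one level,
    # '#' (used as the last level of a filter) matches the rest.
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, level in enumerate(sub_levels):
        if level == "#":
            return True
        if i >= len(top_levels) or (level != "+" and level != top_levels[i]):
            return False
    return len(sub_levels) == len(top_levels)
```

With a subscription such as `blindguide/+/obstacle` (again, a name of our own invention), the central device would receive alerts from `blindguide/head/obstacle` and `blindguide/chest/obstacle` through the same filter.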

Quality of Service (QoS) is implemented in MQTT at three levels [15]. At level 0, the message is delivered at most once, with no acknowledgment required. At level 1, a confirmation of message reception is required. Finally, level 2 implements a four-way handshake for the delivery of the message [15] (Fig. 4).

Fig. 4.
figure 4

MQTT scheme [16].

3.2.4 Cloud Image Recognition Server

With Blind Guide, we try to give the user as much information as possible about the obstacles. We chose an audible warning based on the results returned by a cloud image recognition service. The central device sends the photograph of the obstacle to the service using the HTTPS (Hypertext Transfer Protocol Secure) protocol. The server responds with a list, in JSON (JavaScript Object Notation) format, of possible things the obstacle could be. The central device parses this information, chooses the top five options, and transmits them to the user as an audible warning.
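The parsing and ranking step can be sketched as below. The JSON field names (`concepts`, `name`, `confidence`) are hypothetical placeholders for this illustration; the actual schema depends on the recognition service's response format:

```python
import json

def top_labels(response_body: str, n: int = 5) -> list:
    # Parse the (hypothetical) recognition response and keep the
    # n labels with the highest confidence, most likely first.
    concepts = json.loads(response_body)["concepts"]
    concepts.sort(key=lambda c: c["confidence"], reverse=True)
    return [c["name"] for c in concepts[:n]]
```

The first element of the returned list is the name spoken to the user as the most probable identity of the obstacle.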

We tested many image recognition services and decided to use the web service offered by the company Clarifai [17]. Clarifai provides an image recognition web service and APIs for several programming languages. The main program that runs on the central device is written in Python, and Clarifai offers an easy-to-use Python API for accessing its services [17].

4 System Design

Blind Guide is a solution for sightless people that warns them about obstacles while they navigate indoors or outdoors. The system works in the following way. Blind Guide is composed of two elements. The first is the peripheral sensor, which has an ultrasound module and a Wi-Fi module; Blind Guide can use as many peripheral sensors as needed. We suggest four peripheral sensors: one on the forehead, one on the chest, and one on each leg. The second is the central device, composed of a single-board computer, a camera module, and a speaker. Blind Guide needs just one central device, which should be placed on the chest. The communication between the peripherals and the central device is via Wi-Fi using the MQTT protocol; this distribution forms a star topology. When an obstacle is detected, the peripheral sensor alerts the central device. The central device then takes a picture of the obstacle using the camera module and uploads it to the cloud image recognition service. When the server responds, the result is transformed into an audible warning.

The information flow of the system works in the following way. It starts at the peripheral sensor: the Wi-Fi microcontroller parses the information from the ultrasound sensor to determine whether an obstacle has been found. If an obstacle appears, the peripheral sensor sends a warning to the central device. The central device then takes a picture and, if there is internet access, uploads it to the cloud image recognition service. After that, the central device waits for the server's response. When the cloud service sends the result, the central device parses the information. The resulting data is transformed into an audio file, which is played through the speaker. Figure 5 illustrates the Blind Guide system.

Fig. 5.
figure 5

Blind Guide system.
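The central device's part of this flow can be sketched as a single handler. The injected helpers (`take_picture`, `recognize`, `speak`) are hypothetical stand-ins for the camera module, the cloud recognition service, and the audio output, so the control flow can be shown independently of the hardware:

```python
def handle_obstacle(distance_cm, take_picture, recognize, speak, online=True):
    # Central-device flow: photograph the obstacle, ask the cloud
    # service what it might be (only when online), then warn the user.
    labels = recognize(take_picture()) if online else []
    message = "Obstacle at {:.0f} centimeters".format(distance_cm)
    if labels:
        message += ", possibly a " + labels[0]
    speak(message)
    return message
```

Note that when there is no internet access, the handler still warns the user with the distance alone, which mirrors the prototype's behavior of degrading gracefully rather than staying silent.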

5 Evaluation

To find out whether the Blind Guide solution can help the visually impaired in their daily lives, tests were performed. We checked the prototype with five blind people of different ages. The participants were informed about the objective of the experiments and instructed in how the prototype works. All the volunteers gave consent regarding the publication of photos and other information related to the investigation. The introduction to the system proceeded as follows. First, we explained to the volunteers how the prototype works. We then let them touch and explore the peripheral sensors, the central device, and the straps to get used to the different parts. Finally, we explained how to wear the system and let the volunteers put it on by themselves. We did not give them physical help in this process because we wanted to know whether the system can easily be worn by a blind person without any help. This first test was a success: with a little explanation, the volunteers were able to attach the prototype to their bodies.

Next, we performed tests indoors and outdoors. For these tests, two different setups of the system were used. One used only two peripheral sensors, one on the head and the other on the chest. The other configuration used four sensors: one on the head, one on the chest, and one on each leg. Both configurations used the central device.

To deliver the audible warnings, the central device has a standard headphone jack, to which we can attach regular headphones or a mini speaker. The choice depends on the preference of the blind person using the system. In our research, we found that some visually impaired people do not like to use headphones because they have developed their sense of hearing and rely on it to perceive the world. Others like to use headphones, but not in the usual way: they prefer to hold the headphone close to their ear. With a regular headphone jack, we ensure that the system can adapt to the user's preference.

Figure 6 shows a picture of a volunteer trying our prototype. The volunteer is the president of the PROCODIS foundation, an organization whose goal is to teach about disabilities through standard communication media.

Fig. 6.
figure 6

A volunteer tries the prototype of the Blind Guide system.

Our tests indicate that our prototype requires at most fifteen minutes of practice to understand how it works. In the indoor evaluation, we tested the system by identifying chairs, tables, doors, walls, and ordinary objects inside an office. In the outdoor evaluation, we tested walking on the sidewalk, crossing the street, and avoiding common objects on the road such as light posts or trash cans. The system performed as expected in both scenarios. Each peripheral sensor works in the same way, but our volunteers noted that the most valuable of all is the one placed on the head, because there is no device on the market that helps them avoid high obstacles that can hit them in the head, such as branches.

Our system had an unexpected use. The president of the PROCODIS foundation is an expert in rehabilitation for people who recently became blind. He told us that our system, without any modification, could help with the recovery process, because putting the system on exercises blind people's motor skills. When a person becomes blind, they need to improve their motor skills to compensate for the missing sense. Our prototype can also help sightless people overcome dejection: when a blind person learns how to use a technology made for the blind, they rejoice, because they realize that they can still lead a healthy life.

After the tests, we received feedback from the volunteers. First, the distance for detecting an obstacle should be around two and a half meters. During the tests, the prototype used a one-meter range for detecting obstacles; by the time the prototype gave the audible warning, a walking user had already hit the obstacle. Second, a human voice must be used for the audible alert; the voice of our prototype sounded robotic and was not pleasant to their ears. Finally, the peripheral sensors and the central device must be as small as possible. The sensors of our system are not comfortable to wear, and they need straps to attach to the body.

6 Conclusions and Future Work

We have developed a blind guide to help sightless people navigate indoor and outdoor scenarios. We tested the Blind Guide prototype with visually impaired people and obtained satisfactory results. Using this system, users can identify and avoid obstacles in real-life scenarios, which is very helpful, especially in unfamiliar situations. Given the materials employed in this prototype, we can assure that this is a low-cost solution for the visually impaired.

An important limitation of our prototype is that it requires internet access for the object recognition. This problem can be solved by adding a 4G, 3G, or GSM module to the central device. Another solution is to develop an image recognition service that runs on the central device itself and does not require internet access.

From the results of our tests, we found that our system can also be used in the rehabilitation of people who recently became blind. In the future, we want to create two systems: one for rehabilitation and the other for daily use in indoor and outdoor scenarios.

We also want to test new ways for the system to give feedback to the users. This prototype only provides audible warnings. Haptic feedback can be more discreet and comfortable in some scenarios.

The system also needs an improvement regarding charging the devices: it must be easy for anyone to charge them.

We also want to increase the number of cameras the system uses, with the purpose of improving the effectiveness of obstacle detection and environment mapping. The camera takes pictures good enough for recognition only under good light conditions; in the future, we want to try different cameras to solve this lighting problem.

Our current prototype does not give a description of the environment. This feature could enhance the process of cognitive mapping of the environment, giving the user more confidence in exploring new areas. We believe that face recognition would also be a good feature, to help users interact with others.