1 Introduction

Nearly 40 million people in India, including 1.6 million children, are blind or visually impaired. Human life depends on all five senses, and the most important of them is vision. To understand life without vision, we have to place ourselves in the shoes of visually impaired people: they lead lives much like ours, but in their own way. In the United States, 1.1 million people are blind and about 50,000 lose their sight each year. Much of this could be avoided, since most cases of blindness are preventable. Visually impaired people also have a hard time recognizing consumables and keeping track of quantities. The difficulties faced by people with disabilities are vast.

The objective of our project is to create something that makes the lives of visually impaired people a little better than they used to be. The items they consume carry expiry dates of which they may be unaware, and consuming expired food is harmful and can cause illness. They also need to keep track of the quantities around the house, and for that, Iris provides a simple and effective solution. Given the tribulations that visually impaired people face throughout their lives, our primary aim is to reduce these problems and make their lives much simpler through new technology. The assistive technology identifies what type of consumable is present within a container, which is connected to an Android application that provides the necessary information regarding the consumable. When the item has an expiry date, that information is also provided. All these components are integrated to provide a complete solution for visually impaired people.

In this project, we are developing a smart container integrated with a mobile application that helps visually impaired people recognize the consumables in a container by simply touching it. When touched, the mobile application produces an audio output describing the contents of that container. Alert messages are also provided to give information on the expiry date of consumables and on low-quantity items. This system is very helpful for people with dementia and also helps visually impaired people avoid accidental consumption of medicines. We also provide a system that detects leakage in gas cylinders, with flame intensity controllable through our mobile application, as well as object detection and product suggestions to help users purchase new products.

2 Related Works

In our system, we use load cells for weighing substances in the container. Test results show that the load cell has high accuracy and high stability under combined error sources such as hot and cold temperatures [1]. The load cell also gives accurate values when measuring fluids [2]. Thus, the load cell is highly suitable for weighing different types of substances. Commonly used touch-sensor buttons can be replaced by a more cost-effective assembly that uses the variation in the amount of infrared radiation falling upon a photodiode to sense the proximity of a human finger [3]. A Raspberry Pi can acquire sensor readings, convert analogue values into digital form and pass them to a database, so we chose the Raspberry Pi for our system [4]. For detecting gas leakage, we use the MQ-6 sensor. Being highly sensitive, the MQ-6 detects the presence of LPG in concentrations from 200 to 10,000 ppm. It has an outer membrane coated with tin dioxide (SnO2); upon contact with propane and butane, the constituents of LPG, this coating reacts and produces an output that is converted into an electrical voltage [5].

So, it is clear that the MQ-6 sensor is suitable for detecting gas leakage. ROSmotic is a smart-home system operated by a smartphone app that is accessible to people with visual disabilities and is controlled through touch or voice commands. A three-layer architecture was implemented so that new clients can be connected without modifying the control system, and the UI consists of voice and touch control; however, this application only works on the iOS platform [6]. Typhlex describes the use of motion gestures to control mobile applications and notes that 91% of blind users use an iPhone; but in developing countries like India, most people depend on the Android platform, so we decided to release our application on Android. Moreover, that system is difficult to set up, as it involves an iterative and cumbersome procedure. Typhlex was in its initial stage of development and adjusted its design with feedback from visually impaired users, a very important step when designing gadgets and applications for the visually impaired, since products otherwise often fail to achieve their desired efficiency [7]. Gablind is an electronic device connected to a smartphone application for obstacle detection for visually impaired people. The system was tested in Indonesia, and the authors succeeded in providing a Bluetooth connection between the application and the system so that Internet access is not required; to enhance productivity, however, we decided to connect our system through Wi-Fi [8]. Gas leak detection can be achieved by means of various MQ-series sensors; after researching those sensors, we decided to use the MQ-6 sensor, which can detect LPG, and we implemented a system that turns off the gas immediately in case of leaks: a serial plotter plots the sensor values continuously, and when the threshold is crossed, the gas supply is cut [9]. A screen reader is an essential function in our application, and gesture control adds further ease of use, so based on this work we are implementing both voice and gesture control so that visually impaired people can use the best possible UI [10]. As we have seen, IoT is becoming an integral part of daily life.

There are multiple components used in our project. Broadly, there are two types of IoT architectures: three-layer and five-layer. The basic architecture of IoT is the three-layer architecture, consisting of the perception layer, network layer and application layer. The perception layer deals with sensing and actuation, the network layer is responsible for transmitting and processing the information [11], and the application layer delivers a specific application to the user [12]. In the five-layer architecture, two additional layers give more abstraction to the IoT architecture; the five layers are the perception, transport, processing, middleware and application layers [11]. The transport layer is used for sending the values, and the middleware acts as a medium for the data, i.e. it controls the data flow.

The application layer presents the necessary application for using the device. These layered architectures are used in most types of IoT devices. IoT devices can be classified into three tiers: low, middle and high. We have used devices from all these tiers: the IR sensor and MQ-6 gas sensor fall into the low category, the ESP8266 falls into the middle category, and the Raspberry Pi falls into the high category. Visually impaired people have a hard time detecting objects in their surroundings, so we can use object detection to help. One commonly used object detector is YOLO [14], which achieves a balance between detection performance and computational cost [13].

Faster R-CNN [15] is another candidate model, but it still requires a large computational cost in the testing phase. These models are well known for their detection performance and are worth mentioning; for our usage, however, we use SSD MobileNet, which performs better in the case of a mobile application. As noted above, the load cell is highly suitable for weighing substances, and the usual touch-sensor buttons can be replaced by a more cost-effective assembly that senses the proximity of a human finger through the variation in infrared radiation falling on a photodiode [16].

3 Experimental Works

Figure 1 shows the workflow of our proposed system, illustrating how the connections are made, together with the flowchart of our proposed model.

Fig. 1 Representation of (a) workflow diagram and (b) flowchart


3.1 Sensor Attached Container

A container fitted with a load cell and an IR sensor is implemented to measure the amount of content present in that particular container. Here, we use a load cell instead of an ultrasonic sensor to produce accurate results; the load cell gives accurate values when measuring fluids as well [17].
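To make the load-cell readings meaningful, the raw counts from the HX711 amplifier must be converted to grams. Below is a minimal sketch of the usual two-point calibration; the count values and the 500 g test weight in the example are illustrative assumptions, not measurements from our setup.

```python
# Two-point load-cell calibration sketch. A read_raw() routine (not shown)
# would return the averaged 24-bit count from the HX711 amplifier.

def calibrate(raw_empty: int, raw_known: int, known_grams: float):
    """Return (offset, scale) such that grams = (raw - offset) / scale."""
    offset = raw_empty                              # count with the platform empty
    scale = (raw_known - raw_empty) / known_grams   # counts per gram
    return offset, scale

def to_grams(raw: int, offset: int, scale: float) -> float:
    return (raw - offset) / scale

# Example: empty platform reads 84200 counts, a 500 g weight reads 191450.
offset, scale = calibrate(84200, 191450, 500.0)
print(round(to_grams(147000, offset, scale), 1))    # -> 292.8 g
```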

3.2 Mobile Application

The application is designed with two log-in options, one for blind or visually impaired people and the other for normal people. After installing the application, the user has to enter the basic details requested and choose the appropriate log-in option. Once all the details are entered, the user reaches the homepage, where the following options are provided:

  • Add to The Container: To add details of the container.

  • View Container: To view the container details.

  • Quick Bucket List: To find low-quantity products.

  • Usage Analytics: To analyse usage for each month.

  • Object Detection: To detect objects present in the nearby environment.

  • Gas Control: To control the gas flame intensity.

  • Iris Assist: Group chat-based assistance.

When the ‘Add to The Container’ button is clicked, a QR code scanner opens, and the QR code pasted on the container must be scanned. The QR code contains details about the container, including the container ID, maximum weight and so on. After all details are submitted, the corresponding container is ready to use. When someone touches the container, the attached IR sensor sends a HIGH signal to the MCU to indicate the touch. The container parameters read by the MCU, such as weight (calibrated to the container level) and expiry date, are then sent via Wi-Fi to the Firebase real-time database, from which the mobile application reads the data, as sketched below. This idea helps blind people become self-sufficient to a certain extent, since it provides an easy text-to-speech mechanism for them. The product also helps people keep track of items and their expiry dates, and prevents accidental consumption of medicines.
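A minimal MicroPython-style sketch of this sensing loop on the ESP8266 is given below. It assumes the board has already joined Wi-Fi; FIREBASE_URL, CONTAINER_ID, the pin number and the read_weight_grams() helper are hypothetical stand-ins, not our exact firmware.

```python
# Hypothetical container firmware sketch (MicroPython on ESP8266).
import utime
import ujson
import urequests
from machine import Pin

IR_PIN = Pin(4, Pin.IN)          # IR proximity sensor output (assumed pin)
FIREBASE_URL = "https://<project>.firebaseio.com"  # placeholder project URL
CONTAINER_ID = "container_01"    # matches the ID stored in the QR code

def read_weight_grams():
    # Stand-in for the HX711 load-cell read described in Sect. 3.1.
    raise NotImplementedError

def push_reading():
    payload = {"weight_g": read_weight_grams()}
    # PATCH updates only this container's weight field in the database.
    resp = urequests.patch(
        "{}/containers/{}.json".format(FIREBASE_URL, CONTAINER_ID),
        data=ujson.dumps(payload))
    resp.close()

while True:
    if IR_PIN.value() == 1:      # HIGH while a hand is near the lid
        push_reading()           # the app then reads this node and speaks it
        utime.sleep(2)           # debounce repeated touches
    utime.sleep_ms(100)
```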

3.3 LPG Flame Intensity Controller and Gas Leak Detection

Visually impaired people find it very difficult to use an LPG gas stove. To overcome this problem, we have devised a mechanism for controlling the LPG flame. A solenoid valve is used to control the gas flow, and it is driven by a Raspberry Pi unit via a solenoid driver circuit. The gas tube is fitted with the solenoid valve connected to the main gas cylinder, and the smartphone application sends messages to the Raspberry Pi via Firebase. Based on the input from the user, the valve can be fully shut, fully open or anywhere in between. The solenoid driver is built using a TIP122 NPN transistor, which acts as a switch controlling the voltage across the solenoid valve and thereby the gas flow through it.

A 1N4007 diode is used as a flyback diode to protect the transistor and the Raspberry Pi from the reverse current induced when the valve coil switches off. The solenoid driver is connected to a GPIO pin of the Raspberry Pi, and a PWM output is used to simulate an analogue voltage that controls the valve.

By programmatically controlling the PWM duty cycle, we can simulate an analogue voltage from the Raspberry Pi, as sketched below. The MQ-6, a sensor specialized in detecting LPG and other gases whose constituents are propane and butane, is used in the proposed system [18].
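The following is a minimal sketch of this control loop using the standard RPi.GPIO library; the BCM pin numbers, the 100 Hz PWM frequency and the active-high reading of the MQ-6 module's digital comparator output are assumptions about the wiring, not our exact circuit.

```python
# Valve control and leak shut-off sketch (RPi.GPIO on Raspberry Pi).
import time
import RPi.GPIO as GPIO

VALVE_PIN = 18    # PWM into the TIP122 solenoid driver (assumed pin)
MQ6_DO_PIN = 23   # MQ-6 module comparator output (assumed pin and polarity)

GPIO.setmode(GPIO.BCM)
GPIO.setup(VALVE_PIN, GPIO.OUT)
GPIO.setup(MQ6_DO_PIN, GPIO.IN)

pwm = GPIO.PWM(VALVE_PIN, 100)   # 100 Hz carrier
pwm.start(0)                     # start with the valve fully shut

def set_flame(percent):
    """0 = fully shut, 100 = fully open; duty cycle simulates an analogue voltage."""
    pwm.ChangeDutyCycle(max(0, min(100, percent)))

try:
    set_flame(60)                        # e.g. intensity received via Firebase
    while True:
        if GPIO.input(MQ6_DO_PIN):       # leak detected above the threshold
            set_flame(0)                 # close the valve immediately
            # an alert flag would also be written to Firebase here
        time.sleep(0.2)
finally:
    pwm.stop()
    GPIO.cleanup()
```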

3.4 Iris Assist

Iris Assist is a group chat feature that provides assistance to visually impaired people through sighted volunteers. The feature works much like a WhatsApp group chat, in which volunteers reply to queries from visually impaired users. Every day, sighted volunteers lend their eyes to help blind and low-vision people lead more independent lives. Visually impaired people may need help distinguishing colours or currencies, reading instructions, navigating new surroundings, etc., and our volunteers guide them through all these problems.
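Since the rest of the system already stores its data in the Firebase real-time database, chat messages can be kept there as well. Below is a minimal sketch of posting a message over Firebase's REST API; the project URL, node name and message fields are illustrative assumptions.

```python
# Posting an Iris Assist message over the Firebase REST API.
import time
import json
import requests

FIREBASE_URL = "https://<project>.firebaseio.com"   # placeholder project URL

def send_message(sender: str, text: str):
    msg = {"sender": sender, "text": text, "ts": int(time.time())}
    # POST appends the message under an auto-generated push key.
    resp = requests.post(FIREBASE_URL + "/iris_assist/messages.json",
                         data=json.dumps(msg))
    resp.raise_for_status()

send_message("volunteer_01", "That is the blue tin, third from the left.")
```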

3.5 Object Detection

Object detection is a computer technology related to computer vision and image processing that deals with detecting instances of semantic objects of a certain class in digital images [19]. Recent advancements in computer vision and deep learning have given rise to reliable methods of object detection [20]. Because visually impaired people face hardships in recognizing objects in their day-to-day lives, we plan to use this rapidly maturing deep learning technology: with its help, the app can identify various objects in the real world. Once objects are detected, audio feedback is given so that the user knows what object is in front of them.

This helps them identify nearby objects and gain an intuition about their surroundings and the placement of items. Using TensorFlow, we can create a model trained on the COCO dataset, which contains 80 object classes, more than enough to detect most common objects placed around a person. The detection model with the greatest accuracy would be YOLOv3, which has no supported version for mobile applications, so we considered YOLOv2 and SSD (single-shot detector) MobileNet for detection. These models are converted from the original TensorFlow architecture into TensorFlow Lite, whose primary purpose is to run inside a mobile application. In the end, we found that SSD MobileNet works best for our project; a sketch of the inference step follows.
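A minimal sketch of running the converted model with the TensorFlow Lite interpreter is shown below; the model filename, the 0.5 score threshold and the abbreviated label map are assumptions, and the output order (boxes, classes, scores) follows the standard TFLite SSD post-processing op.

```python
# SSD MobileNet inference sketch with the TensorFlow Lite interpreter.
import numpy as np
from tflite_runtime.interpreter import Interpreter

COCO_LABELS = {0: "person", 43: "knife", 45: "bowl"}  # abbreviated for brevity

interpreter = Interpreter(model_path="ssd_mobilenet.tflite")  # assumed filename
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

def detect(frame_uint8):
    """frame_uint8: HxWx3 uint8 image, already resized to the model input size."""
    interpreter.set_tensor(inp["index"], frame_uint8[np.newaxis, ...])
    interpreter.invoke()
    boxes = interpreter.get_tensor(outs[0]["index"])[0]    # [N, 4], normalized
    classes = interpreter.get_tensor(outs[1]["index"])[0]  # [N] class ids
    scores = interpreter.get_tensor(outs[2]["index"])[0]   # [N] confidences
    return [(COCO_LABELS.get(int(c), "object"), float(s), b.tolist())
            for b, c, s in zip(boxes, classes, scores) if s > 0.5]

# Each returned label is then spoken via text-to-speech as audio feedback.
```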

4 Result

Iris is capable of providing a talk-back facility describing the contents of a container once the user's proximity is detected through the IR sensor. Moreover, the LPG gas stove can be turned off using the Iris mobile application, and in case of gas leaks, the solenoid valve closes automatically and an alert appears in the application. The mobile application offers two sign-in functions, Google and biometric log-in, both of which lead to the homepage, which in turn navigates to the functionalities described above. All data from the hardware is stored in and retrieved from Firebase, and the same data is used for analysis so that the user can track usage insights. The same load-cell platform principle used for measuring the quantity of contents in a container can be applied to measure the LPG quantity remaining in the gas cylinder by using a larger platform and a 20 kg load cell. Hence, a smart kitchen model can be implemented using our hardware and mobile application.

The following figures illustrate the results we obtained by implementing the system. Figure 2a shows the platform made for placing the container in the kitchen; we designed our own platform using two acrylic sheets with spacers in between to hold the load cell. Figure 3a shows the load cell attached to the platform, through which we measure the amount of substance placed on top, together with the HX711 unit and ESP8266 NodeMCU used to retrieve the readings and send them to Firebase. Figure 2b shows the container used to store various groceries in the kitchen; it is placed on top of the platform, with two IR sensors on its lid. Upon detecting proximity, that is, when the user brings his or her hand towards the lid, the IR sensor is activated and the corresponding view page opens in the application. As shown in Fig. 3b, each container carries a separate QR code identifying the contents inside.

Fig. 2 Representation of (a) the platform and (b) the container

Fig. 3 Representation of (a) load cell and (b) QR code

The gas detection system contains the following components:

Figure 4a shows the solenoid valve fitted in the gas tube to regulate the gas flow, along with the circuitry used; we used a Raspberry Pi 4 Model B as the control unit for this system, and the flame can be shut off through our mobile application as shown in Fig. 4b. The following are screenshots of our mobile application:

Fig. 4 Representation of (a) the solenoid valve and (b) turn-off button

Figure 5a shows our homepage UI, which contains six different buttons. The first button, ‘Add To The Container’, lets the user register a new container by scanning its QR code; upon clicking it, the user is redirected to a page where the QR code can be scanned and the relevant information regarding content and expiry date can be entered. Figure 5b shows the gas controller page, where the user can turn off the gas remotely; in case of gas leaks, this button is activated automatically, and an SOS button is also provided to handle emergency situations. Figure 5c shows the chat feature, where like-minded peers can engage in conversation. These are the results obtained from our work, and we plan to include more features in our application, such as a smart kitchen environment for both normal and visually impaired people.

Fig. 5 Representation of (a) homepage, (b) gas controller and (c) Iris Assist

We have compared our work with two other works related to kitchen inventory management: the Sims system [21] and the ZigBee system [21], both of which are similar to the device we have made (Table 1).

Table 1 Comparison with ZigBee and Sims system

5 Conclusion and Future Work

In this paper, the main focus is on IoT technology along with deep learning. A mobile application is being developed for the visually impaired to make them independent in the kitchen. A majority of visually impaired people, especially women, face many difficulties while cooking, such as recognizing various groceries. They often forget to go to the grocery shop, forget the grocery list or lose track of how many grocery items are available at home. From their testimonials, we found that they also find it difficult to use an LPG gas stove, and gas leaks may occur, leading to accidents. This system provides an out-of-the-box solution to all these problems in a user-friendly manner. The mobile application is designed for both visually impaired and normal people to manage their kitchen inventory. Using this system makes it easier to shop for groceries, even on a daily basis, and to keep track of them; it also helps with object detection. The team behind Iris plans to develop a smart spectacle-based system for object detection, navigation and facial recognition in the near future to make this product more effective and available to customers.

We are also planning to create a dual log-in feature for visually impaired and normal people so that normal people can use a more visually appealing version of our application with an improved user interface. Currently, we have developed a very basic UI intended mainly for visually impaired people [22,23,24,25,26,27].