1 Introduction

Disability should not be an obstacle to success [2].

Results of international studies show a continuing increase in the number of elderly people with physical disabilities worldwide. As physical disabilities worsen, these people increasingly lose their autonomy and become dependent on the help of caretakers and nursing staff for daily tasks. This situation can lead to a financial overload of the social system in question.

One way to prevent this situation is to introduce new technologies into the health assistance process. Moreover, the relationship between people with disabilities and society could be improved if we consider that the majority are fit for certain types of work and that the only thing holding them back is, for example, the degree of access to a given environment or space. The main application, and the greatest impact, is to promote the mobility of people with disabilities by addressing the problem of limb movement and to improve their lives by allowing access to buildings and urban spaces designed for people without disabilities. The European Commission has adopted a strategy to remove these obstacles.

The main objective of the project was to develop an integrated hardware and software platform for people with movement disabilities. By implementing this project, we are giving hope to persons with disabilities who are in special situations.

Robots are one of the promises of the future. Although a universal robot that knows and does everything has not yet emerged, robotic mechanisms can already perform specific tasks very well.

Robotics, from the application point of view, can be divided into industrial robotics and service robotics.

Industrial robotics is already a saturated domain: industrial robot applications have found their clear uses, and no spectacular developments are expected in the near future. Service robotics, by contrast, is in full swing, with new applications still being identified.

Following the introduction of simple commands, the system must be able to independently plan and execute a series of actions necessary to accomplish a particular task.

In the literature there is a unitary name for this kind of human-machine cooperation: shared-mode control (SMC). Within SMC there is a continuous switch between the autonomous actions of the system and the direct commands of the user [6, 7]. Thus, in SMC the user can intervene correctively in a self-executed action if an erroneous situation is detected. As an example, a point-to-point motion to a selected final position can be recalled; in the event of a collision hazard, the user can influence the trajectory by direct command.

2 Development of the Project

The block diagram for voice control used in the project is shown in Fig. 1.

Fig. 1 Block diagram for voice control

Microphone: a condenser type, powered by the ready-made supply circuit on the EasyVR module.

EasyVR Voice Recognition Module: takes the analog audio signal from the condenser microphone, amplifies it, and sends it to an ADC for analysis. The signal is sampled, quantized, filtered, and encoded into a digital sequence.

The voice recognition program is structured with two priority levels. In the first level, a single “ROBOT” command is introduced. This command prepares the robot for the following instructions (Fig. 2).

Fig. 2 EasyVR module [3]

After the "ROBOT" command disconnects the electromagnetic braking system, a new instruction from the command set can be activated. The commands are recorded in the internal library of the EasyVR module. We tried commands in Romanian, in English, and in combinations of the two to obtain a clear distinction between commands.

In the current library we loaded the voice tags, each assigned an identification number:

  • ÎNAPOI/BACKWARDS = 0,

  • DREAPTA/RIGHT = 1,

  • STÂNGA/LEFT = 2,

  • ÎNAINTE/FORWARDS = 3,

  • STOP = 4,

  • ROBOTSTOP = 5.

None of the second-level commands is accepted before the first-level command has been given.

The ROBOTSTOP command resets the robot to the first command level.
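
To make the two-level structure concrete, the following sketch shows how the command dispatch could look on the Arduino. It is a minimal illustration, not the program actually loaded on the robot: readVoiceCommand() is a hypothetical stand-in for the EasyVR recognition call, the brake and drive helpers are stubs for the motor-driver code described later in the paper, and the ID assumed for the wake word "ROBOT" is ours.

// Minimal sketch of the two-level command structure (illustrative only).
const int CMD_BACKWARDS = 0;   // INAPOI/BACKWARDS
const int CMD_RIGHT     = 1;   // DREAPTA/RIGHT
const int CMD_LEFT      = 2;   // STANGA/LEFT
const int CMD_FORWARDS  = 3;   // INAINTE/FORWARDS
const int CMD_STOP      = 4;   // STOP
const int CMD_ROBOTSTOP = 5;   // ROBOTSTOP
const int CMD_ROBOT     = 6;   // first-level wake word (ID assumed here)
const int CMD_NONE      = -1;  // nothing recognized

bool robotArmed = false;       // true after the first-level "ROBOT" command

void setup() {}

void loop() {
  int cmd = readVoiceCommand();          // hypothetical EasyVR wrapper
  if (!robotArmed) {
    if (cmd == CMD_ROBOT) {              // first level: only "ROBOT" is accepted
      releaseBrake();                    // disengage the electromagnetic brake
      robotArmed = true;
    }
    return;                              // second-level commands are ignored here
  }
  switch (cmd) {                         // second level: motion commands
    case CMD_FORWARDS:  driveForward();  break;
    case CMD_BACKWARDS: driveBackward(); break;
    case CMD_LEFT:      turnLeft();      break;
    case CMD_RIGHT:     turnRight();     break;
    case CMD_STOP:      stopMotors();    break;
    case CMD_ROBOTSTOP:                  // stop, brake, fall back to level one
      stopMotors();
      engageBrake();
      robotArmed = false;
      break;
    default: break;                      // CMD_NONE or unknown: do nothing
  }
}

// Stubs so the sketch compiles; the real implementations are hardware-specific.
int  readVoiceCommand() { return CMD_NONE; }
void releaseBrake()  {}
void engageBrake()   {}
void driveForward()  {}
void driveBackward() {}
void turnLeft()      {}
void turnRight()     {}
void stopMotors()    {}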

The ARDUINO MEGA2560 module: it is based on the Atmel ATmega2560 microcontroller (Fig. 3).

Fig. 3 ARDUINO MEGA2560 [3]

Connecting the Arduino to the EasyVR module is simple: the two modules are stacked using pin headers already prepared in a male-female configuration. Connection to a PC is made through the USB port, and the Arduino programming software is free of charge. Because our mobile platform uses two electric motors, their combined control provides forward, reverse, left, right, and pivot motions. For safety, there is an electromagnetic braking system.

In our case, the braking system is released by the main "ROBOT" command and engaged by the "ROBOTSTOP" command. The program loaded on the Arduino module is presented later in the paper.

The program takes into account the command history of the drive motors. If both motors were "on" when a left (or right) command was issued, both motors are set to "on" again after the new command has been executed; the same procedure applies when the motors were "off".

For a turn, we opted to stop one of the motors if the platform was in motion. If the platform was stationary, the corresponding motor was run in reverse (counterclockwise).
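
A minimal sketch of this history-dependent turn logic follows; the state variables, the fixed turn duration, and the setMotor() helper are our illustrative assumptions, not the original program, and the fragment is meant to be dropped into the main sketch rather than compiled alone.

// Illustrative fragment: turn logic with motor-state history.
enum Side { LEFT_MOTOR, RIGHT_MOTOR };
enum Dir  { FORWARD, REVERSE };

const unsigned long TURN_TIME_MS = 1000;   // assumed fixed turn duration

bool leftOn  = false;                      // last commanded state, left motor
bool rightOn = false;                      // last commanded state, right motor

// Hypothetical helper wrapping the PWM and relay outputs of the driver board.
void setMotor(Side side, bool on, Dir dir) { /* hardware-specific */ }

void turn(Side side) {
  if (leftOn && rightOn) {
    // Platform in motion: stop the motor on the turning side, then restore
    // the previous state with both motors back "on".
    setMotor(side, false, FORWARD);
    delay(TURN_TIME_MS);
    setMotor(LEFT_MOTOR, true, FORWARD);
    setMotor(RIGHT_MOTOR, true, FORWARD);
  } else {
    // Platform stationary: run the corresponding motor in reverse to pivot,
    // then return it to "off", matching the previous state.
    setMotor(side, true, REVERSE);
    delay(TURN_TIME_MS);
    setMotor(side, false, FORWARD);
  }
}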

Motor driver module: the driver module required for supplying power to the motors was built using transistors. The diagram is presented in Fig. 4.

Fig. 4 MOS-FET transistor control

MOSFET transistors have the advantage of very low power consumption in the control circuit, but because of their relatively high drain-source on-resistance, the conduction losses are high.

For an IRFP360PBF MOSFET (N-channel, 400 V, 23 A, 280 W, 0.2 Ω) carrying a 23 A drain current, the catalog gives a drain-source on-resistance of 0.2 Ω.

$$ P = R \cdot I^{2} = 0.2 \times 23^{2} \approx 105\,\text{W} $$
(1)

These 105 W are dissipated as heat.

For a bipolar transistor such as the TYPE 35, the collector-emitter voltage in the ON state is 1.8 V, giving

$$ P = U \cdot I = 1.8 \times 23 \approx 41\,\text{W} $$
(2)

For these reasons, we used a hybrid device, the IGBT, which is voltage-controlled like a MOS transistor while conduction is handled by a bipolar transistor; at 23 A this reduces the conduction loss from about 105 W to about 41 W.

The IGBT is a three-terminal power semiconductor device used mainly as an electronic switch; newer devices are noted for high efficiency and fast switching.

The IGBT used for motor speed control was driven with a PWM signal generated by the Arduino module, and a relay was used to reverse the direction of wheel rotation.

The printed circuit layout was designed with the Pad2Pad program, and the board was produced photographically (Fig. 5).

Fig. 5 Printed circuit design

The board was prepared by coating it with a photosensitive paint (POSITIV 20) after preliminary cleaning.

For developing, we used a weak NaOH solution until the exposed ultraviolet-sensitive paint was removed. Etching was done using a FeCl3 solution. After making the wiring, we proceeded to mounting and soldering the electronic components (Fig. 6).

Fig. 6 Motor drivers

The output pins used on the Arduino Mega2560 board are:

  1. Pin 31: right motor speed control
  2. Pin 33: right motor direction control
  3. Pin 35: left motor speed control
  4. Pin 37: left motor direction control
  5. Pin 41: electromagnetic brake control.
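
A short sketch of how these assignments could be initialized and used is given below. The duty-cycle value and relay polarities are our assumptions; note also that hardware PWM via analogWrite() is only available on pins 2-13 and 44-46 of the Mega2560, so with the listed speed pins a software-generated PWM would be needed in practice.

// Pin assignments as listed above (Arduino Mega2560); illustrative sketch only.
const int PIN_SPEED_RIGHT = 31;  // right motor speed (PWM signal)
const int PIN_DIR_RIGHT   = 33;  // right motor direction relay
const int PIN_SPEED_LEFT  = 35;  // left motor speed (PWM signal)
const int PIN_DIR_LEFT    = 37;  // left motor direction relay
const int PIN_BRAKE       = 41;  // electromagnetic brake control

void setup() {
  pinMode(PIN_SPEED_RIGHT, OUTPUT);
  pinMode(PIN_DIR_RIGHT,   OUTPUT);
  pinMode(PIN_SPEED_LEFT,  OUTPUT);
  pinMode(PIN_DIR_LEFT,    OUTPUT);
  pinMode(PIN_BRAKE,       OUTPUT);
  digitalWrite(PIN_BRAKE, LOW);        // brake engaged at power-up (polarity assumed)
}

// Drive both motors forward at the given duty cycle (0-255, value assumed).
void driveForward(int duty) {
  digitalWrite(PIN_DIR_RIGHT, LOW);    // relay released = forward (assumed)
  digitalWrite(PIN_DIR_LEFT,  LOW);
  analogWrite(PIN_SPEED_RIGHT, duty);  // needs a PWM-capable pin, see note above
  analogWrite(PIN_SPEED_LEFT,  duty);
}

void stopMotors() {
  analogWrite(PIN_SPEED_RIGHT, 0);
  analogWrite(PIN_SPEED_LEFT,  0);
}

void loop() {
  driveForward(128);                   // example: half duty cycle
  delay(2000);
  stopMotors();
  delay(2000);
}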

Since the signals generated by the microcontroller have values between 0 and 5 V and the MOS transistor requires a higher gate voltage, it was necessary to introduce an optocoupler into the circuit; this also ensures galvanic separation between the high voltage applied to the motors and the Arduino module (Fig. 7).

Fig. 7 Optical command diagram [5]

The electronic installation was first tested in the laboratory and subsequently mounted on a wheelchair (Fig. 8).

Fig. 8 Testing in laboratory conditions

For our project we used a wheelchair powered by two 250 W DC motors supplied at 24 V (Fig. 9).

Fig. 9 Test wheelchair

Distance sensor: in this project we used a Sharp GP2Y0A21YK infrared sensor for object detection (Fig. 10).

Fig. 10 Optical distance sensor [3]

The Sharp distance sensor can be used with Arduino to measure distance to various surrounding objects.

The Sharp GP range includes three sensor types, each most effective over a particular distance range: a proximity sensor, effective between 3 and 40 cm; a medium-distance sensor, effective between 10 and 80 cm; and a long-distance sensor, effective between 15 and 150 cm.

The sensor we use is effective between 10 and 80 cm, the range of interest for avoiding collisions with various objects, considering that the travel speed is low.

Because it is an analog sensor whose output voltage varies with distance, it is connected to one of the Arduino's analog inputs.

The device has three pins: two supply pins (GND and VCC) and a third pin that indicates the distance through the potential difference present on it.

The voltage/distance feature is illustrated in Fig. 11.

Fig. 11 Characteristic curve for the Sharp GP2Y0A21YK sensor [3]

The sensor code provided by the manufacturer is simple, does not use many CPU resources, and does not affect the voice recognition program. An example of how to calculate the distance [3]:

void setup() {
  Serial.begin(9600);                              // serial monitor for the readings
}

void loop() {
  int valoareSenzor = readDistanceMediata(10, 0);  // averaged distance, analog pin 0
  Serial.print("Valoare senzor: ");                // "sensor value" (Romanian)
  Serial.println(valoareSenzor);
  delay(100);
}

// Returns the distance in cm, averaged over 'count' readings of analog 'pin'.
int readDistanceMediata(int count, int pin) {
  float sum = 0;                                   // float, so readings are not truncated
  for (int i = 0; i < count; i++) {
    float volts = analogRead(pin) * (5.0 / 1024);  // raw ADC value to volts
    float distance = 65 * pow(volts, -1.10);       // voltage-to-cm fit for the GP2Y0A21YK
    sum = sum + distance;
    delay(5);                                      // short pause between samples
  }
  return (int)(sum / count);
}
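
Since the 10-80 cm range is used to avoid collisions, a natural extension of the listing above is to gate motion on the measured distance. The 30 cm threshold and the stopMotors() call below are our assumptions, not part of the manufacturer's code; readDistanceMediata() and stopMotors() refer to the earlier listings.

// Illustrative extension: stop the platform when an obstacle gets too close.
const int SAFETY_DISTANCE_CM = 30;           // assumed threshold within the 10-80 cm range

void checkObstacle() {
  int distance = readDistanceMediata(10, 0); // averaged reading, as in the listing above
  if (distance < SAFETY_DISTANCE_CM) {
    stopMotors();                            // hypothetical helper cutting both PWM outputs
  }
}

Called periodically from loop(), such a check would override the active voice command until the obstacle is cleared.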

3 Experimental Results

Following the tests, we obtained a voice-controlled robot responding to basic commands.

The main difficulty faced by voice recognition programs is that the voices of two people are never exactly alike, and even the same person's voice may vary in certain situations. Another problem is the speech recognition rate.

All voice recognition programs sometimes make mistakes. Microsoft claims that 95% of words are recognized correctly, that is, only one word in twenty is wrong. Some applications perform better, but none exceeds a 97% rate [4].

The system is designed so that it can be mounted on any wheelchair that uses DC motors, and with small changes to the microcontroller software it can also be adapted to wheelchairs or other platforms that use special motors.

4 Conclusions

By implementing this project, we are giving hope to persons with disabilities who are in special situations.

The proposed solution is affordable, which is especially important because these people often do not have high incomes to invest in sophisticated platforms.

In the proposed project we did not disregard the safety and protection rules, especially since the people using such a device cannot take care of themselves.

The novelty of the project consists in the fact that a good result was achieved by programming a microcontroller of a type long used in the industrial robot market.

Voice control also has drawbacks such as:

  • Sensitivity to environmental noise

  • Inability to use in cases of diseases that degrade the voice.

Connecting the microcontroller to the power stage was done with minimal effort, resulting in a low price.

It is very important that other extensions are developed in the future such as:

  • Foot control if only upper limbs are affected,

  • Tongue or eye control for more serious conditions, which can be implemented on the same microcontroller (which still has free I/O ports).

For people with these conditions it is also necessary to monitor heart rate, breathing, blood pressure, etc.; such applications can be further deployed on the same module that handles the voice commands.