
1 Introduction

The role of sensors is undoubtedly central to the current 4.0 Era. In fact, every enabling technology of the 4.0 paradigm thrives on the raw measurement data provided by sensors. The internet of things (IoT) has contributed to boosting the pervasiveness of sensors in every field. Nowadays, however, the challenge is not only to monitor a physical quantity, but also to make the User’s access to the information provided by the sensors easier and more effective.

An analysis of the operating context suggests the importance of adopting a User-centered approach to improve usability and User experience in 4.0 Era applications [1]. In Industry 4.0 applications, in particular, humans are becoming one component of a highly automated system; therefore, to improve the collaboration between humans and the surrounding 4.0 Era context, it is necessary to enhance human–machine interaction. This is particularly important not only in the industrial sector, but also in the medical one: in both scenarios, the overall effectiveness of any 4.0 process is intrinsically related to the performance of the human actor involved.

A promising communication channel is offered by the combination of brain–computer interface (BCI) and augmented reality (AR). AR is the technology that augments the perception of reality by overlaying digital information on physical objects, thus providing the necessary feedback to the User [2]. BCI, in turn, has the potential to become the “ultimate interface”, through which thoughts can become acts [3, 4].

Starting from these considerations, this work outlines the main results of the research activities related to the use of AR and BCI as sensor-to-human interfacing solutions, in the industrial and medical fields.

First, the general diagram of the envisaged human–sensor interaction is described. Then, three experimental cases that implement the proposed human–sensor interaction model are reported.

In particular, the first case study relates to the adoption of both AR and BCI for the hands-free interrogation of remote sensors. The second experimental case reports on the use of AR to assist an industrial assembly task and on the analysis of the operator’s performance while interacting with the working environment [5]. Finally, the third case reports on an ongoing research activity related to the use of AR in the operating room during surgical procedures: in particular, to give the anesthetist effective access to the data, i.e., the patient’s vitals, coming from the monitoring instrumentation [6].

It should be mentioned that, although the reported cases refer to specific application scenarios, the results and the considerations can be readily extended to any other application field.

2 Schematization of the Proposed User-Centered Design Approach

Figure 1 shows a schematization of the user-centered, combined BCI/AR interface that forms the basis of the approach described in this paper.

Fig. 1 Schematization of the design of the user-centered BCI/AR interface

A wireless sensor network measures the physical quantities of interest from the system to be monitored.

The User wears a BCI/AR interface, which includes AR glasses; noninvasive, dry electrodes for the electroencephalogram (EEG); and a wearable computer for data processing. Basically, the BCI acts as an input interface, whereas the AR acts as an output interface.

BCI allows the User to select among different possible choices (i.e., alternative inputs). In one possible configuration, a BCI may exploit steady-state visually evoked potentials (SSVEPs). Visual stimuli are shown on the display of the AR glasses worn by the operator: each visual stimulus corresponds to a different possible input and flickers at a different frequency. When the User wants to select an input, he stares at the associated flickering stimulus. This naturally generates (“evokes”) electrical activity in the operator’s brain at the same frequency as (or at multiples of) the frequency of the flickering visual stimulus [4, 7]. The brain signal is collected through a portable EEG with wearable dry electrodes and processed by a wearable microcomputer. The real-time, automated analysis of the features of this signal discloses the selection made by the User.
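
As an illustration of this processing step, the following minimal sketch detects which stimulus the User is staring at from a single EEG channel, by comparing the spectral power at the candidate stimulus frequencies (and their second harmonics). The sampling rate, the stimulus frequencies, and the decision rule are illustrative assumptions and do not reproduce the actual pipeline of [4, 7].

```python
# Minimal sketch of SSVEP-based selection, assuming one EEG channel sampled
# at 256 Hz and two hypothetical stimulus frequencies (10 Hz and 12 Hz).
import numpy as np

FS = 256.0                  # sampling rate (Hz) -- assumed value
STIM_FREQS = [10.0, 12.0]   # flicker frequencies of the visual stimuli -- assumed

def classify_ssvep(eeg_window: np.ndarray) -> int:
    """Return the index of the stimulus the User is most likely staring at.

    eeg_window: 1-D array with a few seconds of single-channel EEG.
    The decision rule simply compares the spectral power at each stimulus
    frequency (and its second harmonic) and picks the maximum.
    """
    n = len(eeg_window)
    spectrum = np.abs(np.fft.rfft(eeg_window * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)

    def band_power(f0: float, half_width: float = 0.5) -> float:
        mask = (np.abs(freqs - f0) <= half_width) | (np.abs(freqs - 2 * f0) <= half_width)
        return float(spectrum[mask].sum())

    scores = [band_power(f) for f in STIM_FREQS]
    return int(np.argmax(scores))
```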

If, for instance, the User’s selection concerns which of several sensors’ outputs to visualize, then the output of the selected sensor is shown to the User on the display of the AR glasses.

3 Experiments and Results

3.1 Case #1

Figure 2 shows the practical implementation of the system sketched in Fig. 1. In particular, this application refers to the hands-free interrogation of remote sensors (in this experiment, the sensor output was emulated). The picture on the left in Fig. 2 shows the user wearing the AR glasses (Epson Moverio BT350), the dry noninvasive EEG electrodes, and a wearable computer [4].

Fig. 2 On the left, a picture of a User wearing the AR glasses and the BCI. On the right, what the User sees through the AR glasses: the labels “Humidity” and “Temperature” flicker at different frequencies [4]

The picture on the right of Fig. 2 shows what the User sees through the AR glasses. In practice, the labels “Humidity” and “Temperature” flicker at different frequencies, and each label is associated with the output of the corresponding sensor.
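
For illustration purposes only, the following sketch shows how a label could be made to flicker at a target frequency. The set_visible call is a hypothetical stand-in for the rendering interface of the AR application, which is not detailed here.

```python
# Minimal sketch of a flickering AR label; set_visible(label, bool) is a
# hypothetical rendering call, not the actual Moverio application API.
import time

def flicker(label, freq_hz, duration_s, set_visible):
    """Toggle the visibility of a label at freq_hz for duration_s seconds."""
    half_period = 1.0 / (2.0 * freq_hz)   # one toggle every half cycle
    t_end = time.monotonic() + duration_s
    visible = True
    while time.monotonic() < t_end:
        set_visible(label, visible)
        visible = not visible
        time.sleep(half_period)
```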

When the User stares for approximately one second at either of the labels, the corresponding features are extracted from the acquired EEG signal. This processing takes less than one second. The automated analysis of the EEG signal identifies which selection was made by the User. Finally, the measurement information from the selected sensor is displayed through the AR glasses.
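
The whole interrogation cycle can be summarized by the sketch below, in which acquire_eeg_window, classify, read_sensor, and show_on_ar are hypothetical placeholders for the EEG acquisition, the SSVEP classification, the (emulated) remote sensor query, and the AR rendering, respectively; they do not correspond to the actual implementation in [4].

```python
# Minimal sketch of one hands-free interrogation cycle.
SENSOR_NAMES = ["Humidity", "Temperature"]

def interrogation_step(acquire_eeg_window, classify, read_sensor, show_on_ar):
    eeg = acquire_eeg_window()        # ~1 s of EEG recorded while the User stares
    choice = classify(eeg)            # index of the selected label
    name = SENSOR_NAMES[choice]
    value = read_sensor(name)         # query the corresponding (emulated) sensor
    show_on_ar(f"{name}: {value}")    # display the measurement on the AR glasses
```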

3.2 Case #2

The second case relates to the use of AR as an interface to provide visual instructions to an operator in a typical industrial application, namely the assembly of a product.

The case study consists of an assembly task in which the operator is required to assemble a prototype made of LEGO bricks, for which the assembly instructions are available either on paper or through the AR glasses. Figure 3 (on the left) shows a picture of the assembly station and of an operator carrying out the assembly task following AR instructions. On the right of Fig. 3, an example of an instruction image shown to the operator through the AR glasses is reported [5].

Fig. 3 On the left, a picture of the assembly table and of an operator carrying out the assembly task following AR instructions. On the right, an example of an instruction image shown to the operator through the AR glasses [5]

Comparative experimental tests were carried out on two groups of volunteers, who had to complete the task. One group followed paper instructions, whereas the other followed instructions administered through AR glasses.

The goal of this research activity was to objectively assess the benefits introduced by the adoption of AR. To this end, two figures of merit were measured: (1) the overall time taken by the operator to complete the task; (2) the number of mistakes made by the operator during the execution of the task.
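
By way of example, the following sketch shows how the two figures of merit could be tabulated per group from the trial logs; the entries are placeholder values, not the data collected in [5].

```python
# Minimal sketch: each trial is logged as (group, completion_time_s, n_errors).
from statistics import mean

trials = [
    ("paper", 412.0, 3), ("paper", 395.5, 2),   # placeholder values
    ("AR",    388.0, 0), ("AR",    401.2, 0),   # placeholder values
]

for group in ("paper", "AR"):
    times = [t for g, t, _ in trials if g == group]
    errors = [e for g, _, e in trials if g == group]
    print(f"{group:>5}: mean completion time = {mean(times):.1f} s, "
          f"mean errors = {mean(errors):.2f}")
```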

Results obtained in terms of the operators’ learning curves showed that the group who used AR instructions slightly outperformed the group that used paper instructions. The most notable observation, however, was that the group with AR instructions made almost no errors in the assembly procedure. This confirms the positive effect of using AR as a User interface with the real world. This beneficial effect is expected to be even more marked for more complex assembly tasks.

3.3 Case #3

The third case study reported in this work regards an AR-based system for monitoring the patient’s vitals in the operating room (OR). An optical see-through (OST) headset, worn by the anesthetist or by the surgeon’s assistant, shows in real time the patient’s vitals acquired from the electro-medical equipment available in the OR [6]. A dedicated application was developed to allow hands-free access to the AR content. Using AR glasses as a User interface allows the anesthetist to maintain a higher level of concentration on the task at hand. Experimental tests were carried out by acquiring the vital parameters from two pieces of equipment typically available in the OR, namely a ventilator and a patient monitor. Figure 4 shows the AR visualization of some vital parameters made available on the AR glasses. Because of the stringent requirements of the application, experimental tests were carried out to assess the transmission error rate and the latency: results so far have demonstrated the reliability of the proposed AR-based monitoring system.
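
As a sketch of how such an assessment could be implemented, the snippet below computes the mean latency and the packet error rate of a vitals stream, assuming that each packet carries a sequence number and sender/receiver timestamps; the field names are illustrative and do not reflect the actual protocol used in [6].

```python
# Minimal sketch of latency and error-rate assessment for the vitals stream.
import statistics
from dataclasses import dataclass

@dataclass
class VitalsPacket:
    seq: int          # sender-side sequence number -- assumed field
    sent_at: float    # sender timestamp (s); clocks assumed synchronized
    recv_at: float    # timestamp taken when the packet reaches the headset
    payload: dict     # e.g., {"HR": 72, "SpO2": 98}

def assess_stream(packets, expected_count):
    """Return (mean latency in seconds, packet error rate) for one run."""
    latencies = [p.recv_at - p.sent_at for p in packets]
    lost = expected_count - len({p.seq for p in packets})
    mean_latency = statistics.mean(latencies) if latencies else 0.0
    return mean_latency, lost / expected_count
```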

Fig. 4 Example of the visualization of the patient’s vitals acquired from the operating room instrumentation, as shown through the AR glasses [6]

Current research is focused on including additional “augmented” information for the User: (1) selected information from the patient’s electronic medical record; and (2) real-time alerts on the health status of the patient, based on processing performed by dedicated predictive algorithms.

4 Conclusion

In this work, the use of BCI and AR as input/output interfaces between the User and the sensors in a 4.0 Era scenario is proposed. These two enabling technologies can be used, separately or combined with each other, to guarantee an effective use of the information generated by the sensors. The common thread in all the experimental cases reported herein is that all the applications are designed following a user-centered approach. This choice stems from the consideration that, also in the 4.0 Era, the effectiveness of any application that involves humans may be limited by the humans’ performance, especially when they do not “blend in” and feel comfortable with the surrounding technologies. Current research activity is dedicated to further improving the effectiveness of the proposed BCI/AR interfaces. This is pursued both in terms of usability (e.g., by identifying the optimal frequencies of the flickering visual stimuli in the BCI, so as not to cause discomfort to the User) and in terms of the informative content made available to the User (e.g., by adding further information, as described in the medical application).