1 Introduction

In 1998, Eli Zelkha, together with Simon Birrell, coined the term Ambient Intelligence (AmI). It expressed the vision of pervasive computing in daily life, influenced by the human-centered design paradigm. The term intelligence denotes the capacity of the technology to learn and, in this specific case, to learn how to interact with humans and their environments. After more than two decades, with the growth of the Internet of Things (IoT), where the number of Internet Protocol (IP) connected devices is expected to reach three times the world population in 2023 according to [8], the concept of pervasive computing finally becomes tangible.

Nowadays, AmI indisputably evokes Artificial Intelligence (AI) with Machine Learning (ML) serving humans according to the ambient context through Smart Devices, i.e., electronic devices connected to other devices or networks. In most AmI scenarios, the AI understands the context and the needs of humans and, as a consequence, triggers the actions of devices to satisfy such needs. For example, after recognizing an apartment’s inhabitant, the AI turns off the alarm system, opens the door, and lets her/him in. Furthermore, inside the apartment, the AI can identify the existing context, such as light conditions, ambient temperature, and the person’s location, to decide whether to turn on the light or open the curtains, adjusting the heating if necessary.

Such a situation could be desirable for a person and indispensable for some impaired ones; however, although humans have always desired machines to serve them as much as possible, such a self-governing AI capability raises some concerns [3, 4]. Although some Sci-Fi plots, where machines with full autonomy outperform humans and become too powerful to control, have become more plausible, other reasons justify the emergence of the hybrid human-artificial intelligence research area. In [7] the authors argue convincingly that an active human-AI interaction could not only permit better human control of the resulting device actions but, as happens when humans team up to perform a task that none of them could do alone, this collaboration could also compensate for some of the AI’s weaknesses and achieve solutions that neither the AI nor the humans could reach alone. Indeed, although AI performs well at extracting implicit knowledge or hidden patterns from large-scale data, it still lacks reasoning, inference, and instinctive judgment on dynamic and multiple factors, which humans instead perform well.

To summarize, to perform beneficially, the AI must acquire, as well as possible, knowledge of the context, the location of people in that context, and some human input to contextualize its actions. Sensors supply most of the information to the AI about the environmental context and people’s locations, while a practical and straightforward Human-Computer Interface (HCI) can supply the required interaction.

There are still significant technical challenges in how the AI acquires the context, and even more in how the HCI lets the AI understand the human requirements in a specific context.

The latter is not merely a matter of issuing commands to a specific device, as can be done with a remote control, but of letting the AI understand the person’s requirements so that it can autonomously decide how to control the Smart Devices to satisfy them.

There are devices, such as Amazon Echo with Amazon Alexa, inspired by the Star Trek computer [20], which allow people to communicate more naturally with computers and Smart Environments thanks to Conversational AI [21]. Amazon Echo uses large volumes of data, machine learning, and natural language processing to imitate human interactions by recognizing speech and text inputs and translating their meanings to the AI.

Aside from the obvious observation that voice commands are not inclusive of deaf people, there are other situations where audio or visual interfaces are not appropriate. In noisy environments at home, and in other AmI scenarios in working environments such as offices, health care facilities, and factories, voice commands could disturb other people or could not be received properly by the device, so different modalities may be required. Furthermore, although visual interfaces work well when integrated into devices like smartphones or personal computers, there are circumstances where the user’s sight must not be on the interface while using it. The following examples should clarify the idea. When using a remote control to switch the TV channel, the user prefers to look at the TV to get feedback on what is happening instead of looking at the remote control. Likewise, when driving a car or operating a crane, it is preferable to look at the road and not at the steering wheel. Thus, envisioning places like homes, working environments, crowded public places, factories, and hospitals, this paper proposes a simple, entirely tactile interface that allows interacting with the AI of the AmI using only one hand, without the need to talk or to move one’s sight away from what one is doing.

Moreover, the tactile interaction proposed in this paper is bidirectional: through the tactile sense it is also possible to receive feedback from the intelligent ambient. Indeed, the literature frequently describes AmI as invisible technology, recalling Mark Weiser’s 1991 statement [24] ‘The most profound technologies are those that disappear.’ It implies that, when coming into contact with AmI, the human needs a way to perceive a Smart Environment that cannot otherwise be seen.

This research was conceived by envisioning the following guiding scenario. A person carrying a small device in her/his pocket feels a tactile sensation when entering a Smart Environment governed by an AmI solution. In this context, (s)he can hold the device with one hand and start interacting with the smart devices in the proximity just by using the fingers, without looking at the interactive device or hearing any audio signal.

The paper proceeds by describing some related works in the next Section. Section 3 describes the HCI device, and Sect. 4 depicts one of its possible uses. Conclusions follow.

2 Related Work

This section reports on previous research on tactile interfaces that somehow influenced or encouraged the work proposed in this paper, and recalls some research on automated environments that discourages complete automation independence without human interaction.

The tactile sense can be stimulated by feedback from the device, usually referred to in the literature as haptic feedback, and it can also be used as input to the device, as a touch screen and a computer touchpad do. Common examples of haptic feedback are the vibration of a smartphone and the force feedback of a joystick.

In [19] the authors present a haptic display for small devices. The work highlights the difficulty and the value of implementing a tactile output on small devices. Focusing mainly on haptic feedback, since the device already has a touch screen as input, the work explains how useful it can be to receive action feedback without looking at the interface, which is one of the motivations of the present paper. However, it addresses the usage of a small screen, which the present research work aims to avoid.

Ozioko et al. [18] present a wearable tactile communication interface with vibrotactile feedback for assistive communication. The interface demonstrates the effectiveness of the tactile communication method, not only for deafblind people. In this case, however, the work does not offer a tactile input device solution, as proposed in this paper.

Kashyap et al. [12] emphasize the need for appropriate user interfaces and the problems of full automation, lack of control, and the complexity of the everyday smart device environment, while [3] indicates that users do not accept a fully automated system. However, although several attempts have been made to provide solutions with complementary explicit interaction, these topics remain little explored.

In [2], Becker et al. compare three scenarios for controlling appliances through wearable augmented reality. The paper proposes the use of multiple wearable devices and three different modalities. The results showed that a tangible interface is preferred for some uses compared to virtual gesture interfaces.

3 The Cube-Shaped User Device

The research activities carried out at the Laboratory of Geographic Information Systems (LabGIS) of the Department of Computer Science (University of Salerno) aimed to identify solutions that let humans interact with an AmI through a fully tactile interface device. The rationale is that a tactile interface, if small enough, could be used with one hand only and could avoid audio or visual actions when these are not feasible.

3.1 Premises

Although it is possible to stimulate real tactile perception by mechanical means, many research studies focus on the generation of tactile illusions, i.e., misleading sensations of tactile perception. This is a more flexible way to reproduce a controlled tactile sensation with electronic devices [25]. An artificially generated haptic output can produce a tactile illusion. For example, [1] studies electrotactile feedback that can reproduce a texture sensation on a touch screen. It is an illusory sensation, as the touch screen does not change its physical texture, but a controlled current passes over its surface, generating such an illusion on human fingers. Another example is given by Brewster et al. [6], who studied vibrotactile messages, which can be used to convey non-visual information. The authors described various solutions for messaging with vibrotactile Roughness and Rhythm illusions, easily generated by electronic devices.

Tactile feedback such as force, pressure, and roughness is the primary sensory input presented to a user by a haptic display [14], but the human tactile sense also has thermal receptors [14], and some works have studied how effective incorporating thermal feedback into haptic devices could be [11].

Following these research directions, the device described in this paper was designed to generate tactile illusions by electronic means in order to send feedback to the user.

Since the device requirements establish that it should be small enough to be manipulated with one hand only, there is little room for mechanical parts and the battery. A tiny battery therefore imposes severe restrictions on power consumption, and a small embodiment reduces the possibility of using sophisticated mechanical actuators. Hence, the device uses a vibrotactile messaging solution, with an additional thermal sensation, to represent the AI feedback, while users message the AI by manipulating the device and tapping on its faces.

3.2 The Design

The work started with the design of a new low-power, wireless electronic circuit for the device and a wireless architecture that could be easy to deploy in a Smart Environment governed by AI.

The hardware components of the device were selected considering their features as well as their size. They allowed the realization of a cube-shaped device with an edge of 38 mm.

Fig. 1. The cube-shaped device manipulations

Figure 1 shows how it is possible to hold a cube with three fingers and rotate it.

The Thumb and Middle fingers hold the cube, while the Index finger is free to tap on the cube face below it. For each of the six cube faces that can lie under the Index finger, four different faces can lie under the Thumb just by rotating the cube on the X-axis. It is thus possible to have \(6 \times 4 = 24\) distinct positions of the cube. Furthermore, a distinct meaning can be given to a single and a double tap of the Index finger for each of the twenty-four positions, reaching forty-eight discrete inputs.
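As a minimal illustration of this input space, the following Python sketch enumerates the position/tap combinations described above; the face labels are hypothetical, since the paper does not define a concrete encoding of the faces.

```python
from itertools import product

# Hypothetical face labels; the paper does not fix a naming scheme.
FACES = ["A", "B", "C", "D", "E", "F"]

# Opposite-face pairs of a cube: the face under the Thumb can never be
# the one under the Index finger or its opposite, leaving 4 choices.
OPPOSITE = {"A": "C", "C": "A", "B": "D", "D": "B", "E": "F", "F": "E"}

def cube_positions():
    """Enumerate the 24 distinct (index_face, thumb_face) positions."""
    for index_face, thumb_face in product(FACES, FACES):
        if thumb_face != index_face and thumb_face != OPPOSITE[index_face]:
            yield (index_face, thumb_face)

positions = list(cube_positions())
inputs = [(pos, tap) for pos in positions for tap in ("single", "double")]

print(len(positions))  # 24 distinct cube positions
print(len(inputs))     # 48 discrete tap inputs
```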

Moving from one position to another is possible through cube rotation, and the rotation sequence can also be associated with various additional inputs.

Fig. 2. Possible cube manipulation

Indeed, Fig. 2a shows that bringing Face D to the position of Face A can be done in two ways: rotating Face B twice clockwise or twice anticlockwise. Each direction of rotation can assume a different meaning, even if it reaches the same end position. The same consideration applies to rotations about the other axes. For example, suppose that a person located in an AmI environment wants to change the heating conditions; to communicate this request to the AI governing the AmI, (s)he could rotate the cube forward on Face A to increase the temperature, or backward to decrease it (Fig. 2b).
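To make this idea concrete, the short Python sketch below maps a rotation event (face and direction) to a hypothetical intent token such as "increase" or "decrease"; the labels are assumptions for illustration, and the actual semantics are decided by the AI according to the context, as discussed in Sect. 4.

```python
# Hypothetical mapping from a rotation event to an abstract intent token.
# The AmI, not the cube, decides what the intent means in context
# (e.g., temperature, light level, curtain opening).
ROTATION_INTENTS = {
    ("A", "forward"): "increase",
    ("A", "backward"): "decrease",
    ("A", "clockwise"): "next",
    ("A", "anticlockwise"): "previous",
}

def rotation_to_intent(face: str, direction: str) -> str:
    """Translate a (face, direction) rotation into an intent token."""
    return ROTATION_INTENTS.get((face, direction), "unknown")

print(rotation_to_intent("A", "forward"))   # increase
print(rotation_to_intent("A", "backward"))  # decrease
```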

3.2.1 The Embodiment

The embodiment of the prototype, once designed with a 3D CAD, was built by rapid prototyping with a Fused Filament Fabrication (FFF) 3D printer. The cube face textures were investigated so that the cube positions under the fingers could be recognized by tactile sensation. Gibson [9] conducted an experiment demonstrating that the tactile receptors recognize the form of an object better when it is rotated instead of just pressed. Hence, the cube face texture is more easily recognized when the face slides under the fingers, i.e., while the cube is rotated. The design focused mainly on a pattern that could enhance the tactile sensation while a face passes from one finger to another during a cube rotation. Other experiments by Lederman [15] confirmed that grooves and lands are recognized by the tactile receptors, better if the grooves are large. Following such indications, the six cube faces were designed to make consecutive face patterns unique for each possible rotation (Fig. 3a).

Fig. 3. Cube device prototype

One of the cube faces hosts a fingerprint sensor (Fig. 3b). The sensor allows user recognition and the choice among several personalized profiles. Users start by holding the cube in a predetermined position, which defines position zero. Always starting from a prefixed position gives an absolute origin to the possible cube manipulations and an easier way to associate consecutive manipulations with a set of purposes.

4 Using the Cube

This section reports on a use case scenario to describe a possible application of the cube and to facilitate understanding of its possible utilization. The scenario refers to a domotic environment to offer an ambient familiar to most of us.

A user entering an AmI environment feels a vibrating message from the cube kept in the pocket. This message informs the user of the presence of an AmI, which governs the smart devices present in the ambient. When the cube is in close proximity to a smart device, it warns the user of the possibility to interact, still vibrating but with a different pattern. If the user wants to interact with the device, s/he takes the cube and puts her/his finger on the cube’s fingerprint sensor, activating the interaction. Subsequently, messages to the AmI are given by tapping and rotating the cube faces.

For example, let us suppose that the user desires to change the room lighting by opening the curtains further. (S)he can inform the AmI by taking the cube and rotating the cube face forward in the proximity of the curtains. The AmI sends the appropriate command to the curtain controller to open them. The curtains are now fully open, but the user turns the cube forward again to inform the AmI that (s)he still desires more light. Then, the AmI has to take appropriate actions to satisfy the request: it may decide to dim up the lamp, and the user can regulate the lamp dimmer with the same action.

From the above example, it should be noted that the cube does not give commands to the devices but sends requests to the AmI, which should interpret them according to the context and circumstances. For example, if the previous scenario happens during the night, the AmI should understand that the forward motion of the cube cannot mean more light but something else, to be inferred from the user’s known habits or other profile information the AmI knows.

4.1 User’s Actions

As already mentioned, after the fingerprint scan, the cube has one established face under the user’s Index finger. The actions the user can perform are: tap on the face under the Index finger, or rotate the cube by one or more steps before tapping again. The tap can be single or double. Since 24 distinct positions are available and for each of them it is possible to tap once or twice, a total of 48 distinct actions can be recognized univocally. Keeping the cube in the hand, and referring to Fig. 1, the most instinctive manipulations the user can perform to rotate the cube are those where the cube rotates forward, i.e., anticlockwise on the \(Y\)-axis; backward, i.e., clockwise on the \(Y\)-axis; leftward, i.e., anticlockwise on the \(Z\)-axis; and rightward, i.e., clockwise on the \(Z\)-axis. Although rotations on the \(X\)-axis are possible, they require a little more dexterity.

In this paper the term manipulation is used instead of gesture because the latter is widely used for hand motion in free air without constraints, whereas the cube already sets some constraints by its nature, and others are set to make the interaction as simple as possible. Indeed, the cube position refers to the fingers touching the cube faces and not to the cube position in space, thus offering more comfort and freedom of use.

4.2 Networking

The cube device communicates with the AmI through Wi-Fi and recognizes the nearby smart devices through the Bluetooth Low Energy (BLE) protocol. BLE has the advantage of using very low power for short-range communication, allowing battery-powered devices to last for years, even with small batteries. Wi-Fi, used for longer-range communication, instead needs accurate power management, in hardware and software, to achieve reduced battery consumption.
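A minimal sketch of the proximity-detection idea is given below, assuming a hypothetical `scan_ble_advertisements()` helper that yields (device name, RSSI) pairs from the BLE radio; the helper, the naming convention, and the RSSI threshold are assumptions for illustration, not details from the paper.

```python
# Hypothetical BLE proximity check: a smart device is considered "nearby"
# when its advertisement is received above a signal-strength threshold.
RSSI_NEARBY_DBM = -60          # assumed threshold; would need calibration
AMI_NAME_PREFIX = "AmI-"       # assumed naming convention for smart devices

def nearby_smart_devices(scan_ble_advertisements):
    """Return the names of AmI smart devices close enough to interact with.

    `scan_ble_advertisements` is a hypothetical helper yielding
    (local_name, rssi_dbm) tuples from one BLE scan window.
    """
    nearby = set()
    for local_name, rssi_dbm in scan_ble_advertisements():
        if local_name and local_name.startswith(AMI_NAME_PREFIX) and rssi_dbm >= RSSI_NEARBY_DBM:
            nearby.add(local_name)
    return nearby

# Example with a fake scan result: only the curtain controller is nearby.
fake_scan = lambda: [("AmI-curtains", -52), ("AmI-heater", -78), ("TV", -40)]
print(nearby_smart_devices(fake_scan))  # {'AmI-curtains'}
```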

The cube device communicates with the AmI by messages, using one of the most widely used IoT communication protocols, the Message Queuing Telemetry Transport (MQTT). It is a lightweight messaging protocol based on the publish/subscribe pattern. The publish/subscribe pattern [10] offers asynchronous interaction among network nodes in almost real-time.

MQTT is based on a client/server model: the clients can publish and subscribe to a specific Topic by connecting to a known Broker (the server). The MQTT Broker forwards the messages published on a specific Topic to any subscribers of that Topic [16].

The connection between publishers and subscribers is governed by the Broker. More than one subscriber can subscribe to the same Topic, and more than one publisher can publish to the same Topic. Publishers and subscribers do not need to know each other; they only need to know the address of the Broker to connect to.

The MQTT client code has a small footprint and, for this reason, can be deployed on highly constrained devices, such as the cube hardware. Whenever the user taps the cube device or changes its position by rotating it with the fingers, the device publishes a message communicating the action.
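As an illustration of the cube side of this exchange, the following sketch publishes a manipulation event with the paho-mqtt Python client; the library choice, broker address, topic layout, and payload format are assumptions for illustration only, since the paper does not specify them.

```python
import json
import paho.mqtt.client as mqtt  # assumed client library, paho-mqtt 1.x API

BROKER_HOST = "ami-broker.local"          # hypothetical broker address
TOPIC_EVENTS = "ami/cube/cube-01/events"  # hypothetical topic layout

client = mqtt.Client(client_id="cube-01")
client.connect(BROKER_HOST, 1883)
client.loop_start()

def publish_manipulation(index_face, thumb_face, action):
    """Publish one cube manipulation (position plus tap or rotation)."""
    payload = json.dumps({
        "index_face": index_face,   # face under the Index finger
        "thumb_face": thumb_face,   # face under the Thumb
        "action": action,           # e.g. "single_tap", "double_tap", "rotate_forward"
    })
    client.publish(TOPIC_EVENTS, payload, qos=1)

# Example: the user rotates the cube forward while Face A is under the Index finger.
publish_manipulation("A", "B", "rotate_forward")
```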

The device running the inference engine of the AmI’s AI subscribes to the necessary Topics and can then receive those messages in almost real-time. Once received, a message is given as input to the inference engine, which translates it into the appropriate action to actuate the smart devices.

Since the cube device can also be an MQTT subscriber, the AI device can send feedback messages to the cube device by publishing to the appropriate Topic. The cube device can thus receive the feedback messages and actuate its output peripherals.
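A complementary sketch of the AmI side, again with paho-mqtt and the same hypothetical topics and payloads as above, shows how the inference engine could receive a manipulation event and publish a feedback message back to the cube; the simple if/else interpretation is only a placeholder for the actual inference engine, which the paper does not detail.

```python
import json
import paho.mqtt.client as mqtt  # assumed client library, paho-mqtt 1.x API

BROKER_HOST = "ami-broker.local"              # hypothetical broker address
TOPIC_EVENTS = "ami/cube/+/events"            # all cubes' manipulation events
TOPIC_FEEDBACK = "ami/cube/cube-01/feedback"  # feedback channel for one cube

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # Placeholder for the inference engine: interpret the manipulation in context.
    if event["action"] == "rotate_forward" and event["index_face"] == "A":
        intent = "increase"        # e.g. more light or higher temperature
    else:
        intent = "unknown"
    # ...actuate the relevant smart device here...
    # Acknowledge the request with a vibrotactile feedback pattern.
    client.publish(TOPIC_FEEDBACK, json.dumps({"vibration": "short-short", "intent": intent}))

client = mqtt.Client(client_id="ami-inference")
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC_EVENTS, qos=1)
client.loop_forever()
```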

The cube device has a vibrating motor, activated to reproduce vibrotactile messages as feedback from the AI, and nichrome wire embedded in the cube faces (Fig. 4), which heats up when crossed by electrical current, increasing the haptic feedback capabilities of the cube device.

Fig. 4. Nichrome wire

4.3 The Cube Firmware

The cube device has a dual-core 32-bit microcontroller. One core is dedicated to the wireless communication protocols, the BLE and Wi-Fi stacks, and the other to the device programming, which includes the MQTT client and the drivers of the input and output peripherals. Furthermore, the firmware, interpreting the user’s cube manipulations, has to compose the message to publish, which essentially communicates the user’s intentions to the AI.
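To give an idea of the output side of the firmware, the following MicroPython-style sketch drives the vibrating motor and the nichrome heater as simple on/off outputs when a feedback message arrives; the pin numbers, the payload format, and the choice of MicroPython are assumptions for illustration, and a real driver would likely need PWM and a transistor stage for the heater.

```python
# MicroPython-style sketch of the feedback actuation (assumed pin numbers).
import json
import time
from machine import Pin

vibration_motor = Pin(25, Pin.OUT)  # hypothetical GPIO driving the motor
nichrome_heater = Pin(26, Pin.OUT)  # hypothetical GPIO driving the heater stage

PATTERNS = {
    # Pattern name -> list of (on_ms, off_ms) vibration pulses.
    "short-short": [(100, 100), (100, 100)],
    "long": [(500, 0)],
}

def play_feedback(payload_bytes):
    """Render a feedback message received over MQTT on the haptic outputs."""
    feedback = json.loads(payload_bytes)
    for on_ms, off_ms in PATTERNS.get(feedback.get("vibration", ""), []):
        vibration_motor.value(1)
        time.sleep_ms(on_ms)
        vibration_motor.value(0)
        time.sleep_ms(off_ms)
    # Optional gentle thermal cue on the cube faces.
    nichrome_heater.value(1 if feedback.get("warm") else 0)
```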

5 Conclusions

A completely tactile and tangible cube-shaped interface device was proposed to interact with the AI of an AmI. The research aimed to offer an additional way of interacting within AmI environments to users worried about the loss of control in fully automated environments, and to improve the AI’s perception of the users’ needs where audio or visual interfaces do not fit.

The proposed cube device is small enough to be used with one hand only and is able to get tactile inputs from users as well as to provide haptic feedback. It offers users opportunities to interact with smart devices in multiple Smart Environments governed by AmI solutions, avoiding visual interfaces and spoken natural language while keeping the conversational AI paradigm for AmI.

The device is not intended as a personal device but, by recognizing the user through the biometric sensor, it becomes fully user-tailored.

The device produces two kinds of haptic feedback: vibrotactile messaging and mild thermal changes. While vibrotactile messaging has much evidence in the literature [5, 6, 13, 14, 18, 23], the temperature perception and Thermal-Interface [11, 17] need further investigation, mainly because, in this work, it was inspired by the designer [22] and not by a user-centric approach.