1 Background

Haptic feedback has proven its worth in a variety of notification systems for delivering high-order tactile percepts that convey complex meanings and expressions. Instead of visual feedback or spoken commands played back over earphones, wearable haptic (touch) feedback can provide artificial tactile stimuli for applications such as improving walking stability, reducing joint loading, and facilitating navigation for visually impaired or elderly individuals. Furthermore, tactile messages do not disturb nearby users and are less privacy-invasive than visual and auditory feedback.

The skin has long been considered a conduit for information [1, 2]: a vibrotactile display can be built from an array of vibration actuators, with resolutions varying from 2 × 2 to 64 × 64 [3], and is most often applied to the skin of the back, abdomen, forehead, thigh, or fingers. Vibrotactile displays have been studied extensively in the context of sensory substitution, where they represent visual or auditory cues for impaired users. Sensory substitution has proven successful, as participants clearly showed the ability to identify contexts mapped to tactile stimulation. The significance of this line of work is to help provide a more "normal" lifestyle for people who have lost a sense, such as blind, color-blind, and deaf individuals.

The inherent complexity of subject-to-subject differences raises serious challenges in developing highly effective haptic systems; even a subject's level of familiarity with the system and their mental workload influence its effectiveness. Therefore, this study aims to develop a haptic system tailored to each subject's characteristics rather than relying on a fixed training model. In addition, delivering complex feedback and rich information through multiple vibration motors imposes cognitive demands that lead to exhaustion and frustration over time. Thus, another goal is to find an effective and personalizable way to deliver complex meanings and expressions without significantly increasing the user's perceptual and memory load.

In [4], a camera image is transformed into vibrotactile stimuli using a dynamic tactile coding scheme. The image resolution is necessarily reduced to fit the low resolution of the tactor array, which consists of 48 (6 × 8) vibrating motors. The authors also compared their method (M1) for tactilely displaying letters with two other typical continuous vibration modes [5, 6]. The first is an improved handwriting pattern (M2), in which the actuation order follows handwriting and the vibration durations of adjacent motors overlap. The second, called scanning mode (M3), triggers the motors line by line from top to bottom. In an early pattern-identification study, capital letters were displayed to experienced and inexperienced subjects using a 20 × 20 matrix of vibratory tactors placed against the back [6]. The authors reported results for four modes of stimulus presentation, each letter being presented 42 times under each mode, and found that sequential tracing by a single moving point yields the highest recognition accuracy. A tactile stimulator (M8) mounted on a wheelchair is presented in [7]; it converts capital letters into tactile letters using a 17 × 17 Tactile Vision Substitution System (TVSS). Dark regions of the visual scene captured by a stationary camera activate the tactors in the corresponding areas of the tactile matrix, so each black line of a letter drawn on white cardboard activates a line two tactors wide. The experiments demonstrate that at least three independent basic letter features, i.e., enclosing shapes, vertical parallel lines, and angle of lines, play important parts in tactile letter recognition.

The possibility of differentiating letters using only a 3 × 3 array of vibrating motors on the back of a chair was examined in [8], where each letter is delivered as a sequential pattern in a "tracing mode." This work (M9) obtained high recognition rates for tactile alphanumeric characters. More recently, EdgeVib, a system of spatiotemporal vibration patterns for delivering both letters (M10) and digits (M11) on a wrist-worn vibrotactile display, was presented in [9]. Each unistroke pattern longer than four vibrations is split into multiple two- or three-vibration patterns, which are displayed consecutively to assist recognition of the alphanumeric patterns. The study revealed that recognition rates improve significantly when the unistroke patterns are modified in this way for both letters and digits. Factors such as familiarity with the displayed character set, stimulus duration, inter-stimulus onset interval, type of vibration motor, number of trials, number of letters, and cognitive load all affect recognition quality, so different studies cannot be compared directly. The results, along with some details, are summarized in Table 1; the discrepancies between studies stem from differences in equipment, procedure, and letter style. As Table 1 shows, the subjects in these studies had no time limit for letter perception. Moreover, most previous studies focused only on a subset of alphanumerics, and the participants were informed of the correct response. To overcome these limitations, we develop a customizable vibrotactile system that can deliver any pattern, including all alphanumerics, under time constraints on letter perception.

Table 1 Previous published results

2 The proposed system design

Our tactile display is implemented on an adjustable portable belt attached to the back of the subjects. The system comprises nine cylindrical Eccentric Rotating Mass (ERM) motors (8.7 mm in diameter and 25.1 mm in length), shown in Fig. 1a. The motors are glued to the belt with a spacing of 5 cm (see Fig. 1c); this gap between tactors is necessary for robust vibration localization. The motors support intensity control and have fine temporal haptic characteristics (8 ms from off to a perceivable intensity, 21 ms from fully on to off using active braking with H-bridges). The intensity of the tactors is controlled by Pulse Width Modulation (PWM) signals and is set to 10 levels from very low (1) to very high (10). To control each motor individually, we used the Adafruit 16-Channel 12-bit PWM Driver Shield, which can drive up to 16 motors over an I2C connection using only two pins (see Fig. 1b). The on-board PWM controller drives all 16 channels simultaneously with no extra processing overhead, so the system can incorporate the control of a large number of feedback devices into a single, unified interface. The shield plugs directly into an Arduino board, which provides the 5 V supply and controls the PWM signals.
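For illustration, a minimal Arduino sketch in this spirit sets the duty cycle of one shield channel (whose output would feed the H-bridge drivers mentioned above). The channel index, PWM frequency, and the linear mapping from the 10 intensity levels to a 12-bit duty value are our own assumptions, not the actual firmware.

```cpp
#include <Wire.h>
#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver();   // PCA9685-based shield at default address 0x40

// Map an intensity level (0 = off, 1 = very low ... 10 = very high) to a 12-bit duty value.
uint16_t levelToDuty(uint8_t level) {
  return (uint16_t)((uint32_t)level * 4095UL / 10);
}

void setTactor(uint8_t channel, uint8_t level) {
  pwm.setPin(channel, levelToDuty(level));   // 0..4095 duty on that channel
}

void setup() {
  pwm.begin();
  pwm.setPWMFreq(1000);      // PWM carrier well above the ERM motor response bandwidth
}

void loop() {
  setTactor(4, 7);           // centre tactor of the 3 x 3 array at level 7
  delay(200);                // 200 ms burst, as used for the default patterns
  setTactor(4, 0);           // off
  delay(800);
}
```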

Fig. 1

(a) 9 mm vibration motor from Precision Microdrive, model: 307–103, (b) 16-Channel 12-bit PWM Driver Shield, (c) Back belt with 3 × 3 tactor array

With the proposed platform, the users have full control of the motor variables, including spatial location, vibratory rhythm, burst duration, and intensity, to generate vibratory patterns. For this purpose, a Graphical User Interface (GUI) was developed to create or revise patterns and to optimize the spatio-temporal tactile coding according to human tactile perception (Fig. 2).
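For concreteness, a pattern created in the GUI can be thought of as an ordered list of vibration events. The following C++ structures are a hypothetical encoding (the field names and the example event list are ours) that captures the four variables listed above.

```cpp
#include <vector>
#include <cstdint>

// Hypothetical encoding of one vibration event in a pattern.
struct VibrationEvent {
    uint8_t  tactor;       // spatial location: index 0..8 into the 3 x 3 array
    uint16_t onset_ms;     // burst onset relative to the pattern start
    uint16_t duration_ms;  // burst duration (200 ms in the default patterns)
    uint8_t  intensity;    // 1 (very low) .. 10 (very high)
};

// A letter, digit, color, or word delimiter is simply an ordered event list.
using Pattern = std::vector<VibrationEvent>;

// Illustrative pattern for the digit '1' (not necessarily the paper's default
// sequence): trace the middle column from top to bottom.
const Pattern kDigitOne = {
    {1, 0,   200, 5},   // top-centre tactor
    {4, 200, 200, 5},   // centre tactor
    {7, 400, 200, 5},   // bottom-centre tactor
};
```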

Fig. 2

Developed GUI to customize the patterns

Several experiments were conducted with our 3 × 3 tactor array to evaluate perception with the customizable tactile display. First, we report the recognition rates for tactilely displayed alphanumeric characters with both the default and the personalized vibratory patterns. Then, we extend the system to color and word identification tasks. Algorithm 1 describes the test cases, where each session contains a number of trials with randomly selected characters. Algorithm 2 extracts the motor state changes (events) from the input pattern (line 2). The events, stored in an array, control the motor operations and the 10 intensity levels defined in line 9; the tactors are then activated according to their vibrating order and their spatial and temporal properties (lines 10-12).

Algorithm 1: Test_Cases.


Algorithm 2: runHaptic.

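As a self-contained sketch of the event-playback loop that Algorithm 2 describes, the host-side routine below walks the event list in onset order and switches each tactor on and off; driveTactor() is a stand-in for the call that actually reaches the PWM shield, and the sequential (non-overlapping) playback is our simplification.

```cpp
#include <cstdint>
#include <cstdio>
#include <thread>
#include <chrono>
#include <vector>

// Same illustrative event encoding as above (field names are ours).
struct VibrationEvent {
    uint8_t  tactor;        // 0..8 in the 3 x 3 array
    uint16_t onset_ms;      // onset relative to pattern start
    uint16_t duration_ms;   // burst duration
    uint8_t  intensity;     // 1..10, 0 = off
};

// Stand-in for the command that reaches the PWM shield over serial/I2C.
static void driveTactor(uint8_t tactor, uint8_t level) {
    std::printf("tactor %u -> level %u\n", (unsigned)tactor, (unsigned)level);
}

static void sleepMs(uint16_t ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
}

// One possible realization of runHaptic: play each burst in onset order.
void runHaptic(const std::vector<VibrationEvent>& pattern) {
    uint16_t now = 0;
    for (const VibrationEvent& ev : pattern) {
        if (ev.onset_ms > now) { sleepMs(ev.onset_ms - now); now = ev.onset_ms; }
        driveTactor(ev.tactor, ev.intensity);   // start the burst
        sleepMs(ev.duration_ms);                // hold for the burst duration
        driveTactor(ev.tactor, 0);              // stop
        now += ev.duration_ms;
    }
}

int main() {
    // Trace the middle column top to bottom with 200 ms bursts at intensity 5.
    runHaptic({{1, 0, 200, 5}, {4, 200, 200, 5}, {7, 400, 200, 5}});
    return 0;
}
```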

3 Experiment setup

We first conduct an experiment consisting of two sessions of vibrotactile pattern identification tasks, one before and one after the development of each subject's personalized letters. The experiment was carried out with ten healthy volunteers (five males and five females) aged 18 to 46 (mean ± SD: 30.70 ± 8.87). Ethical approval was received from the McGill Ethics Committee. The participants, none of whom had prior experience with vibrotactile display devices, were asked to wear the belt in an upright sitting position and to match the felt sensations with letters or digits. They had a time limit of 2 s for letter perception and no chance to repeat the presented tactile stimuli. To make the scenario more realistic, they were not allowed to use a headset to block out the sound of the vibrators and the environment. We wanted to analyze the results under minimal cognitive load, which is estimated from the average number of repetitions a subject requires to identify a letter [4]. In the training phase, the subjects were told which characters they were perceiving through the 3 × 3 tactile grid display.

The training and testing phases consist of 108 (3 × 36) trials with three sets of randomly selected characters. Figure 3 illustrates the sequence of tactors activated in the default pattern set, which was designed by a left-handed supervisor. There is no time interval between the onsets of successive stimuli, and the stimulus duration is set to 200 ms.
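A trial list for one session, in the spirit of Algorithm 1, can be built as sketched below: three passes over the 36 alphanumeric characters, each in a fresh random order, giving 108 trials. The function name, seed parameter, and shuffling scheme are our assumptions.

```cpp
#include <algorithm>
#include <random>
#include <string>
#include <vector>

// Build the randomized trial list for one session: 3 x 36 = 108 trials.
std::vector<char> buildTrialList(unsigned seed) {
    const std::string charset =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";       // the 36 characters
    std::mt19937 rng(seed);
    std::vector<char> trials;
    for (int set = 0; set < 3; ++set) {
        std::vector<char> pass(charset.begin(), charset.end());
        std::shuffle(pass.begin(), pass.end(), rng);  // random order within each set
        trials.insert(trials.end(), pass.begin(), pass.end());
    }
    return trials;                                    // 108 entries
}
```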

Fig. 3

The sequence of tactors activated for the 36 letters and digits in the default version; arrow order: red, green, then blue

The default settings help the participants perceive each letter as a continuous stroke. In the second session, the subjects could revise the default patterns through the GUI: each subject can turn the motors on and off in succession and can therefore customize the vibration patterns to any preference, for instance following their own writing habits. Personalizing the spatiotemporal vibration patterns can deliver more information that is easier to interpret and memorize, and this property greatly helps the users distinguish the letters.

4 Experimental results

Figure 4a shows the participants' confusions between stimuli with the default patterns. Each cell C(i, j) of the matrix gives the total number of trials in which response 'j' was given when stimulus 'i' was presented. The results show that the subjects readily recognized the patterns, with a mean identification rate of 70.83% ± 24.65% and a low cognitive load. The subjects reported that they sometimes had difficulty distinguishing patterns that differed from their own writing habits, such as the letter 'E'.
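A minimal sketch of how such a confusion matrix and the identification rate can be accumulated is shown below; the indexing convention (36 stimuli, a 37th column for 'Missed' answers) is our assumption.

```cpp
#include <array>
#include <cstddef>

// 36 stimuli x 37 responses; column 36 collects the 'Missed' answers.
constexpr std::size_t kStimuli = 36, kResponses = 37, kMissedColumn = 36;
using Confusion = std::array<std::array<int, kResponses>, kStimuli>;

void record(Confusion& C, std::size_t stimulus, std::size_t response) {
    ++C[stimulus][response];            // C(i, j): stimulus i answered as j
}

double identificationRate(const Confusion& C) {
    int correct = 0, total = 0;
    for (std::size_t i = 0; i < kStimuli; ++i)
        for (std::size_t j = 0; j < kResponses; ++j) {
            total += C[i][j];
            if (i == j) correct += C[i][j];   // diagonal entries are correct answers
        }
    return total ? 100.0 * correct / total : 0.0;
}
```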

Fig. 4

Confusion matrices for the recognition of (a) default patterns, (b) customized patterns

The patterns for 'E' and '7' tended to be highly confused with 'G' and '1', respectively. The letter 'O' and the digit '0' activate the same dot-matrix pattern but can be discriminated by the direction in which the tactors are activated. Most participants reported that they sometimes judged a pattern according to their own writing habits, so we expected them to be less easily confused after revising the spatial locations, stimulus durations, directions, and so on. The subjects had a time limit of 2 s and no chance to repeat the stimuli; these constraints are beneficial for multi-character words. In the second session, where each subject was allowed to modify the default patterns, the confusion matrix becomes more uniform (see Fig. 4b). For instance, the letters 'X' and 'Y' have similar pattern directions, and subjects can apply an alternative writing sequence to create more distinguishable patterns. Figure 5 shows some of the more effective alternative patterns, where one participant used higher intensity levels for the letters 'A' and '7' (thick arrows). As seen in Fig. 6, customizing the vibrotactile patterns improved the recognition accuracy by 22.49%. A Student's t test revealed that the customized patterns achieved significantly higher recognition rates than the default patterns (86.76% ± 9.44% vs. 70.83% ± 24.65%, p << 0.01). Among the digits, '2' yielded the best accuracy (96.67%) and '5' the worst (56.67%). Among the letters, 'I' and 'J' yielded the best accuracy (100%), and the lowest accuracies were for 'V' (70%), 'Y' (70%), and 'G' (73.33%). As seen in the confusion matrix, some letters ('Q' and 'G') still exhibited asymmetries. Although the updated patterns increase the total vibratory delivery time, they resolve the confusion between letters and reduce the misrecognition rates.
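The paper does not state whether the t test was paired, but a paired version is a natural choice for this within-subject design; a sketch of how the paired t statistic would be computed from the per-subject recognition rates follows (function name and interface are ours).

```cpp
#include <cmath>
#include <vector>

// Paired t statistic (n - 1 degrees of freedom) for per-subject recognition
// rates under the default vs. customized patterns; the p value would then be
// looked up in a t distribution.
double pairedT(const std::vector<double>& defaultRate,
               const std::vector<double>& customRate) {
    const std::size_t n = defaultRate.size();        // here, 10 subjects
    double meanDiff = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        meanDiff += (customRate[i] - defaultRate[i]) / n;

    double var = 0.0;                                 // sample variance of the differences
    for (std::size_t i = 0; i < n; ++i) {
        const double d = (customRate[i] - defaultRate[i]) - meanDiff;
        var += d * d / (n - 1);
    }
    return meanDiff / std::sqrt(var / n);
}
```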

Fig. 5

Examples of customized patterns by one of the participants

Fig. 6

Recognition rates with default and personalized patterns for each subject

Misidentifications are more likely to occur because of the time constraint on letter perception. Contrary to other studies, the participants could not ask for a stimulus to be repeated, and no error correction was given when they misidentified a pattern; these constraints are beneficial for multi-character words. Another observation worth highlighting is the 57.85% reduction in 'Missed' answers after revising the letters. The subjects could judge the patterns within the first two seconds, and their performance would improve further by tuning the vibratory variables again and practicing for a few more trials.

We also evaluated the feasibility and robustness of the proposed platform in two other applications. To the best of our knowledge, this is the first work to show how personalized tactile patterns can be applied to recognizing verbs and colors. Colorblind individuals need to learn about color through artificial means, as the lack of color information severely impedes their spatial perception and social interactions. We therefore performed experiments on rendering color information through haptic feedback. As depicted in Fig. 7, eight colors, White (W), Black (B), Red (R), Green (G), blUe (U), Yellow (Y), Cyan (C), and Magenta (M), are to be recognized by the subjects. For all colors except white and black, the motors vibrate at one of three levels corresponding to the color intensity, i.e., light, regular, or dark, which helps colorblind people feel the colors around them. In this experiment, the participants were asked to perceive a predefined set of customized tactile patterns delivered by the tactor array. All patterns were presented three times in random order, so the data comprise 600 trials ((2 colors + 6 colors × 3 intensity levels) × 3 rounds × 10 participants) for the color recognition task. The results of this 10-participant user study show that color information can be conveyed by vibration with an accuracy of 95.33%; this high recognition rate indicates that our system allows near real-time color perception. A total of 28 out of 600 trials were misidentified, and the majority of the errors (22 out of 28) were due to confusion between intensity levels.
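Following the scheme of Fig. 7, a color stimulus can be described as the pattern of its code letter played at one of three intensity levels; the enum values and the function interface below are illustrative assumptions, not the authors' implementation.

```cpp
#include <map>
#include <string>
#include <utility>

// Shade of the color: light / regular / dark, or a single level for white and black.
enum class Shade { Single = 0, Light = 1, Regular = 2, Dark = 3 };

// Map a color name to (code letter, intensity level); the letter's customized
// pattern is then rendered at that level.
std::pair<char, Shade> encodeColor(const std::string& color, Shade shade) {
    static const std::map<std::string, char> letter = {
        {"white", 'W'}, {"black", 'B'}, {"red", 'R'},  {"green", 'G'},
        {"blue", 'U'},  {"yellow", 'Y'}, {"cyan", 'C'}, {"magenta", 'M'}};
    const char c = letter.at(color);
    if (c == 'W' || c == 'B') shade = Shade::Single;   // no intensity variants
    return {c, shade};
}
```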

Fig. 7

Presentation of the colors by means of alphabet patterns and three intensity levels

The next point of interest is to extend the range of expressiveness by asking the participants to recognize multi-character verbs (2-5 characters). We selected the most common verbs in English, i.e., 'BE', 'HAVE', 'DO', 'SAY', 'GET', 'MAKE', 'GO', 'KNOW', 'TAKE', 'SEE', 'COME', 'THINK', 'LOOK', 'WANT', 'GIVE', 'USE', 'FIND', 'TELL', 'ASK', and 'WORK'. A delimiter pattern discriminates the verbs. There was no training session, so the participants did not know which words they were going to feel. On each trial, a random verb was chosen and its customized characters were presented sequentially to the participant as vibrotactile stimuli. The results indicate that multi-character verbs can be delivered with a recognition rate of 90.50%. This evaluation provides a first step toward achieving realistic accuracy in multi-character word identification by leveraging the spatial and temporal properties of the skin. These short-term studies provide initial insights into the use of tactile instructions in more complex scenarios.
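A word stimulus can be assembled as sketched below: the subject's customized pattern for each letter in order, plus the delimiter pattern. Placing the delimiter at the end of the word, and the names used here, are our assumptions; the paper only states that a delimiter discriminates the verbs.

```cpp
#include <string>
#include <vector>

// Identifier for one pre-stored customized pattern (letters plus a word delimiter).
struct PatternId {
    char symbol;        // 'A'..'Z', or '#' for the delimiter (naming is ours)
};

// Build the stimulus sequence for a verb.
std::vector<PatternId> encodeWord(const std::string& verb) {
    std::vector<PatternId> sequence;
    for (char letter : verb)
        sequence.push_back({letter});   // customized pattern of each letter, in order
    sequence.push_back({'#'});          // word delimiter
    return sequence;
}

// Example: encodeWord("MAKE") yields the patterns for M, A, K, E, then the delimiter.
```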

This system should be evaluated with impaired subjects to account for possible challenges in more realistic scenarios. For example, deaf people use different encoding strategies in reading, such as visual representations of print, orthography-based codes, finger spelling, or sign language. In other cases there may be limitations, e.g., people with visual impairment will need assistance from medical personnel during personalization.

As mentioned earlier, tactile stimuli can also be beneficial for learning motor skills and for rehabilitation exercises [10]. Another key advantage of this mode of information presentation is that it can coexist with visual and audio cues [11]. For example, the system could be generalized to deliver compound messages that let athletes keep their eyes on their activity, improving performance by providing instructions and feedback on exercises in real time through a haptic back display. For instance, the message 'LAU' (Left Arm Up) could be used to correct the left arm movement by lifting it up, and the vibration intensity could be varied according to the magnitude of the error between the reference and current movements. Identification of compound messages is also of interest to phone users receiving semantic information, e.g., 'E12' could indicate 'You have 12 emails'. Finally, using an identification task on the lower back, we found that pattern customization is a promising method for relaying information through the skin, and the applicability and usability of this light, belt-like device can be extended to numerous health and fitness applications.

5 Conclusion

A 2D vibrotactile display was developed to transmit tactile stimuli to the lower back of users, who can personalize the vibration variables, including spatial location, vibratory rhythm, burst duration, and intensity. Experiments were conducted to investigate the effectiveness of the proposed customizable tactile display for alphanumeric character perception. The results reveal that a fully customizable low-resolution vibrotactile display alleviates the cognitive load on users, who learn and recognize new patterns easily without extensive training sessions. We also validated the system on color and word identification tasks, and the promising results show that it is feasible for people whose normal visual or auditory channels are saturated or obstructed. Notably, the minimally trained subjects were able to perceive the 20 color stimuli and 20 verbs with overall accuracies of 95.33% and 90.50%, respectively. Our findings allow us to conclude that personalized tactile instructions can serve as a major component of an assistive wearable device for people with hearing and visual impairments. In addition, the flexibility of the presented system can help people acquire various physical skills across a wide range of training activities.