1 Introduction

Auditory and tactile channels are often used to convey information to people with visual impairments. Since hearing becomes the primary sense for this population and is generally loaded with stimuli from the environment, touch has long been studied as a channel that unloads audition and can assist in daily tasks such as reading, computer access, and mobility [1,2,3].

Tactile information displayed by assistive devices normally consists of very simple tactile cues that alert users to events (for example, obstacles in the immediate vicinity) or point them toward a navigational direction. However, situational awareness assistance cannot be provided as simple event alerts. Providing feedback on environmental elements with respect to time or space, and updating their status after some variable has changed, requires a more complex communication structure such as language. It remains a challenge for assistive devices to provide situational awareness information that can be understood quickly and easily through touch.

Language learning is the process by which humans acquire the capacity to perceive and use words to understand and communicate [4]. In every field of knowledge addressing human language and communication, the learning process differs markedly between a first and a second language. While the former refers to an infant’s acquisition of the native language, the latter deals with learning an additional language once a native one has already been acquired.

Learning a second language is a complex process extensively studied in neuroscience [5], applied linguistics [6], sociolinguistics [7], psychology [8], and education [9]. Without reviewing this process further, we limit the discussion to noting that humans learn a second language by relating it to their native language, by memorizing, and by practicing. Consider a native Spanish speaker learning Italian: as both are Latin-based languages, relations can be established easily. For a native Spanish speaker learning Chinese, however, memorizing new vocabulary and grammar rules together with constant practice seems the only way.

Language learning does not refer exclusively to spoken languages. Natural languages such as semiotics, sign language, gestural/body language, and tactile patterns are equally used to communicate ideas without conveying any sound. Regardless of their complexity, natural languages also require a certain amount of time, effort, and practice to master.

In particular, tactile patterns have received growing attention in several domains of human–computer interaction such as virtual reality, sensory augmentation, sensory substitution, robotics, mobile and wearable devices, and games and entertainment, among many others. Short structured tactile patterns called tactile icons or tactons [10] have been used to encode verbal language and convey information through touch, especially in applications where sight and hearing are restricted or overloaded.

Tactons have already been studied to encode simple information such as flight data for pilots [11, 12], warning signals for car drivers [13] and clinicians [14], instructions for improving physical performance [15, 16], navigational assistance for visually impaired and blind people [17, 18], emotions [19, 20], and verbal words [19, 21]. In these studies, different sets of tactons were presented to test subjects and satisfactory recognition rates were reported. However, only one tacton was recognized at a time.

In this study, we present several tactons to a group of 20 voluntary subjects and combine them to construct sentences that represent gradually more complex information. We seek to evaluate human performance in tactile language learning and tactile memory, and to determine how far tactons can be cognitively handled for more ambitious applications in human–computer interaction, wearable/mobile computing, and assistive devices. For this evaluation, we propose a novel approach: podotactile stimulation. Tactile-foot stimulation has shown interesting results and great potential in our previous studies [17, 19, 22].

The rest of the paper is organized as follows: Sect. 2 presents a brief review of relevant prior work. Section 3 presents a technical overview of the apparatus used in this study. Section 4 evaluates human performance in tactile learning, tactile memory, and tactile language usage with the proposed device, while Sect. 5 shows an example of situational awareness assistance with tactile language. Finally, Sect. 6 concludes with main remarks and future work perspectives.

2 Related Work

Let us start by defining the three main concepts addressed in this paper: tactile learning, tactile language, and tactile memory.

Tactile learning is the process of acquiring new information through tactile exploration. It is a process that does not happen all at once but is built upon practice.

Tactile language is a set of tactile information that can be used to construct a communication system. As with oral languages, tactile languages contain a set of rules that govern how tactile information is used to form meaningful sequences or phrases.

Tactile memory refers to the persistence of learning in a state that can be revealed on a later occasion. It can be either long-term or short-term memory. While tactile information stored in long-term memory affects our perception of the world and influences our interaction with the environment, tactile information in short-term memory is held in mind in small amounts, for a short period, in an active and readily available state [23].

Several studies evaluating these three concepts with more than a small set of tactons can be found in the literature.

In 1957, F. Geldard conducted one of the earliest attempts to evaluate tactile memory. He proposed encoding symbols of the alphabet with vibratory patterns. Guided by this reasoning, Geldard designed the Vibratese language, composed of 45 basic elements: the tactile equivalents of numbers and letters [24]. About 12 h of practice were required to learn the language. Subjects were able to recognize single letters but could not interpret continuous sequences of patterns correctly.

During the 1970s, experimental research on understanding tactile sequences provided the first interesting insights into the capabilities of tactile memory: Gilson and Baddeley [25] and Sullivan and Turvey [26] concluded that tactile memory works best for stimuli lasting 5–10 s and that people quickly forget tactile stimuli. Watkins and Watkins [27] and, later, Mahrer and Miles [28] evidenced the importance of training for memorizing tactile sequences. Handel and Buffardi [29] experimentally observed that it is not only possible to learn and understand tactile sequences but also to encode statistical regularities to predict patterns within the sequences.

Conway and Christiansen [30] conducted experiments to compare the ability to learn sequences through hearing and touch. Ten different sequences combining three to five elements each were displayed to both senses. Results showed that the auditory modality has a significant learning advantage over touch.

Evreinova et al. proposed in [31] a memory game designed to strengthen the short-term tactile memory of hearing-impaired adults. Ten participants explored 27 different vibrotactile patterns using the Logitech tactile feedback mouse. Results showed that, after a significant learning time, subjects could reasonably manipulate the set of tactons. No concatenation of tactons was reported.

Wang et al. presented in [32] a computer implementation of the classic memory card game using the STreSS tactile display. Twelve tactile memory cards had to be matched with their visual counterparts. After a short training period, the cards could be distinguished from one another using tactile stimuli alone. Vision shortened the learning process.

Kuber et al. described in [33] a multimodal memory game for the blind. Combining speech, nonspeech audio, and tactons, both sighted and blind users managed to replicate complex sequences of information. Concatenation of nonvisual information was reported with good results; however, multimodality eased the task. Similarly, Raisamo et al. proposed a tactile memory game using visual, audio, and tactile feedback [34]. The game received a positive response from a group of seven visually impaired children.

Oliveira and Maciel presented in [35] the design and assessment of tactile vocabularies to support navigation in 3D environments. Two approaches were explored: prefixation and tactile sequences. Using an eight-tactor belt, vibrating patterns encoding obstacle, destination, course, warning, and itinerary information were conveyed to the user to enhance the visual navigation of virtual scenarios. Prefixed patterns were easier to learn and memorize than tactile sequences.

Barber et al. conducted in [36] a study involving three categories of tactons: static, dynamic, and directional. While static tactons consisted of constant patterns, dynamic and directional tactons consisted of sets of sequential subpatterns that transmit a sensation of motion. Static and dynamic tactons represented words, while directional ones described some type of navigational context. Their results showed that, through the pairing of dynamic and directional tactons, users were able to interpret two-tacton sentences with an accuracy of 92%.

Finally, Riddle and Chapman proposed in [37] a five-step methodology for building tactile languages. The first step is to define the message set, which consists of identifying the concepts to be communicated, either for many different tasks or situations or for specific uses. The second step determines the physical characteristics of vibrations such as vibrating frequency, pulse duration, and sequence of activation. The third step defines application-specific design rules so that tactile parameters have implied meaning; if these meanings can be linked to the inherent meaning of messages, patterns will be intuitive and easy to learn. The fourth step is the creation of the tactons, while the fifth evaluates them to validate their design and performance.

Note that most of the applications cited in this section were developed in the context of human–computer interaction, virtual reality, and serious games. However, much of their basic research comes from the fields of psychology, cognition, and neurolinguistics.

3 Tactile Display for the Foot

Much of what is found in the literature about tactile feedback concerns tactile stimulation of the fingers and hands. However, the acuity of other body areas has been explored as well: the wrist/forearm [14], abdomen [38], chest [39], tongue [40], ears [41], head [42], and even the backside [43] have been studied as sites for transmitting information to a user. The devices are as diverse as the technologies used and the locations on the body.

Since 2008, we have been studying one of the least explored body locations in tactile perception: the human foot. We seek to understand how people perceive information through their feet and to evaluate whether this perception can be exploited in different human–computer interaction tasks.

For this purpose, we have developed several prototypes of electronic tactile displays for the foot. In particular, the newest design (Fig. 1a) consists of four vibrating actuators that stimulate the medial and lateral plantar areas of the foot sole which concentrate most of the mechanoreceptors sensitive to vibrotactile stimulation [44, 45].

Fig. 1 Tactile display for the foot: a Design concept. Inset: target stimulation area enclosed in square. b Prototype. c Fully wearable device with wireless connection. Inset: electronic module

In this prototype, vibrators are arranged in a diamond-like shape with 35 mm side length (Fig. 1b). All four actuators are integrated into an inexpensive commercial foam shoe insole. They provide axial forces up to 13 mN and vibrating frequencies between 10 and 55 Hz. Each vibrator is independently controlled with a specific vibrating frequency command [19].
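
For concreteness, per-actuator frequency control could look like the minimal sketch below. The two-byte command format, port settings, and pin naming are assumptions for illustration only; the paper states only that each vibrator receives an independent frequency command.

```python
# Minimal sketch of per-actuator frequency control, assuming a hypothetical
# two-byte command format (actuator ID, frequency in Hz) over a serial/RF link.
import time

import serial  # pyserial

ACTUATORS = {"F": 0, "B": 1, "L": 2, "R": 3}  # contact pins (cf. Sect. 5.1)
FREQ_MIN_HZ, FREQ_MAX_HZ = 10, 55             # range reported for the prototype

def set_vibration(port: serial.Serial, pin: str, freq_hz: int) -> None:
    """Send a frequency command to one actuator; freq_hz = 0 stops it."""
    if freq_hz and not FREQ_MIN_HZ <= freq_hz <= FREQ_MAX_HZ:
        raise ValueError("frequency outside the 10-55 Hz actuator range")
    port.write(bytes([ACTUATORS[pin], freq_hz]))

# Example: vibrate all four actuators at 55 Hz for half a second.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as rf_link:
    for pin in ACTUATORS:
        set_vibration(rf_link, pin, 55)
    time.sleep(0.5)
    for pin in ACTUATORS:
        set_vibration(rf_link, pin, 0)
```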

This version is intended to be used on the left foot and is fully wearable (Fig. 1c): it includes an RF (radio-frequency) transmission module which allows simple and reliable point-to-point communication with a computer within a range of 100 m. It also includes the drive electronics to power the vibrating actuators and an on-board power supply that ensures 6 h of autonomy. The Fig. 1c inset details the electronic module, which the user carries comfortably attached to the ankle.

Experimental perceptual studies have already been conducted with our prototypes on sighted and blind users. We have tested navigational direction recognition, shape identification, pattern and emotion recognition, vocabulary learning, and real-time navigation in space [17, 19, 22]. Our results indicate that people do understand information displayed to the plantar surface of the foot. However, this information must not be complex, as the foot is not capable of precise discrimination. Information displayed to the feet must be simple and, preferably, encoded as short structured vibrating patterns (tactons).

One of the most challenging applications for this device is perhaps the assistance of visually impaired people: this mechatronic shoe insole could be used for providing diverse information such as directions for independent navigation and situational awareness assistance.

An attractive feature of this device is that it can be inserted into a shoe, making it an inconspicuous and visually unnoticeable assistive device. Unlike other portable/wearable assistive devices, an on-shoe device does not heighten the image of handicap that affects the user’s self-esteem.

4 Evaluation and Results

Perceptual experiments were carried out to evaluate human performance in tactile language learning and tactile memory using this prototype.

We previously reported an experiment involving vocabulary learning [19]. For that test, we proposed five vibrotactile patterns, or tactons, to arbitrarily represent five different words. Tactons were presented to test subjects, who were asked to memorize them in a short period of time. Subjects were then asked to identify the tactons. Results were very encouraging: recognition rates indicated that subjects could reasonably manipulate the set of tactons. We then wondered whether tactons could be combined to represent more complex information, and how subjects would perform in that case.

4.1 Study Participants and Experimental Procedure

Twenty undergraduate students (16 men and 4 women) at Panamericana University participated voluntarily in the experiments. All gave their consent in agreement with the university’s ethics guidelines. No selection criteria other than availability were used. None of the participants reported problems in tactile sensory or cognitive functions. Their ages ranged from 18 to 24 years, with an average of 20.8. None of them had tried any of our tactile display prototypes for the foot before.

To avoid any possible distraction, experiments were conducted in a restricted-access laboratory where only the test subject and the researcher were present. During the experiments, the subjects were seated, wearing the tactile display on the left foot and a headphone set. Audio cues generated by the vibrating motors were masked with pink noise provided by the headphones. For hygiene, all subjects were requested to wear socks. Before each session, they were completely naive about all aspects of the test and were given general instructions concerning the task. A short familiarization time with the device was granted prior to the tests. Each subject was asked to perform four experiments and to fill out four answer forms. All of this took 25–30 min on average.

For statistical analysis, subjects were randomly divided into two groups of 10. The χ2 test was used to evaluate differences in proportions across samples within the same group, while the z-test provided a confidence interval for the true difference in proportions between groups. The level of significance for rejecting the null hypothesis (α) was set to 0.05 in all cases.
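
As a minimal sketch of this analysis (assuming scipy), the counts below are reconstructed from the Experiment I recognition rates reported in Sect. 4.2.2: 10 subjects × 2 presentations = 20 answers per tacton per group.

```python
# Within-group uniformity (chi-square) and between-group comparison (z-test).
import numpy as np
from scipy.stats import chi2_contingency, norm

# Group 1, Experiment I: 80, 90, 90, 90% correct out of 20 answers per tacton.
correct_g1 = np.array([16, 18, 18, 18])
table = np.vstack([correct_g1, 20 - correct_g1])   # correct vs incorrect
chi2, p, dof, _ = chi2_contingency(table)
print(f"group 1: chi2 = {chi2:.2f}, p = {p:.2f}")  # reproduces chi2=1.37, p=0.71

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test; returns z statistic and two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Overall Experiment I accuracy: group 1 = 70/80, group 2 = 64/80.
z, p = two_prop_z(70, 80, 64, 80)
print(f"between groups: z = {z:.2f}, p = {p:.2f}")  # p > 0.05, as reported
```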

4.2 Experiment I: Vocabulary Learning

The purpose of the first test is to present the set of tactons to the subjects and to evaluate whether they could quickly learn and retain them in memory.

4.2.1 Method

Four tactons were chosen for this test: boy, play, ball, and big. The vibrotactile patterns in Fig. 2 were arbitrarily chosen to represent these words. For example, ‘boy’ in tactile language is represented by a long vibration followed by two short ones, while ‘play’ is a long vibration followed by a short one and again a long vibration.

Fig. 2 Tactons arbitrarily representing four words. Set times for short and long vibrations are 0.5 and 3 s, respectively. Set time for ‘big’ (the longest vibration) is 5 s. For all tactons, all four vibrating motors in the display are actuated simultaneously at a vibrating frequency of 55 Hz
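
As an illustration, such tactons can be represented as simple on/off timing sequences. The ‘boy’ and ‘play’ patterns follow the description above; the ‘ball’ pattern and the intra-tacton gap are assumptions, since Fig. 2 defines them graphically.

```python
# Sketch of the word tactons as pulse-duration sequences (Fig. 2 timings).
import time

SHORT, LONG, BIG = 0.5, 3.0, 5.0   # seconds, as given in Fig. 2
GAP = 0.5                          # assumed pause between pulses within a tacton

TACTONS = {
    "boy":  [LONG, SHORT, SHORT],  # long vibration followed by two short ones
    "play": [LONG, SHORT, LONG],   # long, short, long
    "ball": [SHORT, SHORT, SHORT], # hypothetical; Fig. 2 defines the actual pattern
    "big":  [BIG],                 # single longest vibration
}

def play_tacton(word, vibrate, stop) -> None:
    """Render one tacton: all four motors pulse together at 55 Hz."""
    for pulse in TACTONS[word]:
        vibrate(55)                # start all motors at 55 Hz
        time.sleep(pulse)
        stop()                     # stop all motors
        time.sleep(GAP)

# e.g. with console stubs standing in for the motor driver:
# play_tacton("boy", vibrate=lambda hz: print(f"on @{hz} Hz"), stop=lambda: print("off"))
```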

Subjects were asked to match what they felt tactually with one of these words. Before the test, all four tactons were displayed to the subjects so that they could form a mental representation of them. Upon request, a tacton could be refreshed on the display. When the subject was ready, the tester deliberately made 1 min of small talk to distract their mind from the test. The test consisted of a single trial. Each tacton was randomly displayed twice. Subjects had no time restriction to provide their answers and were allowed to modify them if they felt they had made a mistake.

4.2.2 Results

Figure 3 shows the results obtained. For group 1, the average recognition rates were 80, 90, 90, and 90% for boy, play, ball, and big, respectively. For group 2, these were 70, 75, 85, and 90%, respectively. The recognition rates suggest that the proposed tactons were easy to learn and remember.

Fig. 3 Performance of the 20 subjects at learning and memorizing the set of tactons (p > 0.05). The standard error is shown as an error bar

Subjects in both groups exhibited a uniform performance across the test (group 1: χ2 = 1.37, p = 0.71, group 2: χ2 = 3.12, p = 0.37). There was no statistically significant difference in the performances of the two groups (p > 0.05).

4.3 Experiment II: Constructing Sentences with Two Tactons

The purpose of this test is to evaluate subject performance when combining two tactons. This test would require a higher level of concentration and is a first step toward constructing sentences that describe more complex situations.

4.3.1 Method

Tactons were combined in pairs to form four sentences: big-boy, boy-play, big-ball, and play-ball. Sentences were displayed as in verbal communication: one tacton first, then a short pause, then the second tacton.
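
A tactile sentence is then simply a sequence of tactons separated by short pauses. A minimal sketch follows; the pause length is an assumption, as the paper does not specify it.

```python
# Sketch of sentence rendering: tactons separated by a short pause, mirroring
# the pacing of spoken phrases (play_tacton as sketched in Sect. 4.2.1).
import time

PAUSE_BETWEEN_TACTONS = 1.0  # seconds; assumed, not specified in the paper

def play_sentence(words, play_tacton) -> None:
    """Display a tactile sentence, e.g. ["big", "boy"]."""
    for i, word in enumerate(words):
        play_tacton(word)
        if i < len(words) - 1:
            time.sleep(PAUSE_BETWEEN_TACTONS)

# e.g. play_sentence(["big", "boy"], play_tacton)
```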

Subjects were asked to feel the entire sentence before reporting the words perceived. They were not told which sentences to expect, so they could report any possible combination. The test consisted of a single trial. Each sentence was randomly displayed twice. Subjects had no time restriction to provide their answers, and they could have the tactile sentence refreshed on the display upon request.

4.3.2 Results

Figure 4a shows the results obtained. For group 1, the average recognition rates were 65, 80, 80, and 75% for big-boy, boy-play, big-ball, and play-ball, respectively. For group 2, these were 85, 75, 90, and 65%, respectively. Even though this task was more complicated, recognition rates did not decrease substantially.

Fig. 4 a Performance of the 20 subjects at recognizing tactile sentences with two tactons (p > 0.05). b Average distribution of both correct and wrong answers

As in the previous test, subjects in both groups exhibited a uniform performance across the test (group 1: χ2 = 1.6, p = 0.65, group 2: χ2 = 4.4, p = 0.22). Again, there was no statistically significant difference in the performances of the two groups (p > 0.05).

Scores presented in Fig. 4a refer to completely correct sentences, that is, when subjects recognized both tactons. However, it is interesting to examine subject performance in more detail. Figure 4b shows the average distribution of both correct and wrong answers. For example, for the sentence big-boy, 65% of the answers provided by subjects in group 1 indicate that they successfully recognized both tactons, while the remaining 35% indicate that they recognized one tacton but failed to recognize the other. Similarly, 85% of the answers provided by subjects in group 2 indicate that they recognized both tactons, 5% only one, and 10% that they did not recognize any tacton. Note that for most incorrect sentences, subjects tended to recognize at least one tacton.
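
Distributions like the one in Fig. 4b can be obtained by scoring each answer by the number of tactons it gets right, position by position. A small sketch of such partial-credit scoring:

```python
# Histogram of answers by number of correctly recognized tactons.
from collections import Counter

def score_answers(displayed, reported):
    """Return {k: count of answers with exactly k correct tactons}."""
    counts = Counter(
        sum(d == r for d, r in zip(disp, rep))
        for disp, rep in zip(displayed, reported)
    )
    return dict(sorted(counts.items()))

# e.g. score_answers([("big", "boy")] * 2, [("big", "boy"), ("big", "play")])
# -> {1: 1, 2: 1}: one answer half right, one fully right
```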

4.4 Experiment III: Constructing Sentences with Three Tactons

The third test proceeds to evaluate subject performance with sentences containing three tactons.

4.4.1 Method

Tactons were combined in triples to form three sentences: big-boy-play, boy-play-ball, and play-big-ball. Again, tactile sentences were displayed as in verbal communication: one tacton first, a short pause, then the second tacton, a short pause, and finally the third tacton.

As in the previous tests, subjects were asked to feel the entire sentence before reporting the three words perceived. They were not told which sentences to expect, so they could report any possible combination. The test consisted of a single trial. Each sentence was randomly displayed twice. Subjects had no time restriction to provide their answers, and they could have the tactile sentence refreshed on the display upon request.

4.4.2 Results

Figure 5a shows the results obtained. For group 1, the average recognition rates were 70, 85, and 75% for big-boy-play, boy-play-ball, and play-big-ball, respectively. For group 2, these were 70, 80, and 80%, respectively. Note that subjects obtained practically the same recognition rates as with two-tacton sentences. Subjects were observed to request a refresh of the sentence on the display more often, but only at first; they quickly managed to concentrate and handle the three tactons.

Fig. 5 a Performance of the 20 subjects at recognizing tactile sentences with three tactons (p > 0.05). b Average distribution of answers

Subjects in both groups exhibited again a uniform performance across the test (group 1: χ2 = 0.74, p = 0.68, group 2: χ2 = 1.3, p = 0.52) and there was no statistically significant difference in the performances of the two groups (p > 0.05).

Figure 5b presents the average distribution of answers. Note that for most incorrect answers, subjects did recognize one or two tactons. Answers with all three tactons incorrect were the least frequent.

4.5 Experiment IV: Constructing Sentences with Four Tactons

The final test combines all four tactons to form the longest sentences that will be presented to the subjects.

4.5.1 Method

Tactons were combined in groups of four to form two sentences: big-boy-play-ball and boy-play-big-ball. Again, tactile sentences were displayed as in verbal communication: tactons alternating with short pauses.

The same protocol was followed: subjects were asked to feel the entire sentence before reporting the sequence of words perceived. They were not told which sentences to expect, so they could report any possible combination. The test consisted of a single trial. Each sentence was randomly displayed twice. Subjects had no time restriction to provide their answers, and they could have the tactile sentence refreshed on the display upon request.

4.5.2 Results

Figure 6a shows the results obtained. For group 1, the average recognition rates were 65 and 55% for big-boy-play-ball and boy-play-big-ball, respectively. For group 2, these were 70 and 75%, respectively. Note that performance dropped for group 1 while group 2 maintained it reasonably well. The results for group 1, however, are heavily skewed by the low performance of three subjects.

Fig. 6 a Performance of the 20 subjects at recognizing tactile sentences with four tactons (p > 0.05). b Average distribution of answers

For this last test, subjects in both groups exhibited again a uniform performance across the test (group 1: χ2 = 0.42, p = 0.5, group 2: χ2 = 0.12, p = 0.72) and there was no statistically significant difference in the performances of the two groups (p > 0.05).

Figure 6b shows the average distribution of answers. Note that for most incorrect answers, subjects did recognize half of the sentence. Answers reporting no correct word were rarely observed.

4.6 Discussion

All four tests show that tactons can be quickly learned and retained in memory. Furthermore, these experiments show that tactons can be combined into sentences that represent more complex ideas and that tactile sentences containing up to four tactons can be cognitively handled with high accuracy.

In each experiment, subjects in both groups exhibited a uniform performance across the test, and no statistically significant difference in performance between groups was found. However, the performance of subjects in group 1 differed significantly across the four experiments (χ2 = 11.66, p = 0.008).

Figure 7 shows the evolution of the mean of correct answers across the four experiments for both groups. Note that performance of group 1 progressively decreases while that of group 2 remains reasonably constant. Also note that for both groups, performance is practically the same for sentences with two and three tactons.

Fig. 7 Evolution of performance across the four tests. Subjects in group 1 exhibit a statistically significant difference in performance (χ2 = 11.66, p = 0.008) while subjects in group 2 do not (χ2 = 0.95, p = 0.8)

5 An Example of Situational Awareness Assistance During Navigation with Tactile Language

Independent and secure navigation in a real environment is one of the most challenging daily tasks faced by people with visual impairments, with a direct impact on quality of life, well-being, and social integration.

As mentioned above, the main aim of the on-shoe tactile display is to assist the navigation of visually impaired and blind individuals. The purpose of this test is to evaluate whether it is possible to manage directional information and tactile language representing situational awareness at the same time.

5.1 Directional Information

We previously reported in [19] a tactile rendering approach for directional information that achieved high recognition rates.

This approach consists of assigning a navigational direction to each of the four contact pins: forward F, backward B, right R, and left L. A navigational direction is encoded in five time slots (t1–t5) as follows: three consecutive short vibrations on the corresponding contact pin, then a short vibration on the opposite contact pin, and again a short vibration on the correct contact pin.

Figure 8 shows, for example, the codification for going forward. Note that contact pin F vibrates three times, then B once, and again F. Average recognition rates obtained from a group of 20 voluntary subjects were 91.65, 91.25, 78.75, and 91.65% for F, B, L, and R, respectively [19]. These figures suggest that people easily and intuitively associate the tactile patterns with navigational directions.

Fig. 8 Schedule of activation of the vibrating motors for the navigational direction rendering (example for going forward)
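
A minimal sketch of this five-slot code follows; the pulse and gap durations are assumptions, as the paper specifies only the slot structure (t1–t5).

```python
# Five-slot directional code: three short pulses on the target pin, one on the
# opposite pin, then one more on the target pin.
import time

OPPOSITE = {"F": "B", "B": "F", "L": "R", "R": "L"}
PULSE, GAP = 0.3, 0.2  # seconds (assumed)

def play_direction(direction, pulse_pin) -> None:
    """Render a navigational direction, e.g. 'F' for forward."""
    slots = [direction] * 3 + [OPPOSITE[direction], direction]  # t1-t5
    for pin in slots:
        pulse_pin(pin, PULSE)  # vibrate that contact pin for PULSE seconds
        time.sleep(GAP)

# e.g. play_direction("F", pulse_pin=lambda pin, s: print(f"{pin} for {s}s"))
```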

5.2 Navigation

5.2.1 Method

A camera-based tracking platform was set up for this experiment. It consisted of a camera placed 4 m above the ground that recorded RGB video. The acquired video was later processed on a PC for subject tracking.

To capture a typical performance, not skewed by particularly good or poor results, one of the 20 voluntary subjects was chosen for the experiment: a female subject exhibiting the median performance in understanding tactile sentences.

The tactons shown in Fig. 8 were used to point her toward a navigational direction. A fifth pattern, consisting of two consecutive short vibrations, a pause, and then two more consecutive short vibrations (the typical pattern for SMS alerts in mobile phones), was used to indicate a stop. The patterns for ‘ball’ and ‘big’ (Fig. 2) were redefined as ‘chair’ and ‘obstacle’, respectively. The subject was trained on the seven tactons as described in Sect. 4.2.
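
For illustration, the stop signal and the redefined word meanings can be sketched as follows; the driver callbacks and all timing values are hypothetical, as before.

```python
# Stop signal (SMS-like alert) plus the word redefinitions for navigation.
import time

def play_stop(vibrate, stop, pulse=0.3, gap=0.2, pause=0.5):
    """Two short pulses, a pause, then two short pulses."""
    for burst in range(2):
        for _ in range(2):
            vibrate(55)
            time.sleep(pulse)
            stop()
            time.sleep(gap)
        if burst == 0:
            time.sleep(pause)

# 'chair' and 'obstacle' reuse the patterns originally assigned in Fig. 2.
WORD_ALIASES = {"chair": "ball", "obstacle": "big"}
```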

Tactons were provided by a computer located outside the navigation environment. During the test, the subject was blindfolded so that no visual cues could be obtained. A navigation environment (Fig. 9) was proposed to the subject, who was completely naive about its structure prior to and during the test.

Fig. 9 Situational awareness assistance during navigation: a Navigation in a structured environment. The broken yellow line represents the trajectory followed by the test subject. b Action upon assistance provided by a tactile sentence

The subject was requested to move according to the pattern felt and to sit down on a chair located inside the environment when indicated. She had no time restriction to complete the test and, upon request, she could have the tactons refreshed on the interface.

5.2.2 Results

Figure 9a shows the results obtained in this test. The subject successfully followed 11 navigational instructions to reach point A (forward, stop, turn right, forward, stop, turn left, forward, stop, turn left, forward, stop). At point A, a four-tacton sentence was displayed: obstacle-left, chair-right (Fig. 9a). The subject acted accordingly (Fig. 9b).

Note that the obstacle/chair prefix changed the meaning of the directional tactons from commands to move to mere indications of location. Semantic relations like this one can be easily established by subjects.

6 Conclusion

This paper presented the results of user studies showing the ability to learn, memorize, and use tactile words.

Vibrotactile patterns, or tactons, abstractly representing verbal language can be understood, quickly learned, and retained in memory. Furthermore, sentences involving two, three, or four tactons can be constructed and recognized with high accuracy, which broadens the possibilities for describing complex situations and could improve interaction with mobile and wearable computers and, in particular, the situational awareness feedback provided by assistive devices.

An interesting observation from these tests is that tactons are retained in short-term memory. A follow-up study revealed that after one week without any practice, most of the test subjects could only recall the pattern associated with ‘big’. After a month, and without any further practice, no tacton could be identified by any of the 20 subjects. Nevertheless, with constant practice (as would be the case for visually impaired and blind individuals), tactons could be stored in long-term memory.

Results obtained from the tests seem very promising for podotactile stimulation. Tactons applied to the foot can be understood, and their predefined meanings can be easily associated. Future work will evaluate the perceptual load of tactons and tactile sentences, seeking to determine how long and under which circumstances a user can remain fully concentrated on tactons before getting tired or distracted. Several high perceptual load conditions will be tested, such as noisy environments and crowded spaces, at several tacton presentation rates: rare, constant, and high.