
1 Introduction

Tactical situation awareness (SA) is critical for ensuring the Warfighter's situational dominance, especially during combat. SA can improve security and survivability and optimize lethality. Combat environments can subject Warfighters to extreme conditions, testing the limits of both their physical and cognitive abilities. Thus, while there have been considerable advancements in military and commercial efforts to integrate and display context-sensitive SA information, displaying this information and interacting with the system remain challenges on the battlefield. One of the general problems in all sensory perception is information overload. Tactile cueing has proven to be a particularly effective means of conveying direction and spatial orientation information and, when implemented effectively, can increase performance (e.g., speed, accuracy) and lower cognitive workload [1–5].

Tactile communication is as old as human communication itself. Touch is a natural information channel that complements sight, hearing, and the sense of balance. Tactile displays have been investigated as a sensory or cognitive augmentation mechanism for delivering feedback to users, and tactile cueing can also be used to convey information [6]. However, there is a difference between feeling a stimulus and identifying its meaning. While body-referenced tactile orientation is a natural or intuitive representation of spatial information, tactile communication signals and their associations must first be learnt and then detected and interpreted by the user. The maximum information rate for the auditory channel, extrapolated from the perception of normal-rate speech, is estimated at 75 bits/s. The tactile channel is typically lower, with rates of 25–50 bits/s possible for highly experienced Braille users [7]. Despite this limitation, the transfer of tactile information can potentially occur on many different levels or “channels”. Combat ground soldiers do not need the same type of information that higher echelons need—too much information may be a distraction and induce cognitive overload [8]. Therefore, complex tactile language constructs that require concentration are not likely to be useful.

Several studies have shown that there is a range of touch variables that can be used to convey useful information to Soldiers [9]. We have recently demonstrated a Navigation and Communication system, NavCom, that includes GPS-driven tactile navigation cues. In field trials, NavCom reduced mission times and increased navigation accuracy [10]—Soldiers reported being more aware of their surroundings and having better control of their weapon. In one specific trial, Soldiers using NavCom during a 300 m navigation task at night checked their visual (map) display on average 1.2 times, versus 17.7 times for those without NavCom: one demonstration of how NavCom can improve effectiveness and lower cognitive workload, effort, and frustration. We also showed that tactile navigation cueing and tactile coding for messages could be presented simultaneously. NavCom was also fielded at the 2016 AEWE exercise at Ft. Benning.

Although task performance enhancements have been demonstrated, care must be taken when considering transitioning from one tactile display mode to another. For example, the usual construct for tactile cueing may be to alert the wearer to an impending threat through a body-referenced vibrotactile stimulus in a particular sector. The same touch variable may be used in navigation, where the tactile cueing may be a tactile “instruction” to head towards a particular sector. Clearly, in this example the two modes are ambiguous and could in fact be contradictory! Effective transition between tactor display modes or contexts remains a challenge.

Typically, tactile stimuli are defined by dimensions such as the frequency, intensity, force, location, and duration of the signal. However, these definitions and their associated thresholds, in isolation, are of little value if one does not consider characteristics of the user or situational context. Potential contributions of tactile cues may be obscured through inattention to moderating factors, fuzzy construct definitions, or imprecise measures. A common limitation of investigations of tactile cues is over-reliance on performance or workload outcomes, without consideration or measurement of mediating factors such as the ease by which tactile cues can be perceived and interpreted. We have introduced a framework for describing and quantifying the performance of tactile systems [11].

As unmanned assets become more autonomous, Soldier-robot communications will evolve from tele-operation and focused oversight to more tactical, multimodal, critical communications. Therefore the requirements for tactile and Soldier multimodal displays will need careful analysis and optimization to meet these expanded needs. In this paper, we discuss our research towards designing context sensitive and adaptive displays using our salience model.

2 Tactile Salience

One of the general problems in all sensory perception is information overload, but we humans are adept at using selective attention to quickly prioritize large amounts of information and give attention to what is most important. Salience is the property of a stimulus that allows it to stand out and be noticed. Salience is widely used in describing visual system performance of humans and in computational models that drive computer vision [12, 13] but has not been extensively or systematically applied to the tactile modality.

Tactile salience can be simply defined as the probability that a tactile cue will be detected. In controlled laboratory settings, salience can often be modeled as a function of tactor engineering and vibratory stimulus characteristics (i.e., physical characteristics of the signal itself), when context, or “noise”, is very low. However, as context becomes more complex, additional factors become significant [14].
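As a purely illustrative sketch (not a model from the cited studies), this detection-probability view can be expressed as a psychometric function of stimulus intensity, with contextual “noise” shifting the effective detection threshold upward; all parameter values below are hypothetical:

```python
import math

def detection_probability(intensity_db, threshold_db, slope=0.5, context_noise_db=0.0):
    """Logistic psychometric function: probability that a vibrotactile cue is
    perceived, given its intensity in dB above a baseline sensation threshold.
    Contextual "noise" (motion, workload, competing stimuli) is modeled as an
    upward shift of the effective threshold. Parameter values are illustrative.
    """
    effective_threshold = threshold_db + context_noise_db
    return 1.0 / (1.0 + math.exp(-slope * (intensity_db - effective_threshold)))

# The same stimulus is far less salient in a demanding context:
quiet = detection_probability(20.0, 10.0, context_noise_db=0.0)
busy = detection_probability(20.0, 10.0, context_noise_db=15.0)
```

Under this toy model, the same 20 dB stimulus is detected almost certainly in a quiet context but only rarely in a noisy one, consistent with the observation that salience degrades as context grows more complex.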

2.1 Tactile Salience Construct

It is clear that tactile salience is affected or moderated by many factors in addition to the engineering characteristics of the tactors themselves. While tactor engineering is clearly important to the concept of salience, the salience of any tactile cue will also be affected, perhaps to a great extent, by body location, characteristics of the user, task demands, and the environment in general.

Perhaps even more important, these factors interact with each other, such that the interactions may be more highly predictive than particular characteristics per se. Predictions of operator performance in naturalistic settings require the consideration of these characteristics as they interact in a particular setting. To better emphasize the interplay of these factors, Fig. 1 shows the construct of tactile salience as mediated by three core factors—characteristics pertaining to the user, the technology, and the environment—and their interactions. The effect of any one characteristic cannot be precisely predicted without consideration and/or control of other core factors.

Fig. 1. Core factors and interactions affecting tactile salience

The three main sources of influences on tactile salience, as shown in Fig. 1 are:

  • Technology characteristics/capabilities. Many studies have addressed a multitude of features related to the design of tactile stimulators and the construction of the stimulus signal. There is no doubt that these factors affect salience and can predict user perception, localization, and interpretation in controlled settings. For example, abrupt onsets (or changes) in stimuli and high-frequency (200–300 Hz) tone-burst vibrations are known to be naturally salient. Effects of characteristics such as amplitude, frequency, and interstimulus interval (ISI) are well summarized in a number of publications [15–17].

  • Individual/User differences. Salience can also be affected by characteristics of the user. These include sensory and perceptual characteristics common to all operators, such as sensory processing limitations that constrain tactile discrimination and vary with body location [18]. They can also include individual differences in cognitive abilities, personality, training, experience, age, or posture. Differences can also occur with regard to user motivation or focus of attention.

  • Environmental factors and task demands. It has been shown that factors such as physical demands and workload can significantly affect tactile cue perception. In addition, a variety of contextual aspects, such as operational tempo, physical demands, nature and type of distracters, level of threat, and consequences of failure should be considered. Features such as environmental noise can certainly impact the perception, recognition and thus effectiveness of tactile signals.

While each category can act as a main effect on tactile salience, interactions among the categories are also important.

Interactions between the user and the environment/task context produce factors such as perceptions of stress, workload, or fatigue that are likely to affect attention, the need for alerts, and/or the ability of the user to attend to alerts. As an example, individuals with higher levels of neuroticism, emotional reactivity, and/or lower stress tolerance are likely to experience work situations more intensely [19]. While simple direction cues may prove valuable when the user is stressed or fatigued, more complex cues may be less likely to be attended to. Thus, map-based visual information and complex audio information (e.g., turn north after the second street on your left) can become much less effective than tactile direction cues (e.g., go this way). One can see that issues regarding multisensory integration would fit here.

Interactions between environmental/task context and technology include the degree of match between the operational context and technology features or capabilities. Basic examples include situations that require tactors that are very quiet (e.g., covert communications), that augment attention management (e.g., complex decision making), or that are very easily perceived (e.g., during strenuous movements).

Interactions between the user and technology basically address the traditional domain of human factors engineering. The mismatch between user and technology characteristics can result in poor performance, when technology does not address operator norms that affect their ability to perceive and easily interpret tactile signals.

Thus, it is reasonable to posit that tactile salience depends on main effects and interactions among characteristics of the user, the technology, and the environment.
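This posited structure can be caricatured as a score with main-effect and interaction terms; the variables and weights here are entirely hypothetical, intended only to show how an interaction term makes the effect of one factor depend on the others:

```python
def salience_score(user, tech, env,
                   w_main=(0.2, 0.5, 0.3), w_inter=(0.1, 0.2, 0.1)):
    """Toy linear model of tactile salience with pairwise interactions among
    user (U), technology (T), and environment (E) factors, each scaled 0-1.
    All coefficients are made up for illustration."""
    u, t, e = user, tech, env
    main = w_main[0] * u + w_main[1] * t + w_main[2] * e
    interactions = w_inter[0] * u * t + w_inter[1] * t * e + w_inter[2] * u * e
    return main + interactions

# Because of the T*E interaction term, the environment's effect on salience
# is larger when a capable technology is present:
env_effect_with_tech = salience_score(1, 1, 1) - salience_score(1, 1, 0)
env_effect_without_tech = salience_score(1, 0, 1) - salience_score(1, 0, 0)
```

The point of the sketch is only that a main-effects-only model cannot capture this dependence: predicting salience requires estimating the interaction terms as well.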

To be salient for diverse users, tactile display technology should provide a wide range of recognizable touch characteristics in a small, lightweight, efficient system that is not limited by its mounting or usage. As noted above, abrupt onsets (or changes) in stimuli and high-frequency (200–300 Hz) tone-burst vibrations are naturally salient, while lower-frequency tone-burst vibrations are typically less salient. A complete consideration of tactile salience is essential for successful critical signaling, which involves three fundamental steps: (1) perception: the signal must capture the user's attention; (2) recognition: the signal must provide recognizable and distinct stimuli (patterns, locations, areas, changes in patterns, timing, tempo, melody); and (3) confirmation: the system should provide some confidence or support that substantiates the interpretation (e.g., repeating the signal or adding a multimodal cue so that identification is confirmed).

2.2 Adaptive Salience

The requirements for tactile salience are therefore complex and situation-dependent. We have identified the need for technology that meets requirements for adaptable salience [20]. In visual salience, the spotlight of attention is based on both exogenous properties of the object (bottom-up saliency) as well as cognitive processes (top-down) [21]. Top-down attention is usually associated with the completion of a specific task, while bottom-up saliency is a fixed characteristic of the stimuli and does not vary with changes in task or situation [22]. Similarly, we can expect that some aspects of tactile salience will be naturally salient, and other aspects will vary as a function of context (e.g., noise, competing demands for attention).

The concept of salience can contribute to perception and effectiveness, particularly when adapted to dynamic situations. Tactile salience can represent priority, such that more salient signals communicate higher importance or urgency [23]. A signal could start at a low level and move to higher salience if not attended to, or if priority increases.
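A hypothetical escalation policy (not a feature of any fielded system described here) can be sketched as a loop that re-presents a cue at increasing salience levels until the wearer acknowledges it:

```python
SALIENCE_LEVELS = ["low", "medium", "high"]  # e.g., increasing amplitude/tempo

def escalate(acknowledged, max_repeats=3):
    """Present a cue at each salience level up to max_repeats times, stepping
    up a level until the wearer acknowledges it. `acknowledged` is a callback
    returning True once the cue has been attended to. Returns the sequence of
    levels actually presented (hypothetical policy, for illustration only)."""
    presented = []
    for level in SALIENCE_LEVELS:
        for _ in range(max_repeats):
            presented.append(level)
            if acknowledged(level):
                return presented
    return presented  # never acknowledged: caller may fall back to another modality

# A wearer who only notices the cue once it reaches "high" salience:
history = escalate(lambda level: level == "high")
```

The same loop run in reverse order would implement the "turn down the volume" behavior operators have requested, starting at high salience and de-escalating on request.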

While single-tactor cueing can vary in salience depending on tactor characteristics and context, the issue of salience becomes more complex when using multi-tactor arrays, which can vary along a number of features. Controlled comparisons showed some patterns to be more salient than others [11]. In addition to tactor characteristics and context, multi-tactor arrays vary along additional dimensions that can also affect salience. For example, these arrays may communicate a pattern through sequential activation, such that an illusion of movement can be communicated (e.g., activating tactors around the torso in sequence is felt as a circling movement, in the same way that sequential activation of visual cues can create an illusion of movement, as in neon signs). Other patterns may be based on simultaneous activation of tactors, which may be a single burst or repeated. It has been shown that the tempo of sequential activations creates “melody”-type sensations that are easily recognized and distinguished based on their rhythmic features [24]. Simply changing the frequency of activation from slower to faster can change perceptions of urgency [23]. Patterns may be communicated across multiple body locations, such that an additional cue in a particular location can increase salience and indicate urgency.
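The sequential-activation idea can be made concrete with a small scheduling sketch; the tactor count and timing values are illustrative, not taken from the cited studies:

```python
def circular_pattern(n_tactors=8, burst_ms=100, gap_ms=50, revolutions=1):
    """Build a (tactor_index, start_ms, duration_ms) schedule that fires the
    tactors of a torso belt one after another, producing the apparent
    "circling" movement described in the text. Timing values are illustrative."""
    schedule = []
    t = 0
    for _ in range(revolutions):
        for i in range(n_tactors):
            schedule.append((i, t, burst_ms))
            t += burst_ms + gap_ms  # next tactor starts after burst plus gap
    return schedule

belt = circular_pattern()
```

Shrinking `burst_ms` and `gap_ms` raises the tempo of the sequence, which, per the urgency findings cited above, should increase perceived urgency.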

As understanding accumulates, tactile patterns can be developed to different levels of salience, and to automatically change levels of salience as the situation changes, or until an action is taken. From a user perspective, operators have indicated they would want the ability to control, to some degree, the salience of incoming communications, and be able to “turn down the volume” when the operator is in quiet conditions, while turning it up as needs arise.

2.3 Design Guidance for Tactile Displays

Salience will vary depending on the specific stimulus characteristics associated with the task at hand. For example, tactors found to be effective when users are standing still have been found to be much less effective when the users are engaged in strenuous activity [25], whereas tactors more specifically engineered for salience were more readily perceived regardless of activity. As another example, a tap on the shoulder from a tactile navigational aid can mean different things to a helicopter pilot than to an infantryman clearing an urban area.

Ideally, tactile displays should be salient and readily interpretable. For example, localized mechanical vibration on the front of the torso will probably be salient because it is not usually experienced. Salience can be increased further by using a vibration frequency that maximally stimulates the Pacinian corpuscles. Even though these receptors lie deep in abdominal tissue, their exquisite sensitivity to high-frequency (~250 Hz) stimuli allows appropriate vibration stimuli to be detected readily [26]. The interpretability of tactile displays can be ensured by intuitive somatotopic mapping of direction and spatial orientation information [27, 28]. Usable stimulation patterns can be generated by placing multiple tactors at many different body sites and providing a wide range of tactile stimuli, such as vibration, temperature, and light touch.
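For an evenly spaced torso belt, somatotopic direction mapping of this kind reduces to quantizing a body-referenced bearing to the nearest tactor. This minimal sketch assumes a hypothetical 8-tactor belt with tactor 0 directly ahead:

```python
def bearing_to_tactor(bearing_deg, n_tactors=8):
    """Map a body-referenced bearing (0 = straight ahead, increasing clockwise)
    to the index of the nearest tactor on an evenly spaced torso belt, so that
    a "go this way" cue is felt at the matching body location."""
    sector = 360.0 / n_tactors
    return int(((bearing_deg % 360) + sector / 2) // sector) % n_tactors
```

With eight tactors each covers a 45° sector, so a bearing of 90° (due right) lands on tactor 2 at the right hip, and bearings near 0° wrap back to tactor 0.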

The engineering challenge is to develop efficient actuators that can display such a wide range of touch qualities in a small, lightweight package that is not adversely affected by variations in mounting or skin attachment. To preserve salience, input characteristics should be adjusted as changes occur in external factors. This adjustment is similar to environmental or cognitive forms of masking, or habituation, except that less is known about tactile masking.

Tactile displays have typically been implemented as an “artificial” language that has no natural relationship between the spatial and temporal elements of the original information and the display output. To decrease the required training time and increase the probability of correct interpretation, the language needs to be related to the user’s task and be intuitive. An example of an intuitive set of tactile commands was the TACTICS system developed by UCF and EAI [9]. Tactile messages were created from five standard Army arm and hand signals, as described in chapter 2 of Army Field Manual FM 21–60. The five signals chosen for the experiment were “Attention”, “Halt”, “Rally”, “Move Out”, and “NBC”. Accuracy rates depended on training but were high despite minimal subject training, even when tested in a stressful environment (subjects were led, in full kit, over an obstacle course). A key design factor that contributed to the success of these particular tactile messages was that the tactile patterns had a temporal and spatial similarity to the original visual representation of the particular hand signal. For example, the hand signal for “Rally” is a circular motion of the hand over the head; similarly, the tactile signal for “Rally” is a circular motion around the torso. The hand signal for “NBC” is for the arms and hands to touch the left and right sides of the head; similarly, the tactile signal for “NBC” is a pair of sharp simultaneous signals to the left and right of the torso. Given this approach to design, naïve participants were able to correctly guess the meaning of each signal 51 % of the time, and a group given five minutes of training was 75 % accurate across all five signals. Thus, it is clear that the design of tactile hand-signal messages should use the visual reference where relevant.

3 Experiments

In addition to human-human communications, tactile displays can be used to enhance human-robot critical communications. A recent experiment [11] explored this concept in two ways. Several multi-tactor cues were developed that varied in tactor and tactor-pattern features, to represent alerts and status updates that could be received from a robotic squad member. In one approach, experimenters used forced-choice paired comparisons and independent scaled ratings of various multi-tactor patterns. Results showed significant differences due to tactor design characteristics, such that the EAI C-3 actuator was consistently perceived as more salient. However, participants also had no problem perceiving the EAI EMR tactors, which have the advantage of a low-frequency stimulus, low acoustic signature, and higher efficiency regarding power usage. Significant differences in salience were also associated with different patterns, such that some patterns were more salient than others. In addition, interaction effects suggest that differences in salience due to tactor type can vary among different patterns. While forced-choice approaches to the measurement of salience can reliably identify differences, the number of paired comparisons becomes quite large when comparing more than a few patterns; thus the use of independent ratings based on 5-point response scales offers a more efficient approach to measurement.
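The scaling argument is simple combinatorics: a full pairwise design needs n(n−1)/2 trials per judgment, while independent ratings need only n. A quick illustration (the pattern counts are arbitrary):

```python
from math import comb

def n_paired_comparisons(n_patterns):
    """Number of trials in a full pairwise design: each pattern is compared
    once against every other, i.e. C(n, 2) = n*(n-1)/2."""
    return comb(n_patterns, 2)

# Pairwise trials grow quadratically; independent ratings grow linearly.
pairs_for_5 = n_paired_comparisons(5)    # 10 comparisons vs. 5 ratings
pairs_for_12 = n_paired_comparisons(12)  # 66 comparisons vs. 12 ratings
```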

While the investigation above was based on participants who were stationary, a corresponding effort presented the patterns while participants were on the move during night operations. For this experiment, participants received two types of signals. Navigation direction cues were presented using lower-frequency EMR tactors, which continually indicated the direction to the next navigation waypoint and also indicated when participants needed to avoid an exclusion zone. During this navigation task (3 waypoints, totaling 900 m), participants also received four different patterns indicating threats or status updates. Participants indicated each time they perceived an incoming tactor signal and identified the nature of the communication. Responses were 93 % accurate, which is very high given that each signal was presented only once, with no warning and no repeats.

4 Discussion

Tactile displays have been shown to enhance performance and reduce workload across a range of performance domains. In this paper we discuss some fundamental issues with regard to the conceptualization, measurement, and usefulness of tactile salience. Recent experiments with Soldiers demonstrated that tactile displays, engineered to present salient signals, can significantly enhance navigation and critical communication. This underlines the need for systematic investigations of tactile salience, its measurement, and its moderators. We propose a framework to guide such investigations. It is clear that the design of tactile display systems must achieve effective levels of salience. As a further step, tactile displays can be engineered to be adaptive, such that levels of salience adapt to levels of operator activity, environmental factors, and task demands. Salience can thus increase to demand an operator response, or decrease, either automatically or upon request.

5 Conclusion

The design of tactile displays should be guided by systematic consideration of operator characteristics, environmental and task demands, and technology options, in order to achieve necessary levels of tactile salience for that situation—one size does not fit all. The notion of adaptive salience is predicted to further enhance the contribution of tactile cueing in dynamic circumstances.