1 Introduction

Touchless user interfaces have the potential to radically change how people interact with technology. For example, users can interact in more ‘natural’ and expressive ways, leveraging more degrees of freedom for input sensing than are available using contact-based alternatives like touchscreens (Sridhar et al. 2015). Touchless user interfaces also offer convenience. For example, users can interact without reaching for a screen or input device, without washing messy hands and without taking attention away from other tasks. Finally, touchless user interfaces can address hygiene concerns with shared input devices (Corenthy et al. 2018). Whilst this can help in contexts where sterility is a concern (e.g. in hospitals (Cronin and Doherty 2019; O’Hara et al. 2014)), the COVID-19 pandemic has led to increased awareness of the hygiene risks of shared touch surfaces and increased interest in touchless alternatives for accessing digital information and services.

Fig. 1

This chapter presents seven ultrasound haptic design patterns: a Tracked Fingertips, b Tracked Palm, c Floating screen, d Forcefield, e Object outline, f Motion patterns, g Special effects

These potential benefits have led to the adoption of touchless technologies across a variety of market sectors, as seen throughout this book. Many chapters examine particular use cases in detail: e.g. automotive user interfaces (Chapter “Augmenting Automotive Gesture Infotainment Interfaces Through Mid-air Haptic Icon Design”), mixed reality (Chapter “Ultrasound Mid-Air Tactile Feedback for Immersive Virtual Reality Interaction”) and input for novel displays (Chapter “Touchless Tactile Interaction with Unconventional Permeable Displays”, Chapter “Superimposing Visual Images on Mid-air Ultrasonic Haptic Stimulation”). However, touchless gesture input has usability challenges that affect its use more generally, e.g. the challenge of knowing where to provide input (Freeman et al. 2016, 2019), uncertainty about whether the system is responding (Freeman et al. 2014) and a limited feeling of control over interaction (Cornelio-Martinez et al. 2017).

Suitable feedback about interaction can help users overcome these issues, and ultrasound haptic feedback is ideally suited to this, allowing tactile feedback to be given directly to users’ hands as they gesture in air. There are many user experience benefits from using ultrasound haptic feedback in a touchless user interface. Such feedback has been found to address some of the usability challenges inherent with touchless input, e.g. guiding users so they can find where to provide input (Freeman et al. 2019) and creating a feeling of control over user interface widgets (Cornelio-Martinez et al. 2017). Mid-air haptics can also help enhance touchless interaction by giving interaction designers access to another sensory modality, which can increase user engagement (Limerick et al. 2019). These usability benefits are being applied across a diverse range of application areas, including automotive HCI, mixed reality and interactive advertising (Rakkolainen et al. 2020).

A growing body of academic research has helped to improve our understanding of how ultrasound haptic feedback is perceived, has established its benefits to user experience and evaluated its use across a variety of application areas (Rakkolainen et al. 2020). All highlight the compelling benefits and exciting potential of this novel haptic technology. Less clear, however, is the question of where to begin. How can designers, developers and researchers start to incorporate ultrasound haptic feedback into a touchless user interface design? This chapter begins to address this question by creating a collection of design patterns for ultrasound haptic feedback, previewed in Fig. 1. These design patterns represent common solutions used by the ultrasound haptics community, which can be used to kick-start the mid-air haptic feedback design process.

2 Background

Ultrasound haptic devices can be used to present a variety of tactile sensations against the hand. The basic unit of output is a focal point, a region of intense focused sound pressure in mid-air that imparts a subtle force against the hand upon contact (Iwamoto et al. 2008). These focal points are generally not strong enough to be perceived on their own, but can be purposefully modulated in a way that greatly improves perception, so that users can feel distinct tactile sensations. It is not necessary to understand modulation approaches (see Chapter “Modulation Methods for Ultrasound Midair Haptics”) or haptic rendering to read this chapter, because the design patterns will be described in terms of what the user experiences against their hands. Indeed, there may be several modulation methods that can produce similar tactile sensations, and by the time you read this, novel rendering methods may have replaced the current state of the art. Haptic designers and practitioners will likely have software tools at their disposal that streamline the development process and take care of the nuances of rendering, and so their responsibility is to choose the ‘best’ design for a given problem, to meet the needs of those who will use their touchless user interface. This chapter aims to inform this selection.

Design patterns and their intended tactile experience will be described using haptic points and haptic patterns as design primitives.

Haptic points are focal points, the smallest unit of perceptible output from an ultrasound haptics device. Multiple independent focal points can be positioned in 3D space above an ultrasound haptics device, and their size is determined by the ultrasound wavelength; most devices use 40 kHz sound, creating focal points that are approximately 8.6 mm in diameter (Carter et al. 2013).
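
As a quick sanity check of that figure, the focal point diameter can be approximated by one wavelength of the ultrasound carrier; a minimal sketch, assuming sound travels at roughly 343 m/s in air:

```python
# Approximate focal point diameter as one wavelength of the ultrasound
# carrier. Assumes the speed of sound in air at room temperature (~343 m/s).
SPEED_OF_SOUND_M_S = 343.0

def focal_point_diameter_mm(carrier_hz: float) -> float:
    """One wavelength of the carrier frequency, in millimetres."""
    return SPEED_OF_SOUND_M_S / carrier_hz * 1000.0

print(round(focal_point_diameter_mm(40_000), 1))  # -> 8.6 (mm, at 40 kHz)
```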

Haptic patterns are composed from one or more focal points, which change position in a deterministic way over time. For many of the design patterns described in this chapter, haptic patterns will be simple shapes, e.g. lines and polygons. There are numerous methods for creating such shapes, e.g. distributing multiple focal points along the outline of the shape (Long et al. 2014) or rapidly moving a single focal point along that outline (Frier et al. 2018; Takahashi et al. 2018, 2019) to elicit different tactile sensations (Frier et al. 2018, 2019; Freeman and Wilson 2021). To understand this chapter, it is sufficient to know the concept of a haptic pattern without understanding how such a pattern is created, especially since cutting-edge research continues to improve rendering methods (Hajas et al. 2020).
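
To illustrate these two rendering strategies, the sketch below generates a circular haptic pattern either as several static focal points or as a single point swept around the outline. The coordinates, units and sweep rate are illustrative assumptions, not values from the cited work:

```python
import math

def static_outline(centre, radius, n_points):
    """A circle rendered as n_points fixed focal points around its outline."""
    cx, cy, cz = centre
    return [(cx + radius * math.cos(2 * math.pi * i / n_points),
             cy + radius * math.sin(2 * math.pi * i / n_points),
             cz)
            for i in range(n_points)]

def swept_outline_position(centre, radius, t_seconds, sweep_hz=100.0):
    """The same circle rendered by one focal point that rapidly and
    repeatedly traverses the outline (spatiotemporal modulation)."""
    cx, cy, cz = centre
    angle = 2 * math.pi * sweep_hz * t_seconds
    return (cx + radius * math.cos(angle),
            cy + radius * math.sin(angle),
            cz)

# Four static points on a 2 cm-radius circle, 20 cm above the array:
print(static_outline((0.0, 0.0, 0.2), 0.02, 4))
```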

Recent work has proposed simple design spaces that formally categorise and describe ultrasound haptic experiences. Rakkolainen et al. (2020) identified four categories of mid-air haptic output: sensations of motion, shapes, textured surfaces and abstract dynamic patterns. Dzidek et al. (2018) identified five categories of perceptual sensation: field sensations, edge detection, focused sensations, spherical sensations and fingertip sensations. This chapter takes a retrospective view of ultrasound haptics research to explore common haptic designs, but it is not an exhaustive overview and does not attempt to cover all designs found in the literature in a formal design space.

3 Ultrasound Haptic Design Patterns

This section presents a collection of design patterns for ultrasound haptic feedback. These represent commonly used interaction metaphors and feedback designs, which satisfy many usability needs and allow the creation of a variety of engaging user experiences. Designers, developers and practitioners may find them useful as ‘recipes’ for a good touchless user interface experience.

Each design pattern will be described in its own section. There will be a summary box that explains what the design pattern is, why it may be used in a touchless user interface, where it is rendered, and when the haptic feedback may be presented. Finally, there will be questions that designers need to consider if using these design patterns, and examples of research where they have been described and used.

3.1 Tracked Fingertips

In this design pattern, haptic points are positioned at one, or more, fingertips, like in Fig. 2. When the user moves their hand or fingers, the haptic points are repositioned in 3D to remain in contact. This pattern requires hand tracking that can locate individual fingertips relative to the haptic device. One aim of this haptic design is to create the experience of touching something in mid-air; for example, to let the user know they have touched a user interface element or a virtual object. In this case, the presence of feedback is enough to enhance the user experience, because users can feel where and when they have touched an interactive object in mid-air. Another aim of this haptic design is to inform users that their fingers are actively being tracked and that the system is responding to their movements. In this case, the presence of feedback shows ‘system attention’ (Bellotti et al. 2002), reassuring users that they are providing input in a suitable position (Freeman et al. 2014).

Fig. 2

Tracked Fingertips: haptic points are given against the fingers and are linked to the fingertip positions

3.1.1 Design Considerations

Haptic feedback can be presented against one or more fingertips. Designers need to decide how many fingers are appropriate for their interaction design, as this may affect the strength of the haptic feedback. When a single haptic point is created, the ultrasound haptic device can maximise feedback intensity; as more points are added, the intensity of all points will typically be reduced. Presenting additional unnecessary points can therefore have a detrimental effect on the overall strength of the haptic feedback.
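
A simplified sketch of this trade-off follows; the equal division of output capacity is an assumption for illustration, as the exact scaling depends on the device and rendering method:

```python
def per_point_intensity(device_capacity: float, n_points: int) -> float:
    # Simplified assumption: output capacity is shared equally between
    # simultaneous focal points, so each added point weakens the others.
    return device_capacity / n_points

print(per_point_intensity(1.0, 1))  # 1.0 -- one point at full strength
print(per_point_intensity(1.0, 5))  # 0.2 -- five points, each much weaker
```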

Tracked Fingertips (UHDP1)

  • What? Haptic points that are linked to one, or more, fingertip positions.

  • Why? To create the experience of touching something. To confirm that the system is actively responding to the user’s actions.

  • Where? One or more fingertips.

  • When? In response to input (event-driven), or continuously, or to show system attention.

In most cases, a single haptic point is sufficient. A common touchless gesture design is to use a single extended index finger for input, e.g. to control an on-screen cursor or to ‘tap’ virtual buttons. For this, a single haptic point at the extended fingertip can be sufficient to support effective input, and it confirms to the user that the correct finger is being tracked by the user interface.

There are situations where multiple fingers will require haptic feedback. For example, consider a pinch gesture between index finger and thumb, used to drag a slider control; in this case, presenting feedback to each fingertip may enhance the sensation of ‘grasping’ the slider between the fingers. Likewise, if the user is holding a virtual object in a touchless user interface, then presenting feedback at all fingertips supports the experience of a person grasping that object.

Designers must also decide when feedback should be given. Haptic points can be presented in response to actions using an event-driven feedback model (e.g. a user experiences feedback once their finger ‘taps’ a mid-air button). Alternatively, feedback can be presented at all times whilst the hand is within range of the device. The most appropriate choice here depends on the intended user experience. In an event-driven input model (e.g. pressing buttons, grasping objects), feedback can be presented in short bursts (e.g. after a button press) or continually (e.g. whilst grasping a virtual object). For other user experiences, users may feel more confident if feedback is presented continuously whilst their hands are within the interaction volume, so that they know when their hands are being tracked.

In this design pattern, haptic feedback is presented as one or more discrete points. Amplitude modulation (Iwamoto et al. 2008) and lateral modulation (Takahashi et al. 2019) are suitable rendering methods for this design pattern, as they enable perceptible feedback at fixed-position points. The perceived size of the focal point corresponds to the wavelength of the sound wave; for 40 kHz ultrasound, this is approximately 8.6 mm (Rakkolainen et al. 2020). When the focal point is positioned appropriately, users will feel like the entire fingertip is being stimulated.
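
A minimal update loop for this pattern might look like the sketch below; `tracker` and `device` are hypothetical stand-ins for a hand-tracking SDK and a haptics API, not calls from any real library:

```python
import time

def run_tracked_fingertips(tracker, device, fingers=("index",), rate_hz=200):
    """Reposition one focal point per tracked fingertip on every frame.

    Assumed (hypothetical) interfaces:
      tracker.get_fingertips() -> {finger_name: (x, y, z)} in metres
      device.set_focal_points(list_of_xyz) -> None
    """
    period_s = 1.0 / rate_hz
    while True:
        tips = tracker.get_fingertips()
        points = [tips[f] for f in fingers if f in tips]
        device.set_focal_points(points)  # empty list: hand out of range
        time.sleep(period_s)
```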

3.1.2 Questions for Designers

When using this design pattern, consider:

  • How many fingertips should receive haptic feedback?

  • When should haptic feedback be presented, and for how long?

3.1.3 Examples

One of the first examples of the Tracked Fingertips design pattern can be seen in work by Carter et al. (2013), who presented a touchless user interface that was capable of tracking multiple fingers and targeting them with independent points of haptic feedback. Haptic feedback was used in their system to mimic the sensation of touching a screen in mid-air, an experience we will look at in more detail in Sect. 3.3. Shakeri et al. (2018) used discrete event-driven haptic feedback, presenting a 500 ms pulse against two fingertips to confirm recognition of the ‘victory’ gesture (i.e. extended index and middle fingers). In this instance, event-driven feedback was given to inform the user that their input gesture was recognised. As can be seen by contrasting these examples, event-driven feedback may be better suited to confirming response to a user’s actions, whilst continuous feedback may be more appropriate for creating the sensation of touching something.

3.2 Tracked Palm

In this design pattern, a haptic point or pattern is positioned on the palm of the hand, like in Fig. 3. When the user moves their hand, the haptic output is repositioned to remain in contact with the hand. This is very similar to the Tracked Fingertips design pattern, except haptic feedback is presented against the palm (or whole hand), rather than just the fingertips. This offers the same potential benefits as the Tracked Fingertips design pattern, i.e. letting the user know when they are touching a virtual object, or informing them when their hand is being actively tracked for input.

Fig. 3

Tracked Palm: haptic points (left) or patterns (right) are given against the palm and are linked to the palm position

3.2.1 Design Considerations

One of the first things designers should consider is whether to use this or the Tracked Fingertips design pattern. Both aim to give the same benefits to the user, so the most appropriate choice will likely be informed by the choice of tracking technology and the intended interaction metaphor. Targeting haptic points at fingertips requires precise finger tracking, which may not always be available. In this situation, targeting haptic feedback at the palm will be more straightforward as this requires a lower resolution sensor that only needs to be able to roughly estimate hand position (e.g. like in work by Hoshi (2011)). Choice of design pattern will also be influenced by the intended interaction metaphor. If the palm position is used as input to the system (rather than a fingertip position), then it makes more sense to target haptic feedback at the palm. Likewise, if the intended sensation is for users to grasp a virtual object and feel it in their whole hand, then presenting feedback on the palm will be appropriate.

Tracked Palm (UHDP2)

  • What? Haptic points or patterns linked to the palm position.

  • Why? To create the experience of touching something. To confirm that the system is actively responding to the user’s actions.

  • Where? On the palm, typically centred.

  • When? In response to input (event-driven), or continuously.

A key decision with this design pattern is the choice of tactile sensation to render on the palm. An individual haptic point or a spatially modulated pattern could be presented (e.g. circles). Choice may be limited by the haptic device and its driving software: haptic points are more straightforward to render, whereas continually moving haptic points require higher sample rates, more complex calculations, etc. From a usability perspective, there is likely to be little difference between the choice of tactile sensation; the presence of haptic feedback will be more important than its shape or tactile qualities. There will be a perceptual difference, however: patterns can feel more intense than fixed-position points (Frier et al. 2018; Takahashi et al. 2019), and so these may be the best choice if available.
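
As a sketch, a spatially modulated circle tracking the palm might be rendered as below; `tracker` and `device` are again hypothetical stand-ins, and the radius and sweep rate are illustrative:

```python
import math
import time

def run_tracked_palm_circle(tracker, device, radius=0.02, sweep_hz=100.0):
    """Trace a circle around the tracked palm centre, following the hand."""
    t0 = time.monotonic()
    while True:
        palm = tracker.get_palm_centre()  # (x, y, z) or None -- hypothetical
        if palm is None:
            device.set_focal_points([])   # hand out of range: no output
        else:
            angle = 2 * math.pi * sweep_hz * (time.monotonic() - t0)
            x, y, z = palm
            device.set_focal_points([(x + radius * math.cos(angle),
                                      y + radius * math.sin(angle),
                                      z)])
        time.sleep(0.001)  # position update rate; devices refresh far faster
```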

Similar to the Tracked Fingertips design pattern, designers need to consider when feedback should be presented. As discussed before, the most appropriate choice depends on the intended user experience and the reader should refer to Sect. 3.1.1 for more insight.

3.2.2 Questions for Designers

When using this design pattern, consider:

  • Should this design be used, or is Tracked Fingertips more appropriate?

  • What should be presented against the palm—haptic point, pattern?

  • If using a haptic pattern, what should be rendered?

  • When should haptic feedback be presented, and for how long?

3.2.3 Examples

The Tracked Palm design has been widely used to give users feedback that confirms the touchless user interface is actively tracking their hand movements in air, although there are subtle variations in how this experience is created. Hoshi (2011) and Georgiou et al. (2018), for example, both presented a continuous haptic point against the centre of an open palm to confirm the system was tracking the hand. In the latter system, additional haptic patterns targeted other regions of the hand in an event-driven model, e.g. to confirm when gestures were recognised. An advantage of presenting a single point like this is that it leaves other parts of the hand free for presenting additional feedback.

Alternatively, larger patterns can be presented against the palm. For example, Freeman et al. (2019) presented a continuous circular pattern against the palm, which dynamically resized to guide hand movements in mid-air. Shakeri et al. (2018) also presented a circular pattern, although this was only presented briefly after input gestures were recognised. As can be seen by contrasting these examples, event-driven feedback is typically more appropriate when feedback is given in response to a user action, whilst continuous feedback will be more appropriate when feedback aims to guide users or confirm that the system is tracking their hands correctly.

3.3 Floating Screen

Touchless user interfaces often mimic the behaviour of touchscreens, allowing users to ‘tap’ buttons and icons on a virtual screen in mid-air. A virtual screen is generally defined as a flat surface that is oriented and positioned in air in front of a real display. Users’ hands are tracked and mapped to the position of an on-screen cursor, which can be used to make selections by reaching forward, breaking the surface and effectively ‘tapping’ the floating screen. This interaction metaphor leverages familiarity with touchscreens and, from a more pragmatic perspective, can be easier to retrofit to existing user interfaces (effectively using the hand or finger position to control a mouse pointer). Ultrasound haptic feedback is naturally suited to these floating virtual touchscreens because it can provide the missing sense of physical contact that supports effective touchscreen input (Freeman et al. 2014), overcoming a usability issue with floating screens (Waugh and Robertson 2021).

In this design pattern, haptic feedback is positioned in order to create the experience of the hand or fingers touching the virtual screen, like in Fig. 4. One aim of this haptic design is to inform users of where the floating screen is positioned, so they know how far they must reach to activate its user interface elements (Vo and Brewster 2015). Another is to give confirmation to users that their input actions were recognised by the system, because even the brief presentation of a focal point after a button activation gesture can be effective (Cornelio-Martinez et al. 2017). This can be considered a special case of the Tracked Fingertips and Tracked Palm design patterns, where haptic feedback is presented when targeting controls in a touchless user interface, with the intention of mimicking contact with a touchscreen.

Fig. 4

Floating screen: haptic feedback is given when users touch a virtual screen surface, or buttons on the surface. For example, feedback on the fingertip (left) or palm (right)

3.3.1 Design Considerations

Haptic feedback can be presented against a fingertip, the palm or the whole hand. The most appropriate choice is the part of the hand used to activate content on the floating screen, so users know how to target user interface elements effectively. For example, if an extended index finger is used to ‘tap’ buttons, then haptic feedback should be positioned at the index fingertip, or if the centre of the palm is used to detect a whole-hand button ‘press’, then haptic feedback should be positioned at the centre of the palm.

Floating Screen (UHDP3)

  • What? Haptic feedback given when the hand is targeting a virtual screen.

  • Why? To reveal the position of the virtual screen surface. To give feedback about activating screen controls.

  • Where? At the point of contact with the screen, typically at the part of the hand used for input tracking.

  • When? In response to screen activation (event-driven), or continuously.

Screen contact can be conveyed using both haptic points and haptic patterns, although the most appropriate choice will depend on the input gesture design: e.g. a haptic point is sufficient for a single fingertip, whereas a haptic pattern may be more suitable if the screen is activated by the palm. In some touchless user interface designs, it may be possible to represent the shape and size of the button as a haptic pattern, creating cross-modal congruence between visual and haptic feedback. Whilst this may create a richer interaction experience, the main usability benefits will come from simply feeling the feedback in the first place, as this conveys the screen position and informs the user that they have made contact.

Designers need to decide when feedback should be given, a choice that will be informed by the floating screen design. Touchless buttons on a floating screen can be activated in numerous ways; for example, when a hand contacts its surface, when a hand hovers in front of it for a short period of time or when a finger performs a ‘tap’ motion in front of it. When buttons are activated through contact or tapping gestures, event-driven feedback will likely be most appropriate, because the onset of haptic feedback informs the user that the activation gesture has been acted upon. When buttons are activated via hover, it may be more suitable to present feedback continuously whilst the hand is hovering, to inform users that they are controlling an active cursor and that an unintended selection may otherwise take place.

Button activation method will also influence the hand positions where haptic feedback should be given. If buttons are activated through contact or tapping gestures, haptic points or patterns should be positioned at the surface of the screen and oriented towards the hand. This is a natural complement to the event-driven feedback model: haptic feedback will only be experienced by the user when their hand reaches towards the screen to activate a user interface element. Alternatively, for continuous feedback, haptic feedback should be given at all times when the hand is actively being tracked: e.g. whilst the activation timer is enabled for dwell activation.
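
For the event-driven case, the sketch below triggers a pulse at the point where the fingertip breaks the screen plane; `device.pulse_at` is a hypothetical API and the plane placement is an assumption:

```python
def check_screen_contact(fingertip, prev_fingertip, screen_z, device,
                         pulse_s=0.2):
    """Pulse when the finger crosses the floating screen plane (z = screen_z).

    Positions are (x, y, z); the finger approaches with decreasing z.
    device.pulse_at(position, duration_s) is a hypothetical stand-in.
    """
    crossed = prev_fingertip[2] > screen_z >= fingertip[2]
    if crossed:
        # Present the pulse on the screen surface, beneath the fingertip,
        # so the user feels contact exactly where the 'tap' landed.
        device.pulse_at((fingertip[0], fingertip[1], screen_z),
                        duration_s=pulse_s)
```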

3.3.2 Questions for Designers

When using this design pattern, consider:

  • Which part of the hand should be used as input to the floating screen?

  • What should be presented against that part of the hand—haptic point, pattern?

  • If using a haptic pattern, what should be rendered—the button shape and size?

  • How should floating buttons and other user interface elements be activated?

  • When should haptic feedback be presented, and for how long?

3.3.3 Examples

This is a widely used design pattern, and numerous examples can be found in the literature; however, for brevity I focus on a few examples that highlight how this design can be varied. Hoshi (2011) and Carter et al. (2013) used the Floating Screen design pattern and targeted the palm and fingertips, respectively. In both examples, the floating screen surface was positioned directly in front of a visual display.

Floating screens can be placed in other positions, however. For example, Freeman et al. (2014) used a floating screen in an offset position, with users gesturing beside a small screen instead of directly in front of it (to avoid occluding the display content). Sand et al. (2015) used this design pattern in virtual reality, using a hand tracker and haptics device mounted on a virtual reality headset, such that users felt contact with a floating screen when their hands touched it in virtual reality. This design pattern has also been used with mid-air holographic displays, e.g. by Monnai et al. (2014).

This design pattern can also be used in interactive experiences that do not mimic interaction with traditional graphical user interfaces; for example, Hwang et al. (2017) describe a novel example whereby users can play a piano in virtual reality, tapping piano keys instead of user interface buttons.

3.4 Forcefield

A key usability challenge with touchless interaction is knowing where to perform input gestures. Physical input devices that users touch or grasp have affordances that help users discover how to direct their input, but touchless user interfaces do not—the interaction volume is not visible, and users cannot be expected to know where their hands can, and cannot, be sensed (Freeman et al. 2016). Users may not even know that touchless interaction is available (Limerick 2020), especially if a touchless input device is used alongside an existing touchscreen display.

In this design pattern, ultrasound haptic feedback is used to convey the boundaries of a touchless user interface by creating a ‘forcefield’, a haptic surface that users feel as they reach through it (like in Fig. 5). One aim of this haptic design is to help users discover the boundaries of a touchless interaction volume; reaching into this volume—by breaking through the forcefield—creates a perceptible change in state, letting users know that this is where interaction begins. At the same time, the presence of the forcefield reveals the otherwise invisible touchless user interface, which users may have previously been unaware of; the touchless haptic feedback conveys interactivity in the space in front of the display and may prevent them reaching for the screen.

Fig. 5

Forcefield: haptic feedback is used to create a ‘forcefield’ surface that users must reach through

There are similarities between this and the Floating Screen design pattern, in that both utilise the concept of a surface in a fixed position in mid-air. The key distinction between them is that users are intended to interact on the surface of a Floating Screen, but on the far side of a Forcefield. An alternative means of revealing a touchless user interface would be to use continuous haptic feedback linked to the hand (i.e. Tracked Fingertips or Tracked Palm). However, the advantage of using a fixed position Forcefield is that users only experience a tactile sensation when they reach through the surface; once their hand is inside the interaction volume, haptic feedback can then be used for other purposes, e.g. to give feedback about touchless gestures, or to render haptic representations of virtual objects.

3.4.1 Design Considerations

When creating an ultrasound haptic forcefield, two of the first design considerations are where to place it and how to orient it. A forcefield will typically be used alongside a visual display, and if the intention is to convey the boundaries of the touchless user interface, then it will make the most sense to align the forcefield with the screen. As a result, the forcefield surface will generally be the same distance in front of all regions of the screen, aligned like a Floating Screen. That distance between screen and forcefield depends on the intended interaction metaphor: does the forcefield define where the interaction area begins (i.e. after crossing this point, touchless input sensing is active), or ends (i.e. after crossing this point, touchless input sensing will stop)? Perhaps even both, using two forcefields to show both boundaries?

Forcefield (UHDP4)

  • What? Haptic feedback that represents a surface that users must reach through.

  • Why? To indicate the boundaries of the interaction volume, so users know where to provide input or can feel the transition between two interface states.

  • Where? On a line segment across the hand, where the hand intersects the forcefield surface.

  • When? When the hand intersects the forcefield surface.

An ultrasound haptic forcefield will be placed in a fixed position and orientation in space, but users’ hands will approach it from different positions and at different angles. In some cases, it may be more appropriate to choose a curved forcefield surface rather than a flat one. For example, a flat haptic surface is ideally suited to a flat screen, but a curved surface might suit other configurations, e.g. for a touchless interface in a vehicle where the user does not receive any visual feedback on a screen (Georgiou and Griffiths 2017; Shakeri et al. 2018). The choice of surface shape will impact how the forcefield is presented against the user’s hand: a touchless user interface needs to calculate the intersection between the hand and the surface, taking hand height and orientation into consideration. The intersection can then be used as the trajectory for one or more focal points to move along, creating the sensation of a surface that remains in place whilst the hand passes through.
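
As a geometric sketch of that intersection, assuming a flat horizontal forcefield and a palm held roughly perpendicular to it; a real implementation would use the tracked palm orientation rather than these simplifications:

```python
import math

def forcefield_strip(palm_centre, across_palm_u, palm_radius, field_z):
    """Endpoints of the strip where the palm crosses the plane z = field_z.

    Simplifying assumptions: the palm is modelled as a disc of radius
    palm_radius held roughly perpendicular to the forcefield plane, and
    across_palm_u is a unit vector lying in both the palm plane and the
    forcefield plane (e.g. across the knuckles). Returns None if the hand
    is not intersecting the forcefield.
    """
    cx, cy, cz = palm_centre
    d = cz - field_z                      # palm centre's distance to plane
    if abs(d) >= palm_radius:
        return None
    half = math.sqrt(palm_radius ** 2 - d ** 2)   # half-chord length
    ux, uy, _ = across_palm_u
    a = (cx - half * ux, cy - half * uy, field_z)
    b = (cx + half * ux, cy + half * uy, field_z)
    return a, b                           # sweep focal points along a--b
```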

3.4.2 Questions for Designers

When using this design pattern, consider:

  • What does entering the forcefield mean—entering the interaction zone, leaving the interaction zone, both?

  • What is the shape of the forcefield—spherical surface, flat surface?

  • How is the forcefield oriented—aligned with input sensor or visual display?

3.4.3 Examples

This design pattern exists as a template within the Ultraleap Sensation Editor (Ultraleap 2019) but has seen little use in the academic literature so far. A similar design was described by Shakeri et al. (2018), who evaluated a touchless user interface for in-car interaction. In their system, ultrasound haptic feedback was briefly presented against the palm when it entered the interaction volume. Whilst this rendering did not create the sensation of a solid surface being broken by the hand, it had the same intention of conveying the boundary between interactive and non-interactive regions in space.

3.5 Object Outline

An alluring capability of ultrasound haptic feedback is its ability to take simple focal points and use them to render patterns of varying shape and size. A compelling use of this capability is to create haptic representations of virtual objects, so that users can ‘feel’ the visual content they see on a display. Rendering haptic shapes that can be accurately recognised is a challenge (Hajas et al. 2020; Korres and Eid 2016; Long et al. 2014; Rutten et al. 2019), although a corresponding visual representation of the shape can help users make sense of the haptic feedback.

In this design pattern, ultrasound haptics is used to create a haptic representation of a virtual object shown on a visual display. Whilst there are many ways to achieve this, the most common is to render the outside edge of the object, where it intersects the hand. For example, Fig. 6 shows examples of how a haptic circle may be presented using discrete focal points (left) or spatially modulated focal points (right). Users can only perceive a 2D shape on their palm at any one time, but by dynamically scaling the shape outline, users can experience the illusion of moving their hand through a 3D object. Consider a sphere: as a user moves their hand through a virtual sphere, its circular cross-section on the palm will increase, reach maximum size at the midpoint and then decrease as the hand approaches the opposite side (Long et al. 2014).
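
The sphere example reduces to simple geometry: the radius of the circle rendered on the palm follows from Pythagoras. A minimal sketch, with illustrative dimensions in metres:

```python
import math

def sphere_cross_section_radius(sphere_z, sphere_radius, palm_z):
    """Radius of the circle to render as the palm moves through a sphere;
    zero when the palm is outside it."""
    d = abs(palm_z - sphere_z)           # distance from the sphere's centre
    if d >= sphere_radius:
        return 0.0
    return math.sqrt(sphere_radius ** 2 - d ** 2)

# Largest at the midpoint, shrinking towards either side of the sphere:
for palm_z in (0.08, 0.10, 0.12):
    print(round(sphere_cross_section_radius(0.10, 0.04, palm_z), 3))
```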

This design pattern aims to help users locate virtual objects in mid-air and support haptic exploration (e.g. by conveying shape and size). The addition of haptic feedback can also create a more engaging user experience, through the addition of another sensory modality that allows users to ‘feel’ what they see on screen.

Fig. 6

Object outline: haptic feedback represents the edge of a virtual object, e.g. using a series of points (left) or a moving focal point (right) to render a circle on the palm

3.5.1 Design Considerations

An object outline pattern needs to resemble the shape of the corresponding virtual object, so there are fewer design parameters for designers to consider. However, a key decision will be how to represent the outline shape. Haptic shapes can be presented using several haptic points distributed around the outline (e.g. Fig. 6–left) or using spatially modulated patterns, where one or more haptic points rapidly trace the outline (e.g. Fig. 6–right).

Object Outline (UHDP5)

  • What? Haptic feedback resembling the outline of a virtual object.

  • Why? To help users locate virtual objects. For haptic exploration. To increase engagement and enhance content shown on screen.

  • Where? On the region of the hand that intersects the object.

  • When? When the hand is intersecting the virtual object.

We cannot recommend a ‘best’ method for presenting haptic shapes, as research into improved shape rendering is ongoing and recommendations will change over time—as will be discussed in Sect. 3.5.3. It is worth noting, however, that most research into haptic shape perception investigates shape recognition with haptic-only presentation. In practice, the Object Outline design pattern is most likely to be used with a visual representation on the screen, which is likely to make the haptic shapes more easily recognisable, such that subtle variations in the shape rendering approach become less important.

When creating 3D virtual objects for a touchless user interface, the virtual object will likely have to be fixed in position. This allows the user’s hand to move ‘through’ the object, experiencing the varying shape and size as a result of the changing intersection between hand and virtual object (e.g. Fig. 7).

Fig. 7

As the hand moves ‘through’ a 3D virtual object, the outline of the intersection will vary in size and/or shape. For example, as the hand moves through a cone, the diameter of its circular cross-section will change

Presenting 2D outlines is more straightforward as the shape and/or size of the outline does not vary (although it may change position or orientation as the hand moves). Consequently, 2D shapes need not be fixed in position and could be linked to the hand, so that users perceive them from any hand position (a special instance of Tracked Palm).

3.5.2 Questions for Designers

When using this design pattern, consider:

  • What visual cues, if appropriate, can be given to aid shape perception?

  • How large should the haptic object be—will it fit on the palm?

  • Will 2D shapes be fixed in position, or should they be linked to hand position?

3.5.3 Examples

In one of the earliest explorations of this design pattern, Long et al. (2014) described a novel method for rendering volumetric 3D objects by creating several disconnected haptic points around the edge of the 2D cross-section with the palm. Frier et al. (2018) presented a more sophisticated rendering method for polygons, where one focal point rapidly and repeatedly traverses the outline. Whilst this works well for circles, object outlines with corners are more difficult to accurately perceive (e.g. squares, triangles). Hajas et al. (2020) discussed a novel extension of Frier’s method, where the moving focal point briefly pauses at corners before changing direction. This helped to emphasise the corners and edges of the object, so that users could more accurately recognise the shapes.

3.6 Motion Patterns

In the design patterns described so far, the haptic sensations have been fixed in position: some are fixed on the hand (e.g. Tracked Fingertips and Tracked Palm) whilst others are fixed in space (e.g. Floating Screen and Forcefield). Users may experience sensations of haptic movement when interacting with fixed-position haptic feedback, like when they reach through a Forcefield, but that motion is a result of the user’s actions and not deliberate movement intended by the designer.

In this design pattern, ultrasound haptic feedback is used to create a deliberate and controlled sensation of motion on the hand (like in Fig. 8). This is distinct from other designs because the motion is consistent and intentional, controlled by the touchless user interface and not a by-product of the user’s own hand movement. One aim of this design is to convey a change in system state, informing the user that something has happened through animated haptic sensation. This form of feedback can be perceptibly distinct from other designs that may be used in the same touchless interface, e.g. a static Tracked Palm sensation given to confirm active hand tracking. Another aim of this design is to create more engaging user experiences, e.g. by synchronising haptic motion with effects shown on screen.

Fig. 8

Motion Patterns: haptic patterns that are perceived as movement across the hand, e.g. lines that scan across the palm (left) or points moving along circular paths (right)

3.6.1 Design Considerations

Most ultrasound haptic feedback primitives can be used to create a sensation of motion on the palm, e.g. by moving haptic points, lines and shapes. Designers thus need to identify the most appropriate motion patterns for their touchless user interface design. If Motion Patterns are being used to give feedback in response to input gestures, it is often most appropriate to align the motion with the action that caused it. For example, if users swipe their hand to the left or right, feedback patterns could confirm input recognition with a corresponding haptic sensation that moves to the left or right across the hand (Shakeri et al. 2018).

Motion Patterns (UHDP6)

  • What? Dynamic haptic patterns that are perceived as motion on the hand.

  • Why? To convey change in system state. To encode information. To give feedback. To create engaging and dynamic user experiences.

  • Where? Typically on the palm, but may move across the fingers too.

  • When? In response to screen activation (event-driven), or continuously.

Choice of motion can also be informed by interaction metaphors used in the touchless user interface. Dials are a common metaphor in touchless user interface design, whereby users adjust values through circular motions (Freeman et al. 2016) or ‘grasping and turning’ gestures (Freeman et al. 2015). Circular motion of haptic points can extend this metaphor to the haptic feedback. For example, a haptic point moves clockwise when values increase or anticlockwise when values decrease (Georgiou and Griffiths 2017). Motion can also be paired with animated feedback shown on screen, creating a sense of cohesion between mid-air haptics and the visual content on a distant display.
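
A sketch of that dial mapping: the sweep direction of a circular haptic path follows the sign of the value change. The clockwise convention depends on the coordinate system, so the sign here is an assumption:

```python
import math

def dial_point_position(palm_centre, radius, t_seconds, value_delta,
                        sweep_hz=1.0):
    """A focal point on a circular path whose direction encodes the change:
    one way for increasing values, the other for decreasing values."""
    direction = 1.0 if value_delta >= 0 else -1.0  # sign convention assumed
    angle = direction * 2 * math.pi * sweep_hz * t_seconds
    x, y, z = palm_centre
    return (x + radius * math.cos(angle),
            y + radius * math.sin(angle),
            z)
```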

After choosing appropriate motion patterns, designers need to think about where and when to present them against the hand. Motion patterns are typically targeted at the palm of the hand, since it is a contiguous space across which motion can be perceived (unlike the fingers, which may be spread apart). Motion patterns can be presented continuously (e.g. when synchronised with on-screen animations), but will most likely be event-driven, presented in response to an action by the user, a change in system state, etc.

There are many ways that sensation of motion can be created. One of the earliest demonstrations of perceived motion used a perceptual illusion known as apparent tactile motion (Wilson et al. 2014). This sensation was created by presenting a sequence of three haptic points in order, with a slight delay, such that people perceived continuous movement between those points. Contemporary rendering approaches can use actual motion, updating the position of a haptic point thousands of times per second, so that it actually moves across the skin (Frier et al. 2018). This, in turn, can elicit the sensations of dynamic and ‘static’ haptic patterns (Freeman and Wilson 2021).
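
The apparent tactile motion technique can be sketched as below; the dwell time is illustrative rather than the value used by Wilson et al. (2014), and `device` is again a hypothetical stand-in:

```python
import time

def apparent_motion(device, points, dwell_s=0.08):
    """Present focal points one after another with a slight delay, so that
    users perceive continuous movement between the stimulated locations."""
    for p in points:
        device.set_focal_points([p])  # hypothetical device API
        time.sleep(dwell_s)
    device.set_focal_points([])       # end of the motion sequence

# Three points in a line across the palm, left to right:
path = [(-0.02, 0.0, 0.2), (0.0, 0.0, 0.2), (0.02, 0.0, 0.2)]
```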

3.6.2 Questions for Designers

When using this design pattern, consider:

  • Which types of motion should be presented to the user?

  • Where should the motion pattern be presented?

  • When should it be presented, and for how long?

3.6.3 Examples

Motion Patterns can be used to convey a change in touchless user interface state; for example, Georgiou and Griffiths (2017) used clockwise and anticlockwise circle patterns to indicate increasing and decreasing values, respectively. Motion can also be used to give feedback confirming the recognition of hand motion gestures; for example, Shakeri et al. (2018) and Georgiou et al. (2018) both used motion patterns after mid-air swipe gestures, e.g. haptic points that moved across the palm in the same direction the user had swiped for input. Many examples of Motion Patterns can be found in the Ultraleap Sensation Editor (Ultrahaptics 2017), e.g. scanning lines across the hand or presenting circles that ‘expand’ and then ‘contract’.

3.7 Special Effects

In the haptic design patterns discussed so far, haptic sensations have been grounded in familiar interaction experiences: e.g. the sensation of touching user interface elements or virtual representations of physical objects. Due to the unique design capabilities of this technology and its lack of mechanical constraints, ultrasound haptic feedback can also be used to create radically new and unfamiliar tactile sensations: best described as special effects, or ‘supernatural experiences’ (Martinez et al. 2018).

In this design pattern, haptic feedback is used alongside visual and audio effects to create multisensory special effects (like in Fig. 9), e.g. the feeling of touching lightning, holding a ball of fire and casting magical spells (Limerick et al. 2019; Martinez et al. 2018). Unlike other design patterns, the haptic rendering itself may seem irregular, using random and disjointed movement to create sensations that ‘feel right’ for the intended effect. The success of these special effects comes from an effective coupling between multiple sensory modalities. Unsurprisingly, these effects have the ability to capture users’ imagination and increase engagement with a touchless user interface (Limerick et al. 2019) and could be compelling for entertainment applications, e.g. video games (Georgiou et al. 2018; Martinez et al. 2018) and movies (Ablart et al. 2017).

Fig. 9

Special Effects: haptic patterns intended to create the sensation of touching unfamiliar yet recognisable experiences, like touching fire (left) or lightning (right)

3.7.1 Design Considerations

Creating ultrasound haptic special effects is not straightforward, because there is no systematic way of defining the tactile experience of touching a flame, holding a hand under running water, etc. Most design patterns discussed in this chapter can be defined using geometric primitives (points, lines, shapes) and the spatial relationship between the user’s hand and touchless user interface (e.g. fixed position vs linked to the hand), but this is not possible for special effects. Instead, a more exploratory approach is needed, to find suitable spatial and temporal characteristics for the intended effect.

Special Effects (UHDP7)

  • What? Dynamic patterns intended to create recognisable tactile experiences, not grounded in the physical world.

  • Why? To create an engaging experience that captures the imagination.

  • Where? Where the hand intersects the visual effects.

  • When? In synchrony with visual and/or audible effects.

Since little systematic guidance can be offered for creating new special effects, this section instead looks at case studies of existing special effects, to give insight into possible approaches. What is notable about these examples is that the haptic effects are always presented in synchrony with visual and audio effects. These other sensory modalities help users attribute meaning to a tactile experience that may otherwise be difficult to describe. In other words, the graphics and audio help to sell the illusion.

One of the first ultrasound haptic special effects was the sensation of raindrops falling on the palm, described by Hoshi et al. (2010). In their system, a holographic display showed falling raindrops landing on the user’s hand, which were synchronised with the presentation of haptic points against the palm (like in Fig. 10). Although these simple haptic points did not feel like water, the temporal coincidence between visual and haptic effects contributed to the experience of rain falling on the hand.

Fig. 10

In the raindrop special effect, haptic points are presented in synchrony with visible water droplets

In their paper on ‘supernatural experiences’, Martinez et al. (2018) describe numerous haptic special effects. One of these is the experience of casting a lightning bolt from the fingertips, in a virtual reality spellcasting game. Their lightning spell effect was created using haptic points that follow an erratic path from the base of the palm to the fingertip (like in Fig. 11), coinciding with visual and audible cues in the virtual reality game. The combined feeling of motion across the palm and other sensory information created a convincing and engaging user experience of casting magical spells.

Fig. 11

In the lightning spell special effect, a haptic point moves along the palm and a finger, to coincide with an electrical arc graphic that extends from the fingertip
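
The erratic path of such an effect might be generated as in the sketch below: linear interpolation from the base of the palm to the fingertip, with random lateral jitter. The step count and jitter magnitude are invented for illustration:

```python
import random

def lightning_path(palm_base, fingertip, steps=20, jitter_m=0.004):
    """Waypoints for a focal point travelling from palm base to fingertip,
    with random lateral offsets so the motion feels erratic, not smooth."""
    (x0, y0, z0), (x1, y1, z1) = palm_base, fingertip
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append((x0 + t * (x1 - x0) + random.uniform(-jitter_m, jitter_m),
                     y0 + t * (y1 - y0) + random.uniform(-jitter_m, jitter_m),
                     z0 + t * (z1 - z0)))
    return path
```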

3.7.2 Questions for Designers

When using this design pattern, consider:

  • ‘What elements of the audio-visual [design] should one look to haptically enhance and/or augment?’ (Corenthy et al. 2018)

  • Are there spatial or temporal characteristics in the visual effects that can be replicated via haptics?

3.7.3 Examples

Ultrasound haptic special effects—‘supernatural experiences’ (Martinez et al. 2018)—have mostly been used to increase user engagement during gameplay. For example, Martinez et al. (2018) describe a virtual reality game where users cast magic spells, feeling the elemental sensations of wind, fire and lightning. Limerick et al. (2019) used haptic special effects for interactive digital advertising, e.g. to experience the sensation of firing lasers from a spaceship or feeling electrical static against the palm. Similar effects exist within the Ultraleap Sensation Editor (Ultrahaptics 2017), e.g. to mimic the sensations of rippling water or electrical sparks. Haptic special effects have also been paired with holographic content: e.g. Hoshi et al. (2010) created the effects of raindrops falling on the palm and a small animal walking across the hand, both of which were accompanied by mid-air graphics from a holographic display. Recent work shows the potential for combining ultrasound haptic sensations with audio effects from the same device (Hirayama et al. 2019), which could be a promising way of expanding the range of tactile sensations for haptic special effects (Freeman 2021).

4 Discussion

4.1 Retrospective Look at Haptic Design

This collection of design patterns shows seven widely used haptic interaction designs found in human-computer interaction research and in real-world deployments of this technology. Whilst the main aim of this chapter is to help designers identify suitable haptic designs for a touchless user interface, these design patterns also give insight into how this technology has been used and the user experience benefits it offers.

In the earliest years of this technology, the Tracked Fingertips and Tracked Palm designs were common. Amplitude modulation (Iwamoto et al. 2008) was the predominant rendering method at the time and was best suited for stationary haptic points, in a fixed position in mid-air or on the hand. Targeting the fingertips or centre of the palm was a straightforward way of creating a consistent user experience, and this often created a coupling between the input and output: i.e. presenting haptic feedback against the location on the hand that was being tracked for input. This was a simple yet effective design, creating a sense of presence in a touchless user interface; the haptic feedback both revealed the presence of a haptic user interface in mid-air and provided reassurance to users that their actions were being tracked.

Over the past decade, the predominant use of ultrasound haptic feedback has been to create a haptic embodiment of a touchless user interface and its interactive elements. Floating Screen provides the experience of pressing a ‘touchless screen’ in mid-air, with feedback about familiar user interface components like buttons and sliders. Forcefield represents the boundaries between interactive and non-interactive space, analogous to a window in a graphical user interface. Finally, Object Outline conveys the shape and size of user interface elements and other virtual objects. Collectively, these haptic designs convey the position of touchless user interface elements and give feedback about interactions with them.

More recently, Motion Patterns and Special Effects have emerged as a compelling use of ultrasound haptic feedback. These ‘animated’ haptic patterns take advantage of improved rendering methods and increasingly capable hardware. These are predominantly used to give users feedback about interaction, or to enrich interaction and increase engagement through the use of an extra sensory modality. Special Effects, in particular, are an exciting departure from the geometric primitives that dominated the early use of ultrasound haptic technology (i.e. the points, lines and shapes used in numerous haptic design patterns). It is exciting to imagine what might come next—perhaps the design patterns of the future will bear no resemblance to those presented here, e.g. by using focal points in novel ways or by moving away from focal points entirely to exploit ultrasound pressure in different ways.

4.2 Selecting Design Patterns

A key question addressed by this chapter is where to start?—how should one identify design patterns for a new touchless user interface? Table 1 shows a suggested mapping between design patterns and six common user experience objectives in a touchless user interface, intended to guide readers towards a suitable design pattern. Whilst these objectives can be satisfied through numerous designs, this table gives suggestions about which patterns may be the most effective.

Reveal Interactivity means haptic feedback is intended to inform users about the presence of a touchless user interface. Confirm Tracking means haptic feedback is intended to give reassurance that the system is correctly sensing their actions. Action Feedback means haptic feedback is intended to confirm response to a user’s input actions (e.g. feedback about mid-air gestures). Object Representation means haptic feedback is intended to represent virtual objects in a touchless user interface, and UI Representation is a special case where the virtual object is a user interface element (e.g. a screen, button or slider). Engagement means haptic feedback is intended to engage and excite users through novel multisensory effects.

4.2.1 Case Study: Touchless Button Menu

When designing a touchless user interface, it may be necessary to employ multiple haptic designs to support different usability needs. As a case study, consider a touchless user interface with a gesture-activated button menu. Users’ hands are tracked in 3D, and buttons can be activated at any distance from the screen, by hovering a hand in front of them and then ‘pushing’ the palm towards the screen.

Table 1 Suggested mapping of ultrasound haptic design patterns to user experience objectives in a touchless user interface

This touchless interface would benefit from feedback that (i) reveals touchless interactivity, (ii) confirms that users’ hands are actively being tracked when within range of the touchless interface, (iii) represents the touchless buttons in their mid-air position and (iv) gives feedback about button activation gestures. As can be seen from Table 1, many patterns could be chosen to satisfy these interaction needs. However, not all combinations will make sense to users and they may have difficulty differentiating between feedback designs. A suitably chosen combination of design patterns must therefore be cohesive, so that users can recognise different interface states through clearly perceptible differences in feedback design.

One combination that satisfies our feedback needs in this case study example would be the Tracked Palm, Floating Screen and Motion Patterns designs:

  • Tracked Palm: a haptic point presented against the centre of the palm when the user’s hand is within range of the input device reveals interactivity and informs the user that their hand is being tracked (Fig. 12a). As corresponding visual feedback, a model of the user’s hand would be shown in the user interface.

  • Floating Screen: when the user places their hand over the position of a mid-air button, a circular pattern is presented against the palm, so they feel the button’s position in mid-air (Fig. 12b). This feedback informs the user that their hand is targeting a button; visual feedback would show the hand model in front of the button, with an animation that invites them to ‘push forward’.

  • Motion Patterns: when the user pushes their hand forward to activate a button, the diameter of the circle pattern changes, so that the user feels it contracting to a point on the palm (Fig. 12c) and then expanding back to full size (Fig. 12d). This haptic animation shows a dynamic response to the button activation gesture.

Fig. 12

Haptic feedback designs for the case study example: a a haptic point in the centre of the palm confirms tracking when within range; b a haptic circle is presented when the user hovers over mid-air buttons; c–d when the user pushes forward to ‘press’ a button, the circle contracts (c) then expands (d) again to confirm recognition

These three designs are intended to represent three states of the touchless user interface: (i) being tracked by the interface but not targeting a button, (ii) actively targeting a button by hovering the hand over it and (iii) targeted button has been activated by the push gesture. The transitions between these states will be noticed by perceptible changes in the feedback. When the user moves over a button, the single haptic point on their palm is replaced by the circle pattern, which stimulates a larger area of the hand and feels more intense (Takahashi et al. 2019). Likewise, when the user activates a button, they will perceive the circle contracting and expanding. When the user moves away from a button (or if the interface transitions to a new window), the haptic feedback resets to a haptic point in the centre of the palm.
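
The three states and their feedback designs can be summarised in a small sketch; the state names and pattern labels are mine, for illustration:

```python
from enum import Enum, auto

class State(Enum):
    TRACKED = auto()    # hand in range, not over a button
    TARGETING = auto()  # hovering over a mid-air button
    ACTIVATED = auto()  # push gesture recognised

def feedback_for(state: State) -> str:
    """Map each interface state to its haptic design pattern (labels only;
    rendering would be handled by device-specific code)."""
    return {State.TRACKED: "Tracked Palm: point at palm centre",
            State.TARGETING: "Floating Screen: circle on palm",
            State.ACTIVATED: "Motion Pattern: contract-expand circle"}[state]

print(feedback_for(State.TARGETING))
```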

This simple feedback vocabulary combines three design patterns to give haptic feedback before, during and after button activation; the transitions between these designs reflect transitions in user interface state, a haptic accompaniment to visual feedback that would be shown on screen. Other design patterns could have been selected for the same purpose, e.g. a haptic Forcefield to inform the user when they have entered the interaction space, rather than continuous Tracked Palm feedback; identifying the ‘best’ combination of design patterns is a challenge for future research.

5 Conclusion

This chapter presented seven ultrasound haptic design patterns, which illustrate the variety of ways that interaction designers and researchers are using this technology in touchless user interface design. This serves three aims: (i) to reflect on the evolution of this technology (and our understanding of it); (ii) to highlight the many ways that ultrasound haptic feedback can improve usability and user experience; and (iii) to inform the design of future touchless user interfaces. The set of design patterns presented in this chapter is by no means complete. Ultrasound haptic technology is continually advancing, and so is our understanding of touchless interaction and haptic perception. In turn, design patterns will evolve and new ones will emerge, to make better use of ultrasound haptic feedback and to pave the way to more engaging and usable touchless interaction experiences.