1 Introduction

The field of human–computer interaction has traditionally focused on designing user interfaces and interactions that rely on the user’s undivided attention. This changed with the introduction of the visions of ubiquitous computing (Weiser 1991) and context-aware computing (Schilit et al. 1994), Buxton’s background–foreground model (1995), and the notion of calm technology (Weiser and Brown 1996). Calm technology is a vision of digital interactions that—just like many of our interactions in the physical world—take place in the background or periphery of attention. While calm technology mostly focused on perceiving information in the periphery—as with ambient displays such as Jeremijenko’s Live Wire (Weiser and Brown 1996)—Hausen (2014) and Bakker et al. (2015) extended this idea by introducing the notion of peripheral interaction, which also includes interacting in the periphery of attention. As described by Bakker et al. (2015), interactions that occur in the periphery can also dynamically transition from being peripheral to being at the center of attention when relevant or desired.

This chapter focuses on aspects of peripheral interaction within proxemic interactions. The idea of proxemic interactions in computing extends the classic vision of context awareness and uses proxemic relationships (e.g., distance and orientation between entities) to mediate interaction between people and ensembles of various digital devices (Ballendat et al. 2010; Greenberg et al. 2011). In particular, this chapter discusses how to facilitate transitions between outside the attentional field, the periphery, and the center of attention in proxemic interactions.

We start with a brief overview of proxemic interactions and highlight potential problems. We then explain solutions to address these problems with the use of a peripheral floor display called Proxemic Flow. Next, we analyze the different techniques used in Proxemic Flow and explain how these facilitate transitions between outside the attentional field, the periphery, and the center of attention, grounded in Norman’s Stages of Action model. Finally, we generalize our experiences with designing such interactions into two general design patterns: slow-motion feedback and gradual engagement.

2 Proxemic Interactions

In this section, we introduce proxemic interactions and provide an overview of potential interaction challenges with proxemics-aware devices.

2.1 Background

Proxemic interactions (Greenberg et al. 2011; Marquardt and Greenberg 2015) feature devices that have fine-grained knowledge of nearby people and other devices—such as their precise distance, orientation, movement into range, identity, and location—as depicted in Fig. 7.1.

Fig. 7.1

Proxemic interactions imagine a world of devices that have fine-grained knowledge of nearby people and other devices. When designing proxemic interactions, five key proxemic measures (or dimensions) between people, digital devices, and non-digital objects can be considered: distance, orientation, movement, identity, and location (image source Greenberg et al. 2011)

Proxemic interaction is based on anthropologist Edward T. Hall’s theory of proxemics (1966), which investigated the use of interpersonal space in nonverbal communication. In particular, proxemics theory identified the culturally specific ways in which people use interpersonal distance and orientation to understand and mediate their interactions with others. The idea of proxemics is not limited to interpersonal communication; it also extends to ‘the organization of space in [our] houses and buildings, and ultimately the layout of [our] towns’ (Hall 1963). As put forward by Marquardt, Greenberg, and colleagues (Ballendat et al. 2010; Greenberg et al. 2011; Marquardt et al. 2012), proxemic relationships are used to mediate interaction between people and ensembles of different digital devices, such as mobile devices or large interactive surfaces, as shown in Fig. 7.2. Additionally, they envision devices that take into account the non-digital, semi-fixed, or fixed objects in the user’s physical environment (Greenberg et al. 2011).

Fig. 7.2

An example of proxemic interactions with the Proxemic Media Player (Ballendat et al. 2010). a The system is activated when the person enters the room, b continuously reveals more content when approaching the display, c allows explicit interaction through direct touch in close proximity, and d switches implicitly to full-screen mode when the person is taking a seat (image source Ballendat et al. 2010)

One of the most commonly featured aspects of Hall’s theory applied in HCI is the use of four proxemic zones that correspond to interpretations of interpersonal distance: the intimate, personal, social, and public zone (Greenberg et al. 2011). In earlier research, these different interaction zones have been used to mediate interaction with large interactive surfaces (Prante et al. 2003; Vogel and Balakrishnan 2004; Ju et al. 2008). Inter-entity distance in the context of proxemics has also been used to facilitate cross-device interaction (Hinckley 2003; Hinckley et al. 2004; Kray et al. 2008; Gellersen et al. 2009).

In recent years, large interactive surfaces such as vertical displays or tabletops have increasingly appeared in semi-public settings (Brignull and Rogers 2003; Ojala et al. 2012). With the availability of low-cost sensing technologies (e.g., IR range finders, depth cameras) and toolkits such as the Proximity Toolkit (Marquardt et al. 2011) or the Microsoft Kinect SDK, it is fairly straightforward to make these large displays react to the presence and proximity of people. This has been picked up both by researchers (e.g., Ju et al. 2008; Müller et al. 2009a, 2012; Jurmu et al. 2013) and by commercial parties (see Greenberg et al. 2014 for several examples). Although these low-cost sensing solutions tend to apply fairly crude measures of proxemics and take into account only a few proxemic dimensions (Fig. 7.1), proxemic interactions are becoming more commonplace in our everyday environments.

People have natural expectations of increasing engagement and interactivity when approaching others. In proxemic interactions, these expectations are applied to interactions with devices (Greenberg et al. 2011). Because this behavior is learned and often implicit, people’s expectation of increasing interactivity and engagement when approaching digital devices can be characterized as occurring in the periphery of attention.

2.2 Interaction Challenges with Proxemic Interactions

We provide a brief summary of potential interaction challenges within proxemic interactions. These motivate the peripheral floor visualizations that we will introduce in Sect. 7.3.

2.2.1 Interaction Challenges with Implicit Interaction: The Need for Fluent Transitions Between the Center and the Periphery of Attention

One of the core issues causing interaction challenges with proxemics-aware interactive surfaces is their reliance on implicit interaction. The Proxemic Media Player (Fig. 7.2), for example, automatically pauses videos when two people are both oriented away from the display (e.g., when starting a conversation), which might surprise and disturb users when they first encounter this behavior. Ballendat et al. (2010) argue that it is critical to define rules of behavior that indicate how systems using proxemic interactions interpret and react to users’ movements. It is important to indicate both how users are being tracked by the system and how the system is taking action based on people’s movements. When the system is doing something that could potentially surprise or disturb the user, peripheral interactions could subsequently transition to the center of attention to make the user aware of what is happening.

Transitions between interaction outside the user’s attentional field or in the periphery of attention and interaction at the center of attention are necessary to avoid unintended actions, undesirable results, and difficulties in detecting or correcting mistakes (Bellotti et al. 2002; Ju et al. 2008). When designing proxemic interactions, it should be possible for systems to move fluently between the periphery and the center of attention. Proxemics-aware systems should partially reside in the periphery, where they inform people about what is happening without overwhelming them, while still allowing people to move to focused interaction at the center of attention when they want to take control and intervene.

Ju et al. (2008) introduced a framework for implicit interaction and proposed interaction techniques along two axes, building on Buxton’s background/foreground model (1995): initiative (which party is driving the interaction: user or system) and attentional demand (the degree of cognitive/perceptual load: background or foreground interaction). Their implicit interaction framework can be used to design systems that easily transition between outside the attentional field, the periphery, and the center of attention, striking the right balance between proactive behavior and user control. Transitions between different combinations of attentional demand (background or foreground interaction) and initiative (e.g., whether the system acts, indicates that it can act, or waits for the user to act) allow systems to move between outside the attentional field, the periphery, and the center of attention and back, in order to prevent, mitigate, and correct errors in proactive behaviors. A system could, for example, transition from a proactive/background state to a proactive/foreground state to make the user aware of what it is doing. This is illustrated in Ju et al.’s (2008) proximity-aware interactive whiteboard by its user reflection, system demonstration, and override interaction techniques.
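To make these two axes concrete, the following minimal Python sketch models an interface state as an (initiative, attentional demand) pair and reads the three whiteboard techniques as transitions between such states. The specific state assignments are our illustrative interpretation, not definitions from Ju et al.’s paper.

```python
from enum import Enum

class Initiative(Enum):
    USER = "user"      # reactive: the user drives the interaction
    SYSTEM = "system"  # proactive: the system drives the interaction

class Demand(Enum):
    BACKGROUND = "background"  # peripheral, low attentional demand
    FOREGROUND = "foreground"  # focused, high attentional demand

# Reading Ju et al.'s whiteboard techniques as transitions in this
# 2D space (before-state -> after-state); illustrative only.
TRANSITIONS = {
    # The system mirrors how it is sensing the user, staying peripheral.
    "user reflection": ((Initiative.SYSTEM, Demand.BACKGROUND),
                        (Initiative.SYSTEM, Demand.BACKGROUND)),
    # The system visibly performs an action, moving to the foreground
    # so that the user notices what it is doing.
    "system demonstration": ((Initiative.SYSTEM, Demand.BACKGROUND),
                             (Initiative.SYSTEM, Demand.FOREGROUND)),
    # The user takes over, pulling initiative back from the system.
    "override": ((Initiative.SYSTEM, Demand.FOREGROUND),
                 (Initiative.USER, Demand.FOREGROUND)),
}

for name, (before, after) in TRANSITIONS.items():
    print(f"{name}: {before} -> {after}")
```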

2.2.2 Invisibility of Action Possibilities and Lack of Guidance

Users can have difficulty knowing how they can interact with proxemics-aware large displays. As stated by Müller et al. (2010), the commonly used interaction modalities for public displays (e.g., proximity, body posture, mid-air gestures) can be hard to understand at first glance. For example, when the display reacts to the user’s location in different interaction zones (Vogel and Balakrishnan 2004), the invisibility of these zones makes it difficult for users to identify the exact zone in which the display reacts to their input. This is particularly difficult when the display is also reacting to the input of other people (Jurmu et al. 2013). Besides seeing the possible actions that they can perform, people may also want to know what will happen, for example, when approaching the display.

2.2.3 Lack of Support for Opt-in and Opt-out Mechanisms

Another problem is the lack of explicit opt-in or opt-out mechanisms, which are especially important in (semi-)public spaces. Jurmu et al. (2013) and Brignull and Rogers (2003) found that users sometimes wish to avoid triggering the display and rather just passively observe it. Greenberg et al. (2014) further discuss how interactive surfaces in semi-public settings typically lack opt-in and opt-out choices (either deliberately or unintentionally). They state that, at the very least, a way to opt out should be provided when people have no desire to interact with the surface. Furthermore, users may want to know what would happen if they leave or opt out. Will the surface be reset to its original state? What will happen to their personal information still shown on the surface?

In the next section, we explore how we addressed interaction challenges with proxemic interactions in the Proxemic Flow system using a peripheral floor display.

3 Proxemic Flow: Dynamic Peripheral Floor Visualizations for Revealing and Mediating Proxemic Interactions

As mentioned earlier, devices that react to the presence and proximity of people and devices can bring about interaction challenges, due to the implicit nature of interaction with these devices. Proximity and presence are typically sensed in the background, outside people’s attention. People may not notice that the device is interactive, commonly referred to as display blindness or interaction blindness (Huang et al. 2008; Müller et al. 2009b; Ojala et al. 2012) in the domain of large public displays. This can lead to people being uncertain about possibilities for interaction, or unaware of how to recover from mistakes such as accidental interactions.

Proxemic Flow (Vermeulen et al. 2015) is designed to address these challenges using a secondary, peripheral floor display that provides a set of dynamic visualization strategies to help people interact with a primary proxemics-aware display (Fig. 7.3). The floor reveals the interaction area through borders and zones, shows halos around people’s feet when they are recognized by the display, and invites spatial movement and next interaction steps through waves and steps animations. Information shown in the periphery—on the floor display—can seamlessly become the center of attention and move back to the periphery in fluent transitions (Weiser and Brown 1996).

Fig. 7.3

Proxemic Flow providing awareness of tracking and fidelity, zones of interaction, and invitations for interactions (image source Vermeulen et al. 2015)

Due to their low visual complexity, a quick glance at the floor visualizations is often sufficient, for example, when users are unsure about action possibilities, or whether or not they are correctly tracked. Since the visualizations do not coincide with the content on the primary display, users can focus their attention on the primary display. The floor visualizations nevertheless provide continuous peripheral awareness of tracking, interaction zones, and possibilities for future interactions. Similar to Bakker et al. (2015), we imagine that these floor visualizations could move further into the periphery after users get more acquainted with them. During informal observations of people interacting with the floor, we noticed that essential concepts such as halos and zones were easy to understand.

Next, we provide an overview of the different floor visualizations supported by Proxemic Flow, and explain how the combination of two interactive surfaces—one targeting interaction at the center of attention (the primary vertical display) and another aimed at interaction in the periphery of attention (the secondary floor display)—allows for seamless transitions between both types of interaction across the user’s attentional field. The peripheral floor visualizations provide awareness of tracking status and quality (Sect. 7.3.1); provide awareness of entry and exit points for interaction (Sect. 7.3.2); and invite approach, encourage movements, and suggest possible next interactions (Sect. 7.3.3).

3.1 Tracking Feedback with Halos

A fundamental challenge for designing interaction with proxemics-aware displays is providing a person with immediate feedback about how the system is currently recognizing and interpreting their spatial movements, gestures, or other input.

3.1.1 Personal Halos

The personal halo provides immediate feedback on the floor display about the tracking of a person in space. When the person enters the area in front of the public display, a green halo (an area of approximately 1 m diameter) appears underneath the person’s feet (Fig. 7.4a). The halo moves with the person through the tracking area and therefore gives continuous, peripheral feedback that the person is being recognized and tracked by the system.

Fig. 7.4

Halos: a providing feedback about active tracking and b the tracking quality (image source Vermeulen et al. 2015)

In addition to indicating that a person is being tracked, the floor provides information about the quality of tracking. Most computer vision-based tracking systems (RGB, depth, or other tracking) have situations in which tracking works well, does not work well, or does not work at all (e.g., due to lighting conditions, occlusion, or a limited field of view). Therefore, the personal halo visualization encodes the quality of tracking in the color of the halo. To indicate tracking quality, we use three colors (Fig. 7.4b). A green halo indicates optimal tracking of the person in space. Its color changes to yellow when the quality of tracking decreases, for example, when the person moves to the limits of the field of view or is partially occluded by another person or a piece of furniture. Finally, a red halo is shown when tracking of the person is lost, such as when the person moves too far away from the camera or is completely hidden by an occlusion. In this last case, since the person is no longer tracked, the red halo remains static at the person’s last known location, fades in and out twice, and then disappears (the duration of this animation is approximately 4 s). If the person moves back into the camera’s field of view and the tracked region, the halo color changes back to green or yellow accordingly.
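A minimal sketch of this halo logic, assuming the tracker reports a quality score in [0, 1]; the 0.8 threshold is our assumption, while the color mapping and the roughly 4 s fade-out come from the description above.

```python
from dataclasses import dataclass

GOOD_QUALITY = 0.8        # assumed threshold for "optimal" tracking
LOST_FADE_SECONDS = 4.0   # red halo fades out after ~4 s (from the text)

@dataclass
class Halo:
    x: float                         # last known floor position (m)
    y: float
    color: str = "green"
    lost_since: float | None = None  # time at which tracking was lost

def update_halo(halo: Halo, tracked: bool, quality: float,
                x: float, y: float, now: float) -> Halo | None:
    """Update one person's halo; returns None once the lost-halo fade ends."""
    if tracked:
        # Follow the person and encode tracking quality in the color.
        halo.x, halo.y = x, y
        halo.color = "green" if quality >= GOOD_QUALITY else "yellow"
        halo.lost_since = None
        return halo
    # Tracking lost: keep the halo static at the last known location,
    # turn it red, and remove it once the fade-out animation has played.
    if halo.lost_since is None:
        halo.lost_since = now
    halo.color = "red"
    if now - halo.lost_since > LOST_FADE_SECONDS:
        return None
    return halo

halo = Halo(x=1.0, y=2.0)
print(update_halo(halo, True, 0.9, 1.1, 2.0, now=0.0).color)   # green
print(update_halo(halo, True, 0.5, 1.2, 2.1, now=1.0).color)   # yellow
print(update_halo(halo, False, 0.0, 0.0, 0.0, now=2.0).color)  # red
print(update_halo(halo, False, 0.0, 0.0, 0.0, now=7.0))        # None
```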

3.1.2 Multi-user Halos

Interactions around interactive surfaces are often not limited to a single person. With multiple people, information about active tracking and its fidelity becomes even more important due to the likelihood of occlusions causing increased tracking problems.

If multiple people are present in front of the screen, each person’s individually tracked position is shown with a colored halo (Fig. 7.5a). Color changes indicate a change in how well a person is tracked. For example, if another person walking into the space blocks the tracking camera’s view of a person, the halo’s color changing from yellow to red tells that person that they are no longer being tracked (Fig. 7.5b). Similarly, if two people stand very close to each other, making it difficult for the computer vision algorithm to separate the two, the halo color changes to yellow.

Fig. 7.5

Halos for multi-user interaction: a both people are visible to the system; b one person is occluding the camera’s view of the other person, indicated by the red halo (image source Vermeulen et al. 2015)

3.1.3 Trails: Revealing Interaction History

As a variation of the halo technique, the spatial trail feedback visualizes the past spatial movements of a person in the interaction area. The trails are shown as illuminated lines on the floor that light up when a person passes that particular area (Fig. 7.6). The illumination fades out after a given time (after 5 s in our application), thus giving the impression of a comet-like trail. The colors that are used to light up the floor are identical to those of the person’s halo (i.e., green, yellow, red) and therefore still provide information about the tracking quality. As the trail visualization remains visible for a longer time, it provides information about past movements of people interacting with the system. The trails can potentially help to amplify the honeypot effect (Brignull and Rogers 2003)—the effect that people are attracted to a device that they see others interacting with—by showing the past trails of other people moving toward the interactive display, thereby inviting other bystanders and passersby to approach the display as well.

Fig. 7.6

Trails, visualizing the history of spatial movements of a person (image source Vermeulen et al. 2015)
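The trail effect can be sketched as a per-tile brightness function, assuming the floor is a grid of individually addressable tiles; the 5 s fade time comes from the text, while the linear fade curve is our assumption.

```python
FADE_SECONDS = 5.0  # a lit tile fades out over 5 s (from the text)

def trail_brightness(lit_at: float, now: float) -> float:
    """Brightness in [0, 1] of a tile a person passed over at time `lit_at`."""
    age = now - lit_at
    if age >= FADE_SECONDS:
        return 0.0
    return 1.0 - age / FADE_SECONDS  # linear fade gives a comet-like tail

# Tiles visited most recently glow brightest, in the person's halo color.
visits = {(3, 1): 0.0, (3, 2): 1.5, (4, 3): 3.0}  # tile -> visit time (s)
for tile, t in visits.items():
    print(tile, round(trail_brightness(t, now=4.0), 2))
```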

3.2 Zones and Borders as Entries and Exits for Interaction

This next set of floor visualization strategies aims to reveal interaction possibilities and facilitate opt-in and opt-out. Zones reveal spatial regions around the primary display, while borders make the boundaries of the interaction area explicit.

3.2.1 Opting-in: Proxemic Interaction Zones

Many designs of large interactive displays make use of spatial zones around the display for different kinds of interaction (Vogel and Balakrishnan 2004) or to change the displayed content depending on the zone a person is currently in. These zones, however, are not always immediately understandable or perceivable by a person interacting with the display. Our floor visualizations explicitly reveal zones of interaction, enabling a person to see where interaction is possible and to make deliberate decisions about opting in for interaction with the display by entering any of the zones.

We demonstrate the use of zone visualizations with the Proxemic Flow system and an example photograph gallery application. Similar to earlier examples of proxemics-aware displays (Vogel and Balakrishnan 2004; Ballendat et al. 2010), our photograph gallery application uses discrete spatial zones around the display that are mapped to the interactive behavior of the application on the large display. When no users are interacting with the system, a large red rectangular zone indicates the area furthest away from the display that triggers the initial interaction (Fig. 7.7a). This serves as an entry zone for interaction, i.e., an area to opt in for interaction with the system. In our current implementation, we use a 3 s pulsating luminosity animation, fading the color in and out. Once a person enters this zone, the large display recognizes the presence of the person, tracks the person’s movement, and shows their halo. The first zone then disappears and a second zone appears—an area for interacting with the display when in front of it (visible as the blue rectangle in Fig. 7.7b). As the person approaches the display, more of the photograph collection is gradually revealed on the display—behavior identical to that of the Proxemic Media Player (Ballendat et al. 2010). Once entering the second zone, the person can use hand gestures in front of the display to more precisely navigate the temporally ordered photograph gallery (e.g., grabbing photographs, sliding left or right to move forward or back in time). Again, once the person enters this close-interaction zone in front of the display, the floor visualization of that zone disappears.

Fig. 7.7

The interaction areas in front of the display represented as a red and b blue rectangular zones; c borders indicate thresholds to cross for d leaving the interaction space in front of the display (image source Vermeulen et al. 2015)
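The opt-in sequence can be sketched as a small state function over rectangular floor zones; the zone geometry below is made up for illustration, since the text only specifies a far red entry zone and a blue close-interaction zone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    name: str
    color: str
    x0: float
    y0: float
    x1: float
    y1: float  # axis-aligned floor rectangle, in meters

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

ENTRY = Zone("entry", "red", 0.0, 2.5, 3.0, 4.0)       # far from the display
GESTURE = Zone("gesture", "blue", 0.0, 0.5, 3.0, 1.5)  # close to the display

def visible_zone(x: float, y: float, opted_in: bool) -> Zone | None:
    """Which zone the floor should currently highlight for one person."""
    if not opted_in:
        return ENTRY    # pulsating red rectangle invites opt-in
    if not GESTURE.contains(x, y):
        return GESTURE  # next step: blue zone for gestural interaction
    return None         # person is inside the zone, so it disappears

# Walking through opt-in: approach -> entry zone -> gesture zone.
print(visible_zone(1.5, 3.5, opted_in=False).name)  # entry
print(visible_zone(1.5, 3.0, opted_in=True).name)   # gesture
print(visible_zone(1.0, 1.0, opted_in=True))        # None (interacting)
```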

3.2.2 Opting-out and Exit Interaction: Borders

While we envision zone visualizations primarily as explicit cues that convey the zones for interacting and allow a person to deliberately engage and opt into interaction with the system, we can also consider visualizations that help a person leave the interaction area (i.e., opting out). We illustrate this concept with borders shown in the Proxemic Flow application. Continuing the application example from before, once the person has entered the interaction zone (blue) directly in front of the display and interacts with the display content through explicit gestures, a red border is shown around the actively tracked interaction area surrounding the display to make the boundaries of that interaction space explicit and visible (Fig. 7.7c). We chose to dynamically show the border only when a person is engaged with the system, but it could alternatively remain a fixed feature of the visualizations shown on the floor. A reason for showing a fixed visualization of the interaction boundaries with borders would be to always clearly indicate where a person can both enter and leave the interaction area (Fig. 7.7d).

3.3 Footsteps and Waves to Invite Interaction

Finally, we introduce floor visualization strategies to invite approach, encourage a person to move to a new location, and suggest possible next interaction steps. In particular, in this category of visualizations, we introduce two strategies: waves and footsteps.

3.3.1 Waves: Encouraging Approach

Our first strategy is intended to invite people to move closer to the large display for interaction. With our waves technique, we use the output capabilities of the illuminated floor to show looped animations of lights fading in and out, creating the effect of a wave of light moving toward the large screen (Fig. 7.8a). Different visual designs of the wave effect are possible, for example, a circular wave effect centered on the large display, starting with larger circles and continuously decreasing the radius.

Fig. 7.8

a Waves inviting for interaction and b footsteps suggesting action possibilities (image source Vermeulen et al. 2015)
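A minimal sketch of the circular wave variant, assuming the display sits at the origin and each floor tile knows its distance to it; the period and wavelength values are assumptions.

```python
import math

WAVE_PERIOD = 2.0  # seconds per wave cycle (assumed)
WAVE_LENGTH = 1.0  # meters between wave crests (assumed)

def wave_brightness(distance_to_display: float, t: float) -> float:
    """Brightness in [0, 1]; crests appear to travel toward the display."""
    # Phase grows with time and with distance, so a crest (constant phase)
    # sits at a distance that decreases over time: the wave moves inward.
    phase = 2 * math.pi * (t / WAVE_PERIOD + distance_to_display / WAVE_LENGTH)
    return 0.5 * (1 + math.sin(phase))

for d in (3.0, 2.0, 1.0):  # sample tiles at decreasing distance
    print(d, round(wave_brightness(d, t=0.5), 2))
```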

3.3.2 Footsteps: Suggesting Next Action Possibilities

The footstep visualization is designed to offer a person clues about possible next interaction steps, in particular to encourage spatial movements in the environment. The visualization shows animated footsteps (in our case, represented by glowing circles) beginning at one location on the floor and leading to another location. This technique is inspired by the earlier Follow-the-light design (Rogers et al. 2010), which uses animated patterns of lights embedded in a carpet to encourage different movement behaviors by luring people away from an elevator toward the stairs.

To illustrate this technique, we revisit our Proxemic Flow example application with the large-display photograph gallery viewer. When a person enters the interactive (i.e., tracked) space in front of the display and stands still for over 5 s, the floor begins the footstep animation (Fig. 7.8b) to invite the person to move closer to the display, in particular, to move to the interaction zone in front of the display, enabling the person to use mid-air gestures to further explore the image collection. The footstep animation begins directly in front of the person and leads toward the blue rectangular area highlighted in front of the display (Fig. 7.8b). The footsteps visualization strategy can be used to reveal interaction possibilities, particularly those involving spatial movements of the person. This strategy can be used in many other contexts for guiding or directing a user in the environment and for encouraging certain movements in a space.
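The footstep placement can be sketched as simple interpolation between the person’s position and the target zone; the coordinates, step spacing, and animation details are assumptions, while the 5 s idle trigger comes from the text.

```python
STEP_LENGTH = 0.5   # distance between consecutive footsteps (assumed), in m
IDLE_TRIGGER = 5.0  # start animating after 5 s of standing still (from text)

def footstep_positions(person: tuple[float, float],
                       target: tuple[float, float]) -> list[tuple[float, float]]:
    """Evenly spaced footstep positions leading from the person to the target."""
    (px, py), (tx, ty) = person, target
    dx, dy = tx - px, ty - py
    dist = (dx * dx + dy * dy) ** 0.5
    n = max(1, int(dist / STEP_LENGTH))
    # Footsteps begin directly in front of the person and lead to the zone;
    # lighting them up one by one (not shown) suggests the walking direction.
    return [(px + dx * i / n, py + dy * i / n) for i in range(1, n + 1)]

print(footstep_positions(person=(1.5, 3.0), target=(1.5, 1.0)))
```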

3.4 Proxemic Flow in Norman’s Stages of Action Model

Next, we position the Proxemic Flow floor visualizations in Norman’s Stages of Action model (Norman 2013). We illustrate how they assist users in interacting with the primary display by providing essential information during the stages of execution and the stages of evaluation.

3.4.1 Norman’s Stages of Action Model

Norman introduced the Action Cycle as a way to analyze how we interact with ‘everyday things,’ including doors, light switches, kitchen stoves, and also computers and information appliances. Norman (2013) suggests there are two main parts to any action in an interface: executing the action and evaluating the results, or ‘doing and interpreting.’ Furthermore, actions are related to our goals; we formulate a goal, execute certain actions to achieve that goal, then evaluate the state of ‘the world’ to see whether our goal has been met, and if not, execute more actions to achieve our goal or otherwise formulate new goals that again result in more action (Fig. 7.9).

Fig. 7.9

Norman’s Stages of Action: formulating goals, executing actions that impact the ‘state of the world,’ and evaluating these changes to see whether the goals have been met. The Seven Stages of Action consist of one stage for goals, three stages for execution, and three for evaluation (image based on Norman 2013)

Norman introduces the Stages of Execution and the Stages of Evaluation as a breakdown of these two parts, which together with goal formulation form the Seven Stages of Action. Starting from our goal (the first stage), we go through three stages of execution: plan (the action), specify (an action sequence), and perform (the action sequence). To evaluate the state of the world, there are three more stages: perceive (what happened), interpret (make sense of it), and compare (was what happened what I wanted?), as illustrated in Fig. 7.9.

With respect to peripheral interaction, Norman notes that not all activity in these stages is conscious—he states that even goals may be subconscious: ‘we can do many actions, repeatedly cycling through the stages while being blissfully unaware that we are doing so. It is only when we come across something new or reach some impasse, some problem that disrupts the normal flow of activity, that conscious attention is required.’ (Norman 2013, p. 42).

3.4.2 Peripheral Floor Visualizations in Norman’s Stages of Action Model

The peripheral floor visualizations in the Proxemic Flow system act as cues that enable people to more easily navigate between implicit and explicit interaction. In other words, they enable interaction in the periphery of attention and focused interaction. Figure 7.10 shows how the different floor visualizations are situated within Norman’s Stages of Action model.

Fig. 7.10

The floor visualizations in the Proxemic Flow system, situated in Norman’s Stages of Action model. Tracking feedback helps users know how their input is being interpreted by the system during the stages of evaluation (right). Borders and zones reveal action possibilities and help users in the stages of execution (left). Finally, waves and steps animations invite and guide interactions, again helping users in the execution phase (left)

Personal halos (Figs. 7.4 and 7.5) improve peripheral awareness of how the system is tracking people’s spatial movements (tracking feedback) and help people evaluate the ‘state of the world.’ The landing area (Fig. 7.7a) reveals an entry zone for interaction to help users know where they should go to engage with the system, and thus assists users in executing actions. When a user is engaging with the primary display, borders appear around the actively tracked interaction area to make the boundaries of the interaction space explicit and visible, and to reveal exit zones for opting out or disengaging from the system. Again, these visualizations help people discover action possibilities and thus can be situated within the stages of execution. Finally, Proxemic Flow uses the waves and steps visualizations (Fig. 7.8) to invite interaction, guide people’s interactions, and suggest next interactions (e.g., directing people to a certain location using the footsteps visualization). This category of visualizations helps people to execute and perform actions. All floor visualizations are shown in the user’s periphery and do not require constant attention.

4 Design Patterns

Based on our experiences in designing proxemic interactions that transition between outside the attentional field, the periphery, and center of attention, we generalize and summarize our insights into two design patterns: slow-motion feedback and gradual engagement. The strengths of design patterns (Borchers 2001; Tidwell 2005) lie in unifying prior work, synthesizing essential and generalizable interaction strategies, and providing a common vocabulary for discussing design solutions. Most importantly, patterns can inform and inspire future designs and also allow for variations of the pattern applied to different domains.

4.1 The Slow-motion Feedback Pattern

One of the core design patterns we employ to enable fluent transitions across a person’s attentional field is slow-motion feedback (Vermeulen et al. 2014). We start by illustrating how slow-motion feedback can enable interactions that transition from outside the user’s attentional field toward their periphery of attention, to the center of attention, and then back. Next, we provide a definition of slow-motion feedback and illustrate how it is used in Proxemic Flow.

The idea of slow-motion feedback is simple: Just as we speak slowly when we explain something to someone who has difficulty understanding what is being said, interactive systems can slow down when executing actions on the user’s behalf and provide intermediate feedback to make sure that the user understands and is aware of what is happening. Slow-motion feedback is a way to provide users with sufficient time to (i) notice what is going on, and provide them with the opportunity to (ii) intervene if necessary.

4.1.1 Applications of Slow-motion Feedback

Slow-motion feedback allows people to control devices in the periphery of attention by making them aware of what is happening outside their attentional field. We illustrate how this might work by referring back to the example in Chap. 1, in which the lights automatically turn on when inhabitants enter the home late at night, even though others are already asleep.

In this case, the lighting control system could use slow-motion feedback to provide users with control over this automatic action. It could first increase the brightness of the lights slowly and provide a simple means to cancel or control this action (e.g., by flicking one of the light switches). After noticing what the system is doing (or is about to do), and deciding that it is an unwanted action, the user can then override the system action so that the lights do not turn on. In this example, we have effectively moved from an automatic action occurring outside the user’s attentional field with the motion-sensitive lighting control, through the periphery of attention when using slow-motion feedback to make the user aware of what is going on, to the user’s center of attention when they decide to take control and turn the lights off (see Fig. 7.11). Finally, the lighting control system moves back into the periphery and outside the user’s attentional field.

Fig. 7.11

The three types of interaction with computing devices, as explained earlier in Chap. 1, along a continuum ranging from fully focused attention to interaction occurring completely outside the attentional field. Slow-motion feedback (Sect. 7.4.1) and gradual engagement (Sect. 7.4.2) allow us to transition between these different types of interaction (image reproduced from Chap. 1)
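A minimal sketch of this pattern: the automatic action is stretched into intermediate, noticeable steps and can be canceled at any point before it completes. The class, step count, and interval below are our assumptions for illustration.

```python
import threading

class SlowMotionAction:
    """Run an action slowly, in visible steps, with a cancel hook."""

    def __init__(self, step, steps=10, interval=0.5):
        self._step = step          # called with progress in (0, 1]
        self._steps = steps
        self._interval = interval  # seconds between intermediate updates
        self._canceled = threading.Event()

    def start(self):
        threading.Thread(target=self._run).start()

    def cancel(self):
        # e.g., wired to flicking one of the light switches
        self._canceled.set()

    def _run(self):
        for i in range(1, self._steps + 1):
            if self._canceled.wait(self._interval):
                return  # the user intervened; stop before completing
            self._step(i / self._steps)  # intermediate, noticeable feedback

# Brightness ramps up over ~5 s instead of switching on instantly;
# calling action.cancel() at any time stops the automatic behavior.
action = SlowMotionAction(lambda p: print(f"lights at {p:.0%}"))
action.start()
```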

A similar example is illustrated by Vermeulen et al. (2009): A system action that automatically turns off the lights is slowed down. In this technique, animated lines are projected on the walls of the room to visualize what is happening (Fig. 7.12). These animated lines represent connections between sensors and output devices and they progress toward the target output device. In this case, line animations are drawn toward each of the lights in the room. The lights will only turn off when the animated lines reach the lights, providing people with the opportunity (and time) to intervene if necessary.

Fig. 7.12

An application of slow-motion feedback. Animations show that the system is about to dim the lights (left). The system’s action is slowed down to allow users to notice what is happening, and provide sufficient time to intervene, if necessary. The lights are only dimmed when the animated line reaches them (right) (image source Vermeulen et al. 2009)

Another example of an action by the system being ‘slowed down’ to allow users to intervene is Gmail’s ‘undo send’ feature (Fig. 7.13). This feature provides users with a configurable 5 to 30 s window to undo sending an e-mail. While Gmail shows feedback to the user informing them about the sent e-mail, the actual sending of the e-mail is delayed so that users have a chance to undo this action in progress. The e-mail is sent after the specified time-out unless the user clicks the ‘Undo’ button. In the meantime, the user can go about other activities in the e-mail interface, while the ‘Undo Send’ label essentially provides them with a control mechanism in the periphery of attention.

Fig. 7.13

Another example of slowing down the system action: providing a specific time window during which sent e-mails can be ‘undone’ (source Gmail)

A final example of slow-motion feedback can be found in the Range proximity-aware whiteboard (Ju et al. 2008). The whiteboard transitions between an ambient display mode and a whiteboard mode based on the user’s distance to the display. It does so by showing an animation where all content is moved from the center of the board to the borders when a user steps closer. This happens slowly enough so that users both notice it and have sufficient time to react if this was not what they wanted. Users can override this automatic action of making space by grabbing content and pulling it back to the center.

4.1.2 Defining Slow-motion Feedback

Slow-motion feedback essentially manipulates the time frame in which the system executes actions to realign it with the time frame of the user (Bellotti et al. 2002). With slow-motion feedback, the system’s actions are deliberately slowed down to increase awareness of what is going on outside the user’s attentional field and to provide opportunities for user intervention. Slow-motion feedback is less relevant for long-running tasks or tasks that are performed at the center of attention, where users have no difficulty noticing that something is happening and have sufficient time to intervene.

We now define slow-motion feedback using a two-dimensional design space that allows us to articulate the different possibilities for how and when information about the result of an action can be provided. The two dimensions in this design space are the time at which information is provided about the result of an action and the level of detail of that information (Fig. 7.14). We define two key moments: at time t0, the action is started (either by the user or the system), and at time t1 the action has been completed by the system. Likewise, we define two important values for the level of detail dimension: level d0 represents the situation in which the user does not receive any information about the result of their action, while at level d1, the user receives fully detailed information about the result of the action.

Fig. 7.14

Slow-motion feedback amplifies the time to intervene by showing feedback until t2 (orange line) instead of t1 (gray line) (image source Vermeulen et al. 2014)

Slow-motion feedback amplifies the time difference between t1 and t0 (t1 − t0), i.e., the duration of an action in the user’s time frame. Execution of the action is postponed by delaying t1 to t2 (with t2 > t1). The available time to notice that the action is happening thus increases to (t2 − t0), as shown in Fig. 7.14. Designers can rely on animations (Chang and Ungar 1993) to transition between t0 and t2, such as slow-in/slow-out, in which the animation’s speed is decreased at the beginning and at the end of the motion trajectory to improve tracking and motion predictability (Dragicevic et al. 2011).
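Written out directly, with example values for t0, t1, and t2 (these numbers are our assumptions, not from the text) and the slow-in/slow-out easing mentioned above:

```python
import math

t0, t1, t2 = 0.0, 0.2, 3.0  # action start, normal completion, delayed completion (s)

def slow_in_slow_out(progress: float) -> float:
    """Cosine ease: slow at the start and end of the motion trajectory."""
    return 0.5 * (1 - math.cos(math.pi * progress))

def feedback_level(t: float) -> float:
    """Level of detail in [d0 = 0, d1 = 1] shown at time t."""
    if t <= t0:
        return 0.0
    if t >= t2:
        return 1.0  # the action completes at t2 rather than t1
    return slow_in_slow_out((t - t0) / (t2 - t0))

# The window to notice and intervene grows from (t1 - t0) to (t2 - t0).
for t in (0.0, 0.75, 1.5, 2.25, 3.0):
    print(t, round(feedback_level(t), 2))
```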

4.1.3 Slow-motion Feedback in Proxemic Flow

To draw people’s attention and thus move from the periphery to the center of attention, the floor visualizations rely on animations. For example, when tracking is lost, Proxemic Flow uses slow-motion feedback to make the user aware of this: A pulsating red halo visualization is shown at the person’s last known location, which disappears after approximately 4 s (Fig. 7.4). When something goes wrong with tracking, users are given cues to alert them to this, and they can intervene if necessary (e.g., when occluding another user, or stepping outside of the tracked area).

Similarly, the trails strategy effectively uses a slowed-down version of the tracking halos to display traces of previous movements on the floor, making bystanders aware of people’s movements (which occur outside the attentional field or in the periphery) and amplifying the honeypot effect (Brignull and Rogers 2003).

4.2 The Gradual Engagement Pattern

Gradual engagement (Marquardt et al. 2012) is the second design pattern facilitating transitions from peripheral to focused interaction and one of our core design principles. Essentially, this pattern describes how interfaces can be designed to gradually engage users by progressively revealing connectivity and interaction possibilities as a function of inter-device proximity. The capabilities of a system following this pattern flow across three distinct stages: (1) awareness of device presence/connectivity, (2) reveal of exchangeable content, and (3) interaction methods for transferring content between devices, tuned to particular distances and device capabilities. We first explain the gradual engagement pattern and then apply it to the peripheral-to-focused interaction transitions in Proxemic Flow.

4.2.1 The Gradual Engagement Design Pattern

The gradual engagement pattern recognizes that a person may not be directly attending to a system (i.e., the system is outside the person’s attentional field). The system can still try to be helpful by presenting an interface that selectively and progressively informs the user of information of interest. The pattern synthesizes and generalizes strategies from earlier work, in which systems were designed to interpret decreasing distance and increasing mutual orientation between a person and a device within a bounded space as an indication of the person’s gradually increasing interest in interacting with that device (Vogel and Balakrishnan 2004; Ju et al. 2008). As mentioned earlier, Vogel and Balakrishnan (2004) directly applied Hall’s theory (1966) to a person’s interaction with a public display. They defined four discrete zones around the display that affect a person’s interaction when moving closer: from far to close, interactions range from ambient display of information, to implicit, subtle, and finally personal interaction. The interaction moves from the periphery of attention to focused interaction. Similarly, Ju et al.’s (2008) interaction techniques with the digital whiteboard remain public and peripheral or implicit from a distance, and become increasingly more private and explicit as the person moves closer to the display.

We generalize the sequence inherent in these (and other) systems as a design pattern called gradual engagement. There are three basic stages, which we will further elaborate later:

  • Stage 1. Background information supplied by the system provides awareness to the person about opportunities of potential interest when viewed at a distance;

  • Stage 2. The person can gradually act on particular opportunities by viewing and/or exploring their information in more detail simply by approaching them; and

  • Stage 3. The person can ultimately engage in action if desired.

The pattern can be further refined and applied to different contexts. For example, to mitigate challenges when creating cross-device interactions, we can refine the general gradual engagement design pattern by considering fine-grained proxemic relationships between multiple devices, allowing seamless transitions from awareness to information transfer. Specifically, engagement increases continuously across three stages as people move and orient their personal device toward other surrounding devices (Fig. 7.15). The refined three stages are given below:

  • Stage 1 Awareness of device presence and connectivity is provided, so that a person can understand which other devices are present and whether they can connect with their own personal device. We leverage knowledge about proxemic relationships between devices to determine when devices connect and how they notify a person about their presence and established connections.

  • Stage 2 Reveal of exchangeable content is provided, so that people know which content of theirs can be accessed on other devices for information transfer. At this stage, a fundamental technique is progressively revealing a device’s available digital content as a function of proximity.

  • Stage 3 Transferring digital content between devices, tuned to particular proxemic relationships and device capabilities, is provided via various strategies. Each is tailored to fit naturally within particular situations and contexts: from a distance versus from close proximity; and transfer to a personal device versus a semi-public device.

Fig. 7.15

Stages of the gradual engagement design pattern: from awareness, to reveal, to information transfers (image source Marquardt et al. 2012)
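As a rough sketch, the staging can be expressed as a function of inter-device proximity and orientation; the distance thresholds below are assumptions for illustration.

```python
AWARE_DIST = 4.0     # m: devices notice each other and connect (assumed)
REVEAL_DIST = 2.0    # m: content starts being revealed (assumed)
TRANSFER_DIST = 0.5  # m: close enough for explicit transfer (assumed)

def engagement_stage(distance: float, oriented_toward: bool) -> int:
    """0 = no engagement, 1 = awareness, 2 = reveal, 3 = transfer."""
    if distance > AWARE_DIST or not oriented_toward:
        return 0  # outside the attentional field; nothing is shown yet
    if distance > REVEAL_DIST:
        return 1  # Stage 1: presence and connectivity cues only
    if distance > TRANSFER_DIST:
        return 2  # Stage 2: progressively reveal exchangeable content
    return 3      # Stage 3: explicit content-transfer techniques

print(engagement_stage(3.0, True))  # 1: awareness
print(engagement_stage(1.0, True))  # 2: reveal
print(engagement_stage(0.3, True))  # 3: transfer
```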

An interesting feature in the gradual engagement pattern is that users control the speed at which information is revealed. The faster the users approach a device, the faster the information is shown, which realigns the system’s time frame with their own. In this case, the natural hesitation of novices and the rapid approach of experts can have the intended consequences.

4.2.2 Applications of the Gradual Engagement Design Pattern

To illustrate how the gradual engagement pattern can be applied, consider the following use of proximity-dependent progressive reveal for mitigating challenges in cross-device interaction. A brainstorming application, shown in Fig. 7.16, provides awareness (Stage 1) of nearby recognized tablet computers by showing proxy icons on the screen. These on-screen indicators are representations supporting the transition from peripheral to focused interaction. The application continuously reveals content during Stage 2—in this case, multiple sticky notes located on people’s tablets—as they move closer to the large display. The wall display shows thumbnails of all sticky notes located on the tablets above the awareness icons (Fig. 7.16). For the person sitting at a distance, the actual text on these notes is not yet readable (Fig. 7.16a), but the number of available notes is already visible. For the second person moving closer to the wall display, the thumbnails increase in size continuously (Fig. 7.16b). For the third person standing directly in front of the display, the sticky notes are shown at full size (Fig. 7.16c), allowing the person to read the text of all notes stored on the tablet and to pursue Stage 3 interactions. In Stage 3, digital content can be exchanged through various interaction techniques, such as direct-touch drag-and-drop of content or device gestures initiating the transfer of information.

Fig. 7.16

Proximity-dependent progressive reveal of personal device data of multiple users at different distances to the display: a minimal awareness of a person sitting further away, b larger, visible content of a person moving closer, and c large awareness icons of person standing in front of the display (image source Marquardt et al. 2012)
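The Stage 2 reveal in this example can be sketched as a continuous mapping from a person’s distance to their notes’ thumbnail size; all sizes and distances below are assumptions for illustration.

```python
MIN_PX, MAX_PX = 40, 320  # thumbnail size far away vs. in front (assumed)
FAR, NEAR = 5.0, 0.5      # distances (m) where scaling starts/ends (assumed)

def thumbnail_size(distance: float) -> int:
    """Thumbnail edge length (px) as a continuous function of proximity."""
    d = max(NEAR, min(FAR, distance))  # clamp to the scaling range
    t = (FAR - d) / (FAR - NEAR)       # 0 far away .. 1 at the display
    # Far away: minimal awareness of the number of notes; up close: the
    # notes become readable and Stage 3 interactions become possible.
    return round(MIN_PX + t * (MAX_PX - MIN_PX))

for d in (6.0, 3.0, 0.5):  # sitting far away, walking closer, standing in front
    print(d, thumbnail_size(d))
```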

Next, we consider the characteristics of the gradual engagement pattern in the context of the Proxemic Flow visualizations, and how this pattern can support transitions from peripheral to focused interactions.

4.2.3 Gradual Engagement in Proxemic Flow

As mentioned in Sect. 7.3, the different visualizations in Proxemic Flow can be categorized into different phases. Similar to the gradual engagement design pattern, the floor visualizations gradually reveal possible interactions as a function of proximity to, and increasing engagement with, the primary display.

As people move around the space in front of the primary display, the secondary peripheral floor display progressively moves through three phases that afford gradual engagement: (1) awareness of tracking status and quality through personal halos, (2) awareness of entry and exit points for interaction through borders and zones, and (3) inviting approach, encouraging movements, and suggesting possible next interactions with waves and footsteps.

Borders and personal halos correspond to Stage 1 of the gradual engagement design pattern, providing awareness of tracking and of entry and exit points for interaction. Note that phases (1) and (2) of Proxemic Flow can be interchanged, depending on whether borders are always shown around the interaction area or only after initially engaging with the system (as discussed in Sect. 7.3.2.2). When the floor initially does not show borders or zones, people can still become aware of the floor display as they enter the tracking zone and notice their personal tracking halos.

As people increasingly engage with the primary display by approaching it, the floor reveals more detailed information in the user’s periphery through zones that show where interaction is possible, for example, where to interact with the display using gestural interaction, as shown in Fig. 7.7b. Zones can be revealed continuously as users approach the primary display or shown in discrete steps (e.g., as in Fig. 7.7, where a possible next zone is shown after the user has entered an initial landing zone). This corresponds to Stage 2 in the gradual engagement design pattern: progressively revealing action possibilities.

Finally, once people are directly engaging with the primary display, the floor provides additional inviting and guiding visualizations to suggest future interaction steps and encourage movements around the display. These visualizations serve the purpose of assisting users in their interactions and correspond to Stage 3 in the gradual engagement design pattern.

5 Discussion

In this chapter, we discussed how designers can enable interactions that transition between outside the attentional field, the periphery, and the center of attention while interacting with proxemics-aware devices.

First, we demonstrated the use of dynamic, in situ visualizations on a peripheral floor display with the Proxemic Flow system to mediate proxemics-aware interactions with large interactive surfaces. Our floor display (1) provides peripheral information about current tracking status and tracking fidelity; (2) reveals action possibilities for easy opt-in and opt-out; and (3) provides cues that invite users to move across the space and suggest possible next interaction steps. These techniques target several important interaction problems with large interactive surfaces that were identified in earlier work. The fluent transitions between the periphery and the center of attention made possible by these floor visualization strategies have the potential to improve walk-up-and-use interaction with future large-surface applications in different contexts, such as gaming, entertainment, or advertising. During initial observations, we noticed that users only need to pay attention to the floor occasionally, which allows them to stay focused on the main application running on the primary large interactive display.

Second, we generalized our experiences with designing proxemics-aware systems that can transition between interactions outside the attentional field, peripheral interactions, and focused interactions into two design patterns: slow-motion feedback and gradual engagement. We propose slow-motion feedback as a way to draw attention to actions happening in the background and provide opportunities for intervention, while gradual engagement provides peripheral awareness of action possibilities and discoverability and reveals possible future interactions. These design patterns are not limited to the specific form factor of a multi-display setup with a floor display and a large vertical display; they can also be applied to smaller-scale proxemic interactions and other ubicomp spaces.

There are some limitations to our proposed techniques and design patterns. Proxemic Flow is targeted at walk-up-and-use interaction with proxemics-aware large displays in sparsely populated semi-public spaces. In very crowded spaces, the floor visualizations can be less effective due to people obstructing the floor. Moreover, there are limitations to what the low-resolution floor visualizations can convey. Nevertheless, the visualizations were intentionally designed to be minimalistic and act as effective peripheral cues that minimize the required visual bandwidth for attending to them. Furthermore, slow-motion feedback could be a problem for time-critical tasks, as it could have a negative effect on the overall task completion time. Ideally, users should also be able to control the extent to which interactions are slowed down and the speed at which increasing feedback is provided (e.g., as in gradual engagement), as the optimal speed will be different for each user.

During informal observations of people interacting with the floor display, we noticed that essential floor visualizations such as zones and halos were easy to understand. In the future, we plan further studies to confirm these early findings and further explore the use of peripheral floor displays to mediate proxemic interactions with large interactive surfaces.