1.1 Introduction

It is a regular Thursday morning; Sandra and Patrick are about to have breakfast. The kitchen is filled with a pleasant smell of coffee and freshly baked bread. Sandra switches on the coffee machine and notices the sound of rain against the windows. She opens the curtains and contemplates which clothes to wear in this weather. A sound interrupts Sandra’s thoughts: the breadmaker has finished. At the breakfast table, Sandra reads the news on her tablet computer while sipping a hot cup of coffee. The cup is rather full; she briefly stops reading to concentrate on taking the first sip. Patrick takes a bite from his sandwich while browsing through his emails on his smartphone: he receives an urgent message from a business associate asking for a document. He puts down his sandwich, walks to his study, flicks on the lights, unlocks his computer, starts his email application, and searches for the document by going through a number of folders. He finds it and sends the email. Patrick walks back to the breakfast table while thinking about his meeting that will start in an hour. Just as he tells Sandra that he has to hurry to make it in time, his phone buzzes alarmingly. Patrick takes the phone from his pocket and unlocks the screen: a reminder for that meeting. “It must be busy on the road with the bad weather,” Sandra says, and opens an application on her tablet to look up the traffic information. Patrick looks over her shoulder and sees that delays are expected. He kisses Sandra goodbye, grabs his coat and leaves for work.

The above story illustrates an everyday scenario in today’s world. Lots of things are happening at the same time, and Patrick and Sandra almost continuously interact with their physical surroundings. They pick up, drink from, and put down their cups of coffee, eat bread, open curtains, and switch lights on and off. No focused attention seems required to execute these activities; they can easily be conducted while at the same time reading the news, browsing through e-mails, or thinking about what clothes to wear. However, attention is also easily focused on these actions for brief moments of time, for example when Sandra realizes her cup is so full that she needs to attend to it to avoid spilling. Similarly, Patrick and Sandra constantly perceive information from their surroundings without conscious thought, such as information about the weather or the bread being freshly baked. Such information may also quickly shift to the focus of attention when relevant, for example when Sandra realizes she may need to change her choice of clothes because of the unexpectedly bad weather.

Clearly, everyday activities and perceptions can take place in the background or periphery of attention, where they are performed on a routine basis and require minimum attention and effort. These activities, however, can also be consciously focused on in the center of attention when this is required. As evident from Patrick and Sandra’s Thursday morning, activities can easily and frequently shift between periphery and center of attention. As a result, these perceptions and actions do not overwhelm or overburden, but instead form a fluent part of the everyday routine.

In the above story, Patrick and Sandra also frequently use computing devices: They receive messages and alerts, look for digital documents, send e-mails, and search for traffic information. Contrary to the above-discussed activities and perceptions that fluently shift between center and periphery of attention, their interactions with computing devices usually require focused attention. They consciously browse through folders to find a document, and alerting messages needlessly draw their attention away from their conversation or from preparing to go to work. Clearly, computing devices are most often interacted with in the center rather than in the periphery of attention, and they move more unpredictably between periphery and center than non-computer-mediated activities do.

The number of computing systems in our everyday environment is increasing. They are not only part of personal devices, but also integrated in everyday objects and environments such as water faucets, toilets, toothbrushes, irons, doors, thermometers, coffeemakers, and breadmakers. These developments bring along numerous opportunities, while they also raise challenges. In particular, we cannot simultaneously focus on all interactive devices that are available in our immediate surroundings. Inevitably, an increasing number of everyday computing devices have to be interacted with in the periphery of attention. Inspired by the way we fluently divide our attentional resources over various activities in everyday life, this type of interaction is called “peripheral interaction”: interaction with everyday interactive systems that reside in our periphery of attention but can easily shift to the center of attention when relevant for or desired by the user. Considering and enabling peripheral interaction contributes to embedding computing technology more fluently in everyday routines.

As computing systems and the physical world intermingle, studying peripheral interaction, as described above, has become increasingly relevant. This book aims to lay out the challenges and opportunities in the field and to underpin these through the research presented in its chapters. The goal is to contribute to a future in which computing technology gracefully coexists with the physical world.

1.2 A Brief History

Integration of computing technology into our everyday lives is not a new development. For example, microprocessors began to be integrated into bicycle computers and cars in the 1980s. Over two decades ago, Weiser (1991) described a vision for the twenty-first century in which computers of all sizes and functions are part of and integrated in the everyday environment. This vision, which he termed ubiquitous computing, acknowledged that traditional human–computer interaction relies on the user’s focused attention and therefore hinders the seamless integration of such interaction in everyday life. He argued not only that computational devices need to be physically hidden (e.g., in furniture), but moreover that people should be enabled to interact with such devices outside their attentional focus, i.e., in their periphery of attention. In his own words, people would thereby be “freed to use them without thinking and so to focus beyond them on new goals” (Weiser 1991, 94). Weiser and Brown (1997, 79) later introduced the term calm technology, which “engages both the center and the periphery of our attention, and in fact moves back and forth between the two”. By making use of both the center and the periphery of attention, people are able to interact with technology in the same way as they do with their everyday environment. They would be in control of their interactions with computing devices while at the same time not being overburdened by them, leading to a seamless or unremarkable integration of technology in our everyday routines (Tolmie et al. 2002).

Inspired by visions of ubiquitous computing, several adjacent fields of research have emerged, which study the embedding of computers in the everyday environment. While some have used the term pervasive computing (Satyanarayanan 2001) as a synonym for ubiquitous computing, it was introduced as the infrastructure to support ubiquitous computing. The term ambient intelligence (Aarts and Marzano 2003) relates to using reasoning and learning in ubiquitous computing to support people’s actions in their everyday environments. Further exploring connected devices, the term Internet of Things (IoT) (Atzori et al. 2010) describes ubiquitous systems of sensors and actuators whose combined value arises through address-based intercommunication. IoT celebrates interactions between sensed events and computational support for actions in the world. The term context-aware computing (Lieberman and Selker 2000; Abowd et al. 1999) was used to discuss not only ubiquitously present computing devices, but particularly to address the usage of various sensors to determine and take into account information from the environment in computer-initiated activity. This is, for example, applied in the domain of considerate systems (Selker 2011; Vastenburg et al. 2008), which adjust their notification behavior to the sensed context and thereby improve the appropriateness of notifications.

Among research inspired by the vision of ubiquitous computing, many endeavors have drawn particularly on Weiser and Brown’s (1997) notion of calm technology. Such work developed and studied computational devices that unobtrusively present relevant information to users, thereby exploring how digital information can be perceived in the visual or auditory periphery of attention (Hazlewood et al. 2011; Heiner et al. 1999; Ishii et al. 1998; Matthews et al. 2004; Mynatt et al. 1998; Pousman and Stasko 2006). From the scenario of Patrick and Sandra’s morning routine, however, it is evident that not only perceptions, but also physical activities shift between center and periphery of attention in everyday life. Inspired by this observation, researchers have started to address a second facet of calm technology—peripheral interaction (Edge and Blackwell 2009; Hausen et al. 2012, 2013; Bakker et al. 2015a, b), which encompasses both perceptions of and physical interaction with computing technology shifting between people’s center and periphery of attention.

Today, much of Weiser’s vision has turned into reality. Digital technology is integrated in many devices ranging from water faucets to parking meters. Hence, the need to employ both the center and periphery of people’s attention is unavoidable (also see Brown 2012) and will only increase in the future. Although present-day interactions with digital devices are markedly different from such interactions 20 years ago, they are still carried out mainly in the user’s focus of attention. Therefore, the challenge of embedding technology into our everyday life and thereby offering fluent shifts between the center and periphery of attention persists today.

1.3 Framing Peripheral Interaction

This book addresses challenges and opportunities for peripheral interaction: interaction with computing technology which can take place in the periphery of attention and shift to the center of attention when relevant. The goal of peripheral interaction is to fluently embed meaningful interactive systems into people’s everyday lives. We now lay out how peripheral interaction fits into the larger domain of interactive systems and HCI. We start by giving an example of possible (peripheral and non-peripheral) interactions with a very simple interactive system: a motion-detecting light switch.

Two years ago, Thomas and Mara installed a light in their front yard that automatically switches on when motion is detected after dark. When installing it, they walked around their yard a few times to check when and where exactly the light would be triggered. They are happy with the light; when someone approaches their front door at night, the light switches on, which gives visitors an inviting feeling. Sometimes they sit in the yard to have a drink together. When they sit still for longer than ten minutes, the light automatically switches off. This has happened so often that it has become a routine to quickly raise an arm to trigger the light: Thomas usually conducts this brief action while in a conversation with Mara.

Three types of interactions with the light switch are apparent from this scenario. First, Mara and Thomas intentionally walk around to actively search for the sensing area. This interaction is conscious and intentional and takes place in the center of attention: It is consciously performed with the intention to probe the system’s function to understand how it switches on the light. Second, a visitor enters the yard, triggering the switch to turn on the light. This person’s interaction with the system is subconscious and unintentional: He or she did not walk there with the intention to switch on the light, though the system interpreted this behavior as input (Schmidt 2000; Ju and Leifer 2011). The interaction was implicitly initiated and thus happened outside the attentional field of the visitor. Third, Thomas moves his arm as a routine activity in order to switch on the light, while in a conversation with Mara. Since another activity is performed simultaneously, this interaction takes place in the periphery of attention. Furthermore, the interaction is performed automatically and subconsciously as a result of a habit or routine, though clearly intentional, aimed at switching on the light.
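The three interaction types can be contrasted against the system’s own logic. A minimal sketch of such a motion-light controller is given below (the class, method names, and timeout constant are our own illustration, not taken from any particular product); notably, all three interaction types arrive at the system as the very same motion event, so the distinction between them lies entirely in the person’s attention and intention, not in the input the system receives:

```python
TIMEOUT_S = 10 * 60  # light switches off after ten minutes without motion

class MotionLight:
    """Hypothetical sketch of the motion-detecting light's control logic."""

    def __init__(self):
        self.on = False
        self.last_motion = None

    def motion_detected(self, now):
        """Called for any motion: a probing walk, a passing visitor,
        or a deliberate arm raise -- the sensor cannot tell them apart."""
        self.on = True
        self.last_motion = now

    def tick(self, now):
        """Periodic check: switch off after the inactivity timeout."""
        if self.on and now - self.last_motion >= TIMEOUT_S:
            self.on = False

light = MotionLight()
light.motion_detected(now=0)        # a visitor enters the yard
light.tick(now=5 * 60)              # still on while Thomas and Mara talk
light.tick(now=10 * 60)             # ten quiet minutes: light goes off
light.motion_detected(now=601)      # Thomas raises an arm mid-conversation
```

The sketch makes the design bias concrete: the system is built around the unintentional case, and the peripheral, intentional case (the arm raise) only works because it mimics the input of the unintentional one.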

These three types of interaction are illustrated in the basic model presented in Fig. 1.1 along a continuum ranging from “fully focused attention” to “completely outside attentional field.” As evident from the example above, an interactive system may at one moment be interacted with in the center of attention, at another moment in the periphery, and in a third case outside a person’s attentional field.

Fig. 1.1 Three types of interaction with computing devices, illustrated along a continuum ranging from “fully focused attention” to “completely outside attentional field”

Though all three types of interaction are possible with the simple interactive light described in the above example, this light switch is clearly developed for interaction outside the attentional field of the people it affects. The other two types happen occasionally, or seem rather awkward. More and more modern interactive systems are developed for the far right end of the continuum in Fig. 1.1 (outside the attentional field), such as smart thermostats, ABS brakes, and automatic windshield wipers. Numerous interactive systems can also be named that are designed to be interacted with in the center of attention and therefore are to be placed on the far left end of the continuum, for example, interactive games and traditional desktop computing, including instant messaging, e-mailing, text processing, or image editing, as well as the usage of many smartphone applications. Contrarily, not many interactive systems are developed for the middle of the continuum, where interactions may not be precise, but where users directly control these interactions, albeit with minimal mental resources. While many interactive systems clearly benefit greatly from automatic system behavior or require the user’s focused attention during interaction, there seems to be a gap between these two extremes, a gap which peripheral interaction aims to help fill by providing a class of interactive systems that flexibly respect attention and support the embedding of computing technology in everyday routines.

To illustrate this gap in more detail, we describe interaction scenarios with modern interactive lighting systems, which are designed for interaction in the center of attention and outside the attentional field, while interaction in the periphery of attention is not straightforward. Various interactive lighting systems are commercially available [e.g., (“Philips Hue” N.D.; “Belkin WeMo” N.D.; “Elgato Avea” N.D.; “LIFX” N.D.)], consisting of light bulbs whose color and intensity can be controlled wirelessly. Users, who typically have multiple such light bulbs installed in their home, can directly control the lights using a dedicated smartphone application, which enables selecting a predefined configuration or dragging icons, each representing an individual light bulb, to the desired color on a gradient map. Turning on the lights using such applications is clearly done consciously and intentionally in the center of attention: This interaction is located on the far left end of the continuum in Fig. 1.1. Alternatively, some of these interactive lighting systems can be programmed to perform automatic system behavior. For example, one may program the system to automatically switch on the lights when a user is near his house (measured through the GPS location of the user’s smartphone). This type of interaction happens subconsciously and unintentionally (i.e., a user does not go near his house with the intention of switching on the lights, but rather with the intention of going home) and is thus located on the far right end of the continuum in Fig. 1.1.

Imagine a house in which all light sources contain the above-described bulbs. If automatic behavior is preprogrammed, the lights switch on automatically when someone approaches the house. However, people may have different lighting needs at different moments. For example, when a person enters the house late at night while others in the house are already asleep, having all lights switch on automatically would be highly inappropriate. Since automatic system behavior happens outside our attentional field, we have no direct control over it. Numerous scenarios may exist in which lighting needs differ, depending on the user’s wishes, plans, intentions, and (social) context. Since interactive systems are unlikely to be fully aware of, and to flawlessly adapt to, all nuances of everyday life, their users must be given some form of direct control in addition to the automatic system behavior. This direct control is present in current systems by means of a smartphone application. If no automatic behavior is programmed, a person entering the house in the dark would need to get his smartphone out of his bag, unlock the screen, search for the application, and either select a setting or drag icons over the screen to turn on the lights. While such applications enable users to control their lighting down to every detail (selecting precisely the right color and intensity for each individual lamp), this seems like a needlessly long and complicated sequence of actions to simply switch on the light. This sequence of actions is more likely to interrupt one’s everyday routine than to seamlessly fit into it.
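The limits of preprogrammed behavior can be made concrete with a small sketch. Below is a hypothetical geofence rule with a hard-coded “quiet hours” exception (all names, thresholds, and the `others_asleep` input are our own illustration, not the API of any commercial lighting system); the point is that the rule only works if the system can sense facts, such as whether others are asleep, that it cannot reliably sense:

```python
from datetime import time as clock

# Illustrative quiet-hours window; a real installation would make this configurable.
QUIET_START, QUIET_END = clock(23, 0), clock(7, 0)

def in_quiet_hours(t):
    """True between 23:00 and 07:00 (the window wraps around midnight)."""
    return t >= QUIET_START or t < QUIET_END

def lights_on_arrival(near_home, t, others_asleep):
    """Decide whether all lights switch on when the user approaches the house."""
    if not near_home:
        return False
    if in_quiet_hours(t) and others_asleep:
        return False  # suppress the automation late at night
    return True

# The rule covers the one anticipated case:
print(lights_on_arrival(True, clock(18, 30), others_asleep=False))
print(lights_on_arrival(True, clock(23, 45), others_asleep=True))
```

Every additional nuance (a guest sleeping on the couch, a night shift, a party) would demand yet another sensed condition, which is exactly why some form of direct, low-effort user control remains necessary alongside the automation.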

Interactive lighting systems enable direct and precise control in the center of attention, but also offer the possibility of automatic behavior without requiring any direct control from the user. However, a gap between these two extremes is apparent: A way of controlling the light quickly but imprecisely might support the system in seamlessly blending into people’s everyday routines. In other words, a possibility to control the lights in the periphery of attention is lacking. While products have recently been launched to address this gap [e.g., “hue tap” (“Philips Hue” N.D.) and “M!QBE” (N.D.)], these interactive lighting systems are only one example of the many modern interactive systems in which this gap is apparent.

As evident from Fig. 1.1, the periphery of attention borders on one side with the center of attention and on the other side with events that happen outside our attentional field. Depending on the user’s current mind-set and his/her current context, interactions with modern systems may take place in any of these three fields. Therefore, the “borders” in this figure should be seen as overlapping grey areas. While peripheral interaction is intended to take place in the periphery of attention the majority of the time, shifts to the center of attention and events happening outside the user’s attentional field are certainly an important part of it. Different from interactions that are always in the center of attention, the aim of peripheral interaction is to enable interaction with minimal attentional resources. Different from autonomous system behavior, peripheral interaction aims to provide users a means to intentionally interact when needed and thus control their interaction, albeit with minimal mental resources.

1.4 Challenges and Opportunities, Outlining This Book

While activities taking place in the periphery of attention are common in our everyday interactions with our physical environment, they are rare in our interactions with computing devices. This was already predicted over two decades ago (Weiser 1991), and with the increasing presence of computing devices in our everyday environment, seamlessly embedding computing technology in our everyday routines becomes increasingly challenging. This book posits that peripheral interaction—enabling both perceptions of and interactions with computing technologies to reside in the periphery of attention—is a promising direction for overcoming this challenge. The aim of this book is to capture the current state of the art with regard to peripheral interaction.

Part I presents theoretical perspectives on peripheral interaction and starts off with an analysis of everyday peripheral tasks, by John N.A. Brown, based on the principles of anthropology-based computing. This chapter covers people’s preattentive use of tools in their everyday interactions with the physical world. The following chapter, by James F. Juola, digs deeper into human attention processes by presenting an overview of attention theories that underlie human abilities to effortlessly perform multiple tasks at the same time. These two chapters together cover important theoretical grounding for peripheral interactions and lay the basis for the following parts of the book.

Part II presents four chapters which each address a different perspective on peripheral interaction styles. First, Darren Edge and Alan F. Blackwell elaborate on tangible peripheral interaction. They consider how physical interaction styles afford rapid initiation and fluid execution of peripheral interactions with digital content. Second, Katrin Wolf discusses peripheral interaction through microgestures: an interaction style that relies on gestures that last only a few seconds. This chapter presents how microgestures can be suitable in contexts where the user’s hands are busy and reviews design and technology for and requirements of microgestures. Third, Henning Pohl reviews casual interaction to support human–computer interaction in the periphery of attention. This chapter discusses the delicate relation between a user’s engagement with an interface and the level of control offered. Fourth, Jo Vermeulen, Steven Houben, and Nicolai Marquardt explore how the proximity between users and interactive systems can be employed as implicit system input, by means of “proxemic interaction.”

Part III presents three chapters discussing peripheral interaction in context. The first chapter, by Tilman Dingler and Albrecht Schmidt, explores how environments equipped with peripheral interaction technology could support human cognition and unintentional learning, by providing peripheral information relevant to the user’s current activity. Kathrin Probst’s chapter then elaborates on the relevance of peripheral interaction for desktop computing, by reviewing a number of innovative interface designs for this context. Finally, Dzmitry Aliakseyeu, Bernt Meerbeek, Jon Mason, Remco Magielse, and Susanne Seitinger review interaction design in the field of lighting and consider how peripheral interaction can contribute to this ubiquitous medium.

Part IV collects visions on the future of peripheral interaction. These essays aim to give the reader a taste of how the field may progress in the future. The first chapter, by Berry Eggen, elaborates on future directions involving the auditory modality as a means for peripheral interaction. Finally, Brygg Ullmer, Alexandre Siqueira, Chris Branton, and Miriam K. Konkel draw inspiration from historical demonstrations and fictional architecture to envision a future in which peripheral interaction may be operationalized.