
1 Introduction

This introductory chapter to “Playful User Interfaces: Interfaces that Invite Social and Physical Interaction” discusses user interfaces to applications that have been designed to invite users to engage in playful interactions. Obviously, the applications should allow playful interaction. Moreover, the interfaces we want to look at should also allow playful social and physical interaction. The interfaces are “playful,” that is, users feel challenged or are otherwise persuaded to engage in social and physical interaction because they expect it to be fun. However, from the point of view of both the users and the designers, there can be more than fun behind the design of an application and the characteristics of its user interface, or behind what is meant to motivate the user. Users are not necessarily aware of that. A video game can be fun to play, but it may also have been designed to teach mathematics or history, to have the user learn about art, to enhance cognitive or social capabilities, or to change an unhealthy lifestyle. Recreational activities can now be digitally supported and enhanced. Solving puzzles, reading books, playing chess, maintaining collections, providing information to and consuming information from social media, processing pictures and video, and collecting and retrieving sports events and results are some examples that come to mind.

Whether it is just about providing, supporting, and enhancing fun activities, or whether additional motives of education or change of behavior, attitude, and opinion are involved, designers can now also use physical, sensor-equipped environments to design games and entertainment applications in which the user is not condemned to sit on a chair, using a keyboard, manipulating a mouse or joystick, and following the action on a monitor. That is, games, entertainment, and educational applications can be designed in which one or more users can be physically engaged, and in which, when there are more users, whether they are co-located or distributed, users can compete and collaborate or inform others about their whereabouts and activities. Competition and collaboration can take place in home and office environments, “arcade-like” public spaces, or public spaces in general, for example in the case of urban games. Sensors and actuators in wearables and mobile computing devices add to the possibilities for designing playful interfaces to the physical world and its inhabitants. These added possibilities will extend application areas and approaches to application areas such as passive and active recreation, education, behavior change, training, and sports.

2 Exploring Playful Applications: Early History

It would be very wrong to assume that ideas about playful applications of computers and computer-supported environments emerged only in recent years or in the last decade. Already in the early years of computer science (the 1950s and 1960s), applications were predicted, and sometimes even designed and implemented, that focused on non-scientific, non-administrative, and non-industrial use of computers. Alan Turing, Norbert Wiener, and later many Artificial Intelligence (AI) researchers considered such applications. However, at that time the focus was mainly on the application, not on how users, that is, the general audience, could interface with the application in a convenient or attractive way. Understandably so: the users were computer scientists, and intellectual challenges such as whether we can make a computer play chess were more important than having a “user-friendly” interface to a chess-playing program. And, of course, the general public did not have access to computers. Computers became available for scientific, administrative, and industrial (process control) applications, computer time was expensive, and only professionals were able to feed the computers with programs that were executed in “batch processing,” without interactivity between computer and professional user. That is, hand over the program and see how it has been processed by the computer the next day. Most probably there was an error message. Having a computer run a program more efficiently was worth the extra human effort. Soon there were attempts to provide users with a language that could be interpreted by the computer and that helped them to control how their programs had to be executed without human intervention.

New applications and new and larger groups of users required more direct access to available commercial computer power. They also required more interactivity to control the processing of collections of interacting programs and to provide user data. Interactivity in the late 1960s and early 1970s meant having access to a Teletypewriter (TTY) that allowed users to interactively change commands in a program, resubmit it, and evaluate results (and error messages) in real time. Communicating with computers in real time and from a distance, rather than offering a pack of punch cards to a receptionist of a computer center, became common practice. Having a “dialogue” with the computer about the tasks that had to be processed became a natural point of view when using computers. Two additional points of view, not really in the mainstream of computer science and its applications, came from AI research and from artists who explored computer applications from an artistic viewpoint. These views are explained below.

  • AI Research: AI researchers explored whether and how computers could perform tasks that require intelligence when performed by a human being. Early AI research in natural language processing looked at machine translation systems, question-answering systems, and database retrieval interfaces. Performance, not efficiency, was the issue. And although useful applications could be foreseen, the applications did not necessarily address societal, business, or industrial problems. But of course, the political situation in the 1950s and 1960s did steer some of the interests. Eliza, a conversational program developed by Joseph Weizenbaum in the 1960s, allowed users to chat, using natural language, about any topic (Weizenbaum 1966). Although done in a rather primitive way, this research can be considered a first attempt to understand the user and to offer feedback based on that understanding. Moreover, the application did not in any way ask for efficiency in the interaction. Users took their time (more than the system) to think about the questions posed by the system and to formulate their answers. In the same period there were other attempts to design natural language interfaces to applications that were meant to amuse the user or to provide information about a user’s sports and entertainment interests rather than about his or her computer-supported professional needs for handling information.

  • Artistic Research: Artistic applications, starting with drawings of pin-ups (ASCII art) made with pen-and-ink plotters and matrix printers, were added to the domain of applications. Input and output modalities other than text were investigated in the interaction between humans and computers. Cameras that provided information about the user’s presence, movements, and activities, allowing the computer to manipulate this information before giving feedback, were certainly among the main tools used by many interactive artists. That is, the user or the audience played an active role in the creation of interactive drawings, paintings, or music. Less known than these applications are the artistic efforts of composers, musicians, brain researchers, and computer scientists to use brain activity as input to artistic computer applications. Although in the early years computer science did not yet offer advanced (digital) signal processing, machine learning methods, or even the possibility to store data for future analysis, there nevertheless was much artistic activity that used brain signals to create and modify visual, auditory, and audiovisual landscapes.

AI research, the interest of artists, and the interest of computer scientists who came up with ideas to use the computer for recreational purposes and to support their own daily activities (including their recreational activities) with this new technology helped to draw the attention of the general audience (starting with amateur engineers and computer hobbyists) to the use of computers for tasks that were in the interest of a particular user in his or her home and leisure environment rather than in his or her task-oriented office or industrial environment. However, many investigations and developments in computer science research labs and institutes remained unknown to the general audience until their results became part of wide-scale deployment in the context of the advent of the personal computer. Long before the introduction of the personal computer we see research institutions experimenting with graphical user interfaces (GUIs), with devices (indeed, the mouse) to interact with such interfaces, and with input devices that allow users to compose drawings and sketches, that is, to present the computer with non-textual and non-command-like information that has to be processed and transformed. Workstations with GUIs appeared in the early 1970s at Xerox’s Palo Alto Research Center, commercial workstations with GUIs followed, and Apple introduced the GUI in the personal computer in the 1980s. In the same period, that is, before the introduction of the personal computer, we see the introduction of virtual reality environments and devices (head-mounted displays) that provide access to these environments, including the possibility that the environment adapts to the user’s view.

3 Arcade Systems, Home Consoles, and Personal Computers

When the first personal computers were introduced in the 1970s by computer hobbyists, the abilities of these hobby and “garage” computers were often demonstrated with simple games or other programs that showed what simple software could do on this simple hardware. But already in these and later years we see that small commercial companies developed playful applications. An interesting view on the development of the early personal computers can be found in (Markoff 2005). Companies developed software and hardware for hobby and personal computers that was meant to attract users, other than hobbyists and (very) early “professional” personal computer users, to buy and use software and special-purpose hardware that allowed them to play games. An independent development was the advent of text-based adventure games, often made in the spare time of computer science researchers and distributed through the ARPAnet (early 1980s). Multiuser games (for example, MUD: Multi-User Dungeon), first available on local computer networks of universities and research institutes, also became accessible to external users through the ARPAnet.

The first home console and entertainment systems (Atari, Nintendo) appeared at the end of the 1970s and in the early 1980s (Wolf 2008). At the same time small companies took the initiative to develop playful applications, applications that allowed users to consider their “hobby computer” as a device that was there to have fun with. Examples could also be drawn from arcade video and electro-mechanical games. The interfaces to arcade games such as Pac-Man and Space Invaders were extremely playful, persuasive, sometimes humorous, providing sounds, animations, and force feedback, and doing this in such a way that not only the gamer but also his or her friends and possibly a wider audience could become engaged in this social activity (Smith 2006). Human–computer interaction researchers took notice of this development (Malone 1982). Simple keyboard- and mouse-controlled graphical user interfaces appeared. But other devices, allowing speech or pen input, were developed as well.

Interestingly, during the 1980s we see the development of software and hardware for game computers that allow the design of games and input modalities that make use of information obtained from measuring physical movements or changes in physiological information from the user. Arcade games moved to the personal computer, even when the graphics, the sounds, and the animations were hardly comparable with what could be experienced in arcade environments. In the 1980s and early 1990s we can see applications that were designed from the point of view of bodily interaction (gestures, movements) and from a point of view that involved physiological information to control an application or, certainly less obvious at that time, adapt an application to a particular user. This burst of creativity and interest in bodily interaction did not last. Many of the ideas disappeared until they reappeared some decades later, in the twenty-first century, when cheap sensor technology to measure physical and physiological user information became available.

4 From ARPAnet to the World Wide Web

Already in the 1960s it became possible to offer programs to a mainframe computer for execution or to communicate with a distant computer using telephone lines. The ARPAnet made possible the transition from distributed input devices connected to mainframe computers to access to a network of worldwide connected computers. Messages between computer users could be exchanged, and documents and programs could be transferred from one user to the other. The Internet, as it existed since its early exploitation in the late 1960s and early 1970s, remained the domain of scientists at research institutes and universities for some decades. Internet facilities such as file transfer, electronic mail and, later, news and discussion groups only slowly entered the world of personal computer users during the 1990s.

Standards to format documents for exchange, editing, and retrieval using distributed databases and computers connected through the Internet were also first developed in a scientific environment and for scientific purposes. Tim Berners-Lee at the CERN laboratory in Geneva developed the technologies that made the World Wide Web possible between 1989 and 1991 (Berners-Lee and Fischetti 1999). This technology was made publicly available some years later and was made attractive for a broader audience with graphical browsers. These allowed ubiquitous use and commercialization through a wave of start-up companies in the late 1990s and early 2000s. Web research and new web technologies that included the use of audio, pictures, video, and animations made it possible to have entertaining and playful web applications. Users extended their presence on the Internet from a simple address to personal webpages, and by becoming present in social media displaying personal information, preferences, opinions, and daily activities.

5 Ambient, Ubiquitous, and Pervasive

During the early years of computing, in parallel with the more mainstream developments that focused on improving the efficiency of hardware, software, and interface technology in general, there were experiments in research laboratories that aimed at introducing special-purpose hardware, software, and interaction technologies. We already mentioned AI applications, mainly software-oriented (with the exception of special symbol-processing machines), and game hardware, software, and interaction devices that allowed players to have more natural interaction, based on the game activity provided by the application, than was made possible by keyboard, mouse, windows, and menus. Distributed collaboration issues had already gotten early attention (Hiltz and Turoff 1978), just as virtual and augmented reality and haptic applications with new interaction possibilities (data gloves, headsets, haptic devices). A well-known example from early haptics history is the Tactile Vision Substitution System (TVSS) (Bach-y-Rita et al. 1969). Images from a television camera were converted into vibrations, at different frequencies, of 400 pins that were placed in the back of a chair. A person, for example a blind person, could then experience (or “see”) the image while sitting in this chair.

In the early 1990s, Mark Weiser introduced his vision of ubiquitous computing (Weiser 1991). Weiser based his views on three forms of ubiquitous devices that became available in research laboratories: tabs (wearable centimetre-sized devices), pads (hand-held decimetre-sized devices), and boards (metre-sized interactive display devices). In the years that followed, interconnectivity and the use of the Internet became more visible. This led to similar concepts, sometimes emphasizing the role of the environment (ambient intelligence), the use of small sensors (pervasive computing), or the interconnectivity of devices (Internet of Things). Presently it is difficult to distinguish these “different” views.

Although there was quite some interest in the ubiquitous computing view and in similar views under different names, most research efforts related to Human–Computer Interaction went into the Internet, the World Wide Web, Multimedia, Computer-Supported Collaborative Work, and Information Retrieval. There were certainly great, useful, and successful attempts to lay the foundations of the field by developing methods and methodology for interaction design, requirements engineering, usability research, user experience design (Hassenzahl and Tractinsky 2006), and persuasive technologies (Fogg 2003). The foundations were also laid for interaction research based on virtual and augmented reality and, starting with speech, natural language, and pointing gestures, for multimodal interaction research. Again, once there is a clearly visible new development, it is always possible to trace it back to ideas that were introduced some decades before. Successful development of new interaction technology very much depends on the possibility of integrating it with existing technology and on being able to develop an infrastructure that helps to make this technology attractive and affordable. The latter obviously depends on mass production or massive use of a new technology.

6 Tangibles, Smart Materials, and Wearables

In Weiser’s view the tabs, pads, and boards were assumed to be wirelessly connected; devices such as tabs (and pads) can move around and proximity can be detected. But there is still a lot of attention for large, medium, and small-sized displays on these devices to present information. A more rigorous break with the tradition of graphical user interfaces appeared in the work of Hiroshi Ishii at the MIT Media Lab (Ishii and Ullmer 1997). The emphasis in this work is on physical objects that have sensors and actuators and that invite physical interaction with the digital content represented by the object. This view does not exclude interconnectivity between objects as we discussed in the previous section. Neither does it exclude the ambient intelligence view, where it may be the case that although the user focuses on the interaction with a physical object, ambient media are there at the periphery of human perception to shift a user’s attention. But certainly, in this view the focus is on objects in the physical world that can be grasped and spatially manipulated. These Tangible User Interfaces (TUIs) can be seen as a way to implement Weiser’s view of computers that disappear into the environment by coupling digital information and information processing capability to everyday physical objects. This view was illustrated with a physical implementation of a GUI that included the possibility to move physical objects (phicons) on a desk surface to control the computation.

Commercial interactive surfaces (tabletops, multitouch tables) became available in later years and found their use in collaborative work and entertainment applications. Tangible tabletops allow the movement and manipulation of tangible objects on their surface and therefore also the manipulation of digital content as it is projected on the surface. But many other tangible user interfaces appeared. A tangible tabletop is about objects that can be moved and manipulated on a fixed surface with a graphical and touch interface, with a perceptual coupling between these physical objects and the dynamic representation of content on the surface. But, to mention another extreme, tangible user interfaces can also be about interconnected physical objects with sensors and actuators that can be thrown from one player to another, keeping track of speed, position, and individual or team player activity. Players can be informed of the play or interaction knowledge collected, integrated, and interpreted by the tangibles. Players can change their behavior based on such information; the play, as it is implemented in the tangibles and the environment where the play takes place, can adapt its parameters to the players and the progress of the play. Again, we see a close, synchronous, and real-time coupling of real-world activity involving physical objects and a digital model of a play and players’ activities. Educational and entertainment applications appeared and domestic applications have been investigated. In the next section, rather than exploiting a user’s or player’s activity from the point of view of interacting with tangibles, we will look at measuring human activity, behavior, and bodily expressions with multiple sensors embedded in the environment, including sensors embedded in physical objects, to better understand the actions and intentions of a user (the human computing view).

In a later elaboration of the view on tangible user interfaces it was observed, for example in (Ishii et al. 2012), that the tangibles, that is, the objects that invite physical interaction and whose physical manipulation represents manipulation of digital content, do not really change their (natural) physical appearance, despite actuators that provide sound and light effects or information on an embedded display. Is it possible to have tangibles that dynamically change their appearance and behavior in sync with changes in digital content? We can, for example, think of objects that have motors and gears and investigate them in order to make, as mentioned in (Ishii et al. 2012), the transition from static/passive to kinetic/active tangibles. This view assumes a bidirectional coupling between dynamically controllable, deformable, and reconfigurable physical objects or physical material and an underlying computational model. In particular, nanoscience research on material property changes has made it possible to introduce smart material interfaces that change their appearance because of changes in underlying digital content induced by interacting users (Vyas et al. 2012).

Other views on tangible user interfaces take into account “wearables,” that is, devices that are integrated in our clothes or, depending on the definition of wearables, devices that we wear on our body and in our pockets (Mann 2013). These devices know about our activities, and they can also inform others about our activities. A similar observation can be made about devices that measure physiological information, including information about brain activity. Such information provides knowledge about the emotional and cognitive state of a user and how he or she wants to provide input to the system. That is, whether there is involuntary input, based on monitoring a user’s mental state or a user’s reaction to externally evoked feedback, or voluntarily provided input, such as motor imagery input.

7 The Human Computing View

Weiser’s view did not include, at least not explicitly, the measurement and interpretation of human behavior and human activity. Neither did the work of Ishii. Obviously, humans are part of the physical worlds that are equipped with embedded sensors, actuators, and intelligence. There are traditional displays, but also tangibles and smart material interfaces as explained in the previous section. In these digitally supported physical worlds, new interaction modalities or new integrations of interaction modalities have to be investigated. This can be done from the point of view of the characteristics of a particular device or tangible that allows input other than through remote-control input devices such as mouse and keyboard, but it can also be done from the point of view of being able to sense human activity, human behavior, human (body) movements, and (neuro-)physiological information when a user performs tasks or is otherwise active in such an environment.

Although it is not impossible to detect some aspects of a user’s mental state from his or her mouse and keyboard use, in particular when the mouse has some physiological sensors, more information related to natural human activity, behavior, and movements needs to be extracted and interpreted in order to provide satisfactory reactive and pro-active support by an environment. For specific applications, including games that require bodily activity, other interaction devices are of course available. Haptic devices, devices that capture movements, eye trackers, and other interaction devices that are now sometimes considered exotic, such as treadmills to experience virtual reality, were already introduced decades ago, but usually in the context of human-device interaction (one human, one device, one particular application). These devices capture one particular natural human physical activity and transform it into the control of an application. Cameras to capture human behavior were not yet connected to computers that could analyze this behavior. Applications based on the measurement and analysis of human vocal sounds (speech processing) got more attention.

In contrast, intelligence embedded in environments and in physical and virtual objects is meant to allow interaction with users in pro-active and reactive ways, and therefore requires more knowledge about its users and their activities. With the exception of the input devices just mentioned, in the past knowledge about the user had to be collected from keystrokes and mouse movements and the tasks and contents that were accessed. Current sensor technology and the embedding of intelligence in environments, physical objects, clothes, and devices on our body allow other and more comprehensive ways of knowing about the user, including his or her preferences, abilities, and emotions. There are many ways to have sensors track human behavior and have this information integrated in order to allow such information to be used in a playful way. Gestures, body poses, body movements, and moving around in an environment or in front of an application can be thought of as explicit commands, or as ways to provide information (produced voluntarily or involuntarily) to the environment and its objects, just as we do in interaction with our human partners. Clearly, microphones and cameras are among the sensors that are embedded in environments and objects and that can measure such behavior. Eye movements and facial expressions provide information about interest or boredom or about focus of attention. And, obviously, when interacting with a social robot or virtual (embodied) agent, our verbal and nonverbal behavior should have meaning to them in order to make interaction more natural. In addition, there are applications where an environment or its objects are required to know about and understand the interaction between its human inhabitants. Human computing (Pantic et al. 2008) and social signal processing (Vinciarelli et al. 2009) are research areas that have emerged to serve such applications. Computer-supported play, games, and sports in the physical world with two or more players can be designed in which such information is exploited, whether it is for making interactions more natural or more challenging, and whether it is for competition or for cooperation (Nijholt et al. 2012).

Physiological sensors, including sensors that measure brain activity, can complement the information generated by other sensors or, depending on the application, be used separately to feed an application with information about the physical or mental state of a user. This information can be used to inform users about their physiological state, asking or persuading them to change current activities or long-term behavior, for example for health or fitness reasons. Based on physiological information from the user, an application can also adapt to the user, asking for more or less effort, asking for other input modalities, or providing different feedback. In particular, games that require physical effort can profit from such information, but videogames can also use it to adapt the level of the game to measured frustration, interest, or boredom. There are also playful applications where the user is asked to manipulate aspects of his or her (neuro-)physiological state. This is in particular true for brain-computer interfacing, where human–computer interaction researchers are now experimenting with interfaces that expect, perhaps in addition to other modalities, brain activity input that is evoked by external stimuli or by voluntary mental activity and that is transformed into a command to a computer or another device in the environment (Nijholt et al. 2008).
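
To make this kind of adaptation concrete, here is a minimal sketch of a control loop that maps a smoothed physiological signal (heart rate, in this illustration) to game difficulty adjustments. It is only an illustration of the idea discussed above, not a system described in this book; the class, thresholds, and sample values are hypothetical assumptions.

```python
# Hypothetical sketch: adapting game difficulty to a physiological signal.
# The thresholds and the heart-rate source are illustrative assumptions.
from collections import deque


class AdaptiveDifficulty:
    """Map a smoothed heart-rate signal (bpm) to a difficulty adjustment."""

    def __init__(self, low=70.0, high=110.0, window=30):
        self.low = low                        # below this: player may be bored
        self.high = high                      # above this: player may be overtaxed
        self.samples = deque(maxlen=window)   # sliding window for smoothing

    def update(self, heart_rate_bpm):
        self.samples.append(heart_rate_bpm)
        avg = sum(self.samples) / len(self.samples)
        if avg < self.low:
            return "increase"                 # raise the challenge
        if avg > self.high:
            return "decrease"                 # lower the challenge
        return "keep"                         # player is in a comfortable band


# Example with simulated sensor readings:
controller = AdaptiveDifficulty()
for bpm in (72, 75, 88, 102, 115, 118):
    print(bpm, "->", controller.update(bpm))
```

In a real game the thresholds would have to be calibrated per player, and the signal could be fused with the other sensor information discussed above.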

8 Design Your Own Playful Interfaces for Your Entertainment

Logo (Papert 1980) was a child-friendly programming language that was based on Piaget’s constructivist educational philosophy. It allowed children to construct their knowledge through experience. “Turtle graphics,” that is, simple drawings and animations, could be programmed by children. There were also possibilities to “program” physical objects. Logo programming environments for teaching purposes were developed, including programming the control of sensors, motors, and lights in physical objects (“Programmable Bricks,” later called LEGO Mindstorms). Teaching and learning were also the objective of the Alice environment developed by Randy Pausch and colleagues. “Drag and drop” enabled students to create programs and get familiar with programming constructs (Cooper et al. 2000). Programming environments for children and students have been further developed into environments that allow designing, in a playful way, interactive stories, animations, music, and art applications. Environments can provide examples that can be “remixed” to introduce other characters, animations, and storylines. An example of such a visual programming environment is Scratch (http://scratch.mit.edu/).
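
To give a flavor of Logo-style turtle programming, here is a minimal sketch using Python’s standard turtle module, a direct descendant of Logo’s turtle graphics; the figure drawn is an arbitrary example, not one taken from Papert’s work.

```python
# A small taste of Logo-style "turtle graphics", written with Python's
# standard turtle module, which borrows Logo's commands (forward, left, ...).
import turtle

t = turtle.Turtle()
for _ in range(36):    # repeating a simple motif yields a star-like figure
    t.forward(100)     # move 100 units in the turtle's current direction
    t.left(170)        # turn 170 degrees counterclockwise

turtle.done()          # keep the window open until the user closes it
```

A child can change one number and immediately see a different figure, which is exactly the kind of learning through experimentation that Papert aimed for.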

We already mentioned the programming of physical objects. Nowadays, commercially available microcontroller boards such as the Arduino allow the reading of sensors and the control of motors and other actuators. Microcontrollers, sensors (location, proximity, and movement), and actuators (changes of appearance, location, or movement) are becoming affordable and can be used to design playful tangibles, including the control of natural objects in an educational or home environment. Simple tools such as Makey Makey make it possible to construct tangible interfaces. Hence, in addition to creating possibilities for constructivist learning for educational purposes, interactive entertainment can be constructed using commercial off-the-shelf technology (cheap sensors, Kinect, Arduino, Makey Makey, etc.). And creating entertainment and playful interfaces, especially when done with others, can be as much fun as, or even more fun than, playing a commercial videogame.
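
As an illustration of this off-the-shelf pattern, the sketch below shows a PC-side loop that reads sensor values streamed by an Arduino (assumed to print one reading per line over USB serial) and reacts to them. The port name, baud rate, and threshold are hypothetical assumptions, and the third-party pyserial package is required.

```python
# Hypothetical sketch: reacting to sensor readings streamed by an Arduino.
# Assumes the Arduino prints one integer (0-1023) per line over USB serial.
# Requires the third-party pyserial package (pip install pyserial).
import serial

PORT = "/dev/ttyACM0"   # typical Arduino port on Linux; e.g. "COM3" on Windows
THRESHOLD = 512         # halfway up the Arduino's 10-bit analog range

with serial.Serial(PORT, 9600, timeout=1) as link:
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        if not line.isdigit():
            continue                 # skip empty or garbled lines
        value = int(line)
        if value > THRESHOLD:        # e.g., a proximity sensor sees a player
            print("Player detected! sensor value =", value)
```

On the Arduino side, a few lines using analogRead and Serial.println would produce such a stream; the playful part is then entirely in how the PC-side program responds.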

9 More About This Book

This first chapter with background information about playful interfaces is followed by five sections. The first section is devoted to Public and Mobile Entertainment. The chapters in this section provide a view on playful interaction in various situations using different technologies. The chapters discuss interaction with large displays in public environments, using playful whole-body and location-based interaction detected with cameras (Chap. 2) and mobile phones (Chap. 3), and interaction with small displays on mobile devices (mobile phones, smartphones, tablets) that allow the user to play ubiquitous games, wherever the user is (Chap. 4).

  • In Chap. 2, “Public Systems Supporting Non-instrumented Body-based Interaction” by Dimitris Grammenos and colleagues, three camera-based technologies for body tracking are demonstrated in three public systems for culture and marketing: location tracking, body-shape tracking, and skeleton tracking. The applications use wall-projected 2D and 3D game and virtual worlds, and all three allow multiple users. They concern information presentation in an exhibition room, an “advergame,” and a public system to explore timelines using hand and leg gestures. Design considerations and user evaluations are discussed.

  • In Chap. 3, with the splendidly fitting title “Playing with the Environment,” by Pedro Centieiro and colleagues, a persuasive location-based multiplayer game is introduced that aims at inducing or increasing a pro-environmental attitude. Players use mobile phones to interact with a large public display. The application requires players to physically walk around and collect (virtual) litter on their phones and drop it in the correct virtual recycle bins on the public display. Environmental information is displayed to players and their audience. In addition to raising environmental awareness and aiming at a pro-environmental attitude change, social and collaborative activities are stimulated in an entertaining and rewarding way. The authors discuss their design methodology and present their user studies, including observations on the persuasive abilities of their system.

  • In Chap. 4 on “Designing Mobile and Ubiquitous Games and Playful Interactions,” Paul Colton discusses a development not really foreseen by Weiser and others: the transition from portable phones to feature phones and to smartphones, where the latter have operating systems that allow the integration of computing capabilities, connectivity, and multimedia options, and many on-board sensors that can collect information about location, position, and movements. Primitive versions of traditional console games were recreated on early mobile phones. Presently, however, game and entertainment applications can be developed that build on knowledge of the environment, including maps, positions of other players or users, real-time recordings (pictures, audio, video) of the environment, and knowledge about nearby objects. And, of course, there is the possibility to communicate with others in a multiplayer setting. Behavioral and physiological information are other knowledge sources that can be exploited in games and entertainment applications. Colton surveys characteristics of mobile games, in particular the on-board sensors that allow different kinds of interaction and therefore also different kinds of mobile and ubiquitous game play. These developments are illustrated with examples.

The second section of this book is devoted to interfaces that are not only playful but also have educational purposes. Development of social, cognitive, and physical skills is a goal that is addressed. Persuading users to perform physical activity by doing some exercises can be a main aim of design, but it can also be a side effect of the playful applications discussed in the chapters of the first section of this book. In this second section we focus on playful interfaces to applications that are aimed at providing children (but adults are invited to join) with opportunities to engage in physical and social play in interactive indoor and outdoor environments. The chapters in this section discuss interactive playgrounds that provide fun and that invite play employing social and physical interaction. Design of playgrounds where sensors and actuators are embedded in the environment is discussed in Chap. 5; design of playgrounds where sensors and actuators are embedded in player devices is the topic of Chap. 6; in Chap. 7 a player device is introduced that has its own play intelligence, but performs in an environment that can monitor and change its behavior in the interaction with players.

  • In Chap. 5, “Interactive Playgrounds for Children,” Ronald Poppe and colleagues discuss design considerations for interactive room-sized playgrounds with sensors and actuators. They focus on playgrounds where technology supports open-ended play, that is, play without pre-defined rules and goals, where children can have ad-hoc competition or cooperation. Children can introduce their own rules or borrow and adapt rules from games they already know. Design challenges are discussed from the points of view of context-awareness, personalization, and adaptiveness. The role of various types of sensors and actuators is discussed, with an emphasis on cameras that determine position and movements and on floor or wall feedback using projections. The chapter concludes with observations on future interactive playgrounds.

  • Chapter 6, “Designing Interactive Outdoor Games for Children” by Iris Soute and Panos Markopoulos, focuses on the design process for outdoor games. As in the previous chapter, players are assumed to be collocated, but rather than assuming sensors and actuators embedded in the environment, children have mobile player devices (physical objects) with several modes of interaction and the possibility of communication between devices. These games, which distinguish themselves from games that rely on screen interaction, are called Head Up games. The authors discuss the role of brainstorming sessions to generate ideas and how and when to involve children in the design process. Various methods for early user requirements gathering are discussed, including the positive and negative experiences the authors had with these methods. Playtesting of prototypes with children can help to introduce rules in the game that they understand or consider fair. Playtesting with adults, in addition to testing with children, can also lead to insights into usability problems and to useful feedback for designers. The chapter concludes with a list of recommendations for designing Head Up games.

  • Chapter 7, “Smart Ball and a New Dynamic Form of Entertainment” by Sachiko Kodama and colleagues, introduces a tangible object, a smart ball, that has embedded sensors and actuators and that is wirelessly connected to a more powerful computing device (a personal computer) in the environment. Sensors can be embedded in toys or, more generally, in devices that can move around or be moved around in a physical environment, among them play, entertainment, and sports devices and equipment that are used in physical play. Wireless connection to a computer makes it possible to process and integrate sensor data coming from these devices and augment it with other context information, in order to adapt the behavior of the object or to adapt the environment to the behavior of the object. The authors discuss various implementations of smart balls and games that rely on specific properties of these balls. Embedded sensors detect the “state” of the ball (not moving, being grasped, thrown, or rolled), LEDs in the ball can be actuated, and sensor information can be processed by a wirelessly connected computer that decides how to add sounds and graphical effects to the ball’s behavior, for example, when and where it bounces on the field. Cameras are used to track the position of the ball on the playfield or, using a high-speed camera, the speed of the ball. Experiences obtained at exhibitions with various implementations involving one or more players are discussed.

The third section of this book is devoted to games that aim at a change of opinion, attitude, or behavior (Chap. 8), playful interfaces that help in collaborative decision making (Chap. 9), and playful interfaces that help teachers of autistic children (Chap. 10). All the multiuser applications in the chapters of this section run on a multiuser touch table.

  • In Chap. 8, “Games for Change: Looking at Models of Persuasion through the Lens of Design” by Alissa Antle and colleagues, the authors start off by reminding us that there is little evidence that Games for Change are effective. These digital games aim at changing players’ opinions, attitudes, or behavior. In this chapter, the focus is on games that address the issue of sustainability. The authors discuss models of persuasion. The underlying idea of the Information Deficit model, for example, is that when learning about facts and consequences people will change their opinion, attitude, or behavior related to an issue such as climate change. In the Procedural Rhetoric model, when implemented in a game, the players experience the consequences of their assumptions and actions during game play, and, again, it is assumed that this will lead to an awareness of the problem and the necessity of a behavior change. In addition to such existing models of persuasion the authors introduce a new model called Emergent Dialogue that puts emphasis on enabling participation in discussions about information, decisions, and personal values. In an analysis of several Games for Change, design markers are identified that can provide evidence of the persuasive model(s) that have been used in a game. A tabletop game on sustainable land use is introduced that incorporates the authors’ Emergent Dialogue model. Guidelines based on the design markers that support behavior change through Emergent Dialogue are provided.

  • Chapter 9, “Individual and Collaborative Personalization in a Science Museum” by Betsy van Dijk and co-authors, investigates how a multitouch table that provides playful access to information about a museum’s exhibition can be used to enhance the experience of a museum visit. The table can of course be considered a tangible interface. Children interact with the table through touch, but they can certainly continue verbal and nonverbal interaction, discussing and negotiating with the other players, while doing so. Clearly, this is different from what we saw in several previous chapters, where users could freely move around in an environment with sensors and computing power to give meaning to their positions and movements, or where users interacted with their mobile player devices. In this application, based on the information presented to them, a small group of children can discuss and integrate their interests in a collaborative interaction with the table. They are then provided with their “collaboratively personalized” route through the museum. The authors report results of experiments that aimed at measuring aspects such as enjoyment and collaboration during the multitouch interaction with the table, and the effect on the children’s visiting experience when following their suggested route and answering questions about objects (the “quest”).

  • In Chap. 10, “No Problem! A Collaborative Interface for Teaching Conversational Skills to Children with High-Functioning Autism Spectrum Disorder,” Massimo Zancanaro and colleagues introduce a multiuser interface to teach children with autism spectrum disorder social conversation and social interaction skills. They build their work on techniques of cognitive behavioral therapy. These techniques include role-playing to learn about various social situations and observational learning, where the latter is implemented in such a way that children can observe themselves in videos. Several social settings are provided by the system; two children, assisted by a facilitator, can choose settings and their conversations can be recorded. Authoring tools to design settings and the stories that introduce them were developed for the facilitator. Example conversations can be provided and compared with the conversation the children choose to have in a particular setting. In experiments, the multitouch table implementation was compared with a multi-mice implementation on a desktop computer. From the experiments it could be concluded that the No Problem! system was usable and enjoyable, and that the therapeutic goals could be achieved.

The fourth section of this book is devoted to health and sports applications. It should be noted that in many of the previous chapters, too, playful interfaces were designed in such a way that they required physical activity of their users. Apart from developing interesting games and entertainment that is “just” fun and provides enjoyment, many authors, including authors of chapters in the previous sections, also motivate their research from the point of view of developing cognitive, social, or physical skills and, when physical activity is involved, make references to encouraging a healthy lifestyle and attacking the sedentary behavior of children who play traditional video games. In this section, we have two chapters that explicitly address these issues: a chapter on designing interfaces that invite social and physical interaction, with an emphasis on exertion games, that is, games that require intensive physical effort, and on interfaces that help users to be successful with their efforts (Chap. 11), and a chapter on designing interfaces that know how and when to interrupt user activity in order to persuade the user to engage in some physical activity (Chap. 12).

  • In Chap. 11, “Designing for Social and Physical Interaction in Exertion Games” by Florian (Floyd) Mueller and colleagues, a decade of research on exertion games is summarized with the aim of providing future developers with a set of design themes and recommendations. Exertion games require intense physical activity of the user, but this activity can be performed in a playful environment. In this chapter, a representative case study is presented (Table Tennis for Three) that allows the investigation of social and physical behavior of players, where players can be in physically distant locations. Video recordings and questionnaires were used to analyze behavior and to gather input provided by the users. From this qualitative analysis, some salient themes emerged that facilitate social and physical exertion play, such as the availability of shared virtual objects, being able to anticipate a player’s next action, supporting players in expressing themselves using their bodies, having the opportunity to “bend the rules” of a game, and utilizing the uncertainty that arises in physical exertion play.

  • Chapter 12 is about “Designing Games to Discourage Sedentary Behavior,” by Regan Mandryk and colleagues. The games discussed are called “energames.” The authors define energames as “… games that reduce sedentary time by requiring frequent bursts of light physical activity throughout the day.” The authors start by making a useful distinction between being physically active and anti-sedentary behavior. Persons can be physically active and nevertheless spend most of the day sitting. The negative effects of a sedentary lifestyle can apply to physically active persons. The authors discuss and compare existing guidelines for physical activity and anti-sedentary behavior. The latter aim at introducing frequent, low-intensity physical activity into the daily routine, rather than demanding intense physical effort. Barriers to physical activity and nonsedentary lifestyles are discussed. Guiding principles for exertion game (exergame) design are extended to energame design and additional principles for energames are introduced. Casualness, motivation, and persuasion are some of the issues that are addressed in these principles. Examples of energames and a comparison with exergames, with a focus on casualness and accumulated activity, are also discussed.

In the fifth and final section of this book, we find two chapters about the creation of games, and tangible interfaces to games, by children or teenagers using specialized tools: game design platforms (for example, Scratch), low-cost tangible interface construction kits (for example, Makey Makey), and multitouch tables. Low-cost tools such as Arduino and the GoGo Board, sensors, and actuators also appear in the final chapter, where students are provided with such tools to build physical and virtual models for science learning.

  • Chapter 13, “Playing in the Arcade: Designing Tangible Interfaces with MaKey Makey for Scratch Games” by Eunkyoung Lee and co-authors, is about how the authors guided children (10–12 years) in setting up a game arcade with games and tangible (touch-sensitive) interfaces that were constructed using the Makey Makey construction kit and Play-Doh, or made from whatever materials were available. The children also learned the basics of creating circuits. The interfaces that were built connected to remixed games available online on the Scratch game design platform. The authors describe the two workshops they organized: one focused on game and controller design, the second added the experience of playing in the arcade. All activity in the workshops was recorded (observation notes, photographs, and video recordings) and analyzed. In remixing the Scratch games, the children added functionality and multimedia effects and spent time on game mechanics and aesthetic features. Tangible game controllers for these remixed games were designed, and gender-specific characteristics of these designs were noted. Insights on creating learning opportunities (design, programming, control) for children are reported.

  • Game and interface design and implementation are also the topics of Chap. 14, “Playful Creativity: Playing to Create Games on Surfaces,” by Alejandro Catalá and colleagues. In this chapter, tabletop systems are explored on dimensions such as fostering creativity, the development of computational thinking, and game and interface design. The focus is on teenage students who have to collaborate in creating games, and the assumption is again that learning to create games is more effective, from the point of view of design, computational thinking, and, more generally, creativity, than “just” playing a game. The authors discuss the various tools that are available to create games and interfaces, but conclude that existing tools support single-user interaction rather than a group process aimed at fostering creativity and learning. A tabletop interface and software platform that supports non-programmers in designing game environments is introduced. Results of experiments with teenage students are reported.

  • The final chapter (Chap. 15), “Bifocal Modeling: Promoting Authentic Scientific Enquiry through Exploring and Comparing Real and Ideal Systems Linked in Real Time,” is by Paulo Blikstein. The chapter aims at improving STEM (Science, Technology, Engineering, and Mathematics) education by providing students with tools to connect real-world physical models with computer-simulated systems in real time. This is called bifocal modeling. Real-time sensing and computational modeling are brought into the classroom and connected in real time; the exploration of this synergy is the main aim of the chapter. Toolkits such as Arduino and the GoGo Board are provided to students to build the sensor-equipped physical models. Computational models of phenomena such as bacterial growth or heat transfer are built using game and other modeling platforms. The chapter provides a taxonomy of modes for merging sensors, actuators, and models for science learning. Examples and case studies of bifocal modeling are presented, among them studies concerned with biology (bacterial growth), physics (Newton’s laws), and chemistry (gas laws). Experiments involving many students are reported and analyzed. The real world may be too messy; the virtual world may be too perfect. How to provide students with software and hardware tools to playfully explore such incongruities and contradictions is one of the concerns of this chapter.

10 Predictions and Conclusions

The chapters in this book not only provide the current state of the art in the design, technology, and use of playful interfaces, they also provide a view of the future of playful interfaces. Obviously, new technological developments will happen and new playful interfaces will appear. Any attempt to be complete at one particular moment will fail. Some of the developments reported in the chapters of this book could hardly, if at all, have been predicted ten years ago, even when the basic technology was already available. Many ideas that were already around in the 1980s were not followed up until thirty years later, when basic analogue and digital technology could be integrated in products that became interesting for mass production. That has happened before. In 1928, in his essay “The Conquest of Ubiquity,” Paul Valéry wrote (Valéry 1928):

Just as water, gas, electricity are brought into our houses from far off to satisfy our needs in response to a minimal effort, so we shall be supplied with visual or auditory images, which will appear and disappear at a simple movement of the hand, hardly more than a sign.

and,

Just as we are accustomed, if not enslaved, to the various forms of energy that pour into our homes, we shall find it perfectly natural to receive the ultrarapid variations or oscillations that our sense organs gather in and integrate to form all we know. I do not know whether a philosopher has ever dreamed of a company engaged in the home delivery of Sensory Reality.

Valéry’s enthusiasm was caused by inventions that made it possible to reproduce art, such as photography, motion pictures, and phonograph recordings, and by the possibility to manipulate pictures and recordings. Obviously, this was written long before families possessed a photo camera, let alone many photo cameras. Valéry did not predict and could not foresee a world with wireless security cameras or Wi-Fi digital cameras for private use, or smartphone cameras that can send pictures and recordings “with a simple movement of the hand” to wherever the user wants. And at that time certainly no one would have predicted that separate nineteenth-century inventions such as photography, the telephone, phonographic recordings, and motion pictures could one day be integrated in a single device.

Many will also know the first sentences with which Mark Weiser started his famous Scientific American article (Weiser 1991), in which he introduced the notion of “ubiquitous” computing:

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.

Weiser developed his view of “ubiquitous computing” by extrapolating from the computing devices (tabs, pads, and boards) that were researched in his computer science laboratory at the Xerox Palo Alto Research Center. He envisioned rooms with hundreds of computers, but most or all of them “invisible to common awareness,” that is, computers embedded in the everyday world. In later years slightly updated views were denoted by terms such as ambient intelligence and pervasive computing. In (Nijholt et al. 2004a; Nijholt 2004b), we discussed some problems that arise when we have to interact with computers that have disappeared into the environment. How do we recognize how to interact (Gibson 1977)? The impact of smartphones as computing devices was not foreseen by the computing research community. Due to developments in technology, research into social media, social robots, and affective computing has become much more important than could have been foreseen 20 years ago.

There now is a foreseeable impact of wearables in general, including devices embedded in clothes, the body, and the brain. Detecting and interpreting human physical and mental behavior with the aim of pro-actively supporting humans in their daily and professional activities (Pantic et al. 2008; Vinciarelli et al. 2009) has made human–computer interaction an interesting research area for behavioral scientists.

Many of these developments in research and technology underlie the design and implementation of the playful interfaces that are discussed in this book. Future playful interfaces will also profit from the possibility of having brain-computer interfaces (Nijholt et al. 2008; Gürkök and Nijholt 2012), thanks to the cooperation of neuroscientists with HCI researchers. Developments in nanoscience and the development of smart materials will lead to increased interest in smart material interfaces (Vyas et al. 2012) and to cooperation between HCI researchers and nanoscientists. Playful interfaces that also make use of smart materials and that can reactively and proactively interact with us, knowing about our physical and cognitive activity through wearables and sensors in the environment, are something to look forward to. Playful interfaces will enter our homes and weave themselves into the fabric of everyday life.