1 Approach

1.1 Background and influences

Now that microcontrollers have found their way into almost every household product, be it cookers, washing machines, cameras or audio equipment, a domain which was once considered pure industrial design is faced with many interaction design challenges. For modern-day industrial designers, getting a grip on these interaction problems appears to have become an essential part of their profession. Yet, the last two decades or so show that this integration of interaction design and industrial design is far from trivial. Many interfaces of electronic products feel “stuck on” (Fig. 1). This is not only a matter of form integration, but also a matter of how “display and push buttons” interfaces disrupt interaction flow, causing many electronic products to feel computeresque [2, 3]. One would expect that “strong specific” devices tailored to a single task would feature alternative interfaces that are superior to the “weak general” PC, which needs to cater for many tasks [4, 5]. However, most electronic products actually feel very PC-like in interaction style—complete with decision trees and menu structures—only worse, because of their lack of screen real estate and full-sized input devices. In our research, we try to bridge industrial and interaction design, searching for more appropriate interaction styles for electronic products.

Fig. 1 Espresso machine with ESC button (middle row, far right)

Like so many others in the interaction design community [6–8], we have been strongly inspired by Gibson’s ecological psychology. Norman’s “The Design of Everyday Things”—which introduced Gibson’s term “affordance” into the interaction design community—is, to us, among the most inspiring interpretations of ecological psychology, as it remains one of the few books that touch upon the relationship between physical formgiving and usability. Whilst the term affordance continues to be at the centre of much heated debate [9, 10], one of the more popular interpretations is that it concerns the relationship between appearance and action: formgiving that invites effective action. We, too, have focussed on affordance as an invitor of action. In this line of thinking, it was not important what kind of action was invited or what the result of the action would be, as long as it was clear which action was required. This proved to be a useful way of looking at things in the context of traditional industrial design, in which many products, such as taps and lights, have a single expectable function. Once the user figured out the action, the function would follow automatically.

Although misleading or missing information on the required action can be a problem in interactive products too, generally, this is not the core of the usability problem. In fact, most interactive devices clearly show that push buttons need pushing, sliders need sliding, rotary pots need rotating etc. (Fig. 2). Over the years, we have become aware that the real usability challenge lies elsewhere: communicating what will be the result of an action. For this, we now use the term “feedforward” [11]. Clearly, the user is interested in information that will enable him to complete his task: the action is not the goal of the user; fulfilling his task is. In this approach, neither action nor appearance is arbitrary: they need to be designed concurrently with function in order to craft a meaningful relationship between appearance, action and function. Identifying formgiving-related factors that play a role in creating meaning through feedforward forms an ongoing part of our research.

Fig. 2 In most electronic products, the controls clearly communicate the required actions (pushing, sliding, rotating etc.), but this does not necessarily mean that they communicate their function

1.1.1 Options for creating meaning: the semantic vs. the direct approach

As pointed out by Norman [6], controls of electronic products often look highly similar and require the same actions. If all controls look the same and feel the same, the only way left to make a product communicate its functions is through icons and text labels, requiring reading and interpretation. One of our interests is to avoid this reading and interpretation of icons and labels by designing controls that communicate their purpose through their forms and the actions they require. So how can this be done? If the operation of a control has directly perceivable and spatial consequences in the real world, then Norman’s natural mapping offers a solution. The way product components are laid out spatially can help the user in understanding their purpose. Figure 3 shows an extreme example of this: the layout of this railway control panel maps directly onto the physical layout of the railway tracks themselves. Figure 4 shows a graphical variant of natural mapping in which a three-dimensional line drawing on a control panel of a crane shows how the controls map to the crane’s articulating parts. The idea can be applied to anything in which spatial layout is meaningful, be it cooking rings, room lighting, car mirrors etc. Yet, the settings of electronic products and computers are often abstract and do not naturally have spatial meaning. Natural mapping, thus, fails in the area where we need it most desperately: in making the abstract intuitive in use. In short, it does not suffice to make controls differentiated in appearance and action; the crux of the problem lies in the creation of meaningful appearance and actions. So what are our options in the creation of meaning?

Fig. 3 An extreme example of natural mapping in which the controls map directly onto the railway lines

Fig. 4 A graphical variant of natural mapping: the controls are placed in a perspective line drawing of the crane itself

The way a control looks and the action that it requires express something about its purpose. In general, there are two ways to approach this expressiveness: the semantic approach and the direct approach. We outline them side by side in Fig. 5. Although they are seldom made explicit, we feel that they underlie many interaction concepts. The first approach starts from semantics and cognition, i.e. representation. The basic idea is that, by using the knowledge and experience of the user, the product can communicate information using symbols and signs [12, 13]. The approach is characterised by reliance on metaphor, in which the functionality of the new product is compared to an existing concept or product with which the user is familiar (“this product is like a...,” “this functionality resembles...”). Often, this leads to the use of iconography and representation. In the semantic approach, the appearance of the product and its controls become signs, communicating their meaning through reference. Products resulting from this approach—be it hardware or software—often use control panels labelled with icons or may even be icons in themselves.

The second approach, the direct approach, takes behaviour and action as its starting point. Here, the basic idea is that meaning is created in the interaction. Affordances only have relevance in relation to what we can perceive and what we can do with our body: our effectivities. In this approach, respect for perceptual and bodily skills is highly important. What appeals to us in the direct approach is the sensory richness and action potential of physical objects as carriers of meaning in interaction. Because they address all the senses, physical objects offer more room for expressiveness than screen-based elements. A physical object has the richness of the material world: next to its visual appearance, it has weight, material, texture, sound etc. Moreover, all these characteristics are naturally linked, an issue we will come back to later. Equally important to this rich sensory expressivity are the action possibilities that physical objects offer. Unlike graphical objects, physical objects potentially fit our bodies and our repertoire of actions.

Fig. 5 The semantic vs. the direct approach

Whilst we have always considered ourselves exponents of the direct rather than the semantic approach, previously we saw only appearance as a carrier of meaning. In this view, appearance was an invitor of an arbitrary action which then triggered a function. It is only in more recent days that we have tried to redress the balance between appearance and action: we now see both appearance and action as carriers of meaning. Whilst clearly we cannot design the user’s actions directly, we now consciously design the action possibilities to invite a particular, meaning-carrying action.

1.2 From aesthetics of appearance to aesthetics of interaction

Aesthetics has always formed an integral part of formgiving. Whilst traditionally this has been an aesthetics of appearance, we are particularly concerned with aesthetics of interaction: products that are beautiful in use. Many current electronic products are lacking in this respect. Whilst they may look aesthetically pleasing from a traditional industrial design point of view, they frustrate us as soon as we start interacting with them. In our work, we see design for usability and design for aesthetics of interaction as inextricably linked. Much of the interaction design community reasons from usability towards aesthetics: poor usability may have a negative impact on the beauty of interaction. This has led to a design process in which usability problems are tackled first and questions about aesthetics are asked later. Yet, we are also interested in reasoning in the other direction: working from aesthetics and using it to improve usability. We consider temptation to form part of an invitation to action, both through aesthetics of appearance and through the prospect of aesthetics of interaction. The prospect of beauty of interaction may not only tempt users to engage in interaction, but also tempt them to persevere in interacting. In other words, we are interested in not only the structural but also the affective aspects of affordance. The popular interpretation of affordance is mainly a clinical one, which rarely considers temptation in the invitation of action.

This raises the question: what makes for aesthetics of interaction? Traditional industrial design often considers the haptic or tactile qualities of materials and controls that influence the feel of interacting with products. But there are more factors involved. Dunne [14], for example, seems to focus on an aesthetics of narrative in which products, through their appearance and interaction, become carriers of stories with often ambiguous or contradictory elements which instil aesthetic reflection in the user or onlooker.

We are intrigued by three other factors which we think play a role in aesthetics of interaction. The first is the interaction pattern that spins out between the user and product. The timing, flow and rhythm, linking user actions and product reactions, strongly influence the feel of the interaction.

The second is the richness of motor actions. As Maeda [15] points out in his introduction to “Design by numbers,” current creative programs exploit a very narrow range of motor skills. “Skill” in the digital domain has become mainly a cognitive one: the learning and remembering of a recipe. Whilst we do not intend to turn every product into a calligraphy brush or a violin, there seems to be a fair amount of room to manoeuvre between the actions required by those objects and the push-button interfaces of today’s electronic products.

The third factor in aesthetics of interaction is freedom of interaction. In most current products, activation of a function requires a fixed-order, single-course path in which the user either does or does not get things right. In this path, the actions are prescribed and need to be executed in a particular sequence. Much of interaction design has been concerned with optimising this single path for speed and effectivity. Yet, it is exactly this repetition of a single, predictable path, time and time again, which, in the end, becomes a clear “aesthetics killer.” We have therefore become interested in products that offer a myriad of ways of interacting with them: interaction in which there is room for a variety of orders and combinations of actions. Freedom of interaction also implies that the user can express herself in the interaction. This requires that the product allows for such expressive behaviour, not constraining the user, and may even take advantage of it. Not forcing the user into an interaction straitjacket allows the feel of the interaction to stay fresh.

1.3 The wholly trinity of interaction: respect for all of man’s skills

This brings us to our view of what makes “good” interaction design. To us, good interactive products respect all of man’s skills: his cognitive, perceptual-motor and emotional skills. Current interaction design emphasises our cognitive abilities: our abilities to read, interpret and remember. We are interested in exploring the other two. By perceptual-motor skills, we mean what the user can perceive with his senses and what he can do with his body. By emotional skills, we mean our ability to experience, express and recognise emotions. This includes our susceptibility to things of beauty as well as to boredom.

But, perhaps, what we find most important in this triangle is that we see perceptual-motor skills and emotional skills as linked. The link works in a number of ways. Firstly, as already pointed out above, we see enrichment of actions and challenging the user’s motor skills as a source for aesthetics of interaction. Secondly, we are interested in how the user’s emotional state influences her motor behaviour. Motor actions can become carriers of information on the user’s emotional state, provided the product invites such emotionally rich behaviour. This is something we will come back to in one of our examples.

2 Retrospective

2.1 Alternative history

Here, we show a number of design examples from our work. Writing and thinking have their limits when it comes to exploring the perceptual-motor fit and the beauty of interaction with things: the only way to evaluate these is to make experiential prototypes. Most of our examples concern product concepts. In these concepts, we rarely propose new functionality. Instead, they focus on making existing functionality accessible in an alternative manner. The concepts can, thus, be seen as forming a kind of alternative history: they explore new interaction styles through existing product functionality. We present four such concepts. None of these concepts manages to implement all elements of our approach, but together, they embody and, at the same time, challenge our thinking. For each, we explain our thinking at the time, the concept itself and finally, how it influenced our thinking. Before we dive into the product concepts, we show one student exercise which simultaneously illustrates the rich expressive possibilities of physical objects and the limits of the semantic approach in interaction design.

2.2 Opposite poles

2.2.1 Our thinking at the time

When we jointly organised this design exercise with Bill Gaver (Royal College of Art, London), we were interested in exploring the expressive properties of the physical world with a view to improving the expressive qualities of graphical user interfaces (GUIs). This work was partly inspired by Houde and Salomon [16], who describe how, when searching a bookcase, the physical properties of the books play an important role: if we have handled a book before, we search by size, proportions, colour and typography as much as by title or author. A transfer of properties from the physical world to GUIs could lead to such things as folders expressing the number of items they contain through bulging, their creation date through an ageing process such as rust, wear or yellowing, and the amount of disk space they occupy through their perceived weight. This could lead to interfaces which require less interpretation: instead of reading about the properties of a folder in a dialogue box, they would be intuitively clear. At the time, we saw this as enriching the perception part of the perception–action loop. In Gibsonian theory, offering perceptually rich information is a prerequisite for successful action.
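To make the property-transfer idea concrete, the following sketch maps folder metadata to visual parameters. It is purely illustrative: the function, the normalisation ranges and the parameter names are our assumptions, not part of the original exercise.

    from datetime import datetime

    def folder_appearance(item_count, created, bytes_used, now):
        """Illustrative transfer of physical properties to a GUI folder:
        item count -> bulging, age -> yellowing, disk space -> weight.
        All outputs are normalised to 0..1 for a hypothetical renderer."""
        bulge = min(item_count / 100.0, 1.0)      # full bulge at 100 items (assumed)
        age_years = (now - created).days / 365.0
        yellowing = min(age_years / 10.0, 1.0)    # fully yellowed after 10 years (assumed)
        weight = min(bytes_used / 1e9, 1.0)       # 1 GB reads as "heavy" (assumed)
        return {"bulge": bulge, "yellowing": yellowing, "weight": weight}

    print(folder_appearance(42, datetime(2000, 1, 1), 250_000_000,
                            datetime(2004, 1, 1)))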

2.2.2 Exercise

In this exercise, students were asked to create a pair of hand-sized sculptures which were expressive in three dimensions (Fig. 6). Each dimension had two opposite poles. The first dimension was number (few–many), the second dimension was accessibility (accessible–inaccessible) and for the third dimension, students were offered a choice of the following: weight (light–heavy), age (old–new), size (small–large), robustness (fragile–sturdy) and speed (slow–fast).

Fig. 6 Opposite poles. Design exercise for second year Masters students, Faculty of Industrial Design Engineering, Delft University of Technology, 1995

Each student had to create a pair of objects which coincided on two dimensions and which were opposite poles on a third dimension. The two resulting objects therefore were similar in some respects, yet were also opposite poles. Note how physical objects have many expressive aspects including size, proportion, form, colour, material and texture.

2.2.3 What did we learn?

From evaluating this exercise, we learnt that physical objects indeed have rich formgiving potential. As you may have noticed, some dimensions (most notably weight, size and robustness) are interrelated and require subtle manipulation of form, material and texture to express well. Much sensitivity and skill are involved in creating these objects: whilst some students successfully explored the expressive possibilities of physical objects, others were completely lost and confused.

In theoretical hindsight, we were still too focussed on design for appearance: this exercise tried to find meaning purely in the appearance of objects and did not consider action at all. With a view to the application we had in mind, this is understandable, as GUIs clearly do not allow direct physical interaction with folders: the interaction is mediated by a mouse or another input device. The challenge at the time was therefore precisely to express physical properties through the visual channel alone. But whilst this exercise managed to enrich the perception part of the perception–action loop, it neglected its action part. As a result, the outcome of the exercise tends towards semantics and representation. Because the user was positioned only as an onlooker and not as an actor, the opportunity to create meaning in interaction was missed [17].

2.3 Videodeck

2.3.1 Our thinking at the time

In the design of this videodeck, we focussed on the formgiving of controls. We were interested in how this formgiving could invite actions and how these controls could be related to product functionality. In contrast to current “black box” electronic products, in which interaction is hampered by controls that look highly similar, the idea was to differentiate strongly between the forms of the various controls. Instead of hiding the physical tape, we wanted it to figure as a central, visible element to which all controls could be related. In the first instance, we focussed on the basic functionality of the tape mechanism, power on/off and video input/output, leaving out TV tuner and programming functionality.

2.3.2 Design

Interaction with the outside world (Fig. 7a)

Instead of a rectangular black box, the contour of the device is broken where there is interaction with the outside world: where the mains cable comes in, where video in and out cables are attached and where the tape is inserted.

Fig. 7a–i Videodeck. Design: Tom Djajadiningrat, 1997 (invited submission for a competition organised by the Sekisui Design Corporation)

Fig. 7j–u Videodeck. Design: Tom Djajadiningrat, 1997

Power on/off (Fig. 7b–d)

The mains transformer breaks the contour of the videodeck. It features a switch whose ribs either “allow” or “block” the flow.

Fast-forward/reverse (Fig. 7e–g)

The fast-forward/reverse control is positioned directly between the tape reels. It is a spring-loaded toggle, fitting the reverse–neutral–fast-forward function.

Eject (Fig. 7h, i)

The eject button has become a ribbon. To eject the tape, the user pulls the tape towards himself. Clearly, a ribbon is meaningful only in terms of pulling, not pushing.

Video-in/video-out

Figure 7j, k shows where the video-in signal comes in, whilst Fig. 7m, n shows where the video-out signal comes out. Although the sockets are technically identical (S-VHS-style MiniDIN-4), the formgiving of their context indicates that one is an input whilst the other is an output. This is in sharp contrast with current audio/video equipment, in which similar looking sockets are flush mounted in back panels, requiring the user to read labels or trust arbitrary colour coding.

Record and play sliders

The left-hand side of the videodeck, where the video-in signal comes in, doubles up as a record slider (Fig. 7l). The right-hand side, where the video-out signal comes out, doubles up as a play slider (Fig. 7o). Sliding in the left-hand side activates record standby; sliding in the right-hand side as well then activates record.

This leads to the following states. When both sliders are slid outwards, the deck is at standstill (Fig. 7p); when the right-hand slider is slid inwards, the deck starts playing (Fig. 7q, r). Starting from standstill again, sliding the left-hand slider inwards activates record standby (Fig. 7s), and sliding in the right-hand slider as well activates record (Fig. 7t, u). Because of their clear travel, the controls act as displays at the same time.
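The mapping from slider configuration to deck state can be summarised in a small sketch; this is our reconstruction of the description above, and the state names are ours.

    def deck_state(left_in, right_in):
        """Map the positions of the two sliders to the deck's transport state.
        Because the sliders stay where they are put, the same configuration
        that selects a state also displays it."""
        if left_in and right_in:
            return "record"           # both sliders in (Fig. 7t, u)
        if left_in:
            return "record standby"   # left slider in (Fig. 7s)
        if right_in:
            return "play"             # right slider in (Fig. 7q, r)
        return "standstill"           # both sliders out (Fig. 7p)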

2.3.3 What did we learn?

Clearly, the most serious usability problems with video recorders are to do with programming recordings and the TV tuner, and this example has often been criticised for not addressing these issues. However, our idea was that, to tackle these successfully, we would first need to create meaningful formgiving for the base functionality. We will come back to the challenge of programming consumer electronics in a later example.

Having discussed this example with our students during lectures and with many of our peers, we have a reasonable idea of its shortcomings. For example, the record and play sliders are not always perceived as slideable: because of their sharp rectilinear forms, it is unclear how they fit the user’s hands. Also, not everyone perceives the forms as communicating a “signal flow” from left to right. Sometimes, forms can be ambiguous in unexpected ways. For example, one person perceived the sliders as “brakes” acting on the rims of the tape reels: sliding them inwards was expected to stop the mechanism, the complete opposite of “setting things in motion.”

Looking back, the interesting part of this example is not so much the inviting of actions, but rather the exploration of factors that play a role in feedforward. First of all, there is the differentiation in appearance between controls. For a control to say something about the function that it triggers, we need to move away from designs in which all controls look the same.

Likewise, there is the differentiation in actions. For an action to say something about the function it triggers, we need to move away from designs in which all actions are the same. In the videodeck, the controls not only look completely different, but they also require different actions (sliding, pulling, rotating, pressing). This differentiation in both appearance and actions is not self-evident: there are products in which the appearance of controls is differentiated whilst the actions are similar (e.g. differently shaped push buttons), and there are products in which the controls look similar but require different actions (e.g. similar cylindrical rotary controls, selectors and push buttons on an amplifier).

Thirdly, there is a deliberate emphasis on the showing rather than hiding of informative physical components. The videotape is kept visible and the mains transformer is emphasised through its ribbed housing. As a result, controls can be related to these parts through proximity. It is a fair guess that the power on/off switch is positioned close to the mains transformer and to where the mains cable enters. Similarly, a control positioned between the tape reels suggests it has something to do with winding.

Finally, there is the placement of controls in the 3D context. The eject control is positioned on the path over which the tape is inserted and ejected. The record slider and video-in socket are clustered, as are the play slider and video-out socket. All these aspects contribute to the videodeck being the opposite of nondescript: instead of being a black box in which all controls and sockets look the same, require the same actions and are mounted on flat surfaces, it makes use of every opportunity to differentiate in 3D form and action.

2.4 Digital camera

2.4.1 Our thinking at the time

In the digital camera example, we explored “database management” functionality, such as entering, storing, retrieving and deleting information that is so typical of information appliances. In this particular case, the information in the database concerns digital photographs. One objective was to do away with the screen-based menu structures that now dominate the interaction with many electronic products, including digital cameras. The digital camera attempts to let the user manipulate the digital world through a physical interface.

2.4.2 Design

In this design (Fig. 8a), the interaction is based around the making and breaking of relationships between the following four components: the lens, the hinged screen behind it, the trigger to the right of the screen and the memory card to the left of the screen (Fig. 8b). In “ready-to-shoot” mode, the hinged screen is perpendicular to the lens, with the centre of the screen lying on the central axis of the lens (Fig. 8c). Pressing the trigger captures a photograph and, at the same time, releases the screen, causing it to hinge away from the body (Fig. 8d). The relationship between the lens and the screen is, thus, broken. The user now has the opportunity to review the photo and decide whether it should be stored or deleted. Now that the screen has hinged away from the body, it falls in line with the memory cardholder but does not yet touch it, suggesting a relationship. If the image is satisfactory, the user slides the screen towards the memory card (Fig. 8e) and the image is animated to suggest that it “slides” into the memory card (Fig. 8f). The screen is spring-loaded: when released, it returns to the open position and can be clicked back against the lens to re-enter ready-to-shoot mode (Fig. 8g). If the image is disappointing, the screen can simply be clicked back against the lens to re-enter ready-to-shoot mode, causing the image to be deleted, after which the live preview is visible again (Fig. 8h).

Fig. 8a–j Digital camera. Design: Joep Frens, 2002

To enter replay mode—viewing images stored on the memory card—the screen is pressed against the memory card, effectively clicking it into position. Using a lever on the screen, the user can browse through the stored images (Fig. 8i).
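Read as a state machine, these make-and-break relationships look roughly as follows. This is our reconstruction from the description; the state and event names are hypothetical.

    # Reconstruction of the camera's modes; state/event names are assumptions.
    TRANSITIONS = {
        ("ready_to_shoot", "press_trigger"): "review",   # capture; screen hinges away
        ("review", "slide_to_card"): "review",           # image stored on memory card
        ("review", "click_back"): "ready_to_shoot",      # unsaved image deleted
        ("review", "press_against_card"): "replay",      # browse stored images
        ("replay", "click_back"): "ready_to_shoot",
    }

    def step(state, event):
        # Events that are not meaningful in the current state are ignored.
        return TRANSITIONS.get((state, event), state)

Note how replay is reachable only via review: the model reflects the drawback, discussed below, that switching modes captures an image “on the way.”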

At any time, the pixel size of an image (1,024×768, 1,600×1,200 etc.) can be adjusted by moving the sliders on the screen. As the user moves the sliders, the displayed image is scaled proportionally in real time so that it fits snugly between the sliders (Fig. 8j).
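This real-time scaling amounts to a standard fit-inside computation; a minimal sketch, with assumed parameter names:

    def fit_between_sliders(img_w, img_h, gap_w, gap_h):
        """Scale the image uniformly so it fits snugly in the rectangle
        defined by the sliders, preserving its aspect ratio."""
        scale = min(gap_w / img_w, gap_h / img_h)
        return round(img_w * scale), round(img_h * scale)

    print(fit_between_sliders(1600, 1200, 1024, 768))  # -> (1024, 768)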

2.4.3 What did we learn?

In this example, both forms and actions suggest how relationships between physical components can be broken or established, which, in turn, is an indication of the functionality that is accessed. The form of the trigger not only expresses the required action, but also shows that it restrains the screen in its relationship with the lens. Pressing it breaks the relationship between lens and screen, and establishes a potential relationship between screen and memory card. We think that the camera challenges the current “display and push button” interaction style: using forms and actions to make and break relationships between physical components meaningfully can be a way to dispense with nearly identical, meaningless push buttons that crowd the back of so many cameras. Finally, in this concept, the screen is only used for the display of images and not for any menu navigation. A typical menu function such as choosing the pixel size of an image is moved into the physical interface.

In this example, we were also confronted with the drawbacks of modal behaviour that is reflected in a change in physical configuration. For example, switching from ready-to-shoot to playback mode currently requires releasing the trigger, thus, capturing an image “on the way.”

2.5 Programmable heating controller

2.5.1 Our thinking at the time

Clearly, the programmability of consumer electronics is a recurring problem. Having left it out of the videodeck example, we came back to programmability in this example of a programmable heating controller. Another issue we were interested in was feedback. In using mechanical devices, such as a pair of scissors, we get what is called inherent feedback: the feedback feels like a natural consequence of our actions. In electronic devices, feedback often lacks this feeling of natural consequence, feeling arbitrary instead. In the heating controller, we were interested in strengthening the coupling between action and feedback, and in the factors that contribute to this strengthening. We suspected that the following factors play a role in the strength of the coupling between action and reaction:

  1. Unity of location: the action of the user and the feedback of the product occur in the same location

  2. Unity of direction: the direction of the product’s feedback is the same as that of the user’s action

  3. Unity of modality: the modality of the product’s feedback is the same as the modality of the user’s action

  4. Unity of time: the product’s feedback and the user’s action coincide in time

2.5.2 Design

The heating controller consists of three types of components: a single wall-mounted FloorPlan, a TimeRule and several TempSticks (Fig. 9a). There is one TempStick per room, and the TempSticks are related to the rooms through natural mapping on the FloorPlan. The reasoning behind this example is that each room (living room, bathroom, bedroom, garage etc.) has a particular comfort temperature. To adjust a room’s comfort temperature, its TempStick can be slid vertically through a hole in the horizontally placed FloorPlan. The length of the TempStick which protrudes above the FloorPlan, thus, indicates the comfort temperature.

The basic idea behind a programmable heating controller is to lower the temperature when the user is asleep or away from home. In our example, we assume a fixed fallback temperature, i.e. the temperature is lowered by a fixed amount from the comfort temperature. In the remainder of this explanation, we concentrate on setting the day program for a single room (Fig. 9b). When the TimeRule is slid through a TempStick, a time interval on the rule is visible through the window of the TempStick.

There are two modes. In recording mode, the user can adjust the day program of a TempStick (Fig. 9g); in playback mode, the user can inspect this program (Fig. 9h). Switching between the modes is done by means of a record button at the end of the TimeRule (Fig. 9c, d). When the TimeRule is slid through the TempStick with the record button pressed, a day program for a room can be input by means of the spring-loaded fallback button on top of the TempStick. Pressing it activates the fallback, that is, the programmed temperature is adjusted downwards from the comfort temperature (Fig. 9e); releasing it causes the programmed temperature to equal the comfort temperature (Fig. 9f). When the fallback button is pressed and the programmed temperature is decreased, a blue colour filter slides into view in front of the TimeRule; when the fallback button is released, a red colour filter slides into view. To understand the playback mode, it is important to note that the spring-loaded fallback button on top of the TempStick is solenoid-powered. When the user slides the TimeRule through the TempStick without pressing the record button, resting his finger lightly on the fallback button, he can see and feel the fallback button move up and down in accordance with the program in the TempStick.
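The day program itself can be modelled with a simple sketch. We assume a quarter-hour time resolution and a fallback amount of 5 degrees; these values, and the class and method names, are illustrative assumptions rather than part of the design.

    QUARTERS_PER_DAY = 24 * 4  # assumed quarter-hour resolution

    class TempStick:
        def __init__(self, comfort_temp, fallback_amount=5.0):
            self.comfort_temp = comfort_temp
            self.fallback_amount = fallback_amount     # fixed fallback, as in the text
            self.program = [False] * QUARTERS_PER_DAY  # True = fallback active

        def record(self, quarter, fallback_pressed):
            """Recording mode: store the fallback button's state for the
            quarter hour currently visible in the TempStick's window."""
            self.program[quarter] = fallback_pressed

        def playback(self, quarter):
            """Playback mode: drives the solenoid, so the user can see and
            feel the fallback button follow the stored program."""
            return self.program[quarter]

        def programmed_temp(self, quarter):
            """Comfort temperature, lowered by the fixed fallback amount
            whenever the fallback is active."""
            if self.program[quarter]:
                return self.comfort_temp - self.fallback_amount
            return self.comfort_temp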

Fig. 9a–h Programmable heating controller. Design: Tom Djajadiningrat, 2001

2.5.3 What did we learn?

We first come back to our “unity” assumptions.

Regarding unity of location

In this example, the user presses the button on top of the TempStick to activate the fallback and the product operates the same button as feedback in playing back the fallback pattern. Input and output, thus, become co-located [18]. Because input and output occur in the same spot, and because the physical elements involved are both controls and displays, the inherency of the feedback is strengthened.

Regarding unity of direction

As the user presses and releases the button on top of the TempStick, the feedback provided through the coloured filter that is visible in the window moves in the same direction. If feedback includes movement, be it on a display or of physical components, this movement could conceivably be different in direction from the action of the user. Such a deviation in direction weakens the inherency of the feedback.

Regarding unity of modality

Here, the user exerts force and creates displacement, and the product responds through force feedback and displacement. In many products, there is a discrepancy between the input modality and the output modality.

Regarding unity of time

In this example, most actions cause immediate feedback. This is also related to the fact that most actions and reactions are continuous rather than discrete. Sliding the TimeRule immediately causes the time interval to change within the window; pressing the fallback button immediately causes the colour filter to move. The creation of non-arbitrary couplings between action, function, feedforward and feedback is something that has our ongoing interest [19].

Apart from such functionality gaps as the lack of a week programme, one clear drawback of this interface is that it does not provide an immediate overview of the day programme. The fallback state is visible for only a quarter of an hour at a time. Another drawback is the record button on the TimeRule, which currently does not provide any meaningful feedback on whether the device is in record or playback mode.

So, compared to a traditional mechanical timer, this heating controller lacks an overview, but in terms of motor actions, it provides a more fluent way of setting a programme. What, thus, happened unintentionally is that, compared to previous examples, the emphasis shifted from form to human motor skills. The programmable heating controller brings two-handedness—a familiar topic in computer–human interaction [20–22]—to consumer electronics: the smooth transition between recording, playback and editing modes is achieved through concerted actions of the two hands. Yet, fully exploiting the refinement of human motor skills may take much more than designing for two-handed interaction. Creating designs which truly address human dexterity may require a completely new approach to the interaction design process [23].

2.6 Alarm clock

2.6.1 Our thinking at the time

The affective computing movement claims that emotions form a prerequisite for intelligent behaviour [24, 25], leading to a class of products which could be called “emotionally intelligent” products [26]. Current research concentrates on determining the user’s emotional state from physiological data such as heart rate, blood pressure and skin conductivity. In contrast, in this example, we focussed on determining emotion from behaviour. Since the way we feel influences the way we act, can we figure out the user’s emotional state from his motor behaviour?

2.6.2 Design

The prototype of the clock consists of two displays and twelve sliders (Fig. 10). The front display shows the current time whilst the central display shows the alarm time. For each slider that is moved from the starting position towards the central display, time is added to the current time to make up the alarm time. For each slider that is moved away from the central display towards the outer rim, time is subtracted from the alarm time. Each slider has a range of 0–60 min. When the displayed alarm time matches the preferred wake-up time, the user presses the central display and the alarm is set.
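As a sketch, the mapping from sliders to alarm time could look as follows. We represent each slider as a signed offset in minutes (towards the central display positive, towards the rim negative); this signed encoding is our simplification, not the prototype’s implementation.

    from datetime import datetime, timedelta

    def alarm_time(now, slider_minutes):
        """Each of the twelve sliders contributes up to 60 min; sliders moved
        towards the centre add time, sliders moved towards the rim subtract."""
        assert len(slider_minutes) == 12
        return now + timedelta(minutes=sum(slider_minutes))

    # Eight sliders fully in (8 x 60 min) and one pulled back by 30 min
    # set the alarm 7.5 hours after 23:00, i.e. at 06:30.
    print(alarm_time(datetime(2004, 1, 1, 23, 0), [60] * 8 + [-30] + [0] * 3))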

Fig. 10 Alarm clock. Design: Stephan Wensveen, Daniel Bründl and Rob Luxen, 2000

The clock’s internal system interacts with the user as follows. Each displacement of the sliders is electronically tracked and fed into a computer. In the evening, the wake-up time is set (factual information). This is done differently when in a different mood (mood information) so that we can extract mood information from the user’s behaviour. The idea is—although this part has not been implemented yet—the alarm clock could choose an appropriate alarm sound, ranging from urgent and aggressive to relaxed and laid back. The next morning, the person wakes up to this sound and silences it by touching or hitting the snooze button. This behaviour expresses the person’s emotions about the appropriateness of the wake-up sound chosen by the alarm clock. From this behaviour, the system gets feedback on its decisions, and can learn and adapt accordingly. The user turns off the alarm clock by sliding all the sliders to the outer edge.
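For illustration, behavioural features could be distilled from the tracked slider displacements along these lines; the features below are hypothetical stand-ins, not the measures reported in [26].

    def behaviour_features(events):
        """events: chronological (timestamp_s, slider_id, position_mm)
        samples logged while the user sets the wake-up time."""
        times = [t for t, _, _ in events]
        duration = max(times[-1] - times[0], 1e-6)
        sliders_used = len({sid for _, sid, _ in events})
        travel, last = 0.0, {}
        for _, sid, pos in events:
            if sid in last:
                travel += abs(pos - last[sid])  # distance travelled per slider
            last[sid] = pos
        # Crude cues: a hurried, vigorous setting might show high speed and
        # few sliders; a relaxed one, slower movements spread over many.
        return {"duration_s": duration,
                "sliders_used": sliders_used,
                "mean_speed_mm_per_s": travel / duration}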

2.6.3 What did we learn?

In an experimental setup, we found that the alarm clock indeed invited expressive behaviour from which information about the user’s mood could be distilled. The results of these experiments are documented elsewhere [26]. Whilst we have made considerable progress in determining mood based on behaviour, the choice of sound is currently unimplemented. That is to say, so far, we have concentrated on emotionally rich input, rather than emotionally rich output.

During our explorations, we became aware that, for an emotionally intelligent product to allow emotionally rich behaviour, it needs to offer freedom of interaction, so that the user may express himself in his actions. Providing a myriad of ways of reaching a goal is in sharp contrast with current products which only allow a function to be accessed in a single, prescribed manner. Ultimately, this means that the interaction is less rigid in two respects: the user has freedom of interaction on the input side and the device reacts accordingly and, therefore, differently. This keeps the interaction interesting.

Finally, the design and experiments with the alarm clock made us aware of another form of feedback: traces. We define a trace as feedback that is still present after the action has ceased. In the alarm clock, the slider pattern forms a trace of the user’s actions. As the trace changes continuously with the user’s actions, it not only reflects but also guides the user’s actions.

3 Our view on tangible interaction

So how does all this relate to current views in tangible interaction? First, we will list a number of our concerns with the status quo in tangible interaction. Then, we will clarify how we have come to see tangible interaction as perceptual-motor-centred rather than data-centred.

3.1 Our concerns with the status quo

The past few years have spawned many impressive tangible interaction prototypes [27]. These are very interesting to us, since the challenges of creating meaning in tangible interaction and in electronic product design strongly resemble each other. We are concerned, however, that the approach to creating meaning has not really changed. The pitfalls, too, remain the same.

The limitations of natural mapping

It strikes us that so many tangible interfaces rely on natural mapping [18] for creating meaningful couplings between form and function. This clearly works well for some applications, but makes tangible interaction appear limited in the kind of problems it can deal with. Natural mapping falls short when dealing with abstract data that has no physical counterpart.

Everything looks and feels the same

In many current tangible interaction systems, there is little differentiation in appearance and actions. Often, the blocks used to represent or manipulate data look exactly alike. And often, the repertoire of actions that is used is very limited, mostly positioning and rotating. From a perceptual-motor point of view there is, thus, a striking similarity between many tangible interaction systems and electronic products: everything looks and feels the same. In many token-based systems, the functionality of the tokens is based on proximity and context whilst the form and required actions are the same for all.

Stopgap semantics

Once a system is implemented, its designers may realise that some kind of differentiation between tokens is needed. In general, adding this differentiation after the design is nearly complete is problematic, as it is often too late to change the action potential or 3D layout of the system. Then, the only way left to create meaning is the semantic approach: tokens are colour-coded or given iconic shapes.

GUI thinking in disguise

It, therefore, seems to us that there is still much “GUI thinking” in tangible interaction. GUIs must rely on metaphor and semantics since, regardless of function, the required actions are nearly always the same: click and drag-and-drop. Many tangible interfaces are a kind of extruded GUI: 2.5D solutions with phicons, physical icons which represent data and which offer multiple loci of control, yet do not tap the full potential of physical interaction. We feel this is a waste. One thought experiment we use to evaluate tangible interaction prototypes is to consider how much effort it takes to simulate the interaction on a GUI. Will a 2D projection with two six-degrees-of-freedom input devices—one for each hand—work just as well? If so, the prototype does not really make use of the action potential and inherent feedback of the physical world. After all, characteristic of GUIs is their narrow repertoire of actions and arbitrary coupling between action and function.

3.2 Our emphasis: from data-centred to perceptual-motor-centred

Seen from an information science point of view, tangible interaction is about moving from the virtual to the physical domain, from bits to atoms [28]. In this approach, objects are often used as physical carriers or manipulators of chunks of data. Typically, this leads to designs with many separate physical objects. We see this as a data-centred approach. This approach has been a productive way of looking at tangible interaction, but we think that it is not the only one.

From an industrial design point of view, the physical aspect is not so interesting in itself, since product design has always been about designing the physical. Rather than viewing tangible interaction as physically represented or manipulated data flow, what we value in physical objects is the richness with which they address human perceptual-motor skills. In this approach, differentiation in appearance and differentiation in actions are highly important. The differentiation provides the “hooks” for our perceptual-motor system to get a grip on a system’s functionality and to guide the user in his actions. Physical objects offer rich action possibilities with inherent feedback to exploit the refinement of human motor skills. This is territory which remains largely unexplored in much of data-centred tangible interaction, as well as in the currently prevalent display and push button interfaces of electronic products. If we accept the value of differentiation in appearance and actions, the main challenge becomes the exploration of meaningful and beautiful couplings between appearance, action and function.

We hope that the examples in this paper collectively illustrate what we value in a perceptual-motor-centred approach to tangible interaction. The “Opposite Poles” exercise shows the richness of visual expression that the physical world has to offer and how forms, colours, materials and textures can communicate sophisticated messages. The videodeck illustrates how the physical world allows controls to be differentiated in appearance and action to create meaningful triads with function. The digital camera is an example of how users can physically couple and decouple geometric relationships between components to create meaningful relationships between appearance, action and function without resorting to loose parts. The heating controller is an illustration of how we can use the inherent feedback of the physical world and concerted motor action to achieve smooth data input and output. Finally, the alarm clock makes use of our emotionally charged behaviour with the physical world to determine user mood. It allows for a myriad of ways of motor action whilst leaving a trace of those actions in the physical world to provide feedback on past actions and guidance on those to come.

4 Summary

In our work, we strive to consider the formgiving of appearance and action possibilities from the very outset of a concept, in consideration of functionality and aesthetics. We do not see formgiving as a kind of sauce that can be poured over the design once the hardcore functional and usability work is finished. Treated that way, opportunities for the creation of meaning and for control over aesthetics of interaction are lost. Meaningful couplings with functions depend on making use of the rich appearance, action potential and inherent feedback of physical objects. At the same time, the diversity of motor actions with interactive physical objects has tremendous aesthetic potential which is still largely unexplored. If there is any term in Gibsonian psychology that is valuable to tangible interaction, it may be not so much affordance as perceptual-motor skills. Fitting interactive, physical products to man’s perceptual and motor capabilities may ultimately provide a route not only to improved usability, but also to an aesthetically rewarding experience.

5 About the authors

Tom Djajadiningrat studied industrial design at Brunel University of Technology and industrial design engineering at the Royal College of Art. Since completing his PhD on desktop virtual reality at the Delft University of Technology, he has combined industrial and interaction design thinking. He shares his time between the Designed Intelligence Group at the Eindhoven University of Technology and the Mads Clausen Institute for Product Innovation at the University of Southern Denmark. In Eindhoven, he teaches in the educational unit, “Mobility.”

Stephan Wensveen studied industrial design engineering at the Delft University of Technology. He is about to complete his PhD on emotionally intelligent products in the Designed Intelligence Group at the Eindhoven University of Technology. In his thesis, he tries to bridge the tangible interaction, affective computing and product design communities. He is a member of the educational unit, “Communication.”

Joep Frens studied industrial design engineering at the Delft University of Technology. After obtaining his Masters degree, he worked on tools for measuring emotional expression of products at the ETH Zürich. He is currently working on his PhD project in the Designed Intelligence Group at the Eindhoven University of Technology, which concerns designer tools for exploring interaction in the early stages of the design process. He teaches on idea generation techniques that are based on “doing” rather than “thinking.”

Kees (C.J.) Overbeeke is a member of the Designed Intelligence Group and the educational unit, “Communication,” at the Eindhoven University of Technology. He has been active in design research and teaching for the last 20 years. His research and teaching interests include design and emotion, embodied interaction, expressivity, product experience, and the resulting new philosophy of science and methodology.