1 Framing the Problem

Once researchers have gathered information in the various ways described in the previous chapter, they must integrate it all into a helpful framing of the problem space, for themselves and for all those who will collaborate on the project. Some of the information may be easy to parameterise and can provide specific guidelines for how to proceed (e.g. a Laban analysis of gesture, used to guide the design of a gestural interface to a mobile phone application – see Sundström et al., 2007). However, the rich, varied, and even contradictory information that can emerge from techniques such as cultural probes requires careful and creative synthesis to properly inform the design process. Designers have evolved a range of techniques for preserving the richness of this sort of user data and incorporating it into design thinking. Some examples are mood boards, personas (Cooper, 1999), and user scenarios.

Mood boards are groups of images and inspirations that designers collect to help them envision the mood that they hope the product will create in the target user community. The technique can also be used to cluster artefacts that users have contributed through cultural probes or other means, reminding the designer of the feelings users have around the activity space being designed for (Fig. 1). All those who participate in the project can gather around and discuss these clusters of artefacts from users and keep coming back to them as the team brainstorms, seeking confirmation or contradiction of emerging patterns and re-engaging with the material for further inspiration.

Fig. 1 An example of cultural probe materials for creating a mood board

Personas are another tactic for aggregating and framing the problem based upon a rich set of user data. Designers use the information at hand – whether gleaned from surveys, interviews, aggregate data about the user community from prior products, and/or direct observation of target users – to create imaginary people who are composites of key features of the user community's wishes and constraints. In commercial contexts, these personas may be carefully weighted and tied to specific sub-demographics in the target user group. Inter-related personas can be constructed to help the designer get at how communities of users will share a product – e.g. the initial user as well as the friends and family who share the application. There is a rich body of information in the commercial design community about using personas.

Further into the process of idea generation, user scenarios can help to specify a design even more closely. User scenarios insert sample users (such as the personas described above) into carefully constructed walk-through descriptions of the use of the system to be designed. When considering in greater detail how a system should be designed, such scenarios allow designers to better imagine how users would conceptualise and use it. User scenarios can range from a fairly high level of abstraction (e.g. ‘Jane would like to rent a video online quickly, to be viewed tonight at home’) to very detailed walk-throughs of sub-areas of the target interface. For an excellent summary of a range of scenario tactics, see Benyon et al. (2005, Chapter 8).

2 Idea Generation Methods

As a next step in a prototype-driven design process, one needs to generate actual ideas for prototypes that will aid in validating a potential solution to the now-framed problem. Idea generation is probably most often thought of as regular brainstorming, where a design team sits down and simply starts to talk about great new ideas. In fact, there is a range of methods for taking this next step – one that can seem mountain-high – and making it much less magical than it often appears when looking at someone else's project.

2.1 Brainstorming Methods

Brainstorming can be used at all stages in a design process. One can brainstorm around methods for evaluation, research ideas in general, interaction models, etc. The list is endless, and the various ways to set up brainstorming sessions are probably even more numerous. As examples of what a brainstorming session can look like, we present two well-established methods from product design: Random Words and Six Thinking Hats (De Bono, 1985). These methods are very much idea driven and therefore very useful when pursuing a prototype-driven approach. Even when designing a research prototype for exploring an idea rather than a product, it is important to stay in the realm of the possible in order to assess the quality of different designs. If a scenario or prototype is less successful, users will most likely be concerned with its problem areas rather than with the overall design idea.

Random Words is used to come up with novel, inspiring, thought-provoking combinations of words from specified categories, for example, emotions, techniques for sensing emotions, and places. To start with, a group of words under each category is required, for example, ‘angry’, ‘sad’, and ‘happy’ for the emotion category. These are either provided by the session leader or collected as a start-up activity for the brainstorming session. The words are written on separate pieces of paper, such as Post-it notes, and placed face down in three piles. To start the brainstorming, the top note in each pile is turned over and shown to all participants. The idea is to brainstorm for a few minutes around what an application using that specific combination of words could be, before going on to the next combination. Random Words is used to generate a range of ideas. The method is very low in cost, including time requirements. As always in brainstorming activities, it is important not to be afraid of bad or outrageous ideas: it may well be that parts of a few such ‘bad ideas’ ultimately contribute to and together form a really good one. The aim is to push the mind in new directions, directions it does not usually take. The combination of three randomly picked words sets up a helpful framework within which to be creative.
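To make the mechanics concrete, here is a minimal sketch of the Random Words draw in Python. The word lists are invented placeholders; in a real session they would come from the session leader or the participants.

```python
import random

# Illustrative word piles for the three categories; in a real session
# these would be collected on Post-it notes from the participants.
categories = {
    "emotion": ["angry", "sad", "happy", "calm", "bored"],
    "sensing technique": ["heart rate", "gesture", "voice pitch", "skin conductance"],
    "place": ["train", "kitchen", "office", "park"],
}

# Shuffle each pile and place it face down.
piles = {name: random.sample(words, len(words)) for name, words in categories.items()}

# Each round, turn over the top note of every pile and brainstorm
# for a few minutes around the resulting combination.
round_no = 1
while all(piles.values()):
    combination = {name: pile.pop() for name, pile in piles.items()}
    print(f"Round {round_no}: brainstorm around {combination}")
    round_no += 1
```

Running the sketch prints one three-word prompt per round until the shortest pile is exhausted, mirroring how the physical piles are used up during a session.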

The same holds for the Six Thinking Hats method, which is geared more towards evaluating and developing already existing application ideas – ideas that perhaps originated from the Random Words method. In the Six Thinking Hats method, each idea is reflected upon from five different viewpoints, represented by five differently coloured hats that the participants ‘put on’: facts and information (white hat), optimism (yellow hat), opinions and thinking (red hat), cautiousness (black hat), and creativity (green hat). The sixth hat (the blue hat) is given to the person who regulates the process. The hats can be represented by coloured slips of paper placed in front of the members of the brainstorming design team. The participants take turns with the hats and have to act within the limitations of their current viewpoint: the wearer of the yellow hat is only allowed to be optimistic, the wearer of the green hat has to be creative, and so on. While wearing the white hat, one has to stick to facts and information, such as ‘Bluetooth technology does not work over distances exceeding 10 m.’ Keeping the factual knowledge relevant to a new design idea at hand can pose a formidable challenge; it can be tackled either by granting Internet access to the person wearing the white hat or by informing all participants beforehand of the ideas to be discussed, so that they can prepare. Similar considerations hold for the person wearing the red hat, who has to be up to date on people's opinions and thinking, for example, ‘Women in their forties generally think new technology is hard to learn.’ Another way to work the group around these issues is to allow people to lie, making up stories and facts on the fly, e.g. ‘Women in their forties generally like blue things’ – an approach that works surprisingly well.
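As a small illustration of the turn-taking structure, the sketch below rotates the five viewpoint hats among participants round by round. The hat descriptions follow the list above; the participant names and session plan are invented for illustration.

```python
# A sketch of hat rotation for a Six Thinking Hats session. The blue
# hat stays with the session leader and is not rotated.
HATS = {
    "white": "facts and information",
    "yellow": "optimism",
    "red": "opinions and thinking",
    "black": "cautiousness",
    "green": "creativity",
}

def plan_rounds(participants, n_rounds):
    """Shift the five viewpoint hats one seat per round."""
    assert len(participants) >= len(HATS), "need at least five participants"
    for r in range(n_rounds):
        yield r + 1, {
            participants[(i + r) % len(participants)]: hat
            for i, hat in enumerate(HATS)
        }

for round_no, seating in plan_rounds(["Ann", "Bo", "Cai", "Di", "Eva"], 3):
    print(f"Round {round_no}:")
    for person, hat in seating.items():
        print(f"  {person} wears the {hat} hat ({HATS[hat]})")
```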

2.2 Bodystorming

As emotions are not only a cognitive but also a physical experience, a good way of testing ideas for functionality is to act out the interaction idea physically, together, in the design team. This can be done before the system even exists through so-called bodystorming techniques (Oulasvirta et al., 2003) – essentially a simple way of acting out a scenario, as in a role-playing game or improv theatre. As characterised in Rodriguez et al. (2006, p. 964),

Unlike brainstorming, bodystorming is the transformation of abstract ideas and concepts into physical experiences. Fun and tactile, this approach allows us to investigate different qualities that an idea may have when applied in a physical setting. It enables rapid iteration and development of ideas and relationships through a dynamic, continuous and creative process of trial and error.

In bodystorming, you brainstorm in situ, that is, in the kind of location where the system is intended to be placed and used. If you are designing for a train, you spend time on the train, brainstorming together with your team, and any ideas that come up you act out there and then, on the train. As a consequence, you quickly get a grasp of how well your system will interact with, and be integrated into, all the other aspects of its usage environment. For example, assume that the aim is to design a mobile messaging system in which users should be able to express themselves physically when sending messages to their friends. Acting out the kinds of gestures one comes up with, in the various settings where mobiles are typically used, will quickly lead to the discovery that large gestures with the phone feel silly in public spaces. However, for some such applications it may ultimately turn out that interaction patterns that had been thought too extreme are in fact the ones pushing the community forward (Sundström et al., 2007).

3 Prototyping Methods

Actually building the prototypes may be yet another mountain to climb. In many cases one needs to work with new and challenging interaction techniques, and with not only software but also hardware that must be adapted to the software and perhaps fitted into a suitable package. Coming up with a good software design can be difficult enough, but getting it to work with custom hardware requires constant validation and redesign. This holds especially when working with emotionally engaging and emotionally involving prototypes. Therefore, we discuss not only methods for final evaluation but also validation methods for all steps of a system design process, from paper sketch to digital technology.

3.1 Paper Prototyping

Paper prototyping is a method for early usability testing in the design process, used once the appearance of the future prototype has been identified but before any actual code is written. The method is mainly used when designing for smaller mobile displays, but it is also well suited for workstation and laptop applications (Rettig, 1994). The idea is to draw all potential screen displays on pieces of paper and let a user navigate them. The analysis of the user's interactions then informs the adaptation of the screen displays. It is important that all buttons, interactive areas, and help texts are represented, to make the experience for the user as close as possible to that with an actual physical device. The main aim is to locate areas where the user runs into interaction difficulties, e.g. due to misunderstanding feedback or other signals. While a paper prototype cannot fully replace the real experience with a working prototype, it is still a very rewarding method and a valuable design step to take.
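One way to make sure that every button and interactive area is represented before drawing the screens is to enumerate them as a small navigation graph. The sketch below does this in Python; the screens, buttons, and the video rental flow are invented for illustration.

```python
# Each screen of the paper prototype, with its interactive areas and
# the sheet each one leads to. All names are illustrative.
SCREENS = {
    "home":     {"rent video": "search", "help": "help"},
    "search":   {"select title": "checkout", "back": "home"},
    "checkout": {"confirm": "receipt", "back": "search"},
    "receipt":  {"done": "home"},
    "help":     {"back": "home"},
}

def walk(start, actions):
    """Simulate a user walking through the paper screens."""
    screen = start
    for action in actions:
        if action not in SCREENS[screen]:
            print(f"Breakdown: no '{action}' on the '{screen}' sheet")
            return screen
        screen = SCREENS[screen][action]
        print(f"'{action}' -> show the '{screen}' sheet")
    return screen

walk("home", ["rent video", "select title", "confirm", "done"])
```

A walk that hits a missing action flags a sheet that still needs to be drawn – exactly the kind of gap one wants to find before sitting down with a user.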

3.2 Staged Lived Experiences

To bring the paper prototyping method closer to the experience of a functional prototype, it can be combined with aspects of the bodystorming method through the creation of a staged lived experience (Iacucci et al., 2002). By letting the user experience the paper system in the environment where the real prototype is to be used, it becomes possible to approach and evaluate the actual experience rather than focusing exclusively on the usability of the user interface. While the paper prototyping method is less well suited for more tangible and alternative interaction models, the staged lived experience method can cover these kinds of systems as well. By using just parts of a future system, such as a biosensor bracelet or a camera, it is possible to improve users' understanding of what that future system is actually going to be like, and also of how it will feel, e.g., to use and wear it in public. Exposing users to such experiences also enables them to participate in focus groups and similar activities in a more informed way. Not only users but also designers themselves can draw inspiration from such experiments. The main idea of the staged lived experience method is to play, pretend, and experience bits and pieces of a future system out in the wild, and not in the laboratory environment, where it is usually hard to get at everyday practice.

Paper prototyping in particular is an extremely cheap method. The costs of the staged lived experience method depend on the probes, but for single-user systems the study can be run with one user at a time, and for multi-user systems there are ways to play and pretend that help contain costs. However, both methods are rather time consuming. For paper prototyping, the whole system needs to be thought through in detail and then sketched on pieces of paper. Setting up a staged lived experience is of course even more laborious than creating a pretend setting in the laboratory, but it is most often worth every second. Staged lived experiences are valuable at all stages of a design process; for final evaluation in particular, it can be argued that a laboratory setting is no longer acceptable, especially for affective interaction systems, where a laboratory environment deals more with created and staged emotions than with the emotions that occur in real-life practice.

In their review of experiences with the related method of experience prototyping in a number of real design projects, Buchenau and Fulton-Suri (2000) illustrate how it contributed to developing an understanding of essential factors of an experience, by simulating important aspects of the whole or parts of the relationships between people, places, and objects as they unfold over time; to exploring and evaluating ideas, by providing inspiration, confirmation, or rejection of ideas based upon the quality of experience they engender; to producing answers and feedback to designers' questions about proposed solutions, in terms of ‘what would it feel like if …?’; and to communicating issues and ideas, by enabling direct engagement in a proposed new experience and thereby providing common ground for establishing a shared point of view.

3.3 Wizard of Oz

Designers of interactive technology often face what is best described as a chicken-and-egg problem: in order to design the technology, they need to know something about how it will work once it is finished. By using an iterative design process, in which designs are repeatedly evaluated against established goals, an understanding of how a system will work is gradually assembled. A number of different methods can be used during the design process to construct such an understanding; which ones to use will depend on the particular project at hand. A method that has proved particularly useful when designing for very complex interaction technologies, or when entering new domains such as affective interaction, is the Wizard of Oz (WoZ) method.

The name of the method refers to the wizard in L. Frank Baum's novel The Wonderful Wizard of Oz, who manually operated complex machinery from behind a curtain to appear more powerful. Within human–computer interaction research, the name has come to designate an iterative design method in which a human (the wizard) simulates the behaviour a computer system under development would have if it were fully functional. WoZ studies are usually performed in laboratory settings where the wizard operating the system and the participant testing it are in different rooms, to maintain the illusion of a fully functional system. Sometimes participants are informed about the system's status, i.e. that it is a simulated system with a human acting behind the scenes, and sometimes they are not, in order to encourage natural behaviour. If participants are not made aware beforehand that the system is simulated, for ethical reasons it is important to ask for their informed consent after completion of the study, offering them a chance to withdraw their data.

Using WoZ can be particularly helpful in circumstances where interpreting user input is a difficult task. In a traditional point-and-click interface one can be fairly certain that when users click a button, that is what they intended to do although the effects of pressing the button may not always be what they intended. However, when working with interfaces that include modalities such as natural language, gestures, postures, and emotions, interpreting a user’s intentions or state of mind is not always as straightforward. For instance, does frowning mean that the user is annoyed or merely focused? Or does gesturing in a certain direction mean ‘look there’ or ‘go there’?

Many natural language applications have been developed using the WoZ method. Participants behave as if communicating with a computer system using natural language in text or speech, while in reality it is the wizard who interprets and responds to their input (see Dahlbäck et al., 1993). The purpose of such studies is in general to observe the use and effectiveness of a proposed user interface rather than measuring the quality of an entire system. For example, a WoZ study may reveal a great deal about how participants would interact with a speech-based ticket booking service (e.g. what they say or what the steps of the process are) but might say nothing about the effectiveness of the natural language algorithms that would be needed for the interaction to take place. Such details are generally considered to be beyond the scope of the study. The functionality provided by the wizard may sometimes be implemented in later versions of the system but is sometimes very futuristic, far beyond the capabilities of current technology. The cost of performing WoZ studies can vary significantly depending on the system being evaluated and how (and what) data is recorded and analysed. Often special tools or systems with a special wizard backend need to be constructed in order to perform the studies, videotaped sessions may be more costly to analyse than questionnaires, etc. However, in relation to the benefits that can be reaped from performing the studies, and compared to the cost of iterating development of a fully functional system, it is often worth the extra investment.
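The technical core of a text-based WoZ setup can be very small: a relay that forwards the participant's typed input to the wizard and sends the wizard's typed replies back as if they came from the system. The sketch below is one minimal, assumed arrangement in Python; the host, port, and greeting text are placeholders, not part of any published tool.

```python
import socket

# Minimal Wizard of Oz relay: the wizard runs this script, while the
# participant connects from another machine or terminal (for example
# with "nc localhost 9999") and sees only the "system's" replies.
HOST, PORT = "localhost", 9999

with socket.create_server((HOST, PORT)) as server:
    print(f"Wizard console listening on {HOST}:{PORT} ...")
    conn, _ = server.accept()
    with conn, conn.makefile("r") as incoming:
        conn.sendall(b"Welcome to the ticket booking service. How can I help?\n")
        for line in incoming:
            print(f"[participant] {line.strip()}")
            reply = input("[wizard reply] ")  # the human plays the system
            conn.sendall(reply.encode() + b"\n")
```

In a real study the relay would typically also log time-stamped transcripts, and the wizard interface would offer canned responses to keep reply latencies plausibly machine-like.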

The WoZ method has been used in several projects within the domain of affective interaction, for instance, to evaluate tangible emotional interaction interfaces. Paiva et al. (2002) used the Wizard of Oz method to develop an interface to a computer game based on a tangible doll device called SenToy. Their goal was to use the sensor-equipped SenToy to let players express a limited set of emotions, e.g. by jumping with the doll, shaking it, or positioning its limbs in certain configurations. These emotions would in turn control the behaviour of the player-controlled character in a fantasy game shown on a screen. The question the design team faced was how players would actually use the SenToy doll to express the intended emotions. Based on the available literature about human behaviour, hypotheses were formed about the movement patterns likely to be used for each emotion. These patterns, however, needed to be validated, as there could be differences between how players express themselves using a doll and how they would using their own bodies (cf. Dahlbäck et al., 1993). At this point a WoZ study was performed in which players used a collection of dolls without sensors to express emotions, which were mirrored by a character shown on a computer screen. The on-screen character was controlled by a wizard who sat in the same room as the player, watching their behaviour. Whenever the player performed an action with the doll that matched one of the hypothesised patterns, the wizard would push a button to make the on-screen character show the corresponding emotion. The study provided information about which of the hypothesised patterns matched how players actually expressed emotions; where the correspondence was insufficient, the players' actual actions with the doll suggested other patterns to look out for instead. In addition, the study informed the design team about desirable qualities of the doll itself, such as being soft enough to bend easily and big enough to let players easily perform movements with it. A functional SenToy interface was developed and tested based on these results.

Another area where WoZ studies have been helpful is the development of embodied conversational agents (ECAs). Such agents appear human-like and attempt to interact with users as another human being would, e.g. by conversing with them, using facial expressions and body language, displaying emotions, and showing empathy. However, as noted by Dahlbäck et al. (1993), interacting with a human-like system is not the same thing as interacting with a human. Hence, questions regarding interaction style – including who should take the initiative in a dialogue (agent or user), which emotions should be displayed by the agent, and which user expressions an agent should recognise as emotionally charged – remain largely unanswered (Cavalluzi et al., 2005; de Rosis et al., 2005). To address such issues, de Rosis et al. (2005) performed a WoZ study that investigated the forms of empathy that can be induced by ECAs in the context of promoting appropriate eating habits. The study was conducted using a WoZ tool developed specifically for evaluating aspects of user–agent communication. The tool allows experimenters to alter various aspects of the experimental setup, including physical aspects of the agent, its expressivity, and the set of dialogue moves available to the wizard; it is thus flexible enough to handle other usage contexts as well. The study was performed iteratively, aiming to gradually design a conversational agent in the chosen domain, with a particular emphasis on inducing empathy (in the broad sense of ‘entering into a warm social relationship’) in the user. To this effect, six rounds of WoZ tests were performed that gradually shaped the agent's personality and expressiveness. During the study, parameters such as the agent's interaction style (warm vs. cold), the use of more natural-sounding speech generation based on different text-to-speech systems, and the use of social small talk to draw the user into a relationship were varied to study their effect on the interaction. While the study did not yield any conclusive results regarding the effects of these parameters, it did suggest that subjects were disappointed when receiving a ‘cold’ reply to an attempt to establish a friendly relationship. This in turn points to the need for ECAs to recognise the various forms of social contact-making that humans routinely engage in.

3.4 Sensual Evaluation Instrument

The sensual evaluation instrument (SEI) is a tool for gathering affective feedback from users about a system that is a work in progress. It is a self-report measure that uses small sculpted objects (see Fig. 2). Instead of offering verbal descriptions of how they are feeling, users indicate how they feel with the objects as they engage with the system prototype. This allows designers to gain rich, nuanced feedback that has not been forced into pre-conceived categories of response (e.g. ‘happy’ or ‘sad’). Each user can create their own taxonomy of meaning and strategies for conveying emotion by arraying the objects, gesturing with them, stacking them, and the like. SEI sessions should be videotaped so that the feedback can be reviewed in more detail. It is also best to engage participants in a post-use discussion to elicit verbal feedback on how they used the tool, and their own descriptions of the personal taxonomies and use patterns that emerged for them.

Fig. 2 The sensual evaluation instrument objects

In initial testing of the SEI, users demonstrated a wide range of usage strategies (see Figs. 3, 4, and 5).

Fig. 3 A SEI session participant using multiple objects in an array that he kept close to his computer

Fig. 4 A SEI session participant who stacked two objects

Fig. 5 A SEI session participant who held the objects in his hand and gesticulated wildly with them

The SEI was designed to allow for flexible yet informative self-report of affect. The designers (see Isbister et al., 2006) worked closely with a sculptor, who crafted biomorphic shapes meant to evoke a range of affective states. Preliminary research in both the USA and Sweden suggests that there are consistent emergent dimensions along which users tend to array the objects (see Isbister et al., 2007); for example, spikier and sharper objects tend to be used to convey negative emotions. The SEI thus provides some common grounding dimensions for feedback, while allowing for rich variance in individual expression of affect through the establishment of individual taxonomies and use patterns.

The SEI has been used to evaluate three different interactive stories/games (Laaksolahti et al., 2009). The study aimed to identify the dramatic moments in the games and to find out whether people felt immersed. The SEI-based evaluation captured some important aspects of the emotional experiences of the interactive stories. As could be seen in the in-depth descriptions provided, participants could talk about their SEI objects and explain what emotions they portrayed in different situations. Through their purposefully ambiguous design, the SEI objects are open to interpretation, and in the study they proved ambiguous enough to accommodate a variety of emotions and shades of emotional experience. The strength of the SEI evaluation was that it could pinpoint emotional experiences and allow for many shades of emotion. The weakness was that it only gave hints about local emotional experiences – not about the dramatic development of the game as a whole. This is something that the repertory grid technique (see below) can give a better grip on.

3.5 Repertory Grid Technique

When dealing with reports of subjective experiences, a common problem is that either subjects are allowed to express themselves freely, which may yield large amounts of qualitative data that are difficult to structure and compare across subjects, or the evaluators set the boundaries for what can be expressed by asking a set of predefined questions, decided by the experimental leader, in a questionnaire, interview, or the like. In contrast, the basic idea behind the repertory grid technique is to elicit a set of personal constructs (or dimensions) from each participant, which are then used to evaluate the objects being studied.

The repertory grid technique (RGT) is based on Kelly's personal construct theory (Kelly, 1955). It is a tool that Kelly designed to gain access to a person's system of constructs by asking the person to compare and contrast ‘relevant examples’. Kelly originally used the tool for investigating interpersonal relationships, by having people classify a selection of persons who were important to them along a set of elicited constructs describing relationships. The method has since been used for many other purposes, including knowledge modelling and management, the construction of expert systems, and lately the capture of subjective, experiential aspects of a person's interaction with various forms of technology (Fällman and Waterworth, 2005; Laaksolahti, 2008). Fällman and Waterworth also provide a good introduction to the method's underpinnings and its use in the context of evaluating artefacts.

Constructs are elicited by comparing elements with each other in various ways and extracting their similarities and dissimilarities. Constructs are usually bi-polar, taking on values between two extremes; for instance, we can judge people along dimensions such as tall–short or light–heavy. Typically, the subject is presented with three objects and has to say which pair of objects is similar and which object is the outlier. The quality employed to separate the three objects is then used as a scale along which all the objects are assessed. This process continues until the subject cannot identify any further discriminating qualities in their subjective experience of the objects. The procedure can be used to identify experiential qualities of objects such as cars or mobile telephones.
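The sketch below illustrates the triadic elicitation and the resulting grid in Python. The elements, the elicited construct poles, and the ratings are all invented for illustration; in a real study the constructs would be expressed in the participant's own words.

```python
from itertools import combinations

elements = ["phone A", "phone B", "phone C", "phone D"]

def elicit_constructs(elements):
    """Present triads and record the bi-polar construct each one yields."""
    constructs = []
    for triad in combinations(elements, 3):
        print(f"Triad: {triad}")
        outlier = input("Which object is the outlier? ")
        left = input("How do the other two resemble each other? (one pole) ")
        right = input(f"How does {outlier} differ? (opposite pole) ")
        constructs.append((left, right))
    return constructs

# (Interactive; call elicit_constructs(elements) to run the triads.)
# A made-up resulting grid: every element rated 1-5 between the poles.
grid = {
    ("clumsy", "elegant"):  {"phone A": 3, "phone B": 5, "phone C": 2, "phone D": 4},
    ("playful", "serious"): {"phone A": 1, "phone B": 4, "phone C": 5, "phone D": 2},
}
for (left, right), ratings in grid.items():
    print(f"{left} (1) <-> {right} (5): {ratings}")
```

In practice the elicitation stops once the participant runs out of new discriminating qualities, rather than after exhausting every possible triad.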

Laaksolahti (2008) used the RGT method alongside the SEI in the study described in the previous section. The aim was to assess how well users became immersed in the stories, whether they felt they could influence them (agency), and to what degree the stories allowed users to transform themselves into the role they were playing. Laaksolahti's evaluation also focused on the so-called dramatic arc of users' experience of the story. That is, did the story capture their interest and create mounting tension until the climax was reached and the story was completed? Or did the interactive narrative fail to produce a story-like experience? All of these are hard, elusive qualities to evaluate, and very few structured user study methods are able to address them.

Laaksolahti modified RGT in order to capture the dynamic experience of an interactive story/game. Instead of comparing games with one another, subjects compared snippets of video recordings from their play of a single game, i.e. different parts of their own experience. One subject expressed his experience of one of the games through the following constructs: boring–entertaining, unengaging–engaging, mundane in a negative sense–exciting, follow the story–explorative, and, finally, demanding–relaxed. By following how this subject graded different snippets of video of his play, it was possible to trace the dynamics of his dramatic experience (rising and falling), his sense of being involved or not in the game, and his experienced ability to influence the outcome of each scene.

RGT thus makes it possible to get at users' own subjective experiences of interactive systems as the interaction unfolds, expressed through their own concepts and words. Thereby it can provide vital feedback to the designer about which parts of the prototype system work and which parts need to be modified to support the intended kinds of experiences.

3.6 Critical Design Practice

Critical technical practice describes an approach to developing solutions to technical problems that involves taking a core premise on which a field is founded, reversing it, and then building a technology based on that reversed premise, which can contribute to the field in a novel and interesting way (Agre, 1997). Agre's key example is the notion of disembodiment that underlies classical artificial intelligence. By contrast, he proposes building fundamentally embodied agents; this notion is, e.g., at the heart of much of Rodney Brooks' early work at MIT's AI Lab (Brooks, 1986).

Critical technical practice also includes a level of reflective awareness of the discipline one is engaged in, including the field's sociological and cultural context, the philosophies it espouses at an unconscious level, and its key metaphors and analogies. Several designers of interactive systems have used critical technical practice as a tool to generate innovative and critically relevant systems (Sengers, 1999). For example, Simon Penny's notion of ‘reflexive engineering’ integrates robotics with an artist's sense of design and play. His robot Petit Mal is chaotic, whimsical, and clumsy: un-robot-like conduct that prompts the audience to generate theories about the origin of this unusual behaviour, encouraging the public to become aware of, and to reflect on, their own notions of agency (Penny, 1997). Similarly, Gaver and colleagues (2003) propose inverting HCI's traditional goals of ‘usefulness and usability’, exploring instead the possibility of designing for rich experiences with the potential to be intriguing, mysterious, and delightful.

Critical technical practice does not advocate the replacement of a field with one founded upon its inverse; rather, it proposes that such conceptual changes can bring insight into, awareness to, and novel contributions to a discipline. When we approach affective interaction, it may be very useful and important to use a critical design practice perspective, as it is far too easy to fall into various pitfalls where we assume that we have a good grip on emotions and emotional interaction. Contributions from a critical perspective on affective computing are, e.g., included in the chapter on the interactional approach to affective interaction.

4 Challenges Related to Affective Interaction

As can be understood from the methods we picked for this chapter, the design and prototyping phase of a project involving affective interaction and experiences has to meet a range of challenges. If we pick a method that allows for a laboratory environment, we face the challenge of making it realistic enough that our subjects get into the mood, emotion, or situated experience that we aim for. If, on the other hand, some of the prototyping involves users ‘out in the wild’, we will have problems with how the researcher/designer can study the situation, as most such settings do not allow the experimental leader to follow their subjects around in their daily lives.

On a general level, we might also feel troubled by the fact that people differ: we all have our own personal expressions and unique experiences. Most of the methods described in this chapter do not even attempt to generalise to a larger user group. Unless we are very careful in choosing end-user groups and representatives of those groups, we run the risk of getting feedback that is relevant for only a few of the participants.