1 Introduction

When conducting interviews face-to-face, an interviewer and respondent interact closely and use various combinations of body postures, gestures and facial expressions to enhance their exchange [1]. Body language, facial expressions and hesitations in speech can help the interviewer to understand how certain the respondent is about what he/she is saying and how comfortable, relaxed or tense someone is during the interview. The interviewer may, by his/her mere presence, lead the interviewee to answer in ways that the interviewee feels obliged to. Conscious or unconscious body signals of the interviewer can also affect the respondent [1].

Telephone interviews are not used so much in qualitative interviewing, but more often for survey research; advantages of telephone interviews are the cost savings and the ability to reach a large population and “hard-to-reach groups” [1] (p. 489). It is also easy to supervise, and it reduces biased answers from the respondents as they are less affected by the interviewer [1]. What interviews, whether face-to-face or via telephone, lack when it comes to supporting the development of interactive systems is interaction with an interface.

We are developing a type of interview-based collaboration technique which we refer to as the GUI interaction interview (GUI-ii). Using half-made prototypes which the respondent can access via a web browser, we ask for help filling in empty parts, but we also probe plain usability issues and encourage immediate remediation by prompts such as “let’s try to change this in the way you would like it”. We contrast this with UI discussions based on high-fidelity color prints or screen displays. Such stimuli are known to provoke very narrow comments on specific details [2].

In GUI-ii we use a “wizard” as in the “Wizard of Oz” (WOz) methodology to apply a Participatory Design (PD) approach. PD implies that representative actors directly affected by the system should be part of the development [3]. WOz is normally used for usability testing of interactive systems without the need for programming, and it can also be used for explorative interaction design (e.g. [4]). Our research group developed a system, Ozlab, that supports WOz experimentation and allows wizards to control GUI responses in interaction with a test participant [5]. The most interesting question is how to make explorative tests in a GUI: wizards have to be able to articulate themselves in a rather artificial medium. Paper prototyping has long been advocated by user-centered designers, as “a running prototype couldn’t be changed immediately” [6] (p. 373). However, in between the programmed prototypes and the paper mockups there is a toolbox space where Ozlab fits in. By definition, the WOz method circumvents the need for programming to demonstrate interactivity. Instead, the crucial factor is to have a WOz tool that allows, without much overhead work, for changes in the GUI as well as in the interaction. At least to some extent Ozlab fulfills these requirements.

Ozlab can also be used in participatory design sessions over a distance. The purpose of the present paper is to discuss how this can be done, traversing the “Map of Design Research” by Elizabeth Sanders [7, 8] and the “making, telling, enacting” sequence of activity [9] (“tell” becomes “say” in [10]).

The structure of the chapter is as follows: Sect. 2 explains the ideas behind the interactive prototyping system we used for our GUI-ii sessions and the GUI-ii method itself, while Sect. 3 presents examples of applications of the method. The stage is thereby set for discussing GUI-ii aspects in relation to frameworks for Participatory Design. Section 4 thus highlights and illuminates these aspects in relation to the cardinal directions of Sanders’ map and some of the populated locations in it. This discussion is rounded off by explaining some tensions connected to the use of a testing tool for co-design purposes, and by noting the tension between designing the interactivity in the artefact itself or in a user’s interaction with it. Section 5 concludes the paper by summarizing the main points made in the preceding sections.

2 Supporting Development of Interactive Systems in Interactive Sessions

Under the heading “Mimesis and interaction”, Brenda Laurel once wrote “The most important distinction between a play and an interface is that an interface is interactive, while a play is not. […] An interface […] is literally co-created by its human user every time it is used.” [11] (p. 73). Obviously, being a hidden actor behind the GUI output, the test manager in a Wizard-of-Oz experiment stages an interactive play, where things are enacted on the display (and sometimes also audible in the air) by both the test user and the manager.

Several authors writing on WOz experiments have mentioned the possibility of using the method not only for strict testing of an unimplemented but meticulously specified interaction idea, but also for exploring what output would facilitate a user’s understanding of the user interface under development [12]. “As opposed to assuming a certain dialogue flow, WOz experiments can be used to explore the dialogue space in more detail.” [13] (p. 44).

Kelley [14] coined the “OZ paradigm” at the beginning of the 1980s when simulating language processing components for IBM in two ways. In the first run, “no language processing components were in place. The experimenter simulated the system in toto.” In the second run, “Fifteen participants used the program, and the experimenter intervened as necessary to keep the dialog flowing. As this step progressed, and as the dictionaries and functions were augmented, the experimenter was phased out of the communications loop” [4] (p. 28), [14] (p. 193). The second run yielded fewer and fewer new words for each new participant, and it was succeeded by a validation step where a further six participants tested the resulting program to see how it performed. Kelley’s use reveals how the Wizard-of-Oz technique can be applied to develop an interaction design in interaction, in contrast to merely testing the interaction design.

The possibility to influence the interface elements during explorative interaction sessions is of course dependent on the support for such things provided by the experimental setup. Our system Ozlab is geared to aid in GUI interaction, both for ordinary small-scale usability testing and for explorative sessions. From the very beginning in 2001, we included user groups as designers and testers [15]. During the first few years, it also became apparent that such a tool was quite useful during team-internal demonstrations: the plasticity of the interaction design made it easy to immediately see the implication of different suggestions. After 15 years of various uses, we have now not only toppled the developer and user roles and mixed face-to-face team discussions with interactive GUI expressions, but also started extending the use to GUI-based co-design discussions at a distance.

In 2011, our Wizard-of-Oz prototyping tool Ozlab went from being based on a multimedia production tool called Director to being web-based. While the web environment presents several difficulties [15], it also provides new possibilities for remote interaction. The system runs in the web browser, which means that a person participating in, for example, a test session does not have to install any software on her computer. Instead, she can access the prototype via a URL provided by the designer. Ozlab allows the creation of mockup prototypes from pure graphics with the aid of predefined behaviors which are then accessible during testing, for instance to make objects visible and invisible, to accept text entry, or to accept drag-and-drop actions. Such features make a collection of static graphics appear interactive when a wizard interprets a participant’s GUI actions and then responds with further GUI events. Before going into the use of this system in GUI-ii, we will expound a little on what it means to do prototyping.
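First, though, to make the idea of predefined behaviors on pure graphics more concrete, the following minimal TypeScript sketch shows how such behaviors, and the wizard-issued GUI events that animate them, could be represented. The types and function names are our own illustrative assumptions, not Ozlab’s actual data model or API.

```typescript
// Purely illustrative sketch of giving static graphics predefined behaviours
// in a web-based WOz mockup. Names and structure are assumptions, not Ozlab's API.

type BehaviourKind = "toggleVisibility" | "acceptTextEntry" | "acceptDragDrop";

interface MockupObject {
  id: string;                          // identifier of the static graphic
  imageUrl: string;                    // the pure graphic the designer uploaded
  behaviours: BehaviourKind[];         // what the object can do during a session
  visible: boolean;
  text?: string;                       // text entered by participant or wizard
  position: { x: number; y: number };
}

// Commands issued by the wizard in response to the participant's GUI actions;
// applying them is what makes the static graphics appear interactive.
interface WizardCommand {
  objectId: string;
  action: "show" | "hide" | "setText" | "moveTo";
  text?: string;
  position?: { x: number; y: number };
}

function applyCommand(scene: Map<string, MockupObject>, cmd: WizardCommand): void {
  const obj = scene.get(cmd.objectId);
  if (!obj) return;
  switch (cmd.action) {
    case "show":
      obj.visible = true;
      break;
    case "hide":
      obj.visible = false;
      break;
    case "setText":
      if (cmd.text !== undefined) obj.text = cmd.text;   // e.g. a faked "system response"
      break;
    case "moveTo":
      if (cmd.position) obj.position = cmd.position;     // e.g. completing a drag-and-drop
      break;
  }
}
```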

2.1 Prototypes for Interactive Systems

As we have extended WOz to include prospective users (these users have acted as, for example, designers and testers, co-designers, and expert reviewers), it is apt to compare it with the three approaches to “making” that Sanders and Stappers identify in a special issue of the journal CoDesign [8]. They mention probes, toolkits and prototypes. In early design phases, probes can be thrown in by designers to see how stakeholder representatives react: a kind of stimulus for the imagination. When the design work takes on a more directed format, scenarios and storyboards are typically used to visualize the ideas. A participatory-minded designer can serve co-designers with toolkits that allow them to participate in the making of the visualizations. For the third stage, they refer to Stappers’ [16] list of roles that prototypes can play in research through design:

  • Prototypes evoke a focused discussion in a team, because the phenomenon is ‘on the table’.

  • Prototypes allow testing of a hypothesis.

  • Prototypes confront theories, because instantiating one typically forces those involved to consider several overlapping perspectives/theories/frames.

  • Prototypes confront the world, because the theory is not hidden in abstraction.

  • A prototype can change the world, because in interventions it allows people to experience a situation that did not exist before. [8] (p. 6)

We agree very much with this view, but have noted that programmed prototypes tend to lock in imagination rather early, as noted also by others, such as in an HCI textbook [17] where Bill Verplank is interviewed, saying, inter alia: “There is a big push towards prototyping tools that will lead very directly to the product. Almost every computer-based development that I’ve been part of suffered from a lack of consideration of alternatives” (p. 467).

Now, when extending WOz to include users as designers and testers (rather than only as test participants), and further, mixing demo and co-design with “testing” into the GUI-ii, we note this effect both in using designs as probes (whether the designs stem from designer or co-designer in a preceding workshop or GUI-ii session) and in using designs as prototypes (in more or less all the ways enumerated by Stappers).

2.2 Overcoming Geographic Distance

To overcome geographic distance between users, designers and developers, the designers can send mockups digitally to the user representatives and the developers to receive comments. However, what is missing in sending a fully or partly interactive prototype or a sketch to someone else is the possibility to explore new and unforeseen interactions. It is furthermore difficult to clearly communicate which part of the GUI one is talking about, whether the functionality offered or the interaction design. Humans are good at interaction, but not at envisioning it in advance. If a designer or co-designer has the courage to meet various prospective users and other stakeholders through the interface, the Wizard-of-Oz method can be surprisingly productive. WOz can be used in numerous ways to get ideas for developing the interaction design and later to refine ideas or select among ideas, and by the very approach much of this stems from real-use experience and not only from discussions in the team. For the present purpose, we must ask how much of this can be done at a distance.

When it comes to ordinary usability testing, Schade [18] argues, in an article on the website of the Nielsen Norman Group, that a physically present facilitator can more easily time follow-up questions and read the participants’ body language. However, if resources are scarce or timeframes are tight, remotely moderated usability tests can be a good alternative, especially if the users are “geographically dispersed”, Schade points out. This calls for attention to another divide: remote user tests can be unmoderated or moderated. Because the facilitator and the participant do not have to schedule a session, unmoderated remotely conducted user tests can be very time efficient (ibid.). Of course, the system to be tested has to be programmed as it has to run by itself, and tasks must be easily understood by the test users. Often, completely unmoderated user tests call for too much planning and stress testing of the prototype to be really feasible in iterative design processes where much information on user reactions is wanted almost instantaneously in order to re-design the prototypes. The close encounter between the moderator and the participants is an essential part of participatory design. As will be brought up in the following section, GUI-ii is a close encounter at a distance, and it is more co-design oriented than moderated remote testing. At the same time, it should be noted that the three projects mentioned in Sect. 3 had already included various sorts of stakeholder discussions and workshops before the GUI-ii sessions.

2.3 A Snapshot of the GUI-ii Workbench

The interviewer, or designer, or Wizard of Oz, in Fig. 1 tacitly controls certain non-implemented functionality of a user interface, which the interlocutor acts upon and also changes, but the participants’ direct actions are limited by what the designer has made changeable in the mockup. Sometimes we stop a session and make changes according to expressed suggestions from participants—the WOz method for enacting interfaces allows for quick implementation of rather drastic changes. The paper sheet before the wizard in Fig. 1 is the wizard’s interaction script—it can easily be changed if the participant calls for this.

Fig. 1. A wizard in front of the prototyping system used during one GUI-ii session

The laptop to the left in Fig. 1 shows a copy of what the participant sees. Screen recording, including sound, is made there; the round black object is a loudspeaker microphone. Sometimes we also record the wizard’s screen, but that is mainly for evaluating our WOz system. Schade, in her above-mentioned discussion of remote moderated usability tests, suggests that the facilitator and the participant communicate via telephone, email, chat, or a combination of these. For GUI-ii, where a wizard is always present to run the interaction (designer-purported or participant-suggested interaction), email and chat are used only by participants, e.g. to send documents or links to us. Obviously, the exact arrangements are rather unimportant so long as unconstrained expression is possible in a normal channel (voice) and in the GUI (essential for the collaboration exercise).

3 Examples of GUI-ii

For participatory design, Brandt et al. say, “new application of existing tools and techniques is an area ripe for design and research discovery. It is especially important that the exploration of and reflection on the use of the new tools and techniques be situated at all the phases of the design and development process. It is also important that the results of these explorations be published” [19] (p. 176). However, the present presentation does not aim to give a precise account of the explorations made, but rather to reflect on how to understand GUI-ii along the dimensions presented by Sanders in her design research map. Nevertheless, a short description of actual GUI-ii practice is in order to explain how it works. In this paper, we profit especially from the experiences of using GUI-ii in three projects with international partners, as described below.

3.1 Project A: GUI-ii Traits in Walkthrough at a Distance

As an international undertaking, project A yielded important insights about using web-based WOz during remote sessions in addition to some face-to-face sessions. This makes some comparative analysis between remote and in-person sessions feasible. Two of the remote test participants were located in other towns in our country (Sweden) and one in Germany. Think-aloud was used during the tasks, and after the tasks a discussion around the GUI pages took place. This has some traits of GUI-ii. Our findings include:

  i. Usability testing using WOz at a distance can be compared to traditional WOz testing.

  ii. The necessarily slow response from the wizard may prompt some people to click repeatedly (if they do not get a quick response, their instant feeling of interactivity appears to wane).

  iii. A high-quality Internet connection is essential to reduce lag.

  iv. User testing at the mockup stage increased the participants’ input in the design work.

Of course, this was more of a traditional user test (a demo based on a prototype and tasks to solve) followed by a “post-test” discussion of the demonstrated interaction design. But this encouraged us to make more extensive use of what one might call the “participatory potentials” of the GUI dialogues, as shown in the following project.

3.2 Project B: Observing Both Ordinary Interviews and GUI-ii Employment

In project B, participatory design is complicated not only by distance but also by two legal systems and two languages: Norwegian and Swedish. The project includes workshops, interviews, and the development of a cross-organization, cross-border web tool for collaboration. The very aim of the project is thus a tool for Computer-Supported Cooperative Work (CSCW [20]), which is why using the Ozlab tool for GUI-ii is quite congenial to the project goal. However, GUI-ii is only one participatory technique among several others used within (and before) project B. Below, we will expound on the observation of face-to-face and GUI-ii interviews held in spring and autumn 2016.

Two methods were used to gather the relevant data for evaluating GUI-ii: observations of face-to-face interviews and recording of GUI-ii over the Internet (with screen and voice recording). An occasional face-to-face GUI-ii (with recording) gave the interviewer more visual input from the interviewed co-designer, but no extra notes needed to be made compared to the notes we normally make. Ten GUI-ii sessions were held in the spring and seven in the autumn. The interviewees were asked to suggest contents in addition to what had been jointly defined in workshops, or to comment on existing content including interaction design. Some texts in the GUI were authored by the participants. There were several levels of authoring, from open text spaces for side comments, via instruction texts and specific labels for buttons, drop-down menus, and text fields, to the text fields themselves that our participants could fill in when we walked through the mockup before or after their changes (that is, they acted as “users” of the future CSCW system). Also for drag-and-drop there were actions both in the design phase and in the “user” phase.

Interviewer’s (Wizard’s) Behavior

Thanks to the shared interaction space of the mockup, the wizard could explain the functionality by highlighting changes in the interface by various means, such as displaying a colored emphasis on a list of items. Example: “if you were to click this checkbox [wizard demonstrates by ticking the checkbox], the content matching the selection would be made visible like this [displays colored emphasis].”
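As a hedged illustration of how such a wizard-triggered emphasis could be relayed to the participant’s browser, the sketch below assumes a WebSocket channel, an invented message format, and made-up element identifiers; it is not a description of Ozlab’s implementation.

```typescript
// Illustrative sketch: relaying a wizard action ("display a coloured emphasis on
// a list of items") to the participant's browser. Channel and message format are
// assumptions for illustration only.

interface HighlightMessage {
  type: "highlight";
  targetId: string;   // e.g. the list matching the ticked checkbox
  color: string;
}

// Wizard side: send the emphasis when demonstrating what the
// (not yet implemented) checkbox would do.
function demonstrateFilter(socket: WebSocket, listId: string): void {
  const msg: HighlightMessage = { type: "highlight", targetId: listId, color: "#ffe08a" };
  socket.send(JSON.stringify(msg));
}

// Participant side: apply the emphasis so the selection "becomes visible like this".
function onHighlight(event: MessageEvent): void {
  const msg = JSON.parse(event.data) as HighlightMessage;
  if (msg.type === "highlight") {
    const el = document.getElementById(msg.targetId);
    if (el) el.style.backgroundColor = msg.color;
  }
}

// e.g. participantSocket.addEventListener("message", onHighlight);
```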

The two main interviewers were quite different in style:

  • In the spring series of 10 GUI-iis, when the mockup was rather empty, the wizard often waited for the interviewee to find and click on continue buttons; if the participant asked what to do, he prompted the participant to say what he/she thought was important to do next, or suggested clicking on the expected button.

  • In the autumn series of 7 GUI-iis, when the mockup was rather full and contained some alternatives, the wizard sometimes felt time pressure to keep within the agreed time (1 h) and had a tendency to click through to other screens in order to be able to demonstrate and redesign everything.

The second interviewer had a tendency to use the mouse pointer to encircle the object she spoke about, but the Ozlab system did not show the wizard’s mouse pointer, as the wizard in WOz tests is meant to be a secret hand behind the purported system’s actions. Demo pointing had to be done by drag-and-drop icons to be visible to the co-creating participant. Even if this wizard behavior did not really disrupt the discussions, we had Ozlab enhanced in 2017 with a switch to make the wizard’s pointer visible to the participant.

Observations of Face-to-Face Interviews (Non-GUI-ii Sessions)

In the face-to-face interviews, the respondents used body language and gestures to emphasize their arguments. For example, one respondent said “Everyone uses their smartphones” while picking up his phone to show it, and continued to discuss specific tools and apps he uses while tapping his fingers on the phone. Another respondent used his fingers to count the platforms they had in his organization. After mentioning all the platforms, the respondent looked up at the interviewer as if asking “Is the answer enough?” Receiving no spoken feedback, the respondent elaborated the answer by describing the use of the different platforms.

Participant’s Behavior in Dialogue in GUI-ii

  (a) Participants differ in graphical preferences: When asked how they would like to arrange some crisis exercise activities into a logical workflow (by drag-and-drop), respondents had different preferences about how a flow is graphically organized. One of the respondents organized the workflow top-down, another bottom-up (inverted chronological order), a third from right to left (inverted chronological order), and a fourth organized the workflow diagonally. The prototype contained a label “Place the elements in the area below” and this was also prompted by the interviewer; obviously, few respondents took the word “below” to mean “vertically top-down”.

  (b) Language barriers can be overcome: Another important consideration is the terminology and language used. Terms used and taken for granted by some respondents can be difficult for others to understand. Because this project includes Swedish and Norwegian project partners, there is potential for even greater language confusion. Using our GUI-ii technique, these problems became very evident, and confusion was reduced by negotiating alternative terms and concepts.

  (c) Real use and real data reveal new problems: Likewise, asking the participants to actually use the (mocked-up) planning system, such as filling out the form with actual data, made it clear whenever the participants struggled to fill out some information in the form. Furthermore, and perhaps even more importantly, the participants themselves became aware of where the form asked for redundant information or where the labels needed clarification. As this content had been discussed before in workshops, we dare argue that the participants would not, at least not as easily, have reached this insight by just looking at the form instead of interacting with it (just as Beyer and Holtzblatt [6] argue; cf. p. 375).

  (d) Designer’s ideas can be demoed and replaced: One idea was that each user of the finished system would themselves decide the categories for objects he/she creates (collaborative training material and small educational snippets). During the sessions, however, the interviewees made clear that such a solution would probably result in too many categories. Instead, by checking their files and folders, they filled the boxes with categories that best matched how they sort and search for the content today. Similarly in other cases: for instance, even though we provided icons, the drag-and-drop icons could be discussed in several respects, such as graphic design, symbol meaning, the meaning of positioning an icon, and the number of each icon type.

  (e) Sessions with developers can reveal misconceptions: One of the GUI-ii sessions was held with one of the developers of the CSCW program. One scene in the mockup discussed during the session showed an overview of a crisis training plan in the form of a horizontal timeline. Even though the designers had already gathered information from the stakeholders, the developer argued that the horizontal presentation would be a problem not only programming-wise but also usability-wise. He held a firm belief that horizontal scrolling would be necessary if items were not simply written in a vertical list, not considering that planners draw their timelines on a screen and have no more reason to go outside the screen than they would on a piece of paper.

Face-to-Face GUI-ii

We found that firewalls sometimes block an easy use of our WOz system via the web. One participant solved this by participating from home.

For another interview session, when the firewall settings caused a problem on the participant’s side, the interviewer brought two laptops so that participant and wizard both could connect to a wifi network present in the building, and thereby connect to the Internet and Ozlab. (As a backup, an interviewer can use a laptop with the Ozlab system installed, which can be reached through the laptop’s shared hotspot.) For data collection, screen and audio recording software was used on the participant’s side. In this single face-to-face GUI-ii session, the respondent was more inclined to comment on “smaller” issues, like incorrect use of tenses, than other respondents had been. This raises the question: did the face-to-face aspects of the interview make the respondent feel more confident in commenting on details?

3.3 Project C: GUI-ii

Finally, as an example of a more casual GUI-ii employment, we can take a series of contacts via GoToMeeting with an Italian project partner in yet another international project, here “C”. While some project demonstrators needed extensive design, one application area was mainly for back-end IT staff and no user testing was needed. One person delivered scanned sketches and a list of functions to us. We mocked up the interaction flow, including some alternatives, without regard to graphic design. In a couple of GUI-ii-based telcos these were walked through and several minor additional requirements popped up. However, the project partner now expressed an eagerness to see how the prototype would look in a review; interest was clearly in hi-fi graphics, while we knew that the parallel demonstrators in this project were not at this stage at all yet. Some months later we again had to bring up issues of exact functions: GUI-ii went fine, but our partners would have liked to get the whole set of screens. This is not always quickly produced in a GUI WOz tool, as everything is not “in place” in the mockup but enacted (made visible or invisible) during an interaction session (whether a GUI-ii interview, a GUI-ii group discussion, or simply a test at a distance).

3.4 Summary of Lessons Learned from the Three Sources of GUI-ii Experiences

Our observations show that on-screen objects in a GUI-ii constitute a resource for respondents, just as things and fingers do in face-to-face interviews, and that the lack of visual cues from the interviewer’s physical presence can promote more elaborate responses, even if different interviewers will have different styles.

Applying GUI-ii at a distance facilitates co-creation of the graphical user interface and the functionality available. Although it might be argued that GUI-ii is a GUI walk-through [21], we suggest it is not. We are more flexible during the interview; both the interviewer and the respondent can make changes in the mockup during the interview session to try a new design idea. Notably, even the walk-through parts of a GUI-ii session may depend on earlier design parts of the ongoing session, which makes the whole process more of a collaboration exercise than a traditional walk-through. Obviously, a walk-through approach, and not only a GUI-ii approach, can utilize input from previous sessions to prepare the next session. However, a growing feeling of ownership is easier to create when one lets the participant walk through what is partly his or her own creation. For instance, in the example above about organizing the workflow, the participants later had to click the activities (labelled buttons) in order to access the corresponding pages (where more co-design activities followed).

We do not use a CSCW system with full-mode interactivity (e.g. GoToMeeting, Skype), because our system is better at handling possibilities and parallel design: the wizard has controls to manage screen content, and there are interaction widgets present which make it easier to demonstrate checkboxes, drop-down menus, and other standard GUI objects. Also, it is easier to enter a use-mode (that is, using the design rather than designing it) without necessarily making the interlocutor think of it, and thereby think of it as a test of his/her suggestions. Rather, Ozlab’s wizard controls make it possible for the interviewer simply to join the interaction if the interviewee tends to act on the objects, which many people do when they have GUI-like images on the smartphone display or computer screen. Had the co-design interview been executed via an ordinary teleconferencing system, the interviewer would have had to announce that the interplay now shifts modes, as the interviewee would have to play against the interviewer in the latter’s overt role as some form of interactivity crutch.

Nevertheless, just as when using teleconferencing systems, the limits of the technology used will sometimes be all too apparent. Checking the equipment, checking firewalls, and having a relaxed attitude to failures will help, just as in simpler forms of interviews or co-design sessions.

4 Framing GUI-ii in the Map of Design Research

Sanders et al. [22] present a short sketch of “A framework for organizing the tools and techniques of participatory design” with an aim to provide “an overview of participatory design tools and techniques for engaging non-designers in specific participatory design activities.” The framework is more elaborate than the map presented by Sanders [7] and further discussed in Sanders and Stappers [8, 10]. On the other hand, the map allows more for discussion than for classification and therefore, we feel, better suits the presentation of a new PD tool/technique/method such as GUI-ii. Moreover, when going through the tables of Sanders et al. [22], we find that the authors have not ticked table entries in the “on-line” column for any of the techniques listed under “Acting, Enacting and Playing”. This makes us wonder whether GUI use is excluded from the playing they have in mind. This does not fit with our notion of interaction between people and within groups: computer displays are so prevalent nowadays that trying to hide them from a collaborative session is unnecessary even if one strives to provide a bias-free environment for the discussion. And if the GUI is let in, then it is a short step to on-line acting, enacting, and playing. We argue GUI-ii falls under, for example, the category “Participatory envisioning and enactment by setting users in future situations”. But rather than discussing the tables of Sanders et al. [22], we take this remark as a starting point for characterizing GUI-ii techniques in relation to the Map of Design Research [7, 8].

Sanders [7] presents a map of design practice and design research, with four cardinal directions. Horizontally, one dimension spans from “Expert Mindset” to “Participatory Mindset”, and vertically a second dimension goes from “Design-Led” to “Research-Led” (cf. Figs. 2 and 3). “The research-led perspective has the longest history and has been driven by applied psychologists, anthropologists, sociologists, and engineers.” (p. 13). While the research-led methods have had a gradual extension to the right in the diagram, the “Scandinavian” school took a more decisive step in this direction. Methods developed by practitioners in their design work may have a very participatory mindset, but there has also been the opposite mindset even among practitioners, stressing the designer’s special eye for providing critical design questions rather than merely solving design issues [23].

Fig. 2. WOz in the map of design research. Adapted from Sanders [7]

Fig. 3. GUI-ii in the map of design research: extending to the right? Adapted from Sanders [7]

In order to frame GUI-ii in the Map of Design Research, we start by noting that WOz methods take various forms. Explorative WOz is surely more designer oriented; it is about finding good designs, not about establishing a hypothesis of human-computer interaction. Sometimes the interaction of a WOz mockup is formed when an interdisciplinary team discusses around it, but as the participants in many other situations may not be aware of the faked interaction, it is hard to ascribe explorative WOz exclusively to the right-hand side of the diagram in Fig. 2. Thus, we let the rectangle “Explorative WOz” stretch from the design expert half into the half representing the participatory mindset.

Our own experimentation with users constructing and testing their designs is more firmly rooted in the participatory mindset, and the Ozlab system is quite important to allow refinement cycles and not only fortuitous WOz setups.

Many WOz studies have been within NLP (Natural-Language Processing) and are often oriented quite far to the left in the Map, sometimes led by companies (design-led [4, 24]) rather than conducted primarily to develop corpora of HCI dialogues for a general research and development community [13, 25]. Thus, “WOz in NLP” is put in two different locations in the map.

Our own evaluation in project A of GUIs at a distance finds its place within Sanders’ Usability Testing circle, except that in project A the renegotiated GUI aspects are brought out and tried within the same session, an act reaffirming the participant as co-designer.

Sanders often refers to the “making, telling, enacting” sequence of activities in design (not necessarily in that order). We think it should be observed that these activities are not automatically allocated to different sessions. Our GUI-ii sessions often combine four steps: telling (around material at hand), making, telling, enacting. In order to understand what is “made” during making in a GUI-ii session, it is worthwhile to note the shift in the two suites of GUI-ii sessions in the B project: in GUI-ii with pre-prepared mockups, the participants are really interaction designers but much less graphic designers. We suppose this fact might easily go unnoticed by design theoreticians who put a heavy weight on the probes. That the interaction in itself is a co-creation can easily be missed if concrete objects and a lot of storytelling are emphasized. To Sanders’ “thoughts on the curriculum for design” [9], where she writes “We will need to learn from storytellers, performers and sellers” (p. 71), one might add “and from psychiatrists”, as listening (observing) is also very important.

It would seem that the thing to do now would be to push GUI-ii as a method further towards the participatory mindset edge, as we are trying to get away from the non-designing and weakly co-creative nature of ordinary interviews. On the other hand, when looking at the complete development cycle, co-creation activities have already taken place, or, at least, other stakeholders than the designers have set the functional goal for the system. When GUI-ii is brought in, it is not only to elicit a host of design suggestions and uncover implicit requirements, but also to refine the ideas for interaction (including graphic) design. The temporal sequence within projects A, B and C runs from a high participatory mindset to a more system expert mindset. The interactive prototypes later piloted are definitively further to the left when the cost of implementation has been given a greater weight.

Therefore, the picture of GUI-ii employment within a project will show a leftward drift in the Map of Design Research as indicated in Fig. 3, possibly ending in ordinary usability testing, whether based on WOz or programmed interactivity. This means that the initial employment is very participatory-minded: rather than the process starting further to the left, participants can initially be co-creators of the interaction design. Our previous studies show that good support for GUI wizardry can facilitate explorative interaction design and evaluation. This good support is exploited in GUI-ii, even if the interview format at a distance gives a rather clear division of the roles as (expert) designer and (content expert) co-designer.

Participatory design (PD) is normally applied in internal, organizational settings where the development team and the (future) users meet physically [26,27,28,29,30]. However, it is not uncommon that projects span outside of one organizational or even geographical setting because “individuals, stakeholder groups and other entities can be distributed physically, organizationally or temporally” as Gumm et al. explain [31].

PD in itself is not always entirely unproblematic, as teams may face communication problems between, for example, the developers, designers and users. When it comes to Distributed Participatory Design [31], however, it has been shown that other problems may occur due to the distribution of the team members and users. We will not go into these problems here, but the face-to-face GUI-ii instance naturally raises the question whether it is necessary to keep GUI-ii strictly to remote discussions, or whether it should also be used for what might mistakenly be taken as a prototype walkthrough. Right now, it is most useful for us to use the term GUI-ii for remote co-creation interviews where there is a strong emphasis on actual interaction with the discussed interaction design. For many years, we have used WOz in face-to-face team discussions. Now we need to explore the space for remote GUI interactions with more or less single participants, as this facilitates scheduling and makes every voice heard distinctly. For the latter aspect, confer for example Trischler and co-workers: “in teams where individuals dominate, […] less collaboration and diminished innovation outcomes are more likely” [32]. Naturally, this can be mitigated by team building processes. Nevertheless, individual suggestions can better be recorded and user-tested if individuals participate individually after the representatives of the different organizations (especially as in project B) have reached a consensus on a project idea and sketched use scenarios in workshops.

Distance is also important for another reason. Distance means that participants can be in their normal environment, the one they will be in when using the projected system. The correct context of use is important when developing a new system (ISO 9241-210:2010), and it is not surprising that Sanders included Contextual Inquiry in her map, the field interviewing method by Holtzblatt (further developed into Contextual Design [6]), where, instead of the customer having to explain her work to a designer, the designer goes to the customer’s workplace to observe, discuss and gather “data about the structure of work practice” and to “make unarticulated knowledge about work explicit, so designers who do not do the work can understand it” [6] (p. 37). In Sects. 2.3 and 3.2 it was mentioned that participants in GUI-ii sessions refer to material they have on their computers or sometimes grab a physical folder to check things.

At times, a neutral ground is sought so that developers and stakeholders are on an equal footing when the discussions start. Against a PD placement of GUI-ii in the Map of Design Research it can be argued that if the designer prepares the playground (the WOz mockup), both the place and the things in it are biased. However, preliminary workshops outside ordinary workplaces, and also at different stakeholders’ sites, can establish the things (labels, structures, illustrations) to be used. This makes the GUI-ii mockups a shared ground (not a neutral one).

Our impression is furthermore that the fact that co-designers can drag-and-drop things, or re-write labels and other things, makes it obvious that they own the things and the space. Of course, a programmer will tend to think in terms of ease of implementation and general solutions, as the example from B demonstrates. This risks over-writing what co-designers propose. In the reported example, the developer met the designers in the GUI-ii, not the co-designers, which suggests that multi-party interviews may be needed (in fact, the reported case took place at three sites; one designer had only viewing rights and could only argue by voice; our system has no restrictions on the number of viewers). However, such a use would approximate telcos and most participants would not really be in the GUI dialogue, hence we leave this option here.

Having mentioned the possibility for co-designers to directly re-design certain features of a GUI under discussion naturally begs the question about the effect of other features, namely features which have to be negotiated with the interviewer before any change can be made: what is the effect on ownership and co-creation in such cases? The answer depends on the tool used, and in particular on the specific version of it. We will avoid turning the present discussion into a technical manual, even if this kind of question directly relates to the method’s place in the Map of Design Research. It is worth noting here that Ozlab was built for testing, not co-design, but initially needed no adaptation to our actual use of the system in individual walkthroughs and team discussions. During a live session, not everything can be changed, especially not by the one logging in as participant. In addition, some features work against us in some GUI-ii situations: for instance, captures of text input can be reused within a running session in other scenes than where the texts were entered, but if the wizard stops a session this memory is lost. This has been a good safety precaution to ensure that fields are empty when a new participant (test subject) enters a test session. But for GUI-ii it is not so convenient because, as mentioned before, the interviewer can swiftly stop and change any aspect of a mockup. Obviously, the clearing of the text memory can in some sessions destroy valuable constructs (unless the interviewer takes time to re-enter them before opening the session again, which would in most cases not count as a swift change of the mockup). The system is planned to be extended with a permanent memory for text fields. Other features added after cases A, B, and C improve the wizard’s work in testing as well as in co-design sessions. However, it would demand an understanding of what GUI-dialogic (that is, ‘interactive’) interaction design entails in a host of details to really appreciate this mutual support for testing sessions and co-design sessions, which is why an exposé of wizardry widgets is not presented here.
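One small illustration may nevertheless be allowed here: a permanent text-field memory could, in principle, be as simple as persisting captured inputs outside the running session. The sketch below assumes browser localStorage and invented field identifiers; it is our own illustration, not the planned Ozlab extension itself.

```typescript
// Illustrative sketch of a permanent memory for captured text inputs, so that
// stopping and restarting a session does not clear participant-entered text.
// localStorage and the field identifiers are assumptions for illustration.

const STORAGE_KEY = "gui-ii-text-captures";

type TextCaptures = Record<string, string>; // fieldId -> entered text

function loadCaptures(): TextCaptures {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as TextCaptures) : {};
}

function saveCapture(fieldId: string, text: string): void {
  const captures = loadCaptures();
  captures[fieldId] = text;
  localStorage.setItem(STORAGE_KEY, JSON.stringify(captures));
}

// When the wizard restarts a session after a quick redesign, previously entered
// texts can be restored instead of being lost.
function restoreInto(fields: Map<string, HTMLInputElement>): void {
  const captures = loadCaptures();
  for (const [fieldId, text] of Object.entries(captures)) {
    const field = fields.get(fieldId);
    if (field) field.value = text;
  }
}
```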

Instead of such an exposé, we briefly touch on a making issue raised by Löwgren: “making”, which is programming in his discussion, “is required for explorative design of non-idiomatic interaction” [33] (p. 28). Ozlab will not support many non-idiomatic interaction formats, and obviously, programming is needed to make transformations of graphical objects. Furthermore, Löwgren considers “disposable programming as a major technique for hi-fi sketching” (ibid.); that is, just as for Sanders and others, making is not making the final system but making something to evaluate before continuing with the design efforts. Löwgren stresses the importance of immediate feedback to the designer. Such thoughts are also the basis for both explorative WOz and GUI-ii. For GUI-ii, the feedback is as much to the prospective future user, acting as co-designer, as to the designer. Notably, the immediate feedback, through “rapid-fire rounds of experimental coding”, that Löwgren talks about does not necessarily involve “real users” in the loop, but is simply a check for the design team whether the last tweaks improved the look-and-feel or not. Sketching interaction can thus be done in different ways. Even if limited in some dimensions, WOz and GUI-ii necessarily include clients of some kind. This is, in principle, a strength. However, it is also a problem for the adoption of these techniques, as many designers have a strong wish to see the interaction themselves rather than to try it out in co-action with prospective users. Then the user is left out of the loop.

5 Conclusion

GUI-ii is a technique that can be used in requirements analysis to deepen the understanding of what required functions really are meant to provide. The technique also facilitates the co-creation of GUIs, the probing of usability issues, and the pre-evaluation of possible extensions.

Interactions between one or several stakeholders and a designer or design team underlie many participatory design activities. However, interview techniques are seldom emphasized in the PD literature, which rather focuses on collaboration around objects and sketches. Utilizing communication technologies allows for less travelling and might allow more participation, but single-individual interviews make it even easier for people to participate, as less scheduling is needed, and should be considered for certain participatory design cycles. There is the further benefit of evaluating many design suggestions as one moves from one participant to the next. Even more, if given time as in a one-to-one interview, people in GUI interaction interviews become talkative (or at least interactive), engaged, and can utilize their usual accessories.

The technique itself is only as good as the designer/interviewer. The tool supporting the interviews and co-creation sessions has limitations but can also be developed further. In addition, when analyzing the multifaceted interaction between two GUI-ii interlocutors, we feel that some sort of notation should be developed for the interaction between Wizard – Wizard’s user interface – WOz tool – Participant’s user interface – Participant – Ambient resources. From the ISD 2016 conference there is one notable model [34] that might be adapted as a protocol for interview sessions and not only for designing a better WOz tool.