1 Introduction

Experience without theory is blind, but theory without experience is mere intellectual play. Immanuel Kant (1724–1804)

1.1 The Research-Practice Gap

This chapter is written with two communities in mind: the research community and the practice community. Because we are concerned with bridging the research-practice gap, we have attempted to present our research and outcomes in a manner in which both communities will find value and inspiration. It is our hope that this chapter takes the first steps toward a dialogue that brings the knowledge and expertise of both communities together to promote excellence in design and the study of design.

Design Thinking (DT) has become a widely used method to produce creative outcomes in different contexts, cultures, and disciplines. DT is applied worldwide in a variety of settings and formats, from industry to the social sector and education. As an innovation paradigm, Design Thinking is currently undergoing an exciting and critical transformation. Ad hoc content and practices, based on anecdote and experience, are increasingly being displaced by new content and practices grounded in empirical evidence and rigorous theory.

At least two forces drive this transformation, one originating in academia and the other in industry. From academia, the demand has been for Design Thinking to demonstrate the same rigor and theoretical depth required of other academic disciplines; from industry, the demand is for robust, reliable, and verifiable methods. In both cases, new research and its transfer to the community of practice are crucial to the ongoing growth and success of Design Thinking. Researchers must understand and accept design as a valid domain of scholarly inquiry; industry needs reliable methods that support evidence-informed decisions about dedicating resources to developing new products and services. In light of these factors, it is in the interest of both academia and industry to embrace one another and share knowledge and practice to create a flourishing and sustainable Design Thinking community.

Nonetheless, our experience as design practitioners, teachers, and researchers points to a gap in the Design Thinking community. At one end of the spectrum of popular understanding of research and practice, we have theorists who count words and gestures in the hope of finding meaningful patterns that indicate team performance. At the other end, we have practitioners who are often encouraged to make it up as they go along in the hope of creating a useful design intervention. Unfortunately, the impact of research outcomes on the practice of Design Thinking remains marginal. The development and application of new DT methods, tools, and frameworks often lack the foundation of rigorous research, and research insights seldom inform practice.

This state of affairs is not unique to the design community; indeed, it appears to be endemic to most fields. In the paper “How to Develop an Impactful Action Research Program: Insights and Lessons from a Case Study,” Lakiza and Deschamps (2019) state that “no matter how relevant the work of theorists is, practitioners often disregard it as too theoretical to be applicable in their precise situation.” Several other authors also describe the research-practice gap as “the failure of organizations and managers to base practices on best available evidence” (Rousseau 2006).

Our investigations suggest that the research-practice gap results from the confluence of three factors:

  1. The dominance of affective outcomes over skill-based and cognitive outcomes in Design Thinking training contexts (Taheri et al. 2016)

  2. Difficulties experienced by academics in translating findings into tangible solutions in DT education and industry (Edelman et al. 2012)

  3. The inability of Design Thinking practitioners and program managers to use research findings to improve their teams’ performance (Meinel and Leifer 2020)

In order to address these factors, and thereby create a bridge that links research and practice, the authors of this chapter have developed a collection of training materials based on research on high-performing teams. This research—the foundation of the approach—was outlined in the previous publication in this series (Edelman et al. 2020) and draws on research done at the Center for Design Research and the Hasso Plattner Institut, as well as new findings in cognitive science and media studies. The collection of training materials we present has been tested in several scenarios, and while it remains a work in progress, we offer an overview of our research and training packages, called Designing-as-Performance.

The fundamental premise of the Designing-as-Performance (D-a-P) approach is that designing is a performative act, and that design sessions are a performance of a corpus of behaviors with mediating objects. We call these behaviors with mediating objects performative patterns. Performative patterns are micro-interactions distilled from observing high-performance teams at work. They can be articulated and taught through training routines comprising relevant theory (frameworks) and repeated practice of well-crafted drills and exercises. Furthermore, performative patterns serve as shared models (both mental models and interactive models) that enable design teams to perform well.

2 Theoretical Foundations

2.1 The Three Learning Outcomes: Affective, Cognitive and Skill-Based

The paper “An educational perspective on design thinking learning outcomes” (Taheri et al. 2016) investigated current Design Thinking education through the lens of an educational model of learning outcomes. Taheri and her colleagues studied Design Thinking learning outcomes under three primary categories: Affective Outcomes, Cognitive Outcomes, and Skill-Based Outcomes, based on work by Bloom (1987) and Gagné (1984). Taheri observes that in most Design Thinking educational contexts, there is a strong bias towards Affective Outcomes and a lack of emphasis on Cognitive and Skill-Based Outcomes.

In response to Taheri’s insights concerning current Design Thinking education, the work we present in this chapter—Designing-as-Performance—emphasizes Cognitive and Skill-Based modalities as a supplement to current DT educational practice. Designing-as-Performance builds on previous research by targeting skill-based and cognitive outcomes to augment the overemphasized affective outcomes in DT training. Our position is that although affective outcomes are necessary, they are not sufficient.

The issue here is that a large portion of Design Thinking training is offered in quick workshops, which seem to be aimed at providing an introduction that focuses on affective outcomes such as ‘creative confidence.’ In our experience, it is relatively quick and easy to give encouragement in a short workshop, and indeed participants come away feeling good about themselves. However, as time passes, this effect wears off, and the realization sets in that they have neither sufficiently developed skills nor the theoretical frameworks that can bring more depth to their work.

In order to bring clarity to the current state of education in Design Thinking, we offer an analogous situation: developing expertise in diving off the 3-meter board. If an athlete is coached primarily with affective tools, such as ‘you can do it,’ ‘you just have to be confident,’ or at worst, ‘just jump, and you will figure it out,’ most athletes and coaches would not expect a substantial outcome. While necessary, the affective approach in sports training is not sufficient for high performance. Physical skills, which require supervised repetition and practice, are also necessary for success, as is an understanding of body mechanics and physics.

To put a fine point on this, the so-called rules of brainstorming are an example of affective tools in the guise of cognitive and skill-based tools. For example, ‘come up with many ideas’ is equivalent to a coach telling a diver ‘jump higher!’, and ‘encourage wild ideas’ is analogous to a coach telling a diver ‘now do a lot of twists and turns!’ We have witnessed the equivalent of ‘just jump and you will figure it out’ in many DT training scenarios. Part of the problem has its roots in the training of the trainers themselves. Very often, DT coaches have had to make it up as they go along, and while some coaches are experienced practitioners, theoretical instruction is virtually non-existent in DT coach training, so they have little material to draw on. This paucity of grounded theory leaves students on their own to devise solutions and methods for radical and/or meaningful change without a solid foundation in the mechanics of design and team interactions. After a short workshop, beyond feeling good about themselves and free to make things up, many students of DT lack the creative confidence the training claims to instill, as shown in Fig. 1.

Fig. 1 Dimensions of Engagement Matrix adapted from Edelman et al. (2012)

For these reasons, we have supplemented current and commonly used DT education practices with materials that cultivate cognitive and skill-based outcomes. Our goal is the radical transformation and innovation of Design Thinking education. To make this goal actionable, we set the objective as the “development of new training material in the form of work packages to make designing accessible to designers and digital engineers.” These packages were prototyped to meet three learning outcome criteria: Affective, Skill-Based, and Cognitive.

The impact of Designing-as-Performance goes beyond training DT practitioners; it also enhances the training of the trainers, or DT coaches. In a coaches’ certification course, we surveyed coaches and asked what kinds of things they do when coaching teams. The responses were overwhelmingly weighted towards affective coaching, such as helping teams with motivation and getting along. The good news is that after an advanced coaching workshop that included skill-based and cognitive materials, they reported a broader coaching repertoire.

3 Extended and Distributed Cognitive Models: Beyond Cartesian Thinking

While Designing-as-Performance remedies the current bias for affective outcomes in DT training, it also addresses flaws in the cognitive model upon which Design Thinking is based. DT training and literature emphasize the generation of ideas based primarily on text—such as words written on post-its—and verbal communication—design conversation—as the driving factor of team success. Research has shown that while this approach is necessary, it is neither sufficient nor based on a contemporary cognitive model. Rather than a primarily brain-based cognitive activity, we see team-based design activities as an exemplar of extended and distributed cognition.

In light of this, we will introduce two concepts: extended cognition and distributed cognition. By ‘extended cognition,’ we mean that ‘thinking’ is a loop that engages the brain, the body, and the media (tools and representations) with which we work. By ‘distributed cognition,’ we mean that ‘thinking’ in teams is shared and distributed across team members, in the same way that music-making is distributed amongst players in a jazz ensemble or action is distributed amongst the players on a sports team. These terms and their usage follow contemporary work in cognitive science (Tversky 2019; Kirsh 2010; Hutchins 1995; Clark and Chalmers 1998).

We have observed that most DT concepts and training are based on a Cartesian model of cognition, in which ‘thinking’ is a mental activity that occurs in the brain. Much of DT research also tacitly uses the Cartesian model, as it overwhelmingly relies on the analysis of transcripts to identify ‘ideas’ and their development in a design session. Interestingly, the very notion of the ‘idea’ as an internal mental entity is itself a Cartesian invention. Indeed, the popular notion that designers make ‘ideas into objects’ constitutes a misdirection, which can lead neophyte designers astray. Designers create ‘experiences with things’ (Kirsh 2013). While ‘ideas’ can have great power, they are generalizations, and designers do not create generalizations as such. The power of design, in its many faces, is more often than not in the realization of specific experiences for users. Research suggests that high-performance teams are not, in fact, on a quest for many ‘ideas.’ Instead, high-performance teams act out a series of interconnected scenarios, or ‘enactments’ (Edelman 2011), in which they enact shorthand experiences called ‘marking’ (Kirsh 1996, 2011).

Thinking, in a traditional ‘Cartesian’ model, happens in the brain. However, in a contemporary cognitive model, cognition includes more than the brain: motion and gesture are critical to thinking, as are the types of tools that we use, which includes the characteristics of the media that we use to think and communicate (Tversky 2019). While there are several labels for this kind of cognition, we have chosen to use the term ‘extended cognition’ to describe a model that accounts for thinking to be a loop that includes the brain, the body, and the tools that we enlist when we think. This model implies that cognition is embodied (Varela et al. 1993). Our analysis of design teams at work follows this model by taking into account not only language as a proxy for ‘brain’ cognition but also gesture and the kinds of shared media used by designers at work.

Seeing the work of design as a kind of extended cognition accounts for why enactment and marking are hallmarks of high-performance teams. It is much easier to ‘think’ with the body and the right things than it is with the mind alone (Tversky 2019). Off-loading memory and processes into the body and the right kinds of shared representations and tools frees a designer to be imaginative, which is to walk through and experience how the world could be different than it is.

Extended cognition can be observed to be distributed amongst members of a team. High-performance teams, whether they are design teams, sports teams, or jazz ensembles, break up complex transformational routines (like running a play in sports, or improvising in music) into small radically distributed interactions: parts are handed off and developed moment to moment, place to place. In high-performance design teams, we see gestures copied, extended, and combined amongst team members; we see how semi-imaginary environments are transformed piece by piece as if in a dreamscape; we see new and unexpected experiences unfold and emerge as the high-performance design team performs radically distributed micro-interactions.

3.1 Training in Performative Disciplines

Traditionally, performative disciplines have relied on a combination of theory and structured practice that reinforces desirable behaviors critical for performative excellence. In the case of sports (Porter 1974; Schmidt and Lee 2014), an understanding of theory and body mechanics, and the repeated application of this understanding in multiple use-case scenarios (warm-ups, individual skills, team drills), are critical for high performance. In the same manner, musical performance (Harnum 2014) enjoys a long tradition of training that comprises musical theory, body mechanics, skills, drills, and free play as requirements for outstanding performance.

The common thread that unites jazz training and sports training is improvisation and the development of cognitively distributed and extended skills, which comprise offers, methods of picking up these offers, and ways of transforming the offers and handing them off again. Our research approach thus integrates research in (a) musical performance training and (b) sports training. The literature on, and educational practice in, sports and music suggest the following:

  • Designers may benefit from relevant theory and structured practice of design behaviors in the same way that other performative disciplines benefit from instruction in theory and structured practice of domain-specific behaviors

  • These behaviors are repeatable and understandable

  • These behaviors can be articulated into drills and exercises

  • Repeated practice of well-crafted drills and exercises builds fluency and expertise

Because Designing-as-Performance proposes that design activity is a performative act like improvisation in Jazz and team sports like Football (American or World), we have appropriated several elements from these performative disciplines as a basis for training: Warm-Ups, Individual Skills, and Team Drills.

4 Designing-as-Performance

Designing-as-Performance proposes that design activity is a performative act, like improvisation in Jazz ensembles, team sports like football, or a surgical team in the operating room. In each case, we observe a series of carefully coordinated interactions with instruments (e.g., musical instruments, balls and bats, and surgical instruments); in each case, we observe that things do not always go according to plan; and in each case, we observe that success is predicated on a collection of shared models of interaction (both behavioral and theoretical), repeatedly practiced by team members, that enable them to anticipate planned or unplanned steps proactively. Team members, whether in music, sports, or surgery, show up and perform their roles. Performance is characterized by bodies engaged in coordinated activities with things, situated in a place that affords reaching an end (whether the end is free play and exploration or a concrete goal makes no difference).

Jazz musicians understand the macro-structures of their performance: beginnings, middles, and ends, in which different activities are shared. Likewise, athletes understand that different plays are performed on different parts of the field and at different times. It is the same for surgical teams. While performers are fluent in instantiating the right kinds of interactions in the larger arc of a performance, many interactions also take the form of micro-interactions: handoffs, passes, and the explicit and tacit exchange of vital information.

In the case of high-performance teams in design, we have observed that the same factors hold. Team members understand where they are in the arc of their performance, or the macro-interactions. Are they in an analytic phase (calling out what is there in the current state of affairs) or affirming a new solution? Are they in a generative phase (disrupting what is there, tentatively addressing new possibilities, or sketching them)? Are they dealing with what could be called UI, UX, or systems-level issues? High-performance teams know what kinds of questions are appropriate to ask in the phase in which they are performing and what kind of answers correlate to these questions.

On the micro-interaction side, high-performance teams know how to play ‘roles’ like ‘disruptor’ and ‘integrator’ and how to ‘hand off’ so that the next team member has a better chance of moving the inquiry along. They share a repertoire of interactive patterns, or plays, that allow them to go beyond what any one of them could imagine, and then bring it home in the form of a novel object-interaction. If all of this seems abstract, do not worry: the following sections will shed light on the interaction mechanics of high-performing teams.

4.1 Performative Patterns

The real power of team-based design is unleashed when the tasks are distributed in an appropriate way amongst the team members. The approach we propose is to articulate team creative collaboration as a set of micro-interactions that break down cognitive tasks into small steps.

These micro-interactions are called Performative Patterns.

In team sports, a fundamental Performative Pattern is the play. Plays are predetermined interactions that determine where the players and the ball will be within a given time frame and within limits. The time frame depends on the sport and on the play. The limits are often co-determined by the opposing team in the form of coverage. Thus, a play serves as a container for previously undefined content (e.g., the unfolding of the play in the context of the coverage) that allows the ball to be sent effectively to, and caught at, a place where no one is at the time of release. In basketball, where the action changes at a phenomenal rate, plays are called out dynamically and in rapid succession; executing them is only possible if the teams have practiced not only the plays but the transitions between plays.

In Jazz, analogous situations abound. For example, a typical performative pattern is ‘call and response.’ Call and response requires that players trade short melodic phrases, repeating sub-phrases or fragments and transforming and building on them. This is not done willy-nilly. The craft of improvisation in an ensemble is knowing what kinds of phrases and fragments will be fruitful to hand off and how to transform them, as described by Thomas Brothers regarding the jazz legend Louis Armstrong (Brothers 2014). As in basketball, Jazz improvisation can be a rapidly changing landscape, and reliance on a well-practiced and shared interactive repertoire is required for a successful performance. In Jazz, musically inventive routines constitute performative patterns that serve as containers for previously undefined content, which comes in the form of melodic or harmonic co-invention.

What is a Performative Pattern in team-based design?

Here we offer a working definition:

  • A performative pattern in design is a set of defined iterative micro-interactions that serve as a container for previously undefined content.

Simply stated, performative patterns are designed to enable design teams to cycle through a succession of instantiations of new experiences by way of mapping what exists, disrupting what exists, and creating a re-configuration. Performative patterns are structured interactions that work best when rapidly iterated. Each performative pattern addresses a different aspect of high-performance team behaviors. In the early stages of training, designers are directed to practice a performative pattern in isolation from other performative patterns. In later stages of training, performative patterns are combined. Combinations are modeled on the observed behaviors of high-performance teams.

The content, or subject matter, of a performative pattern is not defined. Performative patterns are configured to accommodate a wide range of design subjects, including products, services, and systems. In the same way that plays in sports or riffs and inventive operations in music break the development process into small chunks, the micro-interactive steps of each iteration of a performative pattern break the design process itself into smaller phrases or fragments. Furthermore, performative patterns provide a guide for what needs to be handed off to the next team member and what the ‘receiver’ can do with the design fragment before passing it on. Because cognition is radically distributed amongst design team members (each with different experiences, points of view, etc.), the outcomes are not determined in advance. Instead, new object-interactions emerge from the iteration of the performative pattern. Each endpoint of a pattern serves as the starting point of the next iteration. This allows teams to go far beyond preconceived notions and into the creation of new and unexpected experiences.
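The iteration mechanics just described can be sketched as a simple loop in which the endpoint of each iteration seeds the next. This is a toy illustration under our own assumptions, not an implementation of any specific pattern; all function and variable names are hypothetical.

```python
# Toy sketch: a performative pattern as a container for undefined content.
# Each iteration applies a fixed sequence of micro-interaction steps, and
# the endpoint of one iteration becomes the starting point of the next.
# All names here are hypothetical, invented for illustration only.

def iterate_pattern(state, steps):
    """Apply one full cycle of micro-interaction steps to a design state."""
    for step in steps:
        state = step(state)  # each step hands off a transformed state
    return state  # endpoint of this iteration = start of the next

def run_pattern(initial_state, steps, iterations):
    """Rapidly iterate the pattern, keeping a history of endpoints."""
    history = [initial_state]
    state = initial_state
    for _ in range(iterations):
        state = iterate_pattern(state, steps)
        history.append(state)
    return history

# Minimal demonstration with placeholder steps acting on a string "state":
steps = [
    lambda s: s + " -> mapped",
    lambda s: s + " -> disrupted",
    lambda s: s + " -> reconfigured",
]
history = run_pattern("bottle-cap interaction", steps, iterations=2)
```

Note that the content (here, a placeholder string) is deliberately undefined by the pattern itself; only the interaction structure is fixed, which mirrors the working definition above.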

Training in performative domains often has similar elements: affective, cognitive, and skill-based training. In high-level sports training, we observe that sports psychology, body mechanics, and physical training are all considered part and parcel of cultivating high-performance outcomes, and the desired end of training is to combine these separate elements in a single athlete and team. In music, we find the same elements: interventions for developing confidence (e.g., overcoming stage fright or feeling secure in one’s compositional style), training in music theory, and physical training. Here again, excellence is understood as an integration of the three elements in a single player and ensemble. In both these domains, the physical training is grounded in frequent and seemingly endless repetitions that take the form of bespoke warm-ups fit for specific activities, practice of a repertoire of individual skills appropriate for one’s instrument, and team/ensemble drills that cultivate coordination of the parts within the greater enterprise of scoring in sports or creating a compelling musical experience.

In design training, we have appropriated several elements from other performative disciplines as a basis for training: Warm-Ups, Individual Skills, and Team Drills. Specific examples of these training modalities will be introduced in the context of each of the four performative patterns discussed below.

The following are four examples of the performative patterns we developed and tested: MEDGI, Dimensions of Engagement, Analytic and Generative Questions/Answers, and Media Models. We have chosen these four examples because they are all foundational to D-a-P and have been tested in several venues.

4.1.1 MEDGI

M-E-D-G-I is an acronym that stands for Mapping, Educing, Disrupting, Gestalting, and Integrating, and serves as a primary performative pattern. MEDGI is both a macro and a micro pattern in that it describes both long-term project development arcs and moment-to-moment development team interactions. The MEDGI Re-Design Method was developed as a result of over 10 years of research at Stanford and the HPI into how small design teams create new concepts. It has been taught and tested in several institutions and on several continents.

The five steps of MEDGI were distilled from observing high-performance teams in action and analyzing their interactions. The thrust of MEDGI is to move an existing object-interaction to a state of potentiality and then reform it into a new object-interaction.

On the macro level, Mapping is the activity of laying out current object-interactions and their accompanying narratives on a time and space map. Much more than simple ‘understanding,’ Mapping entails creating a shared representation of the current state of affairs. Another salient difference between ‘understanding’ and Mapping is that ‘understanding’ in DT often stops at having a linguistic account of what exists. While the generality of a linguistic representation has the power to cover many situations, it can lack the specific and situated characteristics of an object-interaction or experience that often yield the insights that lead to well-crafted design interventions. Mapping ensures that the representation of the state of affairs is externalized and somewhat persistent. A map allows team members to point to and refer to specific points in time and space that they can then address. Furthermore, because the map is an external representation, it reduces cognitive load, freeing up cognitive processes and allowing room for imagination.

On the micro-interaction level, Mapping refers not only to stating the state of things at specific moments in time and space but also to enacting these moments, which is to say bringing an experience to the table instead of just a description. For example, instead of exclusively stating that ‘you hold the bottle and twist the cap,’ the designer would act that out too, either as a full enactment or as a marking. These specific gestural maps constitute offers that enable other team members to pick the gestural cues up and transform them. Mapping can also be done with sketches.

Educing refers to identifying and highlighting what works and what does not work, as well as pain and pleasure points. On a macro level, Educing often means encoding the problem and success areas on the map, literally identifying and highlighting them for the team to see. As in Mapping, this step ensures that important shared information is not kept only in memory, where it can be lost, but held in an external representation.

On the micro level, Educing refers to enacting what works and what does not work, pain and pleasure points, on an experiential level. As in Mapping, this entails more than a verbal account; Educing asks the designer to act out the moments in the object-interaction that could be reduced or augmented: for example, the struggle to twist the bottle cap off, or the pleasure of a clever mechanism that allows a user to feel mastery. Here again, as in each step of MEDGI, the gestural cues constitute offers that enable the other team members to transform them.

Disrupting refers to suggesting a change to the state of affairs in an object-interaction, and takes the form of questions like, ‘what happens if...?’ On a macro level, Disrupting can take the form of suggesting the introduction of whole new technologies, or sweeping changes that could result from combining different, seemingly unrelated fields: for example, in the field of neuroscience, ‘what are some of the things that could happen if we could hear brain waves?’

On a micro level, in an analogous manner to Mapping and Educing, Disrupting refers to acting out the force of change. For example, making a gestural enactment of throwing something that was previously static. In Disrupting, there is little or no explicit notion of a solution; disrupting is a move towards exposing the potentiality of an object-interaction.

Gestalting refers to ‘roughing in’ a new object-interaction. On a macro level, Gestalting is seen as a general picture of the field of possibilities that could result from the Disruption in the previous phase.

On a micro level, Gestalting is the enactment of a broad picture with few details articulated. Gestures are generally not fully enacted, but marked in a gestural shorthand (Kirsh 2011).

Integrating refers to when the new state of affairs comes together, crystallized in a new articulation. On a macro level, Integrating entails the thorough definition of specifications for manufacture and distribution, detailed attention to touchpoints for compelling user experience and interaction, as well as systems integration.

On a micro level, Integrating entails noting experiential factors such as touchpoints (e.g., buttons and adjustors, pay points) and new potential names for the product or service. Unlike Gestalting, in which enactment is characterized by broad marking, Integrating tends towards full enactment.

In the course of researching and teaching team-based design, we have frequently observed that poorly performing teams are either not aware of the phase in which they are acting or not in agreement about what phase they are in. This observation holds true on both the macro and micro levels. The remedy is to explicitly provide a framework (for strong cognitive outcomes) as well as repetitive practice of acting in a phase-appropriate manner (for strong skill-based outcomes).
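The phase-awareness failure mode described above can be made concrete with a small sketch. The five phase names come from the chapter; the functions, including the agreement check, are hypothetical, invented only to illustrate the cyclical structure of MEDGI and what phase misalignment looks like.

```python
# Toy sketch: the five MEDGI phases as an ordered cycle, plus a check for
# the failure mode in which team members are not in agreement about the
# phase they are in. All function names are hypothetical illustrations.

MEDGI_PHASES = ["Mapping", "Educing", "Disrupting", "Gestalting", "Integrating"]

def next_phase(current):
    """Return the phase that follows `current` in the MEDGI cycle.

    The cycle wraps: the end of Integrating can seed a new Mapping pass,
    since each endpoint serves as the starting point of the next iteration.
    """
    i = MEDGI_PHASES.index(current)
    return MEDGI_PHASES[(i + 1) % len(MEDGI_PHASES)]

def team_in_agreement(member_phases):
    """True only if every team member reports the same current phase."""
    return len(set(member_phases)) == 1

# Example: a team that has drifted out of phase agreement.
reported = ["Disrupting", "Disrupting", "Mapping"]
aligned = team_in_agreement(reported)   # disagreement: a coaching signal
follow_up = next_phase("Disrupting")
```

The point of the sketch is simply that the phases form an explicit, shared sequence; making that sequence external, like a map, is what allows a team (or a coach) to notice and repair disagreement about where they are.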

4.1.2 Dimensions of Engagement

The Dimensions of Engagement constitute an architecture for redesigning products and services from a systems point of view (Edelman 2011). Each dimension delineates at what level—incremental, mid-level, and radical—interventions can be accomplished. The Dimensions of Engagement emphasize the interdependence of two elements of redesign: objects and their context. The implication of this is that designers create both objects and the context in which the objects operate when they redesign.

For example, when a designer designs a smartphone, she designs a whole set of interactions—usability, scenarios, and networks—not merely the smartphone-as-object itself, as proposed by the great Italian designer Achille Castiglioni. Great design is characterized by a harmonious agreement that unites each level of engagement with the others. A relevant example of this is the development of the Apple iPod in the context of the network of content acquisition and delivery that constitutes the iTunes system. Apple’s aim was to create a new, seamless experience that was a radical change from the disjointed way in which people acquired, transferred, and listened to music. This re-design of object-interactions depended on getting the three levels of the Dimensions of Engagement to work together seamlessly: the core function of the new way of enjoying music in the context of the social and technical network, the general form factors and functions (which constitute depth) in the context of use-case scenarios, and the touchpoints (e.g., buttons and adjustors) in the context of usability (see Fig. 1).

We have observed, in both research and training settings, that poorly performing design teams are often not aware of the levels of a product-service architecture, not aware of which level they are working on or transitioning to and from, or not in explicit or tacit agreement about whether they are dealing with touchpoint/usability issues or core/network issues. The Dimensions of Engagement provide a shared model of team interaction that allows negotiation and anticipation of areas for development. Mastery of the theoretical aspects of the Dimensions of Engagement leads to better cognitive outcomes of instruction, while repetitive practice of gestural articulation (handing off, receiving, and transforming object-interactions on the different levels of the Dimensions of Engagement) leads to more robust team interactions and skill-based outcomes.

4.1.3 Analytic Questions/Answers and Generative Questions/Answers

To explore how our design teams engage in asking the right questions, we build on work by Ozgur Eris (Eris 2003). Eris studied the kinds of questions that designers ask when they are working in teams. He found that a combination of Deep Reasoning Questions and Generative Design Questions is needed for successful design outcomes.

Deep Reasoning Questions (DRQs) are characterized by inquiry concerning specification, comparison, and verification. DRQs are analytic questions that address what is actual and often what is feasible. DRQs ask questions like, ‘what are the dimensions?’ or ‘will this work?’.

Generative Design Questions (GDQs) are concerned with generating a field of possibilities. GDQs are generative questions that address a range of potential outcomes. GDQs ask questions like, ‘what happens if we change the user?’ or ‘what are the ways we can change the process?’.

In practice, we have found that the highly articulated terms ‘Deep Reasoning Question’ and ‘Generative Design Question,’ as well as their acronyms ‘DRQ’ and ‘GDQ’, are difficult for design students to remember. We have chosen to simplify them and hence refer to them as Analytic Questions and Generative Questions, respectively.

Furthermore, while Eris does not explicitly name the kinds of answers appropriate to these questions, we have adopted the convention of calling them Analytic Answers and Generative Answers. These distinctions are very useful in training scenarios, since agreement between the form of a question and the form of its answer has an impact on team performance.

Significantly, there is a strong correspondence between Eris’s questions and the five phases of MEDGI. In fact, we are led to consider the underlying orientation of Analytic Questions/Answers and Generative Questions/Answers to be the backbone of MEDGI. The first two phases of MEDGI, Mapping and Educing, are purely analytic; they ask and answer ‘what is there now?’ and ‘what works and doesn’t work?’ respectively. The next two phases, Disrupting and Gestalting, are generative; they ask ‘what happens if?’ and provide sketchy answers of potential directions to explore. The final phase of MEDGI is Integrating, an analytic phase in which what was potential is made concrete and actual; Integrating asks and answers ‘exactly how is that going to be?’
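As an illustrative sketch only, the phase orientations described above can be encoded as data; the mapping and function name below are our own framing, not part of the MEDGI framework itself:

```python
# Illustrative sketch: the MEDGI phase orientations described in the text,
# encoded as data. The function name is our own, not part of the framework.
MEDGI_ORIENTATION = {
    "Mapping":     "analytic",    # asks 'what is there now?'
    "Educing":     "analytic",    # asks 'what works and doesn't work?'
    "Disrupting":  "generative",  # asks 'what happens if?'
    "Gestalting":  "generative",  # sketches potential directions to explore
    "Integrating": "analytic",    # asks 'exactly how is that going to be?'
}

def question_fits_phase(phase: str, question_kind: str) -> bool:
    """True when a question's orientation matches the current MEDGI phase."""
    return MEDGI_ORIENTATION[phase] == question_kind

print(question_fits_phase("Disrupting", "generative"))  # True: a good fit
print(question_fits_phase("Mapping", "generative"))     # False: misaligned
```

A mismatch (a generative question in an analytic phase, or vice versa) is exactly the low-performance pattern discussed in the observations on team behavior.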

In both research and training situations, we have observed that a common phenomenon in low-performing teams occurs when players ask analytic questions during a performance phase where generative questions are a better fit, and vice versa. Moreover, on many occasions, we have noted that players will inadvertently answer an analytic question with a generative answer and vice versa. Both of these instances demonstrate a tacit or explicit misalignment in the team as to where it is in the process, or a lack of awareness of the impact of the characteristics of questions on design inquiry. The remedy is explicit instruction in the nature of questions and answers in design performance (resulting in better cognitive outcomes) and repetitive practice in asking phase-appropriate questions and answering them accordingly (resulting in better skill-based outcomes).

4.1.4 Media Models

Designing-as-Performance is predicated on the model that cognition is both extended and distributed. With respect to distributed cognition, the arc of the design process is broken into smaller phrases or fragments. With respect to extended cognition, the elements that constitute performance are (1) theory and thus language, (2) gestural and behavioral interactions, and (3) the shared objects or representations that design teams enlist when they are redesigning (Edelman and Currano 2011).

The previous three performative patterns have concentrated on frameworks for phase-appropriate interactions, with an emphasis on linguistic and gestural performance. The Media Models framework posits that the tools and representations designers use can be considered cognitive prostheses that augment and shape how designers speak and behave. Media Models as a performative pattern emphasizes gestural performance in relation to objects and the behaviors associated with them. The Media Models framework classifies the tools and representations designers use and share along two axes: concrete to abstract, and pluripotent to differentiated.

We have found that the media (tools and representations) designers use and share can encourage Analytic Questions/Answers and Generative Questions/Answers as well as the kinds of gestures (both scope of gesture and the quality of gesture) that are performed in response to the media.

The Media Models framework provides design teams with a theoretical foundation for understanding and making informed choices about the kinds of shared models that will support phase appropriate (MEDGI) extended and distributed cognition, in other words, performance.

The chart below presents the Media Models framework with representative instantiations of the same handheld device, a material analyzer. Quadrants 1 and 4 are well-defined, while Quadrants 2 and 3 are loosely-defined. Quadrants 1 and 4 are associated with Analytic Cognition; Quadrants 2 and 3 are associated with Generative Cognition. Additionally, Quadrants 1 and 2 are associated with a low level of physical expression or enactment, whereas Quadrants 3 and 4 are associated with a higher level of physical expression or enactment (Fig. 2).

Fig. 2

Media Models framework
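The quadrant properties described above can be summarized in a small illustrative sketch; the dataclass and field names below are our own framing, not part of the Media Models framework:

```python
# Illustrative sketch: the quadrant properties described in the text,
# encoded as data. The dataclass and field names are our own framing.
from dataclasses import dataclass

@dataclass
class Quadrant:
    number: int
    definition: str  # 'well-defined' or 'loosely-defined'
    cognition: str   # 'analytic' or 'generative'
    enactment: str   # 'low' or 'high' physical expression/enactment

MEDIA_MODELS = [
    Quadrant(1, "well-defined",    "analytic",   "low"),
    Quadrant(2, "loosely-defined", "generative", "low"),
    Quadrant(3, "loosely-defined", "generative", "high"),
    Quadrant(4, "well-defined",    "analytic",   "high"),
]

# Media affording generative engagement, e.g., for a generative session:
generative = [q.number for q in MEDIA_MODELS if q.cognition == "generative"]
print(generative)  # [2, 3]
```

Selecting media by the cognition they afford is the practical point of the framework: a team can check whether the representations on the table match the kind of engagement the current phase calls for.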

In both research and training scenarios, we have observed how the media itself has an impact on discussion and interactions. Confusion and oblique communication are frequently accompanied by mismatched representations. For example, a carefully rendered drawing (Quadrant 1, affording analytic cognition/engagement) is brought to a generative design session (with the expectation of generative cognition/engagement), only to be ignored or, worse, players on design teams get caught in a cycle of asking Analytic Questions (‘how big is that?’, ‘can you fit your hand in there?’). Generative cognition is, by definition, more appropriate for moving the exploration to new ground, and thus media from Quadrants 2 and 3 would support and afford effective team engagement.

5 Iterative Development and Evaluation

While much of the Designing-as-Performance curriculum content was realized before and during the 2017–2018 research period from work done in the Research to Impact Group, we enlisted an iterative test-reflect-improve approach in cultivating that content for classroom and workshop use. We tested and consequently improved the content in several domains: the Digital Health Design Lab, a full-semester course at the Digital Health Center at the Hasso Plattner Institute, and a Master Class called Advanced Coaching Strategies for Teams conducted at the HPI Academy for industry professionals.

5.1 Preliminary Evaluation

5.1.1 Faculty and Staff European University Workshop

A preliminary opportunity to test our training packages arose in the context of a several-day DT training for faculty and staff from a Polish university. The group, who had previously experienced Design Thinking, was introduced to the theoretical background of the packages, followed by practical experience in the form of warm-ups, skills, and drills. Specifically, the participants performed MEDGI (at the time a four-step process: MDGI) to practice distributed concept generation, and Crumpled X to practice the Media Models framework. Feedback collected from the participants highlighted the effectiveness of concept generation when the process is distributed and that ownership of ideas gets dissolved among team members using MEDGI. On the improvements list, we noted that instructions needed to be more crisp and prescriptive for quick adoption. Since the group had an academic background, further preliminary testing was done with industry professionals and students.

The train-the-teachers workshop enlisted the following training materials:

  1. Group Warm-up: One-word story (cultivates extended and distributed cognition)

  2. Individual Skill: Crumpled X (Media Models)

  3. Team Drills: Disruption/Integration with objects like glasses, scissors, etc. (MEDGI)

5.1.2 Professional Workshop: Train the Trainer

Coaches from the Hasso Plattner Institute Academy led a workshop for corporate clients from the automotive industry; the context was a creative confidence workshop. The HPI Academy coaches ran an hour-long prototype training session based on the Designing-as-Performance and Performative Patterns training packages.

The Academy team consequently expanded the professional workshop session to become the Creative Confidence Bootcamp. The Bootcamp enlisted these training materials:

  1. Group Warm-up: Human Machine (cultivates extended and distributed cognition)

  2. Individual Skill: Crumpled House (Media Models)

  3. In Pairs: What is/What if-questions on the Crumpled House media (Analytic and Generative Questions/Answers)

  4. Team Drills: Disruption/Integration with objects like glasses, scissors, etc. (MEDGI)

A valuable insight from this preliminary corporate testing was that, despite D-a-P’s emphasis on cognitive and skill-based learning outcomes, it reinforces the affective learning outcomes found in other traditional DT training formats. As an anecdote, after the Crumpled House individual skill exercise, the participants—proud of what they had accomplished—mounted a “Crumpled House Exhibit” on one of the glass walls of the workshop venue to show their creative achievements.

5.1.3 Legal Design Workshop: Redesigning Contracts

This is Legal Design GbR is a Berlin-based design consultancy that concentrates on the redesign of law. In collaboration with This is Legal Design, our research group ran a 1-day workshop for German law school students that focused on contractual compliance terms. For this workshop, we enlisted MEDGI as a primary development tool. We led participants through several iterations of the MEDGI cycle, and the participants generated different concepts to improve the contract’s terms compliance.

The novelty of this workshop was the introduction of the Overlay individual skill to practice Media Models. Using a tracing paper sheet, the participants sketched out the contract to have a pluripotent—rough—version of the contract they had to redesign. The result was a sketch that enabled them to ask Generative Questions in an effective manner.

One crucial insight the research team gained was that, in the context of a 1-day workshop, teaching MEDGI as an overarching D-a-P “play” was sufficient to create engagement and favorable outcomes for the participants—law students—who had never been exposed to Design Thinking. The same workshop was successfully tested with different law students in Hamburg and Amsterdam.

The Contract redesign workshop enlisted these training materials:

  1. Group Warm-up: One-word story (cultivates extended and distributed cognition)

  2. Individual Skill: Contract Overlay for Mapping and Educing (Media Models, MEDGI)

  3. In Pairs: What if-questions on the Contract sketch for disruption (Generative Questions/Answers, MEDGI)

  4. Team Drills: 3 × Gestalting and Integration (MEDGI as an overarching “D-a-P play”) (Fig. 3)

Fig. 3

Contract overlay to apply media models. Using tracing paper, the participants made a sketch to map the different parts of the contracts and highlight the pain points and pleasure points

5.2 Evaluation of Designing-as-Performance and Performative Patterns

After several iterations between the preliminary tests, we evaluated a more defined version of D-a-P and Performative Patterns in two venues and training scenarios: short-term coaches training and long-term student training. The short-term engagement took place at a professional Design Thinking coaches certification program. In this case, the evaluation was done with before-and-after questionnaires that assessed participants’ knowledge in the context of coaching interventions based on affective, cognitive, and skill-based outcomes (Kraiger et al. 1993). The long-term engagement was a semester-long design class for Masters students in Digital Health. The evaluation for the long-term engagement included assessments for affective, cognitive, and skill-based learning outcomes.

5.2.1 Master Class 2019: Advanced Coaching Strategies for Teams (1-Day Workshop)

In the Master Class, the Designing-as-Performance and Performative Patterns training packages were tested in a short-term training format. The Hasso Plattner Institute Academy offers a Coaches Certification Program, which runs for 12 months and is divided into several training periods, including two three-day Train-the-Trainer sessions. An additional 1-day Master Class is also offered to participants. The Certification Program lead team offered our research team an opportunity to run a master class for coaches based on Designing-as-Performance and Performative Patterns. This led to ‘Advanced Coaching Strategies for Teams’, a 1-day master class in which participants learned research-based techniques for coaching design teams.

The coaches in training learned new theory and robust methods for improving team performance through performative patterns. They were also introduced to the concepts of Affective, Cognitive and Skill-based learning outcomes, and shown examples of how these modalities are taught in sports and music. Coaches were exposed to design theory and practiced associated warm-ups, individual skills and team drills. They learned to identify high and low function team behaviors and were introduced to strategies to get teams back on track when they are performing poorly.

D-a-P and Performative Pattern materials included Warm-ups, Individual Skills, and Team Drills for each of these:

  1. MEDGI (and the 4 Card Method, MDGI)

  2. Dimensions of Engagement

  3. Media Models

  4. Analytic Questions/Answers and Generative Questions/Answers

5.2.2 Evaluation

Each semester, about 25 coaches can participate in the Certification Program for Coaches at the HPI Potsdam. At our first assessment session, 25 persons were present and thus included in the study: 14 male, 9 female. Most participants reported a moderate to high level of experience with design thinking, and most had prior coaching experience. Six participants failed to fill in the ‘after’ questionnaire, and their data were discarded, resulting in 19 respondents. The participants filled in a short version of a questionnaire adapted from Royalty et al. (2014) using a Likert scale (Table 1).

Table 1 Questions and Results from the adapted Creative Agency Assessment during the short-term evaluation venue “train-the-trainers” with DT coaches

In an open-reflection round, participants’ responses to questions about what coaches did in the ‘Before’ questionnaire were characterized by descriptions of overwhelmingly affective activities, such as encouraging teams and ironing out interpersonal issues on the teams. Responses to the identical ‘After’ questionnaire included accounts of more technical interventions that reflected the cognitive and skill-based material covered in the workshop. Furthermore, the workshop evaluation demonstrated an overall increase in creative confidence of 2.8 points (an average of 0.56 points per question), or 18.3%.

5.2.3 Digital Health Design Lab, Summer Semester (15 Weekly Sessions)

In the Digital Health Design Lab course, the Designing-as-Performance and Performative Patterns training packages were tested in a long-term training format. Masters students used Performative Patterns such as the MEDGI design framework to develop and investigate their research questions throughout the semester. Supported by the Research to Impact Group as well as senior digital health scientists from diverse backgrounds, students used human-centered and design-driven approaches to plan, conduct, analyze, and report a psychological or medical study using a digital solution, for example, behavior tracking via smartphone.

The Digital Health Design Lab teaching format was built around weekly theoretical and hands-on sessions that included warm-ups, individual skill building, and team drills. The curriculum consisted of extensive warm-ups, individual skill practice, and team drills in each of MEDGI, Dimensions of Engagement, Analytic and Generative Questions/Answers, and the Media Models Framework, as well as theoretical instruction and readings.

5.2.4 Evaluation

To evaluate the impact of the training packages in the class, three learning outcomes were assessed: cognitive, skill-based and affective outcomes (Kraiger et al. 1993). We evaluated individuals and groups for their affective development, theoretical knowledge, and skills through a questionnaire, a multiple-choice test, and a hands-on exam in which they were put into teams and given a redesign challenge (Table 2).

Table 2 Evaluation focus and methods of the effect of D-a-P on affective, cognitive and skill-based learning outcomes

Before the design training with D-a-P materials, students at the Digital Health Design Lab scored, on average, 26 out of 50 in a cognitive evaluation. After training with D-a-P materials, the same students scored 46 points out of 50 in the cognitive evaluation, as shown in Fig. 4. This represents an overall improvement of 20 points, or 40%.

Fig. 4

Results of the cognitive learning outcome evaluation from 16 students taking part in the Digital Health Design Lab at the HPI. ‘Before’ corresponds to the test taken at the beginning of the semester; ‘after’ refers to the test taken at the end of it

To evaluate the effects of D-a-P training on skill-based learning outcomes, we ran hands-on testing. In groups of four, the students were evaluated on skill composition—the ability to perform different trained skills—as well as on speed and fluency of performance of those skills. The evaluation method was targeted behavior observation—e.g., adequacy of reaction to interventions from other team members and evaluators—by three expert jurors using a five-item Likert-scale assessment. All students performed in a range from adequate performance to fluent performance. While we did not conduct a preliminary assessment of the class and thus can make no detailed assessment of the improvement in skill, we note that there was a palpable and positive development of skills throughout the term. In many instances, team members could automatically perform micro-interactions in sequences, such as Mapping and Disrupting; however, Gestalting proved to be more difficult for some students (Fig. 5).

Fig. 5

(a, b) Teams 3 and 4 during the D-a-P final exam, skill-based learning outcome. Here we look at proceduralization and the students’ ability to perform D-a-P tasks without continuous monitoring

To assess affective outcomes, we employed self-report as an evaluation method, using the Creative Agency Assessment (Royalty et al. 2014), a Creative Growth Mindset Questionnaire adapted from Dweck (2008), and an Innovation Self-Efficacy Questionnaire adapted from Schar et al. (2017). The students completed the self-report at the start of the semester and at the end of the semester. The results show an increase between the before-training and after-training affective outcome assessments. In detail, the students’ reported creative agency average score increased from 3.5 to 4.2, or 0.7 points, an improvement of 20%. The average self-efficacy score increased from 3.4 to 4.2, or 0.8 points, an improvement of 23.5%. The growth mindset average score went from 2.7 to 3.5, or 0.8 points, an improvement of 29.6% (Fig. 6).

Fig. 6

Results of the effect of D-a-P training on affective learning outcomes in Creative Agency Assessment, Innovation Efficacy, and Growth Mindset
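As a check, the percentage improvements reported for the affective outcomes can be reproduced with a few lines; this is illustrative arithmetic only:

```python
# Reproducing the affective-outcome improvements reported in the text:
# relative improvement as a percentage of the 'before' score.
def improvement(before: float, after: float) -> float:
    return (after - before) / before * 100

print(round(improvement(3.5, 4.2), 1))  # creative agency: 20.0
print(round(improvement(3.4, 4.2), 1))  # innovation self-efficacy: 23.5
print(round(improvement(2.7, 3.5), 1))  # growth mindset: 29.6
```

Note that the cognitive-score improvement reported earlier (20 points, or 40%) uses a different denominator: the maximum score of 50 rather than the ‘before’ score.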

In comparison to previous research in creative confidence (Royalty et al. 2014), the impact of D-a-P on affective learning outcomes exceeds standard DT training by 4 percentage points. While our study engaged 16 student participants and Royalty’s study engaged 55 student participants, both were semester-long classes. The improvement rate in Royalty’s study measuring the impact of Stanford d.school training for the Creative Agency Assessment was approximately 17% (Royalty’s publication shows a graph of results but does not supply numbers). The results for the Creative Agency Assessment study at the Digital Health Design Lab show an improvement of 21% (the pre-D-a-P assessment mean is 3.468 and the post-D-a-P mean is 4.221, with SDs of 0.333 and 0.135 pre and post, respectively). We find these results interesting because we made no explicit attempt to teach toward affective outcomes. This suggests that focusing on cognitive and skill-based outcomes has a significant and complementary effect on Creative Agency Assessment results that at least equals or exceeds an approach concentrating on creative confidence.

6 Discussion

The evaluation of a refined version of the D-a-P training packages provided us with three main takeaways regarding the issues, proposed at the beginning of the chapter, that must be overcome in order to bridge the gap between practice and research.

Regarding issues concerning the dominance of affective outcomes over skill-based and cognitive outcomes in design thinking training contexts

A stronger emphasis on skill-based and cognitive learning outcomes can bring more clarity to the impact of Design Thinking training formats beyond the already well-accepted affective outcomes. In this sense, by focusing on learning aspects such as the amount of knowledge, accuracy, and speed of knowledge recall, our study could enrich the measurement of the effects of DT training. We observed, throughout our evaluations, that the combination of theory and practice, presented in a well-crafted and structured package, enhances the effect of DT training on cognitive and skill-based outcomes.

The evaluation of cognitive learning outcomes may seem familiar to the reader—or may even seem old-fashioned. This is not surprising, because this evaluation method is widely used in traditional education. The research results presented in this chapter show that such evaluation proved beneficial to the participants’ ability to recall knowledge acquired during the training.

Limitations: in the present research project, we did not study the long-term effects of the D-a-P training packages. In other words, it is not yet clear how reliably the effects of Designing-as-Performance transfer after the training is over. Future assessments will need to be carried out to shed light on long-term impact.

Regarding difficulties experienced by academics in translating findings into tangible solutions in DT education and industry

While the iterative process we followed was not free of difficulties, we believe that the research-practice gap can be bridged through small iterations and intensive testing—failing included. As researchers, we very often fell into the very trap we were addressing: we found that academics often struggled to place the “practitioner-oriented” training materials into their academic concepts, models, and frameworks.

Regarding the inability of design thinking practitioners and program managers to use research findings to improve their organization’s performance

The preliminary evaluations (the train-the-teachers workshop, a corporate workshop for the automotive industry, and the contract redesign workshop for law students) showed us that a critical factor for the successful application of research findings by practitioners is versatility: a well-structured output must allow versatile application regarding three aspects, namely content, context, and length. These insights were confirmed by the results in our evaluation venues, the train-the-trainers workshop and the Digital Health Design Lab. As an anecdote, after a content-overloaded workshop, practitioner feedback was cold and sharp: “you have to kill your academic darlings”.

Content-Wise

During our several iterations, we tested D-a-P for providing advanced DT content (e.g., train-the-trainers) as well as introductory DT content. D-a-P was successfully tested in different fields, such as education, DT methodologies, law, and health. We found that practitioners can “fill in” the Performative Patterns with the relevant content depending on the context.

Context

Our pre-evaluation and post-evaluation were carried out in diverse contexts, ranging from corporate executives to students, and from teachers to experienced DT coaches. This has shown the suitability of D-a-P in professional education, academic settings, and industry.

Length

From a half-day workshop to a full semester, the training packages allowed instructors to easily adjust the duration to the needs of each project. This finding is especially relevant since one of the early expressions of resistance we received was “there’s no time for theory in Design Thinking, let’s get rid of it!”. In our different evaluation formats, a balance between theory and hands-on practice proved effective in increasing the effect of DT training on cognitive and skill-based learning outcomes.

7 Conclusion

Designing-as-Performance and Performative Patterns are a work in progress, a translational approach to bridging the research-practice gap. There is more work to do in translating team-based design research outcomes: identifying promising candidate studies, operationalizing these outcomes, and iteratively testing new teaching materials in academic and professional venues.

Nonetheless, Designing-as-Performance stands as a radical, relevant, and rigorous redesign of Design Thinking training. Based on 20 years of empirical studies in design, as well as grounding in contemporary cognitive science, D-a-P provides a robust foundation for the evolution and future development of team-based design. It is our hope that this training approach will make significant contributions to the education and training of designers in a wide array of fields, and that these designers will make a positive contribution to the world.