1 Introduction

A number of measures that describe the impact of design thinking at the individual level have been created. They include psychological measures (Royalty et al. 2014), neuro-cognitive measures (Saggar et al. 2015), and performance tasks (Hawthorne et al. 2014). However, tools that capture the work of multiple individuals collaborating within the bounds of an organization are still needed. For the past 5 to 10 years, dozens of major organizations have sent employees to multi-day design thinking trainings. In fact, a number of startups have formed in the past few years that specialize solely in providing design thinking trainings. As more individuals are trained, organizations gain the ability to create teams, groups, and strategies that rely on a number of people having some level of design thinking expertise. And while individual measures are key to developing employee capacity, organizations must also be able to describe their larger efforts in a way that helps them make informed iterations.

This chapter introduces two pilot studies aimed at mapping how design thinking is applied throughout an organization. The first is an Ecology Mapping of Design Thinking that describes the projects and people applying design thinking within an organization; in essence, this tool characterizes an organization's internal design thinking strategy. The second study measures how teams practicing design thinking perform. The specific focus within that study is on empathy, a core component of design thinking and one that the subjects highly value. Together, the two tools being developed and tested are an extension of individual design thinking measures. Eventually this will help organizations better assess how they deploy design thinking across teams and entire business units.

2 Study 1: Developing Ecology Maps of Design Thinking in Organizations

2.1 Background

Dozens of companies have made large commitments to design thinking (Schmiedgen et al. 2016) within the past 10 years. They do so for varying reasons depending on their innovation goals (Royalty et al. 2014). These commitments have created a clear and growing need: organizations need a better understanding of how design thinking can be used at strategic and project levels. This need is highlighted by the ever-increasing number of inter-organizational communities forming strictly to share design thinking practices. In year 1 of this study (2013–2014) we began studying one of these communities (Royalty and Roth in press). In the current year (2014–2015) we have connected with three more. Being a part of these communities has allowed us to capture the ways their members apply design thinking. Beyond that, the questions community members ask each other reveal the real challenges they face.

To address this need, we seek to map out how each organization applies design thinking, who in the organization is doing it, and why design thinking is their paradigm of choice. This work responds to three main research questions:

Within an organization, what are the projects, programs, and people that make up an ecology of design thinking?

How do these ecologies change over time?

What causes them to change?

Developing an accurate and robust framework that can map multiple companies has clear value to practitioners and also contributes to organizational theory. Amabile's model of innovation in organizations (Amabile 1996a) provides the theoretical framework we have used to inform our map. In her cyclical model, the work environment impacts team creativity, while team creativity drives innovation. The work environment is made up of three components: organizational motivation, management practices, and resources. Organizational motivation represents the strategic goals for innovation. Management practices capture how leaders support the creative work of employees. Resources is a broader category that includes initiatives, human capital, and more. These environmental components provide a useful lens for examining our organizations' ecologies; therefore, any mapping framework needs to include them.

One might argue that the three components of team creativity should be included in the mapping as well. After all, teams are part of the organization. But because we seek to understand the context within which teams and individuals work, we decided to use only the environmental components of Amabile's model. As this work continues, it may make sense to leverage the rest of the model.

Our work consisted of two main steps. The first step was to understand the key aspects of the ecologies that must be mapped. This allowed us to create the framework. The second step was to collect the relevant data in order to create an initial map of each organization. To do this we collected a mix of qualitative and quantitative data.

2.2 Methods

Data on the use of design thinking were collected from two sources: participant observation in four communities of practice (each consisting of multiple organizations), and targeted interviews at seven organizations.

With regard to the communities of practice, the primary researcher was a participant observer in convenings of the four separate communities (Communities A, B, C, and D). Our role in each community was simply to convey what we have learned and to capture what others shared. We avoided driving any agendas or advising organizations on how they should apply design thinking. The four communities are:

Community A: This group is made up of four companies that come together to teach each other's employees and share design thinking best practices. It was a focus of year 1. The group has existed for nearly 18 months and has about a dozen regular members. Biweekly phone calls support continual collaboration. The community has convened in person twice and will do so again in early July. We have been party to all but one phone call in the past year, attended every convening, and will attend in July as well. We capture meeting notes (for ourselves and the community) and design the reflections used during the in-person sessions.

Community B: A large technology and communication firm created a network of design thinking practitioners by training IT teams from client companies. The network started 7 months ago and includes teams from nearly 20 different organizations (including the host company). Monthly phone calls were preceded by two in-person training sessions. We have participated in the majority of the phone calls and interviewed participants of the training sessions, though we did not attend them.

Community C: Five organizations self-formed to connect for a regular video conference primarily around the topic of measuring design thinking. These sessions began about 4 months ago. There have been three meetings with a fourth scheduled for July. We joined the last session and will attend in July.

Community D: A large medical organization held a 2-day workshop in May connecting design thinking practitioners and researchers. There were representatives from 11 companies and 8 universities. The goal was to launch joint projects between industry and academia. The first of multiple follow-up calls is scheduled for late June, which we plan to attend. Much of the data for the mappings and feedback on our framework was provided during the workshop.

To further our understanding of how design thinking is applied, we conducted eight in-depth interviews with practitioners from seven different organizations. All the organizations were members of a larger community. The interview subjects were selected based on their leadership role in the network (e.g., meeting attendance and role). They range from entry-level positions to senior leaders. We used the open-ended interview protocol developed in the first year of this study (Glaser and Strauss 2009; Royalty and Roth in press). The interviews were open coded, with four general categories emerging: people practicing design thinking, projects that use design thinking, programs that use design thinking, and unknowns. The difference between projects and programs is mainly one of size. Projects typically involve one or two teams working to solve a specific business goal (e.g., how do we help the elderly feel more financially secure). Programs are larger and involve many more people (e.g., an incubator program for ten teams). Unknowns are the explicitly stated questions practitioners have for their colleagues at other organizations. We conducted follow-up interviews to gather more details to feed our map.

2.3 Initial Results

Using observational data from community participation and the in-depth interviews, in tandem with Amabile's framework, we created an ecology mapping framework consisting of three components for each organization. These components correspond to the three parts of Amabile's work environment and are illustrated in Fig. 1.

Fig. 1 Ecology mapping framework. (a) Reflecting organizational motivation, (b) reflecting resources, (c) reflecting management practices

The Innovation Target 2 × 2 (Fig. 1a) shows where design thinking efforts fall relative to a general innovation framework: incremental or breakthrough innovations that focus on cost savings or revenue generation. This relates to Amabile's organizational motivation. We plot the known design thinking projects and programs that exist in any given year for each organization.

The Design Activities Diagram (Fig. 1b) captures how much of each design thinking activity an organization is doing. This relates to Amabile's resources. There are four distinct axes, resulting in a spider diagram for each organization; what matters most is the general shape of the resulting diagram. For this iteration we chose the axes: experts (number of), employees trained (percentage of total workforce), trainings (number of events per year), and projects (number of projects per year).

Finally, the Employee Training Profile (Fig. 1c) represents the depth of design thinking capacity in a workforce and where that capacity is located along a leadership spectrum. This relates to Amabile's management practices. The chart captures the distribution of design activity, which we define as a combination of practicing, leading, and teaching design thinking. The horizontal axis shows how much design activity exists at different leadership levels.
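As an illustration of how the Design Activities Diagram could be rendered, the sketch below plots the four axes named above as a spider (radar) chart using Python and matplotlib. The organization values and the 0–1 normalization are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the Design Activities Diagram (Fig. 1b) as a radar chart.
# The four axes come from the text; the values are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

axes_labels = ["Experts (count)",
               "Employees trained (% of workforce)",
               "Trainings (events/year)",
               "Projects (projects/year)"]

# Hypothetical values for one organization, normalized to a 0-1 scale.
values = [0.4, 0.25, 0.7, 0.5]

# Close the polygon by repeating the first value.
angles = np.linspace(0, 2 * np.pi, len(axes_labels), endpoint=False).tolist()
values_closed = values + values[:1]
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles_closed, values_closed)
ax.fill(angles_closed, values_closed, alpha=0.2)
ax.set_xticks(angles)
ax.set_xticklabels(axes_labels)
ax.set_yticklabels([])  # the general shape, not exact values, is what matters
ax.set_title("Design Activities Diagram (illustrative)")
plt.show()
```

Plotting several organizations, or several years of one organization, on the same axes makes the differences in shape easy to compare at a glance.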

The initial mappings based on the data collected thus far are presented below.

The map in Fig. 2 shows a relative balance between trainings and project work, with a slight bias towards training. There is a range of cost savings and revenue generating projects, with more of a focus on radical change. The Employee Training Profile reveals what we hear from many organizations: design thinking capacity at the top and the bottom, but not in the middle.

Fig. 2 Ecology map of a large high tech firm for 2014

The organization represented in Fig. 3 has not made as large a design thinking investment as the one in Fig. 2. There are only four projects, and a number of trainings delivered by a few experts to a small part of the organization. This indicates that they are targeting design thinking at a small group of people. Notice that all the projects are focused on incremental change. This may be a result of the tight margins and low risk profile of the industry.

Fig. 3 Ecology map of a large transportation company for 2014

Figures 4 and 5 illustrate a single company across 2 years. In 2013 the company focused heavily on trainings and on projects leading to incremental change. The following year it greatly reduced the number of trainings and increased the number of design thinking experts and projects. The projects also tended to focus more on radical change. This reveals a strategic shift the company made. It initially wanted design thinking to be a cultural value of all the innovation teams. However, the pressure for business results in the form of new products meant that the design thinking team needed to shift gears. They moved their experts to support real teams and created an incubator program that invested in specific high-potential projects rather than continuing to teach as many people as possible.

Fig. 4 Ecology map of a large financial services firm for 2013

Fig. 5 Ecology map of a large financial services firm for 2014

3 Study 2: Measuring Team Behaviors and Outcomes

3.1 Background

Measuring is a separate but necessary complement to mapping. If the goal of mapping is to see what actions are happening and how they are being executed, the goal of measuring is to understand the value of those actions. Measuring design thinking is difficult partly because it is a multifaceted paradigm and partly because it is contextually dependent (Martelaro et al. 2015). Some work exists on measuring design thinking, much of it stemming from classroom or laboratory studies (Goldman et al. 2012; Sonalkar et al. 2014; Hawthorne et al. 2014). Measuring design thinking in organizational contexts is relatively unexplored. However, previous work on this project yielded a good starting place: principles for measuring design thinking in real settings (Royalty and Roth in press). Namely, measures should be easy to use and should align with the organization's innovation goals.

Capturing any complex set of behaviors in context is a difficult task. Still, there have been studies that have accomplished it. Csikszentmihalyi captured subjects' actions at "random" times by paging them and having them record what they were doing (Csikszentmihalyi and Larson 1987); this contributed to his theory of flow. Another study by Amabile measured the creative activities employees performed via a daily journal (Amabile et al. 2005).

Based on these previous methods, we developed a simple measurement tool aimed at collecting the individual design thinking behaviors exhibited at work. It is important to note that we define behaviors as actions taken that support design thinking methods and mindsets. For example, talking with potential customers is a behavior that supports need finding. We intentionally chose not to capture techniques, like asking open-ended questions, because they are often subtle and difficult to detect. Also, we believe that the value of design thinking lies in affecting behavior, not simply applying tools. Finally, our goal is to link behavior and outcomes. We want to show that people who work in a way driven by design thinking generate more creative outcomes. This is the focus of our main hypothesis:

Design thinking behaviors will be positively correlated with creative outcomes.

The first subjects were employees at a large North American healthcare management company that we will call Canyon Healthcare. The measures were included as part of the Canyon Healthcare Leadership Program (CLP). The leadership program comprises approximately 30 middle managers: current Canyon employees who have been identified as the company's next leaders. The program lasts 10 months, starting in early May and running through February. Each participant is expected to spend about 15 % of their time on CLP activities. The central challenge of the program is to use design thinking to tackle an ambiguous problem specifically outside their skill set, e.g., making the hospital discharge experience more delightful. The program kicks off with a design thinking training and features two additional design thinking sessions throughout the year. Participants work in teams of four and check in with a trained design thinking coach at least once a week. The project sponsors are senior Canyon leaders who review the outcomes at the end of the program.

CLP is a good setting in which to explore our measure for three reasons. The first is that all the teams have the same training and schedule, which means they will essentially be focusing on certain behaviors at certain times. The second is the 10-month time constraint: the projects have a fixed amount of time, and the duration is enough to collect a sizable amount of data. Finally, the existing mechanism of critique by both coaches and senior leaders produces a relevant measure of creative output, which is the goal of the program. The major drawback is the lack of a control group. However, we can compare the creative output of all the teams with the number of behaviors each reported.

3.2 Methods

The behavior measure we developed captures a weekly "snapshot" of activity. Every Thursday we send an email to all CLP participants asking them to respond to a prompt. Each prompt has a numeric component and a short answer component (see Table 1 for examples). The data are collected through a form embedded in the weekly email. The prompts change depending on which design thinking mode the teams are working in. For example, the program calls on participants to focus on prototyping in July and August, so the prompts then focus on collecting feedback. Periodically a snapshot includes multiple prompts.

Table 1 Snapshot prompts

The quantitative entries from individuals are summed to form a team behavior index for each mode. The qualitative entries are collected and shared with the coach every month. In addition to completing the snapshots, all participants were given the creative agency/creative growth mindset survey (Dweck 2000, 2006; Royalty et al. 2012) before and after the initial design thinking training.
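As a minimal sketch of the aggregation step just described, the following Python snippet sums individual numeric snapshot entries into a team behavior index per design thinking mode. The record format and example values are hypothetical; the study's actual data handling is not specified.

```python
# Sketch: roll weekly snapshot entries up into a team behavior index per mode.
from collections import defaultdict

# Each record: (team, design thinking mode, one individual's numeric response).
# The team names and numbers below are hypothetical examples.
snapshots = [
    ("Team 1", "empathy", 3),
    ("Team 1", "empathy", 5),
    ("Team 2", "empathy", 2),
    ("Team 1", "prototype", 1),
]

def team_behavior_index(records):
    """Sum individual numeric entries for each (team, mode) pair."""
    index = defaultdict(int)
    for team, mode, count in records:
        index[(team, mode)] += count
    return dict(index)

print(team_behavior_index(snapshots))
# {('Team 1', 'empathy'): 8, ('Team 2', 'empathy'): 2, ('Team 1', 'prototype'): 1}
```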

The coaches evaluate their teams every 3 months using both the qualitative entries from the snapshots and their general experience with the team. Coaches rate output for each phase of the design thinking process (empathy, define, ideate, prototype) on a scale from 1 to 6. For example, two expert coaches performed the evaluation at the end of the empathy phase. Each coach rated each team on a scale from 1 to 6 on eight measures spanning three general categories, as seen in Table 2. The two coaches' scores for each measure were averaged, and the average scores were then summed across all measures to form a single creative output score for each team.

Table 2 Expert coach evaluation measures
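The scoring arithmetic described above can be expressed compactly. The sketch below assumes, per the text, two coaches each rating a team from 1 to 6 on eight measures; the example ratings are hypothetical.

```python
# Sketch of the creative output score: average the two coaches' ratings per
# measure, then sum the averages across all eight measures for one team.
def creative_output_score(coach_a, coach_b):
    """coach_a, coach_b: lists of eight ratings (1-6) for one team."""
    assert len(coach_a) == len(coach_b) == 8
    per_measure_avg = [(a + b) / 2 for a, b in zip(coach_a, coach_b)]
    return sum(per_measure_avg)

# Hypothetical ratings from two coaches for a single team.
print(creative_output_score([4, 5, 3, 4, 6, 5, 4, 3],
                            [5, 4, 3, 5, 5, 4, 4, 4]))  # -> 34.0
```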

We predict that teams that empathize with more customers will have a higher creative output.

The categories were designed based on two primary factors. The first factor is the overall program goals of CLP. CLP encourages the participants to drive towards solutions that are novel in the healthcare space, yet relevant and meaningful to their customers. The second factor influencing the design of the evaluation comes from accepted definitions of creativity as the production of novel and useful ideas (Amabile 1996b).

3.3 Initial Results

As this study is ongoing, we present here only the results from the first 12 weeks of the program. This encapsulates the empathy phase, in which teams primarily collected human-centered insights through ethnographic techniques (interviews, observations, immersions, etc.). The phase culminated with an initial point of view statement, an ideation session, and two rough prototypes. The next phase will involve iterating the ideas and prototypes.

Eight teams completed snapshots over 9 weeks. A ninth team was excluded from the study because it consisted of only three teammates. The first 2 weeks of the program were filled with introductions and an initial design thinking training; no snapshots were administered because the team projects had not started. The final week of the empathy phase concluded with the entire cohort meeting to share and iterate the initial concepts. Again, participants did not fill out snapshots that week.

The individual response rate was 55 % across all participants. At least one member of a team responded 89 % of the time. Most of the absences came from three teams, one of which had technical difficulties filling out the form from their work email addresses. All nine snapshots featured an empathy prompt. The third and seventh snapshots also had a collaboration prompt. A prompt asking participants to evaluate their working environment was added to the fourth and eighth snapshots.

Using the grading criteria laid out in the methods section above, each team was given a total creative output score. Figure 6 plots the total number of people interviewed in the empathy phase versus each team's creative output.

Fig. 6 Empathy engagements vs. creative output

Although the number of teams (eight) makes the sample too small for meaningful statistics, the pattern is important to explore if we wish to test this measure on a larger sample. The second and third highest performing teams ranked third and first in empathy engagements, respectively. This is encouraging, as it supports our prediction. However, two teams do not follow this pattern, and it is not clear whether they are outliers.

Another comparison between empathy engagements and creative outputs yields a stronger pattern. Figure 7 illustrates the total number of people engaged with over the last 2 weeks of the empathy phase compared with creative output.

Fig. 7 Empathy engagements (last 2 weeks) vs. creative output

This suggests that there might be a correlation between empathy engagements and creative output. The fact that this pattern appears stronger when constraining engagements to the final 2 weeks of the first phase may imply that continuing to practice empathy while developing a POV, generating ideas, and building prototypes has a positive effect on those tasks. Although these data are currently too limited to fully analyze statistically, the emerging patterns suggest that this measure should be explored further.
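For a larger follow-up sample, the suggested relationship could be tested directly. The sketch below shows one way to do so, using a Spearman rank correlation (our choice for illustration, not a method specified in the study) on per-team empathy engagements and creative output scores; the values shown are illustrative placeholders, not the pilot data.

```python
# Sketch: test the association between empathy engagements and creative output.
from scipy import stats

empathy_engagements = [12, 30, 18, 25, 9, 22, 15, 27]   # hypothetical per-team counts
creative_output     = [28, 41, 33, 38, 25, 36, 30, 39]  # hypothetical team scores

# Spearman's rank correlation is a reasonable choice for small, possibly
# non-normal samples; with only eight teams the result is exploratory at best.
rho, p_value = stats.spearmanr(empathy_engagements, creative_output)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```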

3.4 Limitations

The largest limitation is that this study is too small to detect statistically significant correlations. However, as a pilot, the goal is to determine whether a full study should be run and what changes should be made. Another issue is the response rate: although an 89 % team-based response rate is reasonably high, the 55 % individual response rate is fairly low. The next study will either have to raise the individual response rate or place more emphasis on team responses (i.e., asking each team member how many empathy engagements the entire team had). That way, even if only two of four teammates fill out a snapshot, we have a stronger sense of what the team did as a whole. Finally, it is not clear whether summing the three output categories to generate a total creative output score is appropriate. Perhaps a more nuanced view of team performance would lead to clearer correlations.

4 Conclusion

The initial results of both pilot studies suggest that these two measures—the ecology mapping and the snapshots—have the potential to accurately describe the application of design thinking in real life settings. Larger follow-up studies are currently being developed.

Reflecting on feedback given by the participants in the studies, there appear to be some clear use cases for each tool. First, the ecology mapping can provide an overview of what types of projects and activities an organization uses design thinking on. Furthermore, the mapping can show how these efforts change over time. This enables leaders to more easily assess their innovation efforts and make adjustments. There is also the potential to compare design thinking strategies across companies. A core driver of the emergent communities of practice is to understand how others use design thinking. The ecology mapping allows practitioners to compare and contrast their organization with others. This could foster greater collaboration and sharing. The next step is to develop an efficient process that captures data necessary to generate ecology mappings. Then mappings for six to ten companies will be created.

The snapshots have already altered the way coaches connect with their teams. The data they provide help indicate when teams are actively engaged in design thinking; when that activity decreases, coaches learn of it from the snapshot responses and can intervene. Another benefit is that the information collected in the snapshots can be used in project reports and other storytelling settings. Teams can quickly and convincingly communicate the amount of empathy work they performed and show where the insights that drive their process came from. This is important because design thinking work may otherwise be perceived as capricious and not rigorous. As the pilot study continues, it will be interesting to see how the snapshot data look over a longer timeline.

Ultimately these measures, once developed, can be combined with individual measures of design thinking. The ability to authentically evaluate design thinking is essential to spurring further growth. Valid measures can help leaders and practitioners iterate towards stronger applications and strategies. But perhaps more importantly, a variety of metrics, mappings, and assessments can demonstrate what the impact of design thinking really is. This is the key question facing this movement.