Introduction

“Why do people make stupid decisions?” Everyday psychological explanations are easily found, such as “Some people are just not smart enough” or “Some people do not think enough”. In fact, both responses represent typical everyday assumptions about why humans make bad decisions. The first response suggests constancy and consistency in human decision making: smart people always make good decisions, whereas less intelligent people always make bad decisions. The second answer implies causality: good decisions are the inevitable result of good thinking skills and decision making processes: Just get enough information, think enough, process it thoroughly—and you will make good decisions!

Both responses may seem plausible from an everyday psychological point of view. However, neither response adequately addresses the complex mechanisms of human decision making. In some cases, the interrelations between decision making processes and decision making outcomes may even be the opposite of what everyday psychological assumptions suggest: too much involvement in a decision problem can sometimes lead to poorer decisions (see, for example, the “sunk cost fallacy” later in this chapter). And sometimes decisions can be bad—not because of too little information, but because of too much information (see the “too much choice effect”, Grant & Schwartz, 2011). In short, no human being is immune to making bad decisions.

Failure in Managerial Decision Making

The notion that all human beings are vulnerable to erroneous decision making processes was neglected for a long time. Normative models of decision making, which portrayed humans as “homines oeconomici”, were popular. However, the inadequacy of these models was demonstrated in many empirical studies in the second half of the twentieth century. These studies revealed systematic biases in human thinking and decision making (see Tversky & Kahneman, 1974).

Biases can be understood as systematic cognitive deviations from optimal decision making (see Thompson, Neale, & Sinaceur, 2004). They are increasingly discussed in the context of strategic decision making. Strategic decision making is a core task of management boards in companies (Hambrick & Mason, 1984) and is marked by a high level of complexity and uncertainty (Eisenhardt & Zbaracki, 1992; Harrison, 1992). Hence, biases that lead to bad decision making outcomes can have a serious negative impact—not only on the individual decision makers but also on the company as a whole.

Innovations as (Cognitive) Psychological Processes

In this chapter, the effect of biases on managerial decision making processes is illustrated with regard to innovation projects. In fact, innovation processes are highly psychological processes (see Klein & Sorra, 1996). Social and organizational psychological research has widely demonstrated that aspects such as leadership, conflict management styles, and communication skills in teams are essential to the success of innovation projects (e.g., Shipton, Fay, West, Patterson, & Birdi, 2005). Interestingly, findings from cognitive psychological research, which mainly deals with the thinking, perception, and decision making of individual actors, are hardly noticed in the field of innovation research. Although cognitive psychology is often considered a field of basic psychological research, it also offers important insights for practical innovation work. The central role of cognitive processes in the course of innovation projects can easily be demonstrated by looking at “key words” that are used to describe innovation: In many companies, innovation management is intended to lead to creativity and ideas. Methods such as brainstorming are combined with slogans such as “think big” or “think outside the box”. Finally, when it comes to finding the one good idea out of many ideas, judgments and decisions have to be made.

Cognitive psychology provides theories and studies on all of these aspects, and they are highly relevant to innovation processes. However, even after almost 30 years, Van de Ven’s (1986) statement seems to hold true: “Much of the folklore and applied literature on the management of innovation has ignored the research by cognitive psychologists and social psychologists” (p. 594).

Innovation Processes as Cognitively Challenging Fields of Action

Innovation decision making can be considered far more challenging than decision making in routine organizational tasks. The following five prototypical features of innovations demonstrate the challenges of innovation contexts (see Krause, 2004) and, in turn, the likelihood of biases and failure in innovation decision making:

Novelty

As the Latin term (innovare = to start something new) etymologically suggests, innovations represent new situations for all actors involved. Therefore, they also demand new ways of thinking: “It matters little, so far as human behavior is concerned, whether or not an idea is ‘objectively’ new. (…). The perceived newness of the idea for the individual determines his or her reaction to it. If the idea seems new to the individual, it is an innovation.” (Rogers, 1983, p. 11). Thus, innovations always represent “individual novelties” (Hauschildt & Salomo, 2007, p. 24): established patterns of thinking and deciding have to be questioned, as they may be insufficient for dealing with the new situations that arise in the course of innovations.

Uncertainty

Many strategic innovation decisions have to be made right at the beginning of a project—at the very time when uncertainty about essential project aspects is at its highest (see Souder & Moenaert, 1992). Precise forecasts are nearly impossible (see Jalonen, 2011), and many questions cannot be answered accurately: Will there be enough time, money or technological resources in the future to implement an innovation? Will the employees support the innovation plans? Will customers accept or reject the new product or the new service?

Complexity

Funke (1991, p. 186) identifies typical features of complex problem-solving situations (see Fig. 1 for a graphic illustration): (a) a large number of influencing variables or determinants, which (b) can influence each other due to their high degree of connectivity and (c) often remain invisible for a long time. Furthermore, new influencing variables often arise in the course of the problem-solving process: complex problem-solving situations are dynamic and tend to change rapidly (d). A final feature of complex problems is “polytely” (e): the complexity of problems very often leads to contrary and even contradictory goals and perspectives among the relevant actors.

Hence, innovation projects can be considered typical complex problems. Success and failure of innovation projects depend on a multitude of interconnected influencing factors, such as societal and market factors. For example, the demand for sustainable products in an industry can trigger innovation activities among all competitors in a market and, in turn, change the entire relevant market. At the same time, decision makers within an organization often do not possess enough information on all influencing factors that might determine innovation success. As innovations are often long-term projects that can take several years to implement, new influencing factors are constantly emerging (e.g., new competitors enter the market). Polytely emerges because different actors outside an organization (owners, political interest groups) or within an organization (executives, middle management) pursue contradictory goals within an innovation project—which can lead to serious conflicts.

Fig. 1 Features of complexity (see Funke, 1991, p. 186): (a) large number of variables, (b) connectivity, (c) intransparency, (d) dynamic developments, (e) polytely

Conflicts

Innovation processes are always change processes. They lead to a variety of conflicts, as old and new ways of thinking and of “how we do things around here” compete with each other. Power and relationships have to be renegotiated (Scholl, 2004). Conflicts are a consequence and a symptom of the human struggle with novelty, complexity and uncertainty, as the different opinions reflect different ways of perceiving ambiguous situations.

Volatility

Very often, organizational changes turn out to be much bigger than we dare to predict. In financial markets, “volatility” stands for the extent of fluctuations in prices. These fluctuations are usually underestimated: after a period of apparently uniform development, there is often the conviction that this trend will continue in small steps of change—which is rarely the case (Mieg, 2001). In the business context, strategic decisions often fail because they are based on estimates of future developments in which decision makers neglect to consider volatility (Mintzberg, 1994). High volatility can be expected wherever and whenever the expectations of many stakeholders come together. This applies, for example, to large companies, to politics, and, generally, to innovations. The financial economist Robert Shiller (2000) called this phenomenon “irrational exuberance”. Probably two phenomena are critical for volatility: first, we tend to underestimate the extent of possible changes. Second, we amplify changes beyond what we once expected, as we tend to show a collective overreaction due to our “irrational exuberance”, all striving in the same direction.

Bounded Rationality in Innovation Decision Making

The characteristics of innovation (novelty, uncertainty, complexity, conflicts, volatility) demonstrate that failure in innovation decision making is not necessarily due to insufficient thinking or a lack of “rational” thinking. It is difficult to deal with these challenging characteristics, as human decision makers do not correspond to the ideal of the “homo oeconomicus” (Hilary & Menzly, 2006; Smith & Winkler, 2006). Instead, decision makers can only deal with these characteristics within the limits of their bounded human rationality. The concept of “bounded rationality” is closely linked to the name of Herbert Simon, who defined it as the “limits upon the ability of human beings to adapt optimally, or even satisfactorily, to complex environments” (Simon, 1991, p. 133). These limits are particularly applicable to strategic decision making in innovation contexts: the limited cognitive information processing capacities of human beings allow only limited perspectives on problems and solutions (Hammond, Keeney, & Raiffa, 1998; March & Simon, 1958; Scholl, 2004; Simon, 1976), which may lead to biases in decision making processes. In what follows, we provide some examples of biases that often occur in innovation decision making and that might increase the likelihood of failure in the respective innovation projects.

Failure Due to Wishful Thinking

Case Study

In an industrial company, each idea for an innovation project had to be submitted along with descriptions of three possible outcome scenarios: In a “best case scenario”, an extraordinarily successful outcome of the innovation project had to be described. In a “realistic case scenario”, the outcome that was most likely to be achieved had to be described. In a “worst case scenario”, an extraordinarily negative outcome had to be described, including, where applicable, the company’s losses and expenditures in that case.

After a series of failed innovation projects, the management decided to take a closer look at them. The re-analysis showed that one third of the failed projects actually had outcomes even worse than those expected in the “worst case scenario”: the company’s losses were even higher.

A deeper analysis showed that the drastically failed innovation projects had some aspects in common. In particular, critical and sceptical thoughts on the innovation idea were not appreciated at the beginning of the projects. Instead, critical perspectives on the idea were perceived as hindering the project flow—hence, neither project team members nor members of the management team brought up critical aspects at all.

Among innovation actors, a one-sidedly positive view is particularly strong at the beginning of innovation projects. “Wishful thinking” (Scholl, 2004, p. 35) guides information processing. Critical aspects, difficulties and challenges are trivialized. As a consequence, overoptimistic forecasts are quite common, and anticipated costs or expenditures for resources are underestimated (see Schwenk, 1988).

Failure Due to Overconfidence

Another frequently observed phenomenon in innovation projects is “overconfidence”. On an individual level, overconfidence describes the tendency to assess one’s own abilities and competencies as more pronounced than they actually are (Nguyen & Schüßler, 2012). In the innovation context, overconfidence may lead managers to overestimate their own knowledge of and insight into the details of a project. As a consequence, the opinions of others, particularly of those lower in the hierarchy, are neither heard nor taken into account (see Scholl, 2004).

Failure Due to the “Not-Invented-Here” Phenomenon

The conviction that one is better (informed) than others can be accompanied by the tendency not to adequately compare one’s own judgments and assumptions with comparable project experiences from other organizations: comparable projects in other companies or businesses are not properly studied. Hence, valuable opportunities to learn from others are not taken. This is where the “not-invented-here” phenomenon (Katz & Allen, 1982; Scholl, 2004) comes into play. Experiences, ideas and problem solving strategies of others are not considered, for one reason only: because they did not originate within one’s own organization. For instance, the “not-invented-here” phenomenon can occur in ERP (enterprise resource planning) projects. ERP projects often share a lot of comparable challenges across different organizations: new software solutions need to be fitted to existing structures and processes, which very often turns IT projects into organization-wide change management projects. Although these change dynamics have been documented in many companies, the relevant lessons learned are very often not used for better assessments of the situation in one’s own company.

Failure Due to Inappropriate Project Models

The opposite of the “not-invented-here” phenomenon can also lead to failure in innovation decision making. Failure might occur when particularly successful projects from other organizations are uncritically used as models for one’s own project or organization. In this case, information collection is often insufficient as the unique specifics of the reference projects are not considered. Organizations always differ in terms of external factors such as industry, market position or economic situation as well as internal factors such as company size, organizational structure and employee motivation to implement innovations.

While “failure due to the not-invented-here phenomenon” happens because decision makers neglect valuable experiences and developments in other companies, “failure due to inappropriate project models” happens because decision makers try to copy an extraordinary success story without adapting the story to their own situation.

Failure Due to the Confirmation Bias

The “confirmation bias” describes the human tendency to put much more weight on information that confirms one’s own point of view than on information that might contradict one’s own perspective (Bogan & Just, 2009). For example, innovation actors tend to prefer talking to experts and colleagues who are likely to confirm their own point of view on an innovation. In addition, ambiguous information that could be interpreted either for or against an idea is often taken as proof of one’s own opinion. The “confirmation bias” reveals a paradoxical relationship between the amount of information and decision making quality: Decision makers may have searched for and received a great amount of information prior to their decision. But as long as all the information points in a similar direction and, in turn, does not add new or conflicting perspectives on the decision making subject, the occurrence of a “confirmation bias” is all the more likely (see also Schulz-Hardt, Jochims, & Frey, 2002).

Failure Due to the “Sunk Cost Fallacy”

Case Study

The head of the R&D department within a company announced the development of a new household product. All members of the R&D department agreed that this product would combine many innovative features that would revolutionize the market.

Due to the strong conviction of their technical staff, the management board decided to finance the expensive development of a first prototype. After a couple of months, the development of the prototype turned out to be much more complicated than expected. Still, the R&D staff decided to continue their efforts, as they had already invested a huge amount of time in the new product development. Hence, they requested additional financial resources from the management board. The management board granted the new budget requests, as it had already invested a huge amount of money in the idea.

The whole process of requesting and granting further resources recurred a couple of times. Finally, the first prototype was developed and presented to potential customers. The customers came to a conclusion quickly: “Maybe the features of this product are innovative. But… we don’t need any of these new features.” The idea of the R&D staff to develop a technically sophisticated product did not correspond to the needs of the customers, who demanded a simple, easy-to-use product.

Contrary to the theoretical conception of the “homo oeconomicus”, decision makers are often not willing to revise their own judgments and decisions. Surprisingly, this is also the case when new information emerges and clearly challenges the original ideas and judgments. In the course of an innovation project, new information may suggest reconsidering and revising the original decisions. In some cases, the new information may even indicate that terminating an unfinished innovation project would be the best option. However, such new information is very often ignored. Instead, decision makers tend to cling to their original hopes, judgments and decisions. Even when new problems arise and become visible, decision makers try to defend and justify their judgments and decisions for as long as possible (Kirsch, 1983).

This phenomenon is related to the “sunk cost fallacy” (Arkes & Ayton, 1999, p. 591f.), which has been studied widely by psychologists and economists. The “sunk cost fallacy” describes the human tendency to act according to the principle: “I have already put so much effort into my idea, so I am going to invest even more”. Even if innovation projects are failing (or are on the way to failing), they are still funded with new money, more time or more resources. In similar ways, the “sunk cost fallacy” can lead decision makers to invest new resources in new product developments—even if it becomes more and more evident that there is little or no demand for the end product. Hence, additional resources are burned instead of terminating the project at a certain point and accepting the fact that the invested resources are gone (and taking the loss as “learning from the past”). As with “wishful thinking”, the “sunk cost fallacy” is likely to occur when critical perspectives on an innovation project are not taken into consideration or are not allowed (Scholl, 2004).

Ways to Deal with Biases

Biases can influence innovation decision making in a negative way and, in turn, contribute to failure in innovation projects. Hence, it seems crucial to find ways of dealing with the effects of “bounded rationality” in managerial decision making.

An important first step in addressing biases is to embrace and accept the fact that human thinking and decision making are limited (see Scholl, 2004). The famous words of the Greek philosopher Socrates, “I know that I know nothing”, seem to be an appropriate guiding principle in managerial decision making. The fact that people can learn from embracing their own cognitive limits has been demonstrated in a study by Larwood and Whittaker (1977; see also Schwenk, 1988). The researchers compared the performance of management students with the performance of actual managers in a management scenario task. In fact, both groups overestimated their own performance in that task. However, the tendency to overestimate their own performance was less pronounced among managers who admitted that they had overestimated their own abilities in prior management tasks. Hence, being aware of one’s own likelihood to “fail” and being able to connect one’s own failures in decision making with one’s own “bounded rationality” seem to be a promising path to better decision making in the future.

In addition, people can improve the quality of their decision making by deliberately considering aspects that might contradict their own judgments and opinions (e.g., Herzog & Hertwig, 2009). This “mental antagonist” can help individual decision makers to look at their own opinions from different angles and, if necessary, to adapt them accordingly. Team decision making processes can also profit from a similar strategy by implementing an “advocatus diaboli”—a selected team member who supports the group by critically commenting on ideas, topics and group processes and, in turn, fosters the likelihood that other team members take critical perspectives as well. In this way, the “advocatus diaboli” can prevent “wishful thinking” and “groupthink” within working teams (see Janis, 1983).

In fact, the idea of installing an “advocatus diaboli” also found its way into the innovation decision making practice of the company described in the first case study. After having identified the flaws and biases in past innovation projects, the management decided to provide an “advocatus diaboli” for each project team meeting. Prior to each team meeting, one member of the project team was designated to watch out for potential biases in the decision making process—and to address them immediately.

Conclusion

The present chapter started with the fundamental question of how to explain bad decision outcomes. It was argued that bad decision outcomes do not necessarily occur due to a lack of intelligence or a lack of proper reasoning. In fact, the nature of complex decision problems and human “bounded rationality” often lead to cognitive biases, which, in turn, lead to unfavourable decision making. We demonstrated the impact of such cognitive biases using examples from innovation projects, which represent typical managerial decision problems. In the last part of this chapter, we outlined one way to address biases in managerial decision making. Reflecting on one’s own cognitive limits seems to be a proper way to handle biases. Paradoxically, it is the realization and acceptance of our own cognitive limits that seems to reduce the likelihood of falling into the traps of biased decision making. This insight is thought-provoking, as there is still strong pressure on managers to present themselves as analytical and rational (Costanzo & MacKay, 2009; Matthiesen & van Well, 2012).

In sharp contrast to the innovation success stories usually reported in business publications, a social-cognitive psychological perspective on failure in innovation decision making may add an important lesson: The ideas presented in this chapter can encourage innovation actors to learn and talk about the capabilities and limits of human decision making and to take them into account prior to important innovation decisions. In addition, a thorough post-hoc analysis of the decision making processes in failed innovation projects can be a valuable learning experience for organizations: Instead of “back-and-forth accusations” in the aftermath, a social-cognitive psychological perspective facilitates constructive reflections on how judgments and decisions were made in the course of the innovation process.

Cognitive biases are a part of human decision making. Hence, the goal of this chapter on failure in innovation decision making is not to present ways to “avoid” or “eliminate” biases, as neither avoiding nor eliminating biases is possible. Biases have been observed and studied in almost all fields of human thinking and acting and are more or less pronounced in all of us. Biased decision making, bad decision outcomes and failures are a part of human reality. Hence, a crucial life task seems to be not only to discuss and identify biases in judgment and decision making but also, at the end of the day, to accept the “flawed human condition” in ourselves and others.