A view steadily gaining traction in medical education is that judgement and decision-making are the most important of a clinician's skills and, ultimately, the critical conduit for healthcare outcomes. It has been estimated that in the USA, up to 80,000 deaths occur annually in hospitalised patients due to diagnostic failure [1]. The overall number of deaths including non-hospital settings is unknown, but the diagnostic failure rate in the outpatient setting is put at about 5% (12 million US adults annually) [2], whereas the overall rate in the specialities is estimated at 10–15% [3]. While system problems and knowledge deficits account for some of these failures, the majority appear to be due to how clinicians think, in particular how they solve problems, reason, and make decisions. Yet few, if any, clinicians will have taken a decision-making course during their training; courses explicitly aimed at promoting reasoning, problem solving, and decision-making are rare or non-existent. Further, the standard texts in the major disciplines devote their pages to what needs to be known, the medical facts, not to how the clinician should think. The prevailing assumption appears to have been that if knowledge acquisition is accomplished, the rest will take care of itself. Historically, decision-making skills have instead been acquired tacitly in the course of training from mentors, instructors, and trainers.

Decision-making is the ultimate currency of our existence; indeed, flawed personal decisions are the leading cause of death. Keeney estimates that one million of the 2.4 million deaths in the US in the year 2000 were premature and could be attributed to personal decisions [4], which amounts to over 40% of all deaths. While such bare statistics leave out many issues around social, psychological, and other determinants of behaviour, they do stress the importance of the final common pathway: the decisions we actually make. Arguably, the most important decision we have to make in life is how to make decisions. Medical educators therefore need to gain familiarity with current models of decision-making and an understanding of the factors that compromise it and those that optimise it.

A program in critical thinking was established at Dalhousie University Medical School 5 years ago, with the explicit goal of improving clinical reasoning skills (Fig. 1). Its major elements are briefly reviewed here. At the outset, students are introduced to the dominant model of decision-making: dual process theory. Originally developed in the cognitive sciences [5, 6], the theory has been adapted for medicine and its operating characteristics described [7, 8]. Essentially, it provides a scaffold for describing the two major pathways of decision-making: System 1 processes are subconscious, non-verbal, and autonomous, generally described as intuitive, whereas System 2 processes are conscious, verbal, and deliberate, generally construed as analytical. Both are essential to decision-making in the clinical setting [9].

Fig. 1 Dalhousie model of clinical reasoning
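As a rough illustration of how the two pathways can divide the work, the toy sketch below (ours, not part of the Dalhousie model or the cited theory) uses fast pattern-matching to stand in for System 1 and an explicit, exhaustive weighing of hypotheses to stand in for System 2; the cues, illness scripts, and confidence threshold are all invented.

```python
# Toy illustration of dual process theory (invented patterns and numbers):
# a fast, automatic System 1 with a slow, deliberate System 2 fallback.

KNOWN_PATTERNS = {
    # frozenset of cues -> (diagnosis, confidence from prior experience)
    frozenset({"crushing chest pain", "diaphoresis"}): ("acute MI", 0.9),
    frozenset({"fever", "productive cough"}): ("pneumonia", 0.8),
}

def system1(cues):
    """Fast, automatic recognition: match cues against stored illness scripts."""
    for pattern, (dx, confidence) in KNOWN_PATTERNS.items():
        if pattern <= cues:          # every cue in the script is present
            return dx, confidence
    return None                      # nothing recognised: hand over to System 2

def system2(cues):
    """Slow, deliberate analysis: score every hypothesis explicitly."""
    scores = {
        dx: sum(cue in cues for cue in pattern) / len(pattern)
        for pattern, (dx, _) in KNOWN_PATTERNS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

def decide(cues, override_threshold=0.85):
    """System 2 monitors System 1 and overrides low-confidence intuitions."""
    intuition = system1(cues)
    if intuition and intuition[1] >= override_threshold:
        return intuition             # confident intuitive answer stands
    return system2(cues)             # otherwise deliberate analysis takes over

print(decide({"crushing chest pain", "diaphoresis", "nausea"}))  # System 1
print(decide({"fever", "fatigue"}))                              # System 2
```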

Rationality is the foremost characteristic of the accomplished decision-maker. Despite ongoing debates in the cognitive sciences [10] and in medicine [11] about how rationality should be defined, there appears to be a consensus that it should conform to a normative standard, i.e., how the decision 'ought' to be made: decision-making should be logical and evidence-based, should follow the laws of science and probability, and should lead to decisions consistent with rational choice theory. This is currently the dominant paradigm. An important goal for medical educators is to define the factors that compromise normative rationality so that they can be taught to learners (Fig. 2).

Fig. 2 Principal sources of rationality failure

Stanovich [12] breaks these factors down into two broad areas. The first is how the brain processes information. A major influence on processing is the cognitive miser function, a tendency of the brain to lessen cognitive work. This is not simple laziness but rather a predisposition of the brain to automatically minimise effort and seek cognitive ease. Kahneman [13] characterises it as the assumption of WYSIATI ('what you see is all there is'), which is vulnerable to a variety of biases and may lead to hasty decisions based on too little information. Another, less common, processing problem lies in over-thinking a problem, which may lead to analysis paralysis [14].

Other rationality failures arise from problems with the software of the individual's brain, referred to by Perkins as mindware [15]. The term describes properties of brain function, whether inherited or learned: whatever an individual can acquire to improve reasoning, problem solving, and decision-making throughout life. Acquiring sound mindware depends on metacognition, thinking about thinking: the deliberate act of examining what we are thinking while gaining experience in specific cognitive domains. Mindware can suffer from gaps, where essential knowledge has never been acquired or has been forgotten and is not available for use; this appears to be more of a problem for biostatistical knowledge than for clinical knowledge per se [16, 17]. It can also suffer from contamination, when the individual's software is corrupted by bias and fallacious thinking [12]. Strategies for improving rationality follow from the obverse of these failures: making sure problems are treated with sufficient breadth and depth; ensuring that sufficient information has been acquired; ensuring that appropriate mindware is available to solve specific problems; identifying and avoiding or mitigating biases that impact decision-making; and being able to identify logical fallacies in reasoning [18]. Nisbett [19] has described a number of useful mindware tools and strategies to improve thinking.

Critical thinking is an integral component of rationality. Training in its essential elements will mitigate some rationality failures, facilitate improvements, and generally lead to better thinking [20]. Multiple studies have demonstrated that, provided critical thinking interventions are deliberate and explicit, they result in demonstrable improvements in problem solving and reasoning skills; a large meta-analysis of such interventions yielded impressive gains [21]. However, by itself, critical thinking is no guarantee of rationality; it appears to be a necessary but not sufficient condition.

Cognitive and affective biases are a major issue in decision-making across all domains of human behaviour, and medicine is no exception. Understanding and detecting bias is an important feature of good clinical decision-making; indeed, Stanovich [22] maintains that rationality may be defined by the degree to which the decision maker is vulnerable to bias. Cognitive bias plays a critical role in areas as diverse as the broad scientific community [23], the business community [24], the judicial system [25], US national intelligence [26], aeronautics [27], the policies of the World Bank [28], the insurance and underwriting industry [29], US foreign policy [30], decisions of nations to go to war [31], healthcare leadership [32], and many others.

In recent years, every major discipline and several subdisciplines of medicine have acknowledged the impact of cognitive and affective biases on clinical judgement and decision-making, especially in the context of the diagnostic process [33]; continuing denial of the critical role of bias in diagnostic failure [34] is not sustainable. The quintessential component of the diagnostic process is clinical decision-making, and all decision-making is vulnerable to bias. From a practical perspective, diagnostic failure is the primary source of medical litigation around the world. The Sullivan Group, which has been evaluating emergency medicine litigation for 20 years, reports first-hand experience with 'countless cases where highly qualified veteran physicians and advanced practice clinicians have fallen prey to the impact of deep bias affecting the human thought process' [35]. It is important, therefore, to educate students about the nature and extent of bias in clinical decision-making, as well as the range of strategies that have been used for cognitive bias mitigation [36]. In a recent review [37], over 40 strategies were described, with one sub-group, forcing functions, appearing particularly effective.
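To make the notion of a forcing function concrete, here is a minimal sketch, assuming a hypothetical 'diagnostic time-out' built into an electronic record; it is our illustration, not a tool from the cited review. It simply refuses to finalise a diagnosis until alternatives and a must-not-miss diagnosis have been explicitly addressed.

```python
# Hypothetical "diagnostic time-out" forcing function (illustrative only,
# not a published tool): finalising a diagnosis is blocked until the
# debiasing steps have been explicitly completed.

class PrematureClosureError(Exception):
    """Raised when a diagnosis is finalised without the forced steps."""

def finalise_diagnosis(working_dx, alternatives, worst_case_excluded):
    # Forcing function 1: guard against premature closure.
    if len(alternatives) < 2:
        raise PrematureClosureError(
            "List at least two alternative diagnoses before closing the case.")
    # Forcing function 2: the must-not-miss diagnosis must be addressed.
    if not worst_case_excluded:
        raise PrematureClosureError(
            "Confirm the worst-case diagnosis has been considered and excluded.")
    return f"Final diagnosis recorded: {working_dx}"

# Succeeds only once every forced step is complete.
print(finalise_diagnosis(
    "musculoskeletal chest pain",
    alternatives=["acute coronary syndrome", "pulmonary embolism"],
    worst_case_excluded=True))
```

The design point is that the workflow cannot proceed until the mitigating step has occurred, rather than relying on the clinician to remember it.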

Metacognition is the broad strategy for thinking about thinking. Just as it is a hallmark of cognitive development, so too is it an indicator of cognitive performance.

Reflection appears to be an indispensable attribute for the development of competence in reasoning, and it promotes the acquisition of sound mindware. It requires deliberately looking at and thinking about what we are doing and feeling, and then making interpretations [38]; it means detaching oneself from the immediate situation to review the consequences of one's decisions and what further consequences might result. Reflection allows subsequent planning and the development of strategies to improve effectiveness in decision-making.

Mindfulness, the personal awareness of self and, in this case, one’s responsibilities to one’s patient, accomplishes similar goals and may promote awareness of bias and its mitigation [39].

Communication is an essential process in the evolution of decision-making, with several critical interfaces. In one estimate, 80% of serious medical errors were attributed to communication problems among medical staff. Paramount is effective communication with patients and their family and friends [40]. Communication within the team also matters: the wisdom of the crowd on balance appears to exceed the rationality of the individual, although team communication has negative aspects too, and a number of specific biases may be involved in the exchange of information, especially at handover [41].
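A toy simulation (all numbers arbitrary) illustrates why pooling estimates tends to beat the typical individual when errors are independent: averaging cancels uncorrelated noise. The same sketch hints at the caveat above, since a bias shared by the whole team, for example anchoring on the same handover framing, would not cancel.

```python
# Minimal "wisdom of the crowd" simulation: compare the error of a single
# clinician's estimate with the error of a team average, assuming each
# team member makes an independent, unbiased noisy estimate.
import random

random.seed(1)
TRUE_VALUE = 70.0            # e.g., the true probability (%) of a diagnosis
N_TRIALS, TEAM_SIZE = 10_000, 9

individual_err = crowd_err = 0.0
for _ in range(N_TRIALS):
    estimates = [random.gauss(TRUE_VALUE, 15) for _ in range(TEAM_SIZE)]
    individual_err += abs(estimates[0] - TRUE_VALUE)              # one person
    crowd_err += abs(sum(estimates) / TEAM_SIZE - TRUE_VALUE)     # team mean

print(f"mean individual error:   {individual_err / N_TRIALS:.1f}")
print(f"mean team-average error: {crowd_err / N_TRIALS:.1f}")
```

The team average comes out roughly three times more accurate here, but only because the simulated errors are independent; correlated errors would shrink or erase the advantage.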

Ordering and interpretation of appropriate investigations. The importance of this aspect of decision-making is often under-appreciated. Not infrequently, tests are ordered without thinking, in some cases as a routine, and sometimes in an indiscriminate shotgun approach in which 'the usual suspects are rounded up' (referred to as the Casablanca strategy) [42]. Yet test ordering is a critical part of the pre-analytic laboratory stage, and injudicious ordering may lead to error. Obtaining test results before any actual clinical decision-making has been done may violate Bayesian reasoning, since a result can only be interpreted against an estimated pre-test probability, and can lead to unnecessary treatment and compromised outcomes. The Choosing Wisely campaign, an initiative of the American Board of Internal Medicine Foundation [43], is aimed at avoiding unnecessary testing, not simply to prevent waste, but to improve clinical reasoning and outcomes.
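A small worked example, with invented test characteristics, makes the Bayesian point: the same positive result means very different things depending on the pre-test probability the clinician has, or has not, estimated.

```python
def post_test_probability(pre_test, sensitivity, specificity):
    """Post-test probability of disease after a positive result (Bayes)."""
    # P(D|+) = P(+|D)P(D) / [P(+|D)P(D) + P(+|not D)P(not D)]
    true_pos = sensitivity * pre_test
    false_pos = (1 - specificity) * (1 - pre_test)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 85% specific.
for pre_test in (0.02, 0.30):
    p = post_test_probability(pre_test, 0.90, 0.85)
    print(f"pre-test {pre_test:.0%} -> post-test {p:.0%}")
```

With this hypothetical test, a positive result moves a 2% pre-test probability only to about 11%, but moves a 30% pre-test probability to about 72%; ordering the test before estimating the pre-test probability leaves the result essentially uninterpretable.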

Patient preferences. No decision about the patient should be made without the patient. Patients need to be fully involved in the decisions made about their care and to have their say [44]. Ultimately, any decision about a patient requires active engagement and, whenever possible, informed input from the patient and/or their caregivers.

Conclusion

In 2005, David Eddy remarked that in the 1970s 'Medical decision making as a field worthy of study did not exist' [45]. Some would argue that not much has changed since. In a recent survey of clinical clerkship directors, over half reported that no course on clinical decision-making was offered, and directors felt that fewer than 5% of students had an excellent grasp of the issue [46]. Although strong gains have been achieved in quantitative decision-making, with some exceptions [47,48,49] there continue to be few initiatives in which real 'flesh and blood' frontline clinical decision-making is addressed [50].

However, momentum is now emerging around real clinical decision-making, including medical education initiatives [51,52,53,54,55], and the imperative to bring findings from cognitive science into the medical arena is now better understood. At the report release webcast for 'Improving Diagnosis in Health Care', George Thibault, a member of the committee, remarked: 'The critical thinking in understanding the common causes of cognitive errors can be and should be taught to all health professionals, particularly physicians, nurse practitioners and physician's assistants who will be in a primary diagnostic role and who will work in the diagnostic process' [56]. The logic seems fairly clear. We know there are problems in clinical decision-making, reflected in the unacceptably high rate of diagnostic failure; we know that the main causes likely reside in the way clinicians think rather than in clinical knowledge deficits; we know that cognitive and affective biases are significant contributors to thinking failures; and we know there is a growing variety of options to mitigate these problems. Furthermore, we have an ethical obligation to provide specific training in critical thinking and decision-making in undergraduate, postgraduate, and continuing medical education [57], as well as explicit training in the recognition and mitigation of cognitive biases.