Clinical decision support systems (CDSS) are computer systems designed to impact clinician decision making about individual patients at the point in time that these decisions are made. With the increased focus on the prevention of medical errors that has occurred since the publication of the landmark Institute of Medicine report, To Err Is Human, computer-based physician order entry (CPOE) systems coupled with CDSS have been proposed as a key element of systems approaches to improving patient safety and the quality of care [1–4]. In addition, CDSS have been a key requirement for “meaningful use” of electronic health records (EHRs) as defined by the Centers for Medicare and Medicaid Services (CMS) [5] and will become even more important with the growth of new models of care arising from the passage of the Affordable Care Act (see also Chap. 7) [6]. If used properly, CDSS have the potential to change the way medicine is taught and practiced. This chapter provides an overview of clinical decision support systems, summarizes current data on their use and impact in practice, and offers guidelines for users to consider as CDSS are incorporated into commercial systems and implemented outside research and development settings. The other chapters in this book explore these issues in more depth.

1.1 Types of Clinical Decision Support Systems

There are a variety of systems that can potentially support clinical decisions. Even Medline and similar healthcare literature databases can support clinical decisions. Decision support systems have been incorporated in healthcare information systems for a long time, but in the past these systems have usually supported retrospective analyses of financial and administrative data [7, 8]. Recently, sophisticated analytic approaches have been proposed for similar retrospective analyses of both administrative and clinical data (see Chap. 3 for more details on data mining approaches to CDSS) [9, 10]. Although these retrospective approaches can be used to develop guidelines, critical pathways, or protocols to guide decision making at the point of care, such retrospective analyses are not usually considered to be CDSS. The distinction is important because vendors will often advertise that their product includes decision support capabilities, but that may refer to the retrospective type of system, not one designed to assist clinicians at the point of care. CDSS have been developed over the last 50 years, and many of them have been used as stand-alone systems or as part of noncommercial, homegrown EHR systems (see Chaps. 13, 14, and 15). However, as interest in CDSS has increased, more EHR vendors have begun to incorporate these types of systems, or at least the capability to include them [11].

Metzger and her colleagues [12, 13] have described CDSS along several dimensions. According to their framework, CDSS differ in the timing at which they provide support (before, during, or after the clinical decision is made) and in how active or passive the support is, that is, whether the CDSS actively provides alerts or passively responds to physician input or patient-specific information. Finally, CDSS vary in how easy they are for busy clinicians to access [12].

Osheroff and colleagues have developed a taxonomy of different types of clinical decision support that broadens the definition to include knowledge bases, order sets, and other ways of supporting clinical care in addition to alerts and reminders [14].

Another categorization scheme for CDSS is whether they are knowledge-based systems or nonknowledge-based systems that employ machine learning and other statistical pattern recognition approaches. Chapter 2 discusses the mathematical foundations of the knowledge-based systems, and Chap. 3 addresses the foundations of the statistical pattern recognition type of CDSS. In this overview, we will focus on the knowledge-based systems and discuss some examples of other approaches as well.

1.1.1 Knowledge-Based Clinical Decision Support Systems

Many of today’s knowledge-based CDSS arose out of earlier expert systems research, where the aim was to build a computer program that could simulate human thinking [15, 16]. Medicine was considered a good domain in which these concepts could be applied. Beginning in the 1970s and 1980s, the developers of these systems began to adapt them so that they could be used more easily to support real-life patient care processes [17, 18]. Many of the earliest systems were diagnostic decision support systems, which are discussed in Chap. 11. The intent of these CDSS was no longer to simulate an expert’s decision making, but to assist the clinician in his or her own decision making. The system was expected to provide information for the user, rather than to come up with “the answer,” as was the goal of earlier expert systems [19]. The user was expected to filter that information, to discard erroneous or useless information, and to interact actively with the system rather than be a passive recipient of its output. This focus on the interaction of the user with the system is important in setting appropriate expectations for the way the system will be used.

There are three parts to most CDSS: the knowledge base, the inference or reasoning engine, and a mechanism to communicate with the user [20]. As Spooner explains in Chap. 2, the knowledge base consists of compiled information that is often, but not always, in the form of if–then rules. An example of an if–then rule might be: IF a new order is placed for a particular blood test that tends to change very slowly, AND IF that blood test was ordered within the previous 48 h, THEN alert the physician. In this case, the rule is designed to prevent duplicate test ordering. Other types of knowledge bases might include probabilistic associations of signs and symptoms with diagnoses, or known drug–drug, drug–allergy, or drug–food interactions.

The second part of the CDSS is called the inference engine or reasoning mechanism, which contains the formulas for combining the rules or associations in the knowledge base with actual patient data.

Finally, there has to be a communication mechanism: a way of getting the patient data into the system and getting the output of the system to the user who will make the actual decision. In some stand-alone systems, the patient data need to be entered directly by the user. In most of the CDSS incorporated into electronic health records, which represent the majority of CDSS today, the data are already in electronic form in the EHR, where they were originally entered by the clinician or imported from laboratory, pharmacy, or other systems. Output to the clinician may come in the form of a recommendation or alert at the time of order entry, or, if the alert was triggered after the initial order was entered, through email or wireless notification [21, 22].
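To make these three components concrete, the following is a minimal sketch, in Python, of how the duplicate-test rule described above might be represented and applied. The data structures, test names, and 48-hour window are illustrative placeholders, not any vendor's actual implementation.

```python
from datetime import datetime, timedelta

# 1. Knowledge base: compiled if-then rules (here, a single illustrative rule).
SLOW_CHANGING_TESTS = {"HbA1c", "TSH"}  # hypothetical list of slow-changing tests

def duplicate_test_rule(patient, new_order):
    """IF a slow-changing test is reordered within 48 hours, THEN alert."""
    if new_order["test"] not in SLOW_CHANGING_TESTS:
        return None
    for prior in patient["lab_orders"]:
        if (prior["test"] == new_order["test"]
                and new_order["time"] - prior["time"] < timedelta(hours=48)):
            return f"{new_order['test']} was already ordered within the last 48 h."
    return None

KNOWLEDGE_BASE = [duplicate_test_rule]

# 2. Inference engine: combines the rules with actual patient data.
def run_inference(patient, new_order):
    alerts = []
    for rule in KNOWLEDGE_BASE:
        message = rule(patient, new_order)
        if message:
            alerts.append(message)
    return alerts

# 3. Communication mechanism: here, simply surface alerts at order entry.
if __name__ == "__main__":
    patient = {"lab_orders": [{"test": "HbA1c",
                               "time": datetime(2024, 5, 1, 9, 0)}]}
    new_order = {"test": "HbA1c", "time": datetime(2024, 5, 2, 14, 0)}
    for alert in run_inference(patient, new_order):
        print("ALERT:", alert)
```

In a production system the "communication mechanism" would of course be the EHR's alerting interface rather than a printed message, but the separation of rule content, reasoning, and notification is the same.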

CDSS have been developed to assist with a variety of decisions. The if–then rule described above comes from a system designed to support laboratory test ordering. Diagnostic decision support systems have been developed to provide a suggested list of potential diagnoses to the user. The system might start with the patient’s signs and symptoms, entered either by the clinician directly or imported from the EHR. The decision support system’s knowledge base contains information about diseases and their signs and symptoms. The inference engine maps the patient’s signs and symptoms to those diseases and might suggest some diagnoses for the clinician to consider. These systems generally do not generate a single diagnosis, but rather a set of diagnoses based on the available information. Because the clinician often knows more about the patient than can be put into the computer, the clinician will be able to eliminate some of the choices. Most diagnostic systems have been stand-alone systems, but researchers at Vanderbilt University incorporated a diagnostic system that runs in the background, taking its information from the data already in the EHR [23]. This system was incorporated into the McKesson Horizon Clinicals system. The use of CDSS at Vanderbilt is described in detail in Chap. 15.
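As a rough illustration of how an inference engine might map findings to candidate diagnoses, the sketch below scores a few hypothetical diseases by how many of the patient's findings they explain. The disease–finding associations are invented for illustration only and are not clinical knowledge.

```python
# Toy diagnostic knowledge base: disease -> set of associated findings.
DISEASE_FINDINGS = {
    "influenza":     {"fever", "cough", "myalgia"},
    "strep throat":  {"fever", "sore throat", "tonsillar exudate"},
    "mononucleosis": {"fever", "sore throat", "fatigue", "lymphadenopathy"},
}

def rank_diagnoses(patient_findings):
    """Return candidate diagnoses ordered by how many findings they explain."""
    scores = []
    for disease, findings in DISEASE_FINDINGS.items():
        overlap = len(findings & patient_findings)
        if overlap:
            scores.append((disease, overlap))
    # The clinician, not the system, decides which candidates to pursue.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(rank_diagnoses({"fever", "sore throat", "fatigue"}))
# e.g. [('mononucleosis', 3), ('strep throat', 2), ('influenza', 1)]
```

Real diagnostic systems use far richer knowledge (prevalence, finding weights, probabilistic models), but the basic pattern of producing a ranked differential rather than a single answer is the same.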

Other systems can provide support for medication orders, a major source of medical errors [1, 24]. The input for such a system might be the patient’s laboratory result for the blood level of a prescribed medication. The knowledge base might contain values for therapeutic and toxic blood concentrations of the medication and rules on what to do when a toxic level is reached. If the medication level were too high, the output might be an alert to the physician [24]. Some CDSS are part of computerized provider order entry (CPOE) systems: they take a new medication order and the patient’s current medications as input, the knowledge base might include a drug database, and the output would be an alert about drug interactions so that the physician could change the order. Similarly, the input might be a physician’s therapy plan, the knowledge base would contain local protocols or nationally accepted treatment guidelines, and the output might be a critique of the plan compared to the guidelines [25]. Some hospitals that have implemented these systems allow the user to override the critique or suggestions, but often users are required to justify the override. The structure of the CDSS knowledge base will differ depending on the source of the data and the uses to which the data are put. Design and implementation considerations, including usability and other implementation issues, are discussed in Chaps. 4 and 6.
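A simplified sketch of how such medication checks might look at order entry is shown below. The drug pair, interaction text, and threshold value are placeholders rather than clinical values.

```python
# Placeholder medication knowledge base.
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}
TOXIC_LEVELS = {"digoxin": 2.0}  # hypothetical threshold, ng/mL

def check_new_order(new_drug, current_meds, latest_levels):
    """Return alerts for a proposed order, given current meds and recent labs."""
    alerts = []
    # Drug-drug interaction check against the current medication list.
    for med in current_meds:
        reason = INTERACTIONS.get(frozenset({new_drug, med}))
        if reason:
            alerts.append(f"{new_drug} + {med}: {reason}")
    # Toxic blood level check against the most recent laboratory result.
    threshold = TOXIC_LEVELS.get(new_drug)
    if threshold is not None and latest_levels.get(new_drug, 0) >= threshold:
        alerts.append(f"{new_drug} level at or above toxic threshold")
    return alerts

print(check_new_order("aspirin", ["warfarin"], {}))
# ['aspirin + warfarin: increased bleeding risk']
```

Whether such an alert can simply be overridden, or requires a documented justification, is a policy decision made at implementation rather than a property of the rule itself.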

1.1.2 Nonknowledge-Based Clinical Decision Support Systems

Unlike knowledge-based decision support systems, some of the nonknowledge-based CDSS use a form of artificial intelligence called machine learning, which allows the computer to learn from past experiences and/or to recognize patterns in the clinical data [26]. This type of approach is described briefly in Chap. 2 and in detail in Chap. 3. Artificial neural networks and genetic algorithms are two types of nonknowledge-based systems. These types of systems will become more important in the future as data analytics and other “big data” applications become more widely used in healthcare [9, 27].
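For contrast with the rule-based sketches above, the following toy example trains a small neural network on randomly generated stand-in data, using the scikit-learn library. It is meant only to illustrate that the “knowledge” here is learned from past cases rather than hand-coded, and that the resulting reasoning is not readily inspectable.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: 200 past cases with 5 numeric findings each.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # fabricated "disease present" label

# A small neural network "learns" the association from past experience.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

new_case = rng.normal(size=(1, 5))
print(model.predict_proba(new_case))      # probability-style output; the
                                          # internal reasoning is opaque
```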

Although, as Ozaydin et al. describe in Chap. 3, research has shown that CDSS based on pattern recognition and machine learning approaches may be more accurate than the average clinician in diagnosing the targeted diseases [28–30], many physicians are hesitant to use these CDSS in their practice because the reasoning behind them is not transparent [29]. Most of the systems available today are knowledge-based systems with rules, guidelines, or other compiled knowledge derived from the medical literature. The research on the effectiveness of CDSS has come largely from a few institutions where these systems were developed, although in recent years, as commercial systems have become more widespread, a growing literature has examined their effectiveness in a variety of settings [31, 32].

1.2 Effectiveness of Clinical Decision Support Systems

Clinical decision support systems have been shown to improve both patient outcomes and the cost of care. Many of the published studies have come out of a limited number of institutions, including LDS Hospital, Partners HealthCare, the Regenstrief Institute, and Vanderbilt University [31]. Chapter 13 describes Partners’ system, Chap. 14 describes the CDSS deployed in the HELP system at LDS Hospital and Intermountain Health Care, and Chap. 15 describes the system at Vanderbilt. It is interesting that all three of these pioneering institutions are now moving to commercial EHRs, but the lessons they have learned over the years will also be useful for using CDSS in commercial systems.

In addition, systematic reviews include an increasing number of studies from other settings that have shown positive impact [32, 33]. Chapter 9 by Lobach provides a framework for evaluating CDSS and discusses the evaluation data in more detail. CDSS can minimize errors by alerting the physician to potentially dangerous drug interactions, and diagnostic programs have also been shown to improve physician diagnoses [34–37]. Reminder and alerting programs can potentially minimize problem severity and prevent complications, and they can provide early warning of adverse drug events that have an impact on both the cost and the quality of care [4, 37–40]. These data have prompted the Leapfrog Group and others to advocate the use of CDSS to promote patient safety [3]. The Leapfrog Group has also developed an evaluation tool to help hospitals check the safety of their systems [41]. Many of the studies that have shown the strongest impact on reducing medication errors have been done at institutions with very sophisticated, internally developed systems, where use of an EHR, CPOE, and CDSS is a routine and accepted part of the work environment [31]. As more places that do not have that cultural milieu, or a good understanding of the strengths and limitations of the systems, begin to adopt CDSS, integration of these systems may prove more difficult [42].

Several published reviews of CDSS have emphasized the dearth of evidence of similar effectiveness on a broader scale and have called for more research, especially qualitative research, that elucidates the factors that lead to success outside the development environment [43, 44]. More recent studies have examined some of these factors [45]. Studies of the Leeds University abdominal pain system, an early CDSS for diagnosis of the acute abdomen, showed success in the original environment and much more limited success when the system was implemented more broadly [46, 47]. As Chap. 9 shows, while the evidence is increasing, there are still limited systematic, broad-scale studies of the effectiveness of CDSS. In the future those data are likely to be more available. Not only is there a lack of studies on the impact of the diffusion of successful systems, but actual use of CDSS is variable [48]. However, use has clearly been increasing over the last decade. In 2003, for instance, there were few places utilizing CDSS [49, 50]. The KLAS research and consulting firm conducted an extensive survey of the sites that had implemented CPOE systems [50]. As KLAS defined these systems, CPOE systems usually included CDSS, defined as “…alerting, decision logic and knowledge tools to help eliminate errors during the ordering process” [50]. Although most of the CPOE systems provided for complex decision support, the results of the KLAS survey showed that most sites did not use more than ten alerts and that many sites did not use any of the alerting mechanisms at order entry [50]. By 2013, the Office of the National Coordinator for Health Information Technology (ONC) found that 74% of physicians were using CDSS that provided warnings of drug interactions or contraindications and 57% had implemented at least one clinical decision support rule that provided reminders for guideline-based interventions or screening tests [48].

Metzger and McDonald report anecdotal case studies of successful implementation of CDSS in ambulatory practices [13]. While such descriptions can motivate others to adopt CDSS, they are not a substitute for systematic evaluation of implementation in a wide range of settings. Unfortunately, when such evaluations are done, the results have sometimes been disappointing. A study incorporating guideline-based decision support systems in 31 general practice settings in England found that, although care was not optimal before the computer-based guidelines were implemented, there was little change in health outcomes after the system was implemented. Further examination showed that, although the guideline was triggered appropriately, clinicians did not go past the first page and essentially did not use it [25]. Alert overrides are also a frequent occurrence [51], and there are suggestions that physician characteristics influence the overrides [52]. Another study found that clinicians did not follow the guideline advice because they did not agree with it [53]. Configuring systems to avoid these problems is a challenge that ONC has tried to address [54]. In addition, Payne et al. provided recommendations for improving the usability of CDSS for medication ordering [54].

There is a body of research showing that physicians have many unanswered questions during the typical clinical encounter [55, 56]. This situation should provide an optimal opportunity for the use of CDSS, yet a study tracking the use of a diagnostic system by medical residents indicated very little use [57]. This is surprising given that this group of physicians in training should have even more “unanswered questions” than more experienced practitioners, but it may be partially explained by the fact that the system was a stand-alone system not directly integrated into the workflow. Also, Teich et al. suggest that reminder systems and alerts usually work, but that systems which challenge the physicians’ judgment, or require them to change their care plans, are much more difficult to implement [58]. A case study of a CDSS for notification of adverse drug events supports this contention. The study showed that despite warnings of a dangerous drug level, the clinician in charge repeatedly ignored the advice. The article describes a mechanism of alerting a variety of clinicians, not just the patient’s primary physician, to assure that the alerts receive proper attention [24]. Bria made analogies to making some alerts impossible to ignore, using the example of the “stick shaker” that warns airplane pilots of really serious problems [59]. In addition to the individual studies, Kawamoto et al. [45] examined factors associated with CDSS success across a variety of studies. They found that four factors were the main correlates of successful CDSS implementation. The factors were:

  1. Providing alerts/reminders automatically as part of the workflow;
  2. Providing the suggestions at a time and location where the decisions were being made;
  3. Providing actionable recommendations; and
  4. Computerizing the entire process.

Thus, although these systems can potentially influence the process of care, if they are not used, they obviously cannot have an impact. Integration into both the culture and the process of care is going to be necessary for these systems to be optimally used. Institutions that have developed such a culture provide a glimpse of what is potentially possible (see Chaps. 13, 14, and 15). However, Wong et al., in an article published in 2000, suggested that the incentives for use were not yet aligned to promote wide-scale adoption of CDSS [42]. With the availability of the incentives for meaningful use of Health IT from 2010 onward, there has been more adoption of EHRs in general, as well as CDSS, but there are also complaints about the usability of the systems. Chapter 4 explores the usability issues of CDSS and Chap. 6 describes strategies for optimal design and implementation of CDSS.

There are several reasons why implementation of CDSS is challenging. Some of the problems involve how the data are entered; others involve the development and maintenance of the knowledge base, the vocabulary, and the user interface. Finally, since these systems may represent a change in the usual way patient care is conducted, there is a question of what will motivate their use, which also relates to how the systems are evaluated.

1.3 Implementation Challenges

The first issue concerns data entry, or how the data will actually get into the system. Some systems require the user to query the system and/or enter some or all of the patient data manually. This is especially likely with diagnostic decision support systems [34]. Not only is this “double data entry” disruptive to the patient care process, it is also time consuming, and, especially in the ambulatory setting, time is scarce. It is even more time consuming if the system is not mobile and/or requires a lengthy logon. Much of this disruption can be mitigated by integrating the CDSS with the EHR. As mentioned above, most EHRs today have integrated decision support capabilities: if the data are already entered into the medical record, they are there for the decision support system to act upon, and many systems are potentially capable of drawing from multiple ancillary systems as well. This is a strength, but not all clinical decision support systems are well integrated, and without technical standards assuring integration of ancillary systems, such linkages may be difficult. There are also a number of stand-alone systems, including some of the diagnostic systems and some drug interaction systems. These require patient data to be entered twice—once into the medical record system and again into the decision support system. For many physicians, this double data entry can limit the usefulness of such systems.

A related question is who should enter the data in a stand-alone system, or even in integrated hospital systems. Physicians are usually the key decision makers, but they are not always the people who interact with the EHR. In fact, in recent years, non-physician medical scribes are often the main people interacting with the EHR [60]. One of the reasons for linking CDSS with physician order entry is that it is much more efficient for the physician to receive the alerts and reminders from decision support systems directly. The issue concerns not just order entry, but also mechanisms of notification. The case study mentioned earlier described a situation where the physician who received the alert ignored it [24]. These systems can be useful, but their full benefits cannot be gained without collaboration between information technology professionals and clinicians.

Although it might not seem that vocabularies should be such a difficult issue, it is often only when clinicians actually try to use a system with a controlled vocabulary, whether a decision support system, an electronic health record, or some other system, that they realize the system either cannot understand what they are trying to say or, worse yet, uses the same words for totally different concepts or different words for the same concept. The problem is that there are no universally agreed-upon standards for clinical vocabulary, and, since most decision support systems rely on a controlled vocabulary, such mismatches can have a major impact.

1.4 Future Uses of Clinical Decision Support Systems

Despite the challenges in integrating CDSS, when properly used they have the potential to make significant improvements in the quality of patient care. While more research still needs to be done evaluating the impact of CDSS outside the development settings and the factors that promote or impede integration, it is likely that increased commercialization will continue. CDSS for non-clinician users such as patients are likely to grow as well (see Chap. 10). There is increasing interest in clinical computing, and, as mobile computing becomes more widely adopted, better integration into the process of care may become easier.

Similarly, trends in cloud computing and service-oriented architecture are leading to new approaches for delivering CDSS to the user (see Chap. 5 for more details on service-oriented architecture for CDSS) [61]. As discussed in Chap. 12, genomic data will become increasingly available for use in clinical care, and CDSS that support decisions in genomic medicine will also be needed. Finally, as the data in electronic health records become more standardized and shareable, the use of decision support in the public health arena is likely to increase.

In addition, concerns over medical errors, patient safety, and meaningful use of health IT (see Chap. 7) have prompted a variety of initiatives that will lead to increased incorporation of CDSS. Physicians are legally obligated to practice in accordance with the standard of care, which at this time does not mandate the use of CDSS. However, that may be changing. The use of information technology in general, and clinical decision support systems in particular, to improve patient safety has received a great deal of attention [1, 2]. Healthcare administrators, payers, and patients are concerned, now more than ever before, that clinicians use the available technology to reduce medical errors. The Leapfrog Group [3] early on advocated physician order entry (with an implicit coupling of CDSS to provide alerts to reduce medication errors) as one of its main quality criteria, and CPOE, e-prescribing, and clinical decision support are required for meaningful use (see Chap. 7).

Even if the standard of care does not yet require the use of such systems, there are some legal and ethical issues that have not yet been well addressed (see Chap. 8 for a fuller discussion of these issues). One interesting legal case that has been mentioned in relation to the use of technology in health care is the Hooper decision. This case involved two tugboats (the T.J. Hooper and its sister ship) that were pulling barges in the 1930s, when radios (receiving sets) were available but not widely used on tugboats. Because the boats did not have radios, they missed storm warnings and their cargo sank. The barge owners sued the tugboat company, even though the tugboat captains were highly skilled and did the best they could under the circumstances to salvage their cargo. The tugboat company was found liable for not having the radios, even though radios were still not routinely used on boats. Parts of the following excerpt from the Hooper decision have been cited in other discussions of CDSS [62].

…[a] whole calling may have unduly lagged in the adoption of new and available devices. It never may set its own tests, however persuasive be its usages. Courts must in the end say what is required; there are precautions so imperative that even their universal disregard will not excuse their omission. But here there was no custom at all as to receiving sets; some had them, some did not; the most that can be urged is that they had not yet become general. Certainly in such a case we need not pause; when some have thought a device necessary, at least we may say that they were right, and the others too slack. [63]

It has been suggested that as CDSS and other advanced computer systems become more available, the Hooper case may provide legal precedent for liability for failure to use available technology, and the legal standard of care may itself change to include the use of available CDSS [64]. Since this area is still new, it is not clear what type of legal precedents will be invoked for hospitals or practices that choose to adopt, or avoid adopting, CDSS. It has been suggested that while the use of CDSS may lower a hospital’s risk of medical errors, healthcare systems may incur new risks if the systems either cause harm or are not implemented properly [65, 66]. In any case, there are some guidelines that users can follow that may help ensure more appropriate use of CDSS.

1.5 Guidelines for Selecting and Implementing Clinical Decision Support Systems

Significant parts of this section and smaller parts of other sections were reprinted with permission from Berner ES. Ethical and Legal Issues in the Use of Clinical Decision Support Systems. J. Healthcare Information Management, 2002;16(4):34–37.

Osheroff et al. offer practical suggestions for steps to be taken in the implementation of CDSS [14]. The “five rights” of clinical decision support (right information to the right person in the right intervention format through the right channel at the right time in workflow) that Osheroff et al. advocate are a good summary of what needs to be done. The guidelines below address other issues such as those involved in selecting CDSS, interacting with vendors, and assuring that user expectations for CDSS are appropriate. They also touch on legal and ethical issues that are discussed in more detail in Chap. 8.

1.5.1 Assuring That Users Understand the Limitations

In 1986, Brannigan and Dayhoff highlighted the often different philosophies of physicians and software developers [67]. In particular, physicians and software developers differ in how “perfect” they expect their “product” to be when it is released to the public [67]. Physicians expect perfection from themselves and those around them; they undergo rigorous training, must pass multiple licensing examinations, and are held in high esteem by society for their knowledge and skills. In contrast, software developers often assume that initial products will be “buggy” and that most errors will eventually be fixed, often as a result of user feedback and error reports. There is usually a version 1.01 of almost any system as soon as version 1.0 has reached most users. Because a CDSS is software that in some ways functions like a clinician consultant, these differing expectations can present problems, especially when the knowledge base and/or reasoning mechanism of the CDSS is not transparent to the user. The vendors of these systems have an obligation to inform the clinicians using the CDSS of its strengths and limitations.

1.5.2 Assuring That the Knowledge Is from Reputable Sources

Users of CDSS need to know the source of the knowledge if they purchase a knowledge-based system. What rules are actually included in the system, and what is the evidence behind the rules? How was the system tested before implementation? This validation process should extend not just to testing whether the rules fire appropriately in the face of specific patient data (a programming issue), but also to whether the rules themselves are appropriate (a knowledge-engineering issue). Sim et al. advocate the use of CDSS to promote evidence-based medical practice, but this can only occur if the knowledge base contains high-quality information [68].

1.5.3 Assuring That the System Is Appropriate for the Local Site

Vendors need to alert the client about idiosyncrasies that are either built into the system or that need to be added by the user. Does the clinical vocabulary in the system match that in the EHR? What are the normal values assumed by a system alerting to abnormal laboratory tests, and do they match those at the client site? In fact, does the client have to define the normal values as well as the thresholds for the alerts? The answers to these questions about what exactly the user is getting are not always easy to obtain.

When users ask questions about the sources of knowledge or its content, they may find that the decision support system provided is really just an expert system shell and that local clinicians need to provide the “knowledge” that determines the rules. For some systems, an effort has been made to use standards that can be shared among different sites, for example, the Arden Syntax for medical logic modules [69], but local clinicians must still review the logic in shared rules to assure that they are appropriate for the local situation. Using in-house clinicians to determine the rules in the CDSS can assure its applicability to the local environment, but that means extensive development and testing must be done locally to assure the CDSS operates appropriately, and often a considerable amount of physician time is needed. Without adequate involvement by clinicians, there is a risk that the CDSS may include rules that are inappropriate for the local situation or, if there are no built-in rules, that the CDSS may have only limited functionality. On the other hand, local development of the logic behind the rules may also mean that caution should be exercised if the rules are used at different sites. The important thing is for the user to learn at the outset what roles the vendor and the client will have to play in the development and maintenance of the system. Although many systems have decision support capabilities, the effort involved in customizing the CDSS for the local site may be considerable, and the result may be that CDSS capabilities are underutilized.
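One way to picture this separation of shared rule logic from site-specific configuration, loosely in the spirit of sharable medical logic modules, is sketched below. The local codes, reference range, and vocabulary mapping are hypothetical and would need review by local clinicians before use.

```python
# Site-specific configuration, maintained locally.
SITE_CONFIG = {
    # Local vocabulary mapping: local lab code -> concept used by the shared rule.
    "vocabulary": {"K_SERUM_LOCAL": "serum_potassium"},
    # Local reference ranges; placeholder values requiring local clinical review.
    "reference_ranges": {"serum_potassium": (3.5, 5.2)},
}

# Shared rule logic, potentially distributed across sites.
def abnormal_result_rule(local_code, value, config=SITE_CONFIG):
    """Alert when a mapped laboratory result falls outside the local range."""
    concept = config["vocabulary"].get(local_code)
    if concept is None:
        return None  # unmapped local code: the rule cannot fire safely
    low, high = config["reference_ranges"][concept]
    if value < low or value > high:
        return f"{concept} = {value} is outside the local range {low}-{high}"
    return None

print(abnormal_result_rule("K_SERUM_LOCAL", 6.1))
```

The point of the design is that the logic can be shared while the vocabulary mapping and thresholds remain a local responsibility, which is exactly where much of the implementation effort lies.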

1.5.4 Assuring That Users Are Properly Trained

Just as the vendor should inform the client how much work is needed to get the CDSS operational, the vendor should also inform the client how much technical support and/or clinician training is needed for physicians to use the system appropriately and/or understand the system’s recommendations. As CDSS for genomic medicine (see Chap. 12) become available, this new area may require even more training, since users may be unfamiliar with the medical content as well as the CDSS. It is not known whether the users of some CDSS need special clinical expertise, in addition to training on the mechanics of using the CDSS, to use them properly. For instance, systems that base their recommendations on what the user enters directly, or on what was entered into the medical record by clinicians, have been shown to reach faulty conclusions or make inappropriate recommendations if the data on which the CDSS bases its recommendations are incomplete or inaccurate [70]. Also, part of the reason for integrating CDSS with physician order entry is the assumption that the physician has the expertise to understand, react to, and determine whether to override the CDSS recommendation. Diagnostic systems, for instance, may make an appropriate diagnostic suggestion that the user fails to recognize [36, 71, 72]. Thus, vendors of CDSS need to be clear about what expertise is assumed in using the system, and those who implement the systems need to assure that only appropriate users are allowed to respond to the CDSS advice.

As these systems mature and are more regularly integrated into the healthcare environment, another possible concern about user expertise arises: will users lose their ability to determine when it is appropriate to override the CDSS? This “de-skilling” concern is similar to that reported when calculators became commonplace in elementary and secondary education, and children who made errors in using the calculator could not tell that the answers were obviously wrong. Galletta et al. report that when a computerized spell-checker program provided incorrect advice, their research subjects made more errors than they did without the spell-checker [73]. Similar results were found in a study of decision support programs that provide diagnostic interpretations of electrocardiograms [74]. The solution to the problem is not to remove the technology, but to remain alert to both the positive and negative potential impacts on clinician decision making.

1.5.5 Monitoring Proper Utilization of the Installed Clinical Decision Support Systems

Simply having a CDSS installed and working does not guarantee that it will be used. Systems that are available for users if they need them, such as online guidelines or protocols, may not be used if the user has to choose to consult the system, and especially if the user has to enter additional data into the system. Automated alerting or reminder systems that prompt the user can address the problem of the user not recognizing the need for the system, but another set of problems arises with these more automated systems: they must be calibrated to alert the user often enough to prevent serious errors, but not so frequently that the alerts are eventually ignored. This means that testing the system with its users, and monitoring its use, is essential for the CDSS to operate effectively in practice as well as in theory.
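A minimal sketch of the kind of monitoring this implies is shown below: tallying how often each alert type fires and how often it is overridden can flag alerts that are candidates for retuning or retirement. The log format and alert names are invented for illustration.

```python
from collections import Counter

# Hypothetical post-go-live alert log.
alert_log = [
    {"alert": "drug-drug interaction", "overridden": True},
    {"alert": "drug-drug interaction", "overridden": True},
    {"alert": "duplicate lab order",   "overridden": False},
    {"alert": "drug-drug interaction", "overridden": True},
]

fired = Counter(entry["alert"] for entry in alert_log)
overridden = Counter(entry["alert"] for entry in alert_log if entry["overridden"])

# Alerts with very high override rates are candidates for recalibration.
for alert_type, n_fired in fired.items():
    rate = overridden[alert_type] / n_fired
    print(f"{alert_type}: fired {n_fired}x, override rate {rate:.0%}")
```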

1.5.6 Assuring the Knowledge Base Is Monitored and Maintained

Once the CDSS is operational at the client site, a very important issue involves the responsibility for updating the knowledge base in a timely manner. New diseases are discovered, new medications come on the market, and issues like the threat of bioterrorist actions prompt a need for new information to be added to the CDSS. Does the vendor have an obligation to provide regular knowledge updates? Such maintenance can be an expensive proposition given both rapidly changing knowledge and systems with complex rule sets. Who is at fault if the end user makes a decision based on outdated knowledge, or, conversely, if updating one set of rules inadvertently affects others, causing them to function improperly? Such questions were raised over 30 years ago [75], but because CDSS are still not in widespread use, the legal issues have not really been tested or clarified.

The Food and Drug Administration (FDA) is charged with device regulation and has recently begun to reevaluate its previous policy on software regulation. Until recently, many CDSS have been exempt from FDA device regulation because they required “competent human intervention” between the CDSS’ advice and anything being done to the patient [76]. In 2014, the FDA, ONC, and the Federal Communications Commission (FCC), in the FDASIA Health IT Report, adopted a risk-based framework to clarify which types of software require more extensive oversight [77]. Even if the rules change and CDSS are required to pass a pre-market approval process, monitoring would need to be ongoing to ensure that the knowledge does not get out of date and that what functioned well in development still functions properly at the client site. For this reason, local software review committees, which would have the responsibility to monitor local software installations for problems, obsolete knowledge, and harm resulting from use, have been advocated [78].

1.6 Conclusion

There is now growing interest in the use of CDSS, and more vendors of information systems are incorporating them. As skepticism about the usefulness of computers for clinical practice decreases, the wariness that many clinicians currently exhibit about accepting CDSS advice is likely to decrease as well. As research has shown, if CDSS are available and convenient, and if they provide what appears to be good information, clinicians are likely to heed them. The remaining chapters in this book explore the issues raised here in more depth. Underlying all of them is the perspective that, as CDSS become widespread, we must continue to remember that the role of the computer should be to enhance and support the human who is ultimately responsible for the clinical decisions.