Hospitals, if they wish to be sure of improvement,

Must find out what their results are

Must analyze their results, to find their strong and weak points.

Must compare their results with those of other hospitals…

These words, written by Ernest A. Codman in describing his “End Results” thesis, are just as true today as when they were written in 1917 [1].

Continuous quality improvement requires ongoing data collection and analysis. This chapter will examine the importance of high-quality data for assessing the quality of surgical care delivered, identifying areas for improvement, evaluating the effectiveness of quality improvement initiatives, and supporting ongoing monitoring.

Importance of High-Quality Data

High-quality data is the key ingredient for quality assessment – without it, any subsequent conclusions could be erroneous and potentially dangerous. “Garbage in–garbage out” is one of the first rules in assessing data, and any limitations of the data collected need to be fully understood before any further analysis can be done. How the data is collected, from what sources, and by whom it is actually collected is critical and will impact the results (Hutter, Lehman). Specific data definitions, and how objective or subjective they might be, are also important. Inaccurate data will lead to erroneous results. Certain data points can be captured from administrative datasets; however, other data points need to be recorded at the time of care (e.g., in CABG, the pump run time) or require interpretation of clinical data by a clinician or trained data collector to appropriately assess key, clinically rich variables.
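As a minimal, hypothetical illustration of the “garbage in–garbage out” rule, the sketch below screens case records against explicit data definitions before any analysis is attempted. The field names, allowed ranges, and example records are illustrative assumptions, not definitions from ACS–NSQIP, STS, or any other registry.

    # Hypothetical data-quality screen: check each case record against
    # explicit data definitions before it enters any analysis.
    def validate_record(record):
        """Return a list of data-quality problems found in one case record."""
        problems = []
        # Age must be present and physiologically plausible (assumed range).
        if record.get("age") is None or not (0 <= record["age"] <= 120):
            problems.append("age missing or out of range")
        # Pump run time, when recorded at the time of care, cannot be negative.
        if record.get("pump_time_min") is not None and record["pump_time_min"] < 0:
            problems.append("negative pump run time")
        # 30-day mortality must be coded per the (assumed) definition: 0 or 1.
        if record.get("mortality_30d") not in (0, 1):
            problems.append("30-day mortality not coded 0/1")
        return problems

    cases = [
        {"age": 63, "pump_time_min": 95, "mortality_30d": 0},
        {"age": 230, "pump_time_min": None, "mortality_30d": 1},  # implausible age
    ]
    for i, case in enumerate(cases):
        print(i, validate_record(case) or "clean")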

Need for Rigorous Statistical Analysis and Responsible Reporting

High-quality data alone is not sufficient. The data must be analyzed thoughtfully and interpreted appropriately in order to reach responsible conclusions upon which we can judge quality. Identifying significance where none exists, or failing to identify a difference that does exist, can be equally harmful. For example, closing a hospital or a hospital service based on perceived poor quality (where quality of care is actually good or acceptable) has a significant impact on the patients who no longer have access to care, as well as on the caregivers. Keeping open a hospital that does have quality deficiencies can also cause harm to patients.
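As a minimal sketch of this kind of statistical caution, the example below compares raw mortality counts at two hypothetical hospitals with a Fisher exact test. The counts are invented, and an unadjusted comparison like this would still need risk adjustment and consideration of case mix before any conclusion about quality could responsibly be drawn.

    # Unadjusted comparison of mortality at two hypothetical hospitals.
    # A "significant" p-value here may reflect case mix rather than quality
    # (risk of a false alarm), while small samples may miss a real difference
    # (risk of a missed signal) - neither should drive decisions alone.
    from scipy.stats import fisher_exact

    hospital_a = (12, 488)   # (deaths, survivors): 2.4% observed mortality
    hospital_b = (20, 480)   # (deaths, survivors): 4.0% observed mortality

    table = [list(hospital_a), list(hospital_b)]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")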

CQI (Continuous Quality Improvement) and P-D-C-A (Plan-Do-Check-Act)

Highly reliable organizations in any industry continuously monitor data to assure safety and excellence. We as surgeons and healthcare providers need to do the same. Quality control in many of today’s high-functioning companies is based on P-D-C-A, otherwise known as the “Plan-Do-Check-Act” cycle or the Deming cycle. Central to this approach is the ability to measure new and existing processes and compare results against the expected results to ascertain any differences. It is an iterative process, and it creates an ongoing cycle to improve the quality of care. In Six Sigma programs, the analogous cycle is “Define, Measure, Analyze, Improve, Control” (DMAIC). Regardless of the names, the core concept is the ability to accurately measure outcomes and compare results from one process to another.
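One hedged way to illustrate the “Check” step of a P-D-C-A cycle is sketched below: each period’s observed complication rate is compared against an expected (baseline) rate using approximate 3-sigma control limits, as on a p-chart. The rates, periods, and thresholds are illustrative assumptions, not methods from this chapter.

    # "Check" step of a P-D-C-A cycle: compare observed results against the
    # expected rate; an out-of-control period triggers the "Act" step.
    import math

    def check_against_expected(observed_events, n_cases, expected_rate):
        """Return (observed_rate, flag); flag is True if the observed rate falls
        outside approximate 3-sigma control limits around the expected rate."""
        observed_rate = observed_events / n_cases
        sigma = math.sqrt(expected_rate * (1 - expected_rate) / n_cases)
        lower, upper = expected_rate - 3 * sigma, expected_rate + 3 * sigma
        return observed_rate, not (lower <= observed_rate <= upper)

    # Four hypothetical quarters checked against a 5% expected complication rate.
    quarterly_results = [(6, 100), (5, 120), (14, 110), (4, 105)]
    for quarter, (events, cases) in enumerate(quarterly_results, start=1):
        rate, out_of_control = check_against_expected(events, cases, expected_rate=0.05)
        status = "investigate (Act)" if out_of_control else "in control"
        print(f"Q{quarter}: observed {rate:.1%} -> {status}")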

Donabedian Principle

Avedis Donabedian described the principles most commonly used today to assess the quality of healthcare. He provided a framework for assessing quality by focusing on Structure, Process, and Outcomes.

Structure refers to the setting where care takes place and includes not only the bricks and mortar – the physical location and resources – but also the experience of the staff and the coordination of their care. Hospital and surgeon volume have also become markers for many of these structural factors. Accreditation programs such as the JCAHO, as well as the Leapfrog Group, rely heavily on such easily captured metrics.

Process measures capture the care that patients actually receive. Examples include whether patients are prescribed a beta-blocker after an MI, the measurement of hemoglobin A1C in diabetics, and, in surgery, adherence to the Surgical Care Improvement Project (SCIP) measures, such as whether or not preoperative antibiotics were given. Although these processes are measurable, the direct link between the process and the outcome is not always clear. Furthermore, few processes that lead to high-quality care have been described.

Outcomes include the “end results” that impact a patient and are most commonly reported for surgical procedures. Operative mortality, complication rates, readmission rates, length of stay, functional status, and patient experience are some of the variables considered outcomes.

Critical to assessing the quality of surgical care is choosing the right measure to focus on – Structure, Process, or Outcome. John Birkmeyer and colleagues have described a framework for choosing the appropriate metric based on the volume of the procedure and its inherent risk (Fig. 12.1). For high-volume procedures with high inherent risk, such as CABG, assessing an outcome like mortality is appropriate. For high-volume procedures with low inherent risk, like inguinal hernia repair, perhaps process measures or patient-centered outcomes should be measured. For low-volume procedures with high risk, like esophagectomy, perhaps a structural metric like hospital volume is most appropriate to assess.

Fig. 12.1. Recommendations for when to focus on structure, process, or outcomes.
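As a rough illustration of the selection logic summarized in Fig. 12.1, the sketch below maps a procedure’s volume and inherent risk to a suggested measurement focus. The low-volume, low-risk branch and the exact wording of the recommendations are assumptions made for illustration, not values taken from the figure.

    # Illustrative mapping of (volume, risk) to a measurement focus, in the
    # spirit of the Birkmeyer framework described above.
    def recommended_measure(high_volume: bool, high_risk: bool) -> str:
        """Suggest what to measure given procedure volume and inherent risk."""
        if high_volume and high_risk:
            return "outcome measures (e.g., risk-adjusted mortality, as for CABG)"
        if high_volume and not high_risk:
            return "process measures or patient-centered outcomes (e.g., inguinal hernia repair)"
        if not high_volume and high_risk:
            return "structural measures (e.g., hospital volume, as for esophagectomy)"
        # Low-volume, low-risk branch: not specified in the text; assumed here.
        return "process measures or composite measures"

    print(recommended_measure(high_volume=False, high_risk=True))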

What Is Quality?

Although the Donabedian principle is useful in determining how to measure quality, it does not by itself describe what “quality” really is. I propose a working definition for the quality of surgical care that takes into account the many aspects of surgical decision making and of the ultimate care of the surgical patient that need to be considered and assessed (Hutter):

Quality of surgical care means:

  • the right patients,

  • getting the right operation,

  • in the right setting,

  • while minimizing complications, and

  • maximizing clinical effectiveness.

The right patients addresses questions about access to care, as well as appropriateness of care, including medical versus surgical treatment.

Getting the right operation addresses questions of procedure comparisons (procedure A versus procedure B), which is where most surgical outcomes research has historically been focused.

In the right setting has been a more recent focus, a direct result of the outcomes research movement, and touches on the issues of hospital volume, surgeon volume, surgeon training, specialization, regionalization, systems, processes, multidisciplinary approaches, and accreditation programs.

While minimizing complications looks at the morbidity and mortality of the procedures. Many think that morbidity and mortality are currently well characterized, but in reality we have little standardization of data definitions or of the way that data are captured, infrequent or ineffective risk-adjustment, and data that are not universally captured, making comparisons between institutions difficult, if not impossible.

Maximizing effectiveness focuses on disease-free survival, recurrence rates, functional status, reduction in comorbidities, and patient satisfaction. Patient experience, which includes quality of life and satisfaction with the process of receiving care, is unfortunately rarely taken into consideration. Assessing value, which entails accurate assessments of cost as well as quality, is critical. Comparative effectiveness between surgical procedures and their alternatives, as well as against the opportunity costs of alternative uses, should determine how our healthcare dollars are best used. Such data are not currently available.

Data Collection Systems

Perhaps one of the greatest accomplishments of the outcomes research movement is the increased recognition of our inability, to date, to define “quality.” One of the greatest benefits of this movement has been the advance in statistical sophistication and the rigorous standards of today’s research studies and publications. Another benefit has been the development of multi-institutional, prospective, risk-adjusted data collection systems based on standardized definitions. These systems were developed out of the need to define quality of care, and as a result of the inherent limitations of administrative and claims data. Outcomes reporting systems were initially developed in the field of cardiac surgery and are now moving into other fields, with programs developed by the Society of Thoracic Surgeons and by the Veterans Affairs hospitals with the National Surgical Quality Improvement Program (NSQIP). The American College of Surgeons (ACS) has developed the ACS–NSQIP as its platform for quality and safety. National data collection programs have also been developed for cancer care, for trauma, and for accreditation programs in bariatric surgery. These reporting systems are now giving us a more objective look at some of the characteristics of “quality.” Public reporting of the quality of surgical care is becoming more commonplace – the STS is now reporting hospital results for CABG to the public in Consumer Reports (Ferris).

ACS–NSQIP

The ACS–NSQIP is a national, validated, risk-adjusted data collection program based on standardized definitions, with data collected by audited, trained data reviewers. Thirty-day mortality and complication rates following surgical operations are assessed. Real-time, procedure-specific, online reports are available, based on nationally benchmarked data. Multiple risk-adjusted reports are produced twice a year for morbidity and mortality, as well as for procedure- and complication-specific models. The program was initially started in the Veterans Affairs (VA) system and, following an AHRQ-funded feasibility study, was expanded to private-sector hospitals as the ACS–NSQIP (Khuri). The program is expanding to include “Essentials,” a streamlined data collection option that decreases the number of variables, and thereby the cost and burden of data collection, as well as “Procedure Specific,” which will allow increased sampling of high-risk procedures and will include procedure-specific risk-adjustment and outcome variables (Birkmeyer blueprint).
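As an illustration of the general idea behind risk-adjusted benchmarking in programs of this kind (not the ACS–NSQIP’s actual models), the sketch below computes an observed-to-expected (O/E) ratio from hypothetical model-predicted risks: an O/E above 1 suggests worse-than-expected results for that case mix, and below 1 better.

    # Illustrative O/E calculation: compare the number of observed events with
    # the number a risk model expected, given each case's predicted risk.
    def observed_to_expected(observed_events, predicted_probabilities):
        """O/E ratio; >1 suggests worse than expected, <1 better than expected."""
        expected = sum(predicted_probabilities)
        return observed_events / expected if expected > 0 else float("nan")

    # Hypothetical hospital: 4 observed deaths among cases whose model-predicted
    # mortality risks sum to 5.0 expected deaths, giving O/E = 0.80.
    predicted_risks = [0.01, 0.02, 0.05, 0.10] * 25 + [0.125] * 4
    print(f"O/E = {observed_to_expected(4, predicted_risks):.2f}")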

A bariatric surgery data collection program has been developed for the American College of Surgeons – Bariatric Surgery Center Network (ACS–BSCN); it includes not only bariatric surgery–specific variables but also tracks patients beyond 30 days, at 6 months, 1 year, and yearly thereafter. Data is collected by trained data collectors (lessening the costs associated with requiring clinical nurse reviewers), and data definitions were chosen to be more objective so as to require less clinical oversight. The data assesses not only morbidity and mortality but also the clinical effectiveness of the procedures, including reduction in weight and in weight-related comorbidities such as diabetes, hypertension, hypercholesterolemia, gastroesophageal reflux disease, and obstructive sleep apnea. A similar data collection program has been developed by the American Society for Metabolic and Bariatric Surgery/Surgical Review Corporation.

The ACS–NSQIP data collection programs provide high-quality, clinically rich data, with national benchmark comparisons and risk-adjusted analyses, that can be used as the engine for any surgical quality improvement program. The Bariatric Surgery Data Collection Program demonstrates how such a program can be expanded to assess outcomes longitudinally – beyond 30 days – and to include assessment of clinical effectiveness as well as morbidity and mortality. It also demonstrates how such data can be used to drive accreditation.

Despite this progress, no current data collection program assesses all the components necessary to determine the true quality of surgical care – the right patients, getting the right operation, in the right setting, while minimizing complications and maximizing effectiveness. Data about appropriateness, the comparative effectiveness of surgical and nonsurgical treatments, the impact of regionalization or accreditation, patient experience, and of course data defining value are noticeably lacking.

Conclusion

High-quality data is the engine that drives continuous quality improvement. Good data, coupled with sound statistics and thoughtful conclusions, can lead to responsible reporting of the quality of care delivered. Such data can inform quality assurance and quality control through the iterative processes of continuous quality improvement. The Donabedian principle of structure, process, and outcomes is a useful framework for assessing quality in healthcare. To assess the true quality of care delivered, multiple domains above and beyond morbidity and mortality need to be assessed, including appropriateness, comparative effectiveness, the setting of care (such as regionalization or accreditation), patient-centered outcomes, and of course value. Though progress has been made with national data collection programs, there is much more we need to measure if we are to truly inform improvements in the quality of healthcare.