Introduction

Since the publication of the Institute of Medicine (IOM) report “Crossing the Quality Chasm,” there have been extensive efforts to establish, monitor, and reward quality care [1]. The IOM has defined quality as the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge [2]. Poor quality can involve overuse of care, underuse of care, or wrong care. Good quality means providing patients with appropriate services in a competent manner, with good communication, shared decision-making, and cultural sensitivity. The stakeholders in health care are the payers, the patients, and the health care providers, and achieving good quality health care requires involvement from all of these parties. To measure the quality of health care, quality measures or metrics must be established.

Measuring quality of cancer care is important for several reasons. First, patients, payers, and providers use the results to make informed decisions about treatment. For example, a patient may choose hospital A over hospital B because of differences in quality measure “grades” between the two hospitals. Similarly, a provider may recommend treatment A over treatment B to a patient because of trade-offs between quality of care and outcomes. Second, measuring quality of cancer care can help improve patient care. For example, if a health care system reports that only 50% of its cancer patients have adequate pathology reporting, it can evaluate the processes of care that are preventing better pathologic reporting; implementation of a standardized pathologic synoptic report may assist in achieving a higher rate of adherence. Third, measuring quality of cancer care is important for policy decisions. For example, if we were to find that a preoperative test did not result in any improvement in outcome, then we might reconsider the need for that particular test.

Three general dimensions of care are studied in assessing quality of care, based on Donabedian’s framework for quality assessment [3]: structure, process, and outcomes. Structure refers to health system characteristics, such as being a safety net hospital or an academic institution. Process refers to what the health care provider does, and outcome refers to what happens to the patient. For assessing quality of cancer care, it is most important to measure processes of care rather than patient outcomes, because a patient’s eventual outcome depends on a wide variety of factors, only some of which can be measured.

Processes of Care

Assessing processes of care is a vital method of measuring quality of care. The National Quality Forum (NQF) was established in 1999 to serve as a clearinghouse and expert panel for disseminating quality measures. Currently, 55 measures endorsed by the NQF pertain to cancer [4••]; none is specific to head and neck cancer. An example of a cancer performance measure, for colon cancer, is that at least 12 regional lymph nodes be removed and pathologically examined after colon cancer resection. Another example is the performance measure ensuring that women who undergo lumpectomy for breast cancer subsequently receive radiation therapy.

Establishing quality measures or performance measures should be a thorough and exhaustive process that examines the level of evidence of the clinical research (Table 1). The first step in establishing a quality measure is to perform an exhaustive literature review to determine what level of evidence exists for a process of care. An expert multidisciplinary panel is often convened to review the literature and add commentary. Once the quality measure is written, the performance of physicians and/or the facility is evaluated by adherence to these quality metrics. The best process measures derive from research evidence that a particular practice results in improved outcomes. For example, for patients with extracapsular extension of cancer in their cervical lymph nodes after resection of head and neck cancer, chemoradiation yields better locoregional control than radiation alone [5]. Strong consensus is also essential for the development of a quality measure. A quality measure can also be written to allow for patient preferences. In the example of postoperative chemoradiation, a patient may choose not to undergo treatment; thus, the quality measure may specify that the treatment was offered or recommended rather than that it was actually performed.

Table 1 Levels of evidence according to the US Preventive Services Task Force, Department of Health and Human Services, 1996

Outcomes

Assessing outcomes of care is also an important aspect of measuring quality of care. The IOM describes three general categories of outcomes: clinical status, functional status, and patient satisfaction. Clinical status is the biologic outcome of disease; for example, 5-year survival after cancer diagnosis is a clinical outcome. Other clinical outcomes are the 30-day readmission rate, postoperative wound infections, and the 30-day mortality rate. Functional status measures how the disease affects the patient’s ability to engage in physical, emotional, and cognitive activities. The Karnofsky performance status has been used to evaluate cancer patients’ functional status since 1949 [6] and has been shown to predict survival. Although it measures only physical performance, it has been demonstrated to correlate significantly with quality of life. Patient satisfaction generally reflects patients’ feelings about the care they received. Adherence to treatment regimens is associated with patient satisfaction: patients who are more satisfied are more likely to complete and follow through with treatment regimens [7–9]. However, no correlation between patient satisfaction and the quality of processes of care has been demonstrated [10–12]; thus, patient satisfaction alone is not an ideal measure of quality of care. Good outcomes measurement must include adjustment for factors that are beyond the health system’s control (eg, age, socioeconomic status, insurance status, race, comorbidities). Measuring outcomes as a way of measuring quality of care is valid only when the outcome being measured results directly from a process of care.
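The logic of such risk adjustment can be illustrated with an observed-to-expected (O/E) ratio, one common way of accounting for case mix before comparing an outcome such as 30-day mortality across institutions. The sketch below is purely illustrative: the patient records are invented, and the predicted risks are assumed to come from some hypothetical, previously validated case-mix model rather than from any source cited in this article.

```python
# Illustrative sketch only: observed-to-expected (O/E) ratio for a risk-adjusted
# outcome such as 30-day mortality. All numbers are hypothetical; in practice the
# expected risks would come from a validated model fitted on age, comorbidities,
# stage, and similar factors beyond the health system's control.

patients = [
    # (observed_death_within_30_days, predicted_risk_from_case_mix_model)
    (0, 0.02),
    (1, 0.15),
    (0, 0.05),
    (0, 0.08),
    (1, 0.30),
]

observed = sum(died for died, _ in patients)   # events that actually occurred
expected = sum(risk for _, risk in patients)   # events predicted by the risk model

oe_ratio = observed / expected
print(f"Observed: {observed}, expected: {expected:.2f}, O/E ratio: {oe_ratio:.2f}")
# An O/E ratio near 1.0 suggests outcomes in line with the case mix;
# values well above 1.0 suggest worse-than-expected outcomes.
```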

Adherence

Once quality measures are established through the process described above, how is adherence to them assessed? First, administrative records can be used, although they lack clinical detail; for example, stage of cancer is not coded and thus would be missing from administrative records or claims information. Second, medical records are rich sources of clinical detail and thus can be used for measuring adherence. However, perusing medical records is labor intensive and not feasible on a national scale for evaluating patterns of care. A third source is cancer registries. These registries were established by the National Cancer Act and collect a wide variety of data elements, such as stage, first course of treatment, and survival. However, the level of detail in cancer registries is limited and thus may be insufficient for monitoring the quality of cancer care. Certain elements may be evaluated using cancer registries (eg, adequacy of lymph node removal and pathologic assessment); however, postoperative chemotherapy and/or radiation may be more difficult to assess, and the completion of recommended therapy is even more difficult to capture. A combined cancer registry and claims database, such as the Surveillance, Epidemiology and End Results (SEER)–Medicare database, may be a better source than either the administrative database or the cancer registry alone. These limitations of the data sources point to the need for a better reporting system.

Studies such as the National Initiative for Cancer Care Quality (NICCQ) demonstrate that many patients do receive appropriate care. This study was initiated by the American Society of Clinical Oncology (ASCO) and demonstrated that 82% to 87% of women with breast cancer receive guideline-concordant care [13]. The Quality Oncology Practice Initiative (QOPI) is a voluntary program developed and sponsored by ASCO to assist oncology practices in quality self-assessment. Blayney et al. [14••] implemented the QOPI at their National Cancer Institute comprehensive cancer center and found that measuring performance and sharing the results with physicians did result in a change in physician behavior.

The Physician Consortium for Performance Improvement, convened by the American Medical Association in 2000, is dedicated to “enhancing quality of care and patient safety by taking the lead in the development, testing, and maintenance of evidence-based clinical performance measures and measurement resources for physicians” [15]. It comprises representatives from over 100 medical societies, including the American Academy of Otolaryngology–Head and Neck Surgery and the American Head and Neck Society. The Consortium has published one set of oncology performance measures [16], all of which are the same as the NQF oncology measures. These measures were written by the two largest societies for medical and radiation oncology, ASCO and the American Society for Therapeutic Radiology and Oncology.

The Quality Movement in Head and Neck Surgery

Several papers have evaluated the quality of care in head and neck cancer. Dr. Randal S. Weber stated in his 2007 presidential address for the American Head and Neck Society that “we have a unique opportunity and a societal obligation to reengineer head and neck cancer care for the betterment of our patients” [17]. Dr. Weber identified quality or performance measures as one aspect of this process to improve the quality of head and neck cancer care. Two papers have reported that treatment for head and neck cancer is more likely to be concordant with recommended guidelines when it is performed at tertiary care centers [18, 19]; this variation in care is one example of poor quality of head and neck cancer care. In addition, investigators have reported that receipt of care for advanced laryngeal cancer in centers other than teaching/research hospitals is associated with a higher risk for death [20]. Treatment at low-volume facilities for early-stage laryngeal cancer is also associated with a higher risk for death [21]. With such variation, there clearly is a need for quality measures in head and neck cancer care.

The American Academy of Otolaryngology–Head and Neck Surgery has established the Guideline Development Task Force. This Task Force does not establish quality performance measures but rather writes treatment guidelines. The guidelines are useful in standardizing care and decreasing the variation in care that can lead to poor quality. The Task Force has already published treatment guidelines for cerumen impaction [22], otitis externa [23], hoarseness [24], benign paroxysmal positional vertigo [25], otitis media with effusion [26], and acute sinusitis [27].

Treatment guidelines for head and neck cancer have been developed in a multidisciplinary format by the National Comprehensive Cancer Network (NCCN) [28••]. Head and neck cancer is one of many cancers for which the NCCN website offers extensive clinical references and treatment guidelines. With guidelines already in place, the development of performance measures has been the priority for most oncologists. The NQF [4••] houses these performance measures; as noted above, 55 currently pertain to oncology.

There are no head and neck cancer–specific performance measures in the National Quality Forum database, although several general NQF-endorsed performance measures can be applied to head and neck cancer care; one example is completeness of pathology reporting. Several publications have described radiation oncology’s experience with quality assurance. One study described the quality assurance processes for a multi-institutional phase 3 trial in the United Kingdom (PARSPORT) comparing conformal radiation with intensity-modulated radiation therapy (IMRT) for head and neck cancer [29]. Standard operating procedures were established for each site, including exercises in target volume definition and treatment planning. Multidisciplinary quality assurance of the radiation target volume has also been proposed as one way of improving quality and decreasing variation [30].

The American Head and Neck Society established the Quality of Care Committee in 2007 to address this gap. Its mission is to formulate evidence-based quality of care measures for patients with head and neck neoplasia. The committee will also promote compliance with these standards as a framework for measuring quality of care in head and neck surgery. The steps for developing these quality of care measures included 1) identifying a neoplastic disease of high prevalence in head and neck surgical practice; 2) identifying common measurable treatment practices during the pretreatment, treatment, and posttreatment periods; 3) performing literature reviews to identify evidence for the measures from step 2; and 4) proposing measures by which practitioners can evaluate their treatment practices.

A multidisciplinary committee was formed and began work in the summer of 2006 by vetting disease sites. After much discussion, the committee decided to focus on oral cavity cancer as an initial undertaking. The group then divided into three working groups concentrating on pretreatment, treatment, and posttreatment measures. An exhaustive literature search for high-level evidence was performed, and quality measures were developed from this search. The committee discussed the measures that emerged from this stage of the process and agreed on two to three measures each for pretreatment, treatment, and posttreatment care. The quality measures were developed by consensus, appropriately referenced, and submitted to the Executive Council of the American Head and Neck Society, which approved the first set in December 2007 (Table 2) [31••]. A second set was developed for laryngeal cancer; these measures are similar to the oral cavity quality measures and were approved by the American Head and Neck Society’s Executive Council in October 2009.

Table 2 Quality measures approved by the American Head and Neck Society

Measuring Adherence to Head and Neck Quality Measures

Now that quality measures for the two most common head and neck cancers have been established, the next steps are dissemination of the measures and assessment of adherence to them. The oral cavity measures were published in June 2008 [31••] and announced at the Seventh International Conference on Head and Neck Cancer in July 2008. Several institutions have begun evaluating their own track record in adhering to these quality measures.

At Emory, we compared adherence to reporting of College of American Pathologists (CAP) criteria during two time periods, 2000 to 2004 and 2005 to 2009; a standardized pathologic synoptic reporting mechanism was implemented in January 2005. We found statistically significant improvement in reporting of several pathologic parameters as a result of the standard pathology reporting template. Within the study, we also evaluated the National Cancer Data Base (NCDB) as a data source for measuring adherence. The NCDB is a nationwide hospital-based cancer registry that captures 70% of all cancer diagnoses in the United States. As previously discussed in this article, cancer registries are limited in the scope and breadth of their data collection. Indeed, the only CAP pathologic feature reported in the NCDB was extracapsular extension, and it was collected only after 2004. Other features, such as perineural invasion, depth of invasion, and angiolymphatic invasion, were not available in the NCDB.
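As a rough illustration of how adherence to a single reporting element might be compared across two such periods, the sketch below applies a chi-square test to two proportions. The counts are invented for illustration only and are not the Emory study’s data; the reporting element named in the comment is likewise just an example.

```python
# Hypothetical illustration: comparing the completeness of reporting of one CAP
# element (eg, perineural invasion) before and after a synoptic template.
# The counts below are invented, not actual study data.
from scipy.stats import chi2_contingency

#                  reported, not reported
period_2000_2004 = [55, 45]   # hypothetical: 55 of 100 reports complete
period_2005_2009 = [90, 10]   # hypothetical: 90 of 100 reports complete

chi2, p_value, dof, expected = chi2_contingency([period_2000_2004, period_2005_2009])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A small p value would indicate a statistically significant change in the
# reporting rate between the two periods.
```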

Investigators at M. D. Anderson also evaluated their institution’s track record in adherence to select quality measures. Again, they relied on medical records for the data abstraction. The processes used both at Emory and at M. D. Anderson were labor intensive and not feasible to perform on a national level. In addition, although both institutions had electronic medical records, neither system was equipped to generate reports of adherence to quality measures.

These studies demonstrate the need for another mechanism for measuring adherence to head and neck quality measures. One possibility would be to develop a secure internet reporting mechanism by which individuals and facilities could enter data specifically for these two sets of quality metrics. A secure server, much like those used by internet shopping or online banking sites, could be established to preserve patient confidentiality. The individual would enter clinical characteristics and de-identified patient features to establish an entry, and drop-down menus could be incorporated to answer the relevant questions. This information would then be sent electronically to a central repository, which could generate reports of adherence to the quality measures. The NCDB has a similar mechanism by which Commission on Cancer sites submit records via a secure internet link. The American College of Surgeons’ Commission on Cancer does not receive any information that would allow it to identify the patient with certainty, and it can generate reports for the institution. Several programs within the American College of Surgeons use this submission protocol, including the National Surgical Quality Improvement Program and the thoracic surgery, trauma, and bariatric surgery databases. Establishing such a case submission mechanism and a central clearinghouse for these adherence data would be essential to the implementation of these quality initiatives.
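To make the proposal concrete, the sketch below shows the kind of de-identified record such a web-based form might capture and how a central repository could compute adherence rates from the submissions. The field names, measure name, and facility identifiers are hypothetical placeholders, not an actual American Head and Neck Society, NQF, or NCDB schema.

```python
# Minimal sketch of a de-identified quality measure submission and a simple
# adherence-rate calculation at a central repository. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class QualityMeasureSubmission:
    facility_id: str      # identifies the reporting site, never the patient
    disease_site: str     # eg, "oral cavity"
    measure_name: str     # eg, "pathology report includes margin status"
    measure_met: bool     # selected from a drop-down on the submission form

def adherence_rate(submissions, measure_name):
    """Proportion of submitted cases in which the named measure was met."""
    relevant = [s for s in submissions if s.measure_name == measure_name]
    if not relevant:
        return None
    return sum(s.measure_met for s in relevant) / len(relevant)

# Example use with two hypothetical submissions from one site:
records = [
    QualityMeasureSubmission("site-A", "oral cavity", "pathology report includes margin status", True),
    QualityMeasureSubmission("site-A", "oral cavity", "pathology report includes margin status", False),
]
print(adherence_rate(records, "pathology report includes margin status"))  # 0.5
```

A central clearinghouse receiving such records could return facility-level adherence reports, much as the Commission on Cancer programs described above report back to participating institutions.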

Data systems will serve as the backbone of efforts to improve quality of health care [32]. Performance data will serve as an impetus for change. These data systems will also allow for a national survey of quality and will help identify areas of high and low quality. Despite tremendous investments at the federal and local levels, gaps persist in the availability of data needed to measure quality of care and to perform research on quality. The IOM convened an expert panel 10 years ago to introduce a data collection strategy that would align with the quality movement. The panel stated that the ideal cancer care data system would have 10 attributes, including a set of well-established quality measures, reliance on computer-based medical records, standardized reporting of cancer stage and treatment, national population–based case selection, established benchmarks for quality improvement, data systems for internal quality assurance purposes, public reporting, adaptability, and privacy protections.

Conclusions

In head and neck surgery, we are just beginning our journey in quality of care assessment. We have established quality measures for the two most common head and neck cancers, and are exploring ways to improve data collection within our specialty. The American Head and Neck Society has been instrumental in developing quality measures for oral cavity and laryngeal cancers. We anticipate that adherence to these quality measures will be imperative and will raise the quality of care for all patients. As health care reform moves forward, increased emphasis will be placed on rewarding high quality care and providing disincentives for care that is not of proven effectiveness.