Introduction

Digital radiography systems are in use throughout the medical imaging community and now represent the standard of care at many hospitals and imaging centers. To date, however, little has been reported in the technical literature on the quality performance, as measured in terms of reject rates, associated with the clinical use of these systems. The term reject refers to radiographs of patients that are judged by the acquiring technologist to be clinically unacceptable and must be repeated. Nonpatient captures, such as images that are used for quality control (QC) purposes, are also categorized as rejects.

The data required to calculate reject rates for digital systems have historically been difficult to obtain.1 This problem has been further compounded by the lack of the software infrastructure necessary to centrally compile data for radiology departments that have multiple digital-capture devices.2 Quality assurance (QA) tools such as digital dashboards and device clustering software platforms are now available from some manufacturers (Carestream Health at http://www.carestreamhealth.com/). These software tools facilitate access to the objective data necessary to analyze and report on reject statistics and digital radiography equipment-utilization performance across an entire institution.

We describe the methodology used to compile a comprehensive database consisting of more than 288,000 computed radiography (CR) patient image records from two hospitals having all-digital radiology departments, and we report on the results of the reject analysis performed on that database.

Materials and Methods

A reject-tracking tool was activated on 16 Kodak DirectView CR Systems (Rochester, NY, USA) at a university hospital (UH) and on 4 Kodak DirectView CR Systems at a large community hospital (CH) (Carestream Health at http://www.carestreamhealth.com/). These 20 devices represented all of the CR systems within the 2 hospitals. With the reject-tracking software enabled, technologists were required to enter a reason for rejection for any rejected image before the CR system would allow another image to be scanned. This ensured that every captured CR image, whether accepted or rejected, was accounted for in the database and in the subsequent reject analysis. Table 1 shows the reject-reason terminology that was used in the reject-tracking tool at each hospital. The reasons for rejection are configurable within the reject-tracking software, and before the start of this investigation, each site had preestablished its own list of reasons.

Table 1 Reasons for Rejection for UH and CH

A research workstation was integrated into the picture archiving and communication systems (PACS) network at each hospital for the purpose of providing a centralized remote query and retrieval mechanism for the image records stored within the CR systems. Each workstation consisted of a computer (Precision 370, DELL Computer, Round Rock, TX, USA) equipped with a 19 in. color LCD monitor (VA902b, ViewSonic, Walnut, CA, USA), a 3-MP high-resolution diagnostic display (AXIS III, National Display Systems, San Jose, CA, USA) and a 250-GB portable hard drive (WD2500B011, Western Digital, Lake Forest, CA, USA). Customized software (not commercially available) was loaded onto the research workstations, which allowed image records to be remotely and automatically downloaded from each of the CR systems. An image record was composed of image-centric information including the CR device identifier (ID), body part, view position, technologist ID, exposure information, and, if the image was rejected, the reason for rejection. The image record also contained the unprocessed image for all rejects and for many of the accepted exams. If the image record contained the unprocessed image, the diagnostic rendering state was also captured so that the image processing could be reproduced according to the hospital preferences. Protected health information was scrubbed from each record so that the records were not traceable to the patient.
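As an illustration, the sketch below shows one way the fields of such an image record could be represented. The field names are ours, chosen for clarity, and do not reflect the vendor's actual schema.

```python
# Illustrative sketch only: field names are assumptions, not the vendor's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageRecord:
    device_id: str                # CR device identifier (ID)
    body_part: str                # e.g., "Chest", "Pelvis"
    view_position: str            # e.g., "PA", "AP", "Lateral"
    technologist_id: str
    exposure_index: float         # vendor-specific exposure index (EI)
    accepted: bool                # False if the image was rejected
    reject_reason: Optional[str] = None      # populated only for rejects
    raw_image_path: Optional[str] = None     # unprocessed image, when retained
    rendering_state: Optional[dict] = None   # parameters to reproduce the diagnostic rendering
```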

Image records were collected from all four CR systems at the CH for a period of 435 consecutive days. Image records were collected from all 16 CR systems at UH for a period of 275 consecutive days. The database was populated with both accepted and rejected records. For 6,000 of the clinically rejected records, image pairs were created that consisted of the clinically rejected image, i.e., one not suitable for diagnosis, along with one subsequently repeated image of acceptable diagnostic quality. The data from each CR system was then compiled into a common database containing more than 288,000 image records. The data collection protocol was approved by each hospital’s investigational review board.

The reject portion of the database initially included records for both clinical images and nonpatient images. Records corresponding to nonpatient images, such as phosphor plate erasures and test phantom exposures, were filtered from the reject database using a combination of computer-based image analysis and visual inspection. The filtering process reduced the initial size of the CH portion of the reject database by 38% and the UH portion of the reject database by 25%. The filtered database was then analyzed to determine the frequency distributions of accepted and rejected patient images and to compute the reject rates across different exam types. Reject rates were calculated by dividing the number of rejected images by the sum of the number of rejected and accepted images.
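The following minimal sketch shows this calculation applied per exam type, assuming the records have been loaded into the hypothetical ImageRecord structure introduced above; it is not the software actually used in the study.

```python
# Minimal sketch of the reject-rate calculation, grouped by exam type.
# Assumes the hypothetical ImageRecord structure defined earlier.
from collections import defaultdict

def reject_rates_by_exam(records):
    """Return the reject rate per (body_part, view_position) combination."""
    counts = defaultdict(lambda: [0, 0])  # [accepted, rejected]
    for rec in records:
        counts[(rec.body_part, rec.view_position)][0 if rec.accepted else 1] += 1
    return {
        exam: rejected / (accepted + rejected)
        for exam, (accepted, rejected) in counts.items()
        if accepted + rejected > 0
    }
```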

Results

A summary breakdown for each hospital of the exam-type distribution of accepted and rejected images and the corresponding reject rates is shown in Table 2. The analysis revealed that the reject rate for CR patient images across all departments and all exam types was 4.4% at UH and 4.9% at CH. The most frequently occurring exam types with reject rates of 8% or greater were found to be common to both institutions (skull/facial bones, shoulder, hip, spines, in-department chest, pelvis). The reject rates for in-department versus portable chest exams differed dramatically at both sites, with a ninefold greater reject rate for in-department chest (9%) than for portable chest (1%).

Table 2 Database Summary from UH and CH

Table 3 shows a detailed breakdown of the frequency of occurrence of rejected exams by body part and view position for CH, and Table 4 shows the equivalent breakdown for UH. In Tables 3 and 4, the rows are sorted from top to bottom by most-to-least frequently occurring body part for the combined total of accepted and rejected exams; similarly, the columns are sorted from left to right by most-to-least frequently occurring view position for the combined total. Tables 5 and 6, sorted in the same manner as Tables 3 and 4, show the distribution of reject rates for each body part and view position for CH (Table 5) and UH (Table 6).

Table 3 Body Part and View Position Distribution of Rejected Patient Exams Collected from Four CR Systems Over a 435-day Period at a Large CH
Table 4 Body Part and View Position Distribution of Rejected Patient Exams Collected from 16 CR Systems over a 275-day Period at a UH
Table 5 Body Part and View Position Distribution of Reject Rates for Patient Images Collected from Four CR Systems over a 435-day Period at a Large CH
Table 6 Body Part and View Position Distribution of Reject Rates for Patient Images Collected from 16 CR Systems over a 275-day Period at a UH

The combination of positioning errors and anatomy cutoff was the most frequently occurring reason for rejection, accounting for 45% of all rejects at CH and 56% at UH. Improper exposure (either too low or too high) was the next most frequently occurring reject reason (14% of rejects at CH and 13% at UH), followed by patient motion (11% at CH and 7% at UH). Smaller percentages of rejects were attributed to artifacts, clipped or missing markers, and/or unspecified other reasons.

Chest exams (including in-department and portable chest exams) were the single most frequently performed CR procedure at both institutions (26% at UH and 45% at CH). Whereas both institutions also have dedicated digital radiography (DR) rooms, the number of DR rooms at UH is greater, which partially explains the relatively lower overall percentage of CR chest exams there. A further influencing factor is the inclusion of five CR systems from the orthopedic clinic at UH, a high-volume facility that accounts for very few chest exams. At both institutions, approximately half of all CR chest exams were captured in-department and half were captured using portable x-ray equipment.

It should be noted that when the body part was designated as chest, it was interpreted that the image was captured within the radiology department with the patient in the erect position. We have a high level of confidence that this interpretation was accurate for all images labeled with either the posteroanterior (PA) or lateral view positions, but we suspect that it may be inaccurate for some images labeled as chest anteroposterior (AP). In the nominal workflow, the technologist specifies the body part and view position for each image before it is scanned. This is done to index the appropriate image-processing algorithm parameters and to populate the Digital Imaging and Communications in Medicine image header (http://medical.nema.org). The CR systems were all configured so that if the technologist did not specify the body part and view position, the system would default to chest AP.

A visual inspection of a large sampling of accepted chest AP images revealed that some of these had the characteristics of a portable chest and, more appropriately, should have been assigned portable chest; however, there was no practical way to retrospectively confirm the exam type. The percentage of rejected chest AP images having portable chest x-ray imaging characteristics is considerable, suggesting that the acquiring technologist may not have overridden the default body part designation before rejecting the image. Furthermore, a small percentage of rejected images labeled chest AP were visually identified as other exam types altogether (nonchest). We suspect that this may have occurred for workflow efficiency purposes, with the technologist knowing a priori that the image would be rejected based on the acquisition situation, e.g., the technologist observed that the patient moved during the exposure. The result of these situations was that a thorough visual inspection of rejected chest AP images had to be performed, and erroneous data set aside, before calculating reject-rate statistics.
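The sketch below illustrates one way such default-labeled rejects could be flagged for visual review. It assumes the hypothetical ImageRecord fields introduced earlier and is not part of the actual study workflow.

```python
# Illustrative filter: flag rejected records still carrying the default
# "chest AP" designation so they can be routed for visual review.
# Assumes the hypothetical ImageRecord fields; not the study's actual tooling.
def flag_default_chest_ap(records):
    return [
        rec for rec in records
        if not rec.accepted
        and rec.body_part.lower() == "chest"
        and rec.view_position.upper() == "AP"
    ]
```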

Discussion

Problems were encountered with the integrity and the consistency of the data that was collected from both sites. These problems resulted from a combination of factors, including limitations in commercial CR capture software and hardware infrastructure, lack of standard terminology and associated definitions for QA deficiencies, and inconsistent QA practices.

Extensive filtering of data records had to be performed to eliminate nonpatient images from the analysis, e.g., phosphor plate erasures and test phantom exposures. Test phantom exposures performed for QC procedures were generally labeled with the body part "Pattern" and the reason for rejection "Test," and were therefore easily filtered out of the reject analysis. However, no specific protocol was established for labeling CR scans performed for plate erasure, and these erasure scans flooded the reject database. A significant number of rejected images were assigned "Test" as the reason for rejection but were inappropriately labeled with the default body part of chest AP. Plate erasure images were detected and set aside from further analysis by ad hoc filtering and visual review of rejected images that had abnormally low exposure index (EI) values; the EI is a vendor-specific parameter indicative of the exposure to the CR plate.
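A minimal sketch of such an ad hoc filter is shown below. The EI threshold is an assumed placeholder, since the cutoff used in practice depends on the vendor's EI calibration, and candidates were confirmed visually before being set aside.

```python
# Ad hoc filter sketch: select rejected records whose abnormally low EI suggests
# a plate-erasure scan rather than a patient exposure.
EI_ERASURE_THRESHOLD = 500  # assumed placeholder; vendor- and calibration-dependent

def candidate_plate_erasures(rejects):
    # Candidates still require visual confirmation before exclusion.
    return [r for r in rejects if r.exposure_index < EI_ERASURE_THRESHOLD]
```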

Whereas the "spirit" of the QA deficiency terminology was similar at the two sites (Table 1), the specific language used for labeling a rejected image was configured differently in the CR system software at each site, and in a few instances it was configured differently among CR systems within the same site. There were also examples of redundant terminology. For instance, CH had choices in its list of reasons for rejection that included "High Index" and "High Exposure Index"; UH had choices that included "Low Exposure Index" and "Underexposure." Moreover, examples found through independent visual review indicated that the interpretation of the terminology was also inconsistent among technologists. For example, ambiguity was discovered in the use of "Positioning Error" and "Anatomy Clipping." Each of these terms can have a well-articulated, unambiguous definition; however, visual review of the rejected images from each site indicated that the terms were used essentially interchangeably.

An interesting observation emerged when visually characterizing pairs consisting of an image initially rejected because of patient motion and the subsequently repeated, accepted image. Two very distinct interpretations of patient motion evidently existed, because a significant percentage of each type was identified. The first type of motion reject consisted of examples in which the patient moved during the actual exposure, with the motion defect manifesting itself in the image as blur. This was clearly evident when comparing the original rejected image with the subsequently repeated and accepted image. Not surprisingly, most of the images that fit this categorization were typical long-exposure-time exams such as the lateral chest. A second, very different type of rejected image assigned "Patient Motion" as the reason for rejection manifested itself not as blur but as incorrect anatomical positioning. Again, by comparing the reject with the subsequently repeated and accepted image, it appeared that the patients had moved to a different, but stationary, position after the technologist originally positioned them. The use of motion as the reason for rejection is certainly legitimate in this situation, although this type of rejected image might alternatively, and just as legitimately, be termed a "Positioning Error."

Another interesting, highly confounding example of terminology ambiguity was observed in images assigned "Underexposure" as the reason for rejection. EI frequency distributions were generated for these images, and, somewhat surprisingly, approximately 30% of them had EI values well within the range of a normal exposure. Upon visual inspection of these cases, it was discovered that the images had been rendered too bright by the image-processing software, which is a classic characteristic of an underexposed image for a screen-film system. Further characterization of these images, coupled with brightness and contrast adjustments to the image rendering, showed that virtually all chest x-ray exams in this category had suboptimal positioning, with too much of the abdomen included within the field of view. The poor positioning caused the image-processing algorithms to render the images with a lower than desired average density, which, in turn, led the technologist to interpret the image as underexposed. Another 20% of the images rejected for underexposure were found to have normal EI values but, upon visual review, an excessive noise appearance. Further characterization revealed that the noise resulted from images captured using CR plates that had not been recently erased, contrary to the manufacturer's recommended quality control procedure for preventing stray radiation artifacts, so-called "stale plate noise."
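A sketch of this kind of EI analysis is given below. The "normal" EI range is an assumed placeholder, since the actual range is site- and vendor-dependent, and the record fields follow the hypothetical ImageRecord structure.

```python
# Sketch of the EI frequency analysis for "Underexposure" rejects.
# EI_NORMAL_RANGE is an assumed placeholder; the true range depends on the
# site's technique charts and the vendor's EI calibration.
EI_NORMAL_RANGE = (1200, 2200)

def fraction_with_normal_ei(rejects, reason="Underexposure"):
    """Fraction of rejects with the given reason whose EI falls in the normal range."""
    eis = [r.exposure_index for r in rejects if r.reject_reason == reason]
    if not eis:
        return 0.0
    lo, hi = EI_NORMAL_RANGE
    return sum(lo <= ei <= hi for ei in eis) / len(eis)
```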

Whereas rejection because of overexposure occurred infrequently, these cases are a cause for concern with regard to patient dose. The authors reviewed all of the rejected images that were assigned "Overexposure" or "High Exposure Index" as the reason for rejection. In more than 90% of the cases, these images were rendered suboptimally as a consequence of the impact of the extremely high exposure on the code-value histogram. In other words, simple brightness and contrast adjustments could likely have salvaged these images without exposing the patient, who had already received a higher-than-normal exposure, to additional radiation. The use of "Overexposure" as a reason for rejection with CR in cases where reprocessing is an option suggests that additional training is required.

A protocol issue existed with the use of "Other" as the reason for rejection. A considerable number of rejected images were assigned this reason. A comment field was provided in the reject-tracking software to accompany the use of "Other"; however, it was not a software-required entry and was most often left blank.

The order-of-magnitude difference in reject rates between in-department and portable chest exams seemed surprising at first, because the known difficulties of capturing diagnostic-quality portable chest images would logically be expected to increase the number of images that need to be repeated. Problematic aspects of portable chest imaging include infrequent use of antiscatter grids, inconsistent techniques, difficulty in positioning patients, patients unable to maintain a breath hold, and less-capable x-ray generators. These factors, taken together with diagnostic tasks requiring visualization of low-contrast features, such as tube and line tip placements and pneumothorax, should increase the probability of limited-quality images. Close observation of technologist rounds at the two hospitals, however, revealed that the dramatic difference in reject rates between in-department and portable chest x-ray exams was related to the inability of technologists to view the images at the point of capture. CR systems are often centrally located; in some cases, the CR units are located on different floors of the hospital than the intensive care unit. Technologists may expose a CR cassette and not view the image until an hour or more after capture, at which point the workflow impact of performing a repeat is prohibitive. The unfortunate consequence of this scenario is that an increased percentage of suboptimal portable chest images may be released to the PACS for interpretation.

Conclusions

Comprehensive and accurate digital radiography QA requires that a mechanism be put into place to force technologists to enter reject data into a database, e.g., the capture device software should require that this data be entered for each rejected image before another image can be scanned. The reject data must include the reason for rejection, technologist ID, patient ID, and equipment- and exposure-related information. Moreover, the software and hardware infrastructure must be in place so that all image records, both accepted and rejected, are centrally accessible and appropriately organized. Digital dashboards that centrally collect and compile image statistics are now available to accomplish this function. However, mechanisms that enable a QA technologist or medical physicist to visually inspect rejected images must also be provided.

Standardized terminology and definitions for QA deficiencies must be established, along with the associated training, to eliminate the inconsistent and sometimes inappropriate labeling of rejected images. Protocols must be established that require the comment field to be completed whenever a nonspecific reason for rejection is selected. Unless an image is tagged as a reject, systems generally do not provide a way to prevent a QC image from being delivered to the PACS. Consequently, protocols must be implemented whereby images that are rejected for preventive maintenance or QC-related reasons are properly labeled so that they are easily distinguished from patient-related rejects. One way to ensure that this occurs is to require that, for each rejected image, the technologist specify the exam type and reason for rejection, i.e., to eliminate the notion of a default exam type. This should reduce the number of erased-plate images that are mislabeled. Adopting standardized terminology and adhering to best-practice protocols will allow sites to more fully understand their QA performance and drive them toward more focused training programs.
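As one illustration of the kind of software-enforced protocol proposed here, the hypothetical sketch below accepts a reject entry only if the exam type is specified explicitly and a comment accompanies a nonspecific reason; it is not an existing vendor feature.

```python
# Hypothetical validation rule: a reject entry is committed only if the exam
# type was set explicitly (no default) and "Other" is accompanied by a comment.
def validate_reject_entry(body_part, view_position, reason, comment=""):
    if not body_part or not view_position:
        raise ValueError("Exam type (body part and view position) must be specified explicitly.")
    if not reason:
        raise ValueError("A reason for rejection is required.")
    if reason.strip().lower() == "other" and not comment.strip():
        raise ValueError("A comment is required when 'Other' is selected.")
    return True
```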

Better QC methods, including the capability to display digital radiography images at the point of capture, may significantly benefit portable chest x-ray image quality. Mobile CR and DR systems now provide this capability.

To summarize, there is an opportunity to improve the completeness and accuracy of reject analysis for digital radiography systems through the standardization of data entry protocols and improved reporting and analysis methods. Accurate reject analysis provides the basis from which to develop targeted training programs and helps to mitigate the largest source of patient repeat exposures.