Background

Diagnostic radiology facilities are moving to a digital environment in which cassette-based computed radiography (CR) is being replaced by direct and indirect digital radiographic detector technology. The two technologies differ primarily in how they convert the incident X-ray signal into an electronic signal: direct digital technology converts incident X-rays directly into electric charge using a semiconductor and thin-film transistor (TFT) array, while indirect digital technology first converts incident X-rays into light using a scintillator, with the light then converted into electric charge through interaction with a photodiode and TFT array.

Manufacturers of the current generation of indirect digital detectors claim consistent, high-quality performance [1]. Additionally, digital systems are making use of raw image data for increasingly complex tasks related to patient dose, image processing and automated self-reporting. With systems that rely so heavily on accuracy and consistency of performance, facilities must be confident that the equipment can in fact perform as claimed.

Since the infancy of this technology it has been recognised that system performance may vary over time, even though a detector may meet all specifications at the time of acceptance. This performance variation could be due to deterioration of mechanical components, susceptibility of electronic components to environmental conditions, or normal statistical variation arising from technical limits on how reproducibly a system can perform [2].

Within Australia there are relatively few national performance standards, testing criteria or quality control recommendations for direct or indirect digital detectors used in general radiography. A testing regime has recently been released by the Royal Australian and New Zealand College of Radiologists (RANZCR) [3] as a suggested measure of site-based quality control; however, there is minimal evidence published locally or internationally to support this stance. This study was undertaken to provide openly available, peer-reviewed data establishing a baseline for digital detector performance capabilities, which can then be used to create or adapt quality control regimes.

The performance criteria assessed were based on technical parameters of general X-ray systems that are likely to vary over time, relate to patient dose or affect image quality. The criteria were selected to provide a robust and effective measure of system consistency that could be assessed with an efficient methodology, drawing on international standards and recommendations from recognised technical bodies including the IEC [4], AS/NZS [5], AAPM [6], IPEM [7], KCare [8] and AIFM [9, 10].

Method

Two models of caesium iodide (CsI) indirect digital detectors were assessed (Table 1).

Six fixed Varian PaxScan detectors were assessed. All were installed in fixed vertical bucky assemblies and were new when first tested. All tests were conducted with the vertical bucky aligned at 90 degrees to the floor, using the default daily wall QC protocol (linear image processing), with the source-to-image distance (SID) set to 150 cm, a 150 cm focussed grid in place (r15/n80), 42.5 × 42.5 cm collimation to cover the entire active detector area, tube voltage set to 81 kVp, all three automatic exposure control (AEC) chambers selected and a nominal detector dose of 2.5 µGy set. The manufacturer-supplied 0.6 mm copper filter was fitted to the collimator.

Table 1 Detector technical data [1]

Six wireless Pixium detectors were assessed. All were wireless image receptors for use in a table bucky or for ‘out of bucky’ radiography and were new when first tested. All tests were conducted with the detector docked in the landscape orientation within the table detector dock, using the default daily table QC protocol (linear image processing), with the SID set to 115 cm, a 115 cm focussed grid inserted (r12/n92), 43 × 34 cm collimation to cover the entire active detector area, tube voltage set to 70 kVp, all three AEC chambers selected and a nominal detector dose of 2.5 µGy set. A 0.6 mm copper filter was fitted at the collimator.

System dose–area product (DAP), AEC response, exposure index (EI), system transfer function and uniformity were assessed before the initial measurements were taken for each system to ensure that each system was performing to the manufacturer’s specifications and as clinically expected. Additionally, nominal detector sensitivity steps were assessed for accuracy of delivered air kerma and flat-field images were assessed for visible artefacts.

The final set of data collection for each of the detectors was preceded by a detector uniformity and EI calibration. This was used to assess the effect that these calibrations have on the performance characteristics. These calibrations require manufacturer training and are recommended by the manufacturer at a two-yearly frequency.

The following performance criteria were assessed at 12 time points over a 6 month period (Table 2).

Table 2 Performance criteria

Results

Table 3 identifies the average, standard deviation and range of results measured across the Varian and Pixium detectors. Figures 1–10 show the temporal performance characteristics of each individual detector for the performance metrics measured (DAP, tube current–time product, EI, mean pixel value (MPV) and uniformity).

Table 3 Detector performance results
Fig. 1 DAP readings for Pixium detectors (table bucky)

Fig. 2 DAP readings for Varian detectors (wall bucky)

Fig. 3 AEC response measurements for Pixium detectors (table bucky)

Fig. 4 AEC response measurements for Varian detectors (wall bucky)

Fig. 5 EI for Pixium detectors (table bucky)

Fig. 6 EI for Varian detectors (wall bucky)

Fig. 7 Mean pixel value for Pixium detectors (table bucky)

Fig. 8 Mean pixel value for Varian detectors (wall bucky)

Fig. 9 Uniformity for Pixium detectors (table bucky)

Fig. 10 Uniformity for Varian detectors (wall bucky)

Discussion

System DAP, AEC response, EI, MPV and uniformity were monitored over a 6 month period to establish consistency, trends and differences between two different models of indirect digital detector.

Temperature was measured to identify performance fluctuations that may have been due to temperature change. Anecdotal evidence suggests that this kind of performance fluctuation is a commonly observed phenomenon in facilities where temperature is not or cannot be strictly controlled. Over the study period the temperature variation was low (±0.9 °C, Table 3), as the facility operated under strict environmental monitoring and control.

DAP accuracy is established at acceptance and annually thereafter, so the displayed value was monitored in this study. There was consistency over time within systems as well as between systems, with maximum deviations from the average of 10.6 % (within an individual system) and 11.6 % (between systems) for all table and wall exposures (Figs. 1 and 2, Table 3). The variation seen between time points is expected due to variations in the delivered X-ray tube current–time product (mAs), which are most likely related to the performance of the AEC system rather than variation in X-ray tube output. The suggested control limit for DAP variation is ±15 % change from baseline with no considerable change in X-ray field size or delivered mAs.

Delivered tube current–time product was monitored as a method of assessing AEC performance. The tube current–time product was relatively consistent over time within systems, with a maximum variation of 0.1 mAs (6 %) for any individual system (Figs. 3 and 4), and between systems, with a maximum variation of 0.2 mAs (12.5 %) from the average for any system (Table 3). A suggested control limit for AEC response/tube current–time product variation over time is baseline ±0.2 mAs; this has been shown to be achievable and within typical performance for these systems.
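
As an illustration only, the two constancy checks suggested above (DAP within ±15 % of baseline and delivered mAs within baseline ±0.2 mAs) reduce to simple pass/fail comparisons. The following is a minimal sketch; the function names and example values are illustrative rather than taken from the study:

```python
def dap_within_limit(measured_dap: float, baseline_dap: float,
                     rel_limit: float = 0.15) -> bool:
    """DAP constancy check: pass if within +/-15 % of the baseline value."""
    return abs(measured_dap - baseline_dap) / baseline_dap <= rel_limit


def mas_within_limit(measured_mas: float, baseline_mas: float,
                     abs_limit: float = 0.2) -> bool:
    """AEC response check: pass if delivered mAs is within baseline +/-0.2 mAs."""
    return abs(measured_mas - baseline_mas) <= abs_limit


# Hypothetical example values (arbitrary units for DAP):
print(dap_within_limit(11.1, 10.0))  # True: 11 % deviation, within 15 %
print(mas_within_limit(1.9, 1.6))    # False: 0.3 mAs deviation, outside 0.2 mAs
```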

EI was shown to be relatively accurate for the Pixium wireless detectors, with an expected value of 250 for the nominal detector dose of 2.5 µGy (Table 3). This value was seen to drift over time, and the greatest deviation in EI (29.7 %) corresponded to the greatest deviation in detector uniformity (56.9 %), seen in detector eight (Figs. 6 and 10). It is suspected that the EI deviation is actually due to the reduction in pixel value intensity at the periphery of these images. While this would not affect the IEC/AAPM defined EI [6], this manufacturer chooses to display a ‘Clinical EI’ (CEI) as well as calculating a ‘Physical EI’ (PEI). The PEI is calculated according to the IEC/AAPM definition of EI [4, 6] but is rarely displayed on clinical images; as such, the CEI was used as the monitoring value because it is the clinically used value. Published data [11] and experience have shown the CEI to be an accurate measure of the IEC/AAPM defined EI if an adequate level of detector uniformity is achieved. It is suggested that, in accordance with IEC and AAPM recommendations [4, 6], the EI (and the CEI in this specific case) can achieve a value of expected EI ±20 %, where the expected EI is defined as detector air kerma (µGy) multiplied by 100. EI drift is noticeable in this data and is shown to be corrected following the detector calibration performed before the final set of measurements was taken: post calibration, all of the detectors cluster closer to the expected EI of 250 (Figs. 5 and 6).
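
As a worked example of this tolerance using the study’s nominal detector dose of 2.5 µGy, the expected EI and its ±20 % acceptance band are:

$$\mathrm{EI}_{\mathrm{expected}} = 100 \times \mathrm{DAK}\,(\mu\mathrm{Gy}) = 100 \times 2.5 = 250, \qquad 250 \pm 20\,\% \;\Rightarrow\; 200 \le \mathrm{EI} \le 300.$$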

EI for the Varian fixed detectors was shown to drift quickly outside the tolerance levels specified by the IEC and AAPM (expected EI ±20 %) [4, 6] for two of the six detectors, with maximum deviations of 40 % and 28 % from the nominal EI of 250 for detectors 9 and 10 respectively, while the other four remained fairly consistent around the nominal EI of 250. As with the wireless detectors, after detector calibration the EI was re-established at the expected level. The results from Figs. 5, 6, 9 and 10 suggest that this has some correlation with detector non-uniformity, as the two detectors that were unable to achieve expected EI ±20 % (detectors 9 and 10) also had coefficient of variation (COV) values higher than the other detectors (7, 8, 11 and 12). The proposed control limit of expected EI ±20 % should be achievable if digital detectors are calibrated regularly; this may need to be at intervals more frequent than the biennial manufacturer recommendation.

Mean pixel value was measured to ensure that the system transfer function was as expected, these systems having a dose-to-pixel-value transfer function of MPV ≈ (100 × DAK) + offset, where DAK is the detector air kerma in µGy and typical offset values are around 50. As such, the MPV is expected to be consistent and approximately equal to the nominal detector dose (µGy) multiplied by 100, plus the offset. MPV was shown to be relatively consistent with the nominal value of 250 + offset, with a maximum deviation of 8 % seen for detector 12 (Figs. 7 and 8). It was also shown that the MPVs did not deviate considerably from baseline values for either group of detectors, with an average variation of 4 %. It is suggested that a control limit of ±20 % variation from baseline would reasonably capture any issues with the system transfer function.
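
Using the same nominal detector dose and the typical offset of around 50 quoted above (actual offsets may vary between systems), the expected MPV works out as:

$$\mathrm{MPV} \approx (100 \times \mathrm{DAK}) + \mathrm{offset} \approx (100 \times 2.5) + 50 = 300.$$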

Detector uniformity was found to be relatively consistent for both groups of detectors, with a trend of slightly increasing COV over time for all detectors. The wireless detectors had a higher average COV than the Varian fixed detectors (8 % compared to 3 %, Figs. 9 and 10). An interesting trend in the Varian fixed detectors was mentioned above, where there seemed to be two distinct groupings of COV values; this separation remained after detector calibration. A control limit of COV ≤10 % is suggested for both detector types. This recommendation is based on guidance from IPEM [7] and KCare [8] and has been verified by experience, where flat-field images with a COV >10 % are visibly non-uniform, typically as an intensity gradient or shading.
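
For reference, the uniformity figure used here is simply the standard deviation of a flat-field image divided by its mean pixel value. A minimal sketch of that calculation, assuming the image is available as a NumPy array (the 10 % edge margin used to exclude collimated borders is an illustrative choice, not the study’s exact region definition):

```python
import numpy as np


def flat_field_cov(image: np.ndarray, edge_margin: float = 0.1) -> float:
    """Return COV = SD / MPV over the central region of a flat-field image.

    edge_margin trims a fraction of the image on each side to exclude
    collimated or penumbral borders (illustrative choice).
    """
    rows, cols = image.shape
    r, c = int(rows * edge_margin), int(cols * edge_margin)
    central = image[r:rows - r, c:cols - c]
    return float(np.std(central) / np.mean(central))


# Apply the suggested control limit of COV <= 10 %:
# passed = flat_field_cov(flat_field_image) <= 0.10
```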

Limitations of the current investigation include the scope of technology assessed and the measurement methodology. The two models of detector assessed have been utilised by a number of digital radiography manufacturers; however, they are not representative of all indirect digital detector technologies, so the results and conclusions of this investigation may not be broadly applicable to all digital detectors. All data (except temperature) were collected at the imaging workstation to allow for an efficient and practical methodology that could be transitioned into a Digital Detector Quality Assurance (DDQA) program; however, this limits the analysis that can be conducted. Of particular interest is the uniformity calculation, whereby the region of interest (ROI) utility of the system was limited to specifically sized regions (50 × 50 or 20 × 20 mm) with the MPV of the region as the only output. Uniformity is more traditionally measured as the COV within a single ROI covering the majority of the useful field of view, such that SD/MPV uses the standard deviation and MPV of essentially the entire image. The available measurement and display utilities may vary between imaging systems, making it difficult to adapt the presented methodology into a DDQA program for different manufacturers.
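
To make the distinction concrete, the workstation-constrained approach amounts to computing a COV across the mean pixel values of several small ROIs, rather than across every pixel of the image. A sketch of that surrogate measure (the function name and example ROI means are hypothetical):

```python
import numpy as np


def cov_from_roi_means(roi_mpvs: list[float]) -> float:
    """COV estimated from the MPVs of several small ROIs, as reported by a
    workstation ROI tool that outputs only the regional mean.

    Because this uses the spread of ROI means rather than the pixel-level
    SD, it will generally underestimate the full-image COV.
    """
    mpvs = np.asarray(roi_mpvs, dtype=float)
    return float(np.std(mpvs) / np.mean(mpvs))


# e.g. five hypothetical ROI means from the centre and four quadrants
# of a flat-field image:
print(cov_from_roi_means([251.0, 244.0, 259.0, 247.0, 255.0]))
```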

Recommendations and conclusions

The performance of the digital detectors, as measured in this study, has been used to establish the proposed control limits in Table 4. These suggested control limits are based on a single exposure per detector for twelve digital detectors at twelve time points over a 6 month period, and are compared to other published recommendations to identify variations between them. While the suggested control limits were derived from a small sample size over a short period of time, they show reasonable agreement with published recommendations. It is expected that generalised recommendations would differ slightly, as they account for the typical performance of an entire technology, whereas the suggested control limits were derived from nominally identical makes/models of detector. Variances also arise from differences in testing methodology and in the criteria chosen to set limits on (e.g. variance from expected EI versus variance from baseline EI).

Table 4 Proposed control limits

DAP, tube current–time product and mean pixel value were shown to be consistent over time, while uniformity and EI consistency were shown to benefit from detector uniformity calibrations. Because system operators aim for a specified EI range, incorrectly displayed EI values can lead to image repeats and/or manual system adjustments by users, resulting in under- or overexposed images that degrade image quality and clinician confidence in reporting, and potentially increase patient radiation dose. Uniformity variation, in addition to affecting the CEI as discussed above, also becomes visually evident above the 10 % COV range, typically in the form of shading across the image. Non-uniformities like this can also appear as shadowed regions around the periphery of the image, which could potentially affect diagnosis.

Not all systems will report a CEI and, given its method of calculation, the CEI can vary significantly due to minor non-uniformities in an image. As such, for constancy testing, consideration could be given to temporarily activating the PEI rather than using CEI values.

These control limits are recommended and should be achievable with modern indirect digital detector technology undergoing appropriate preventative maintenance, including detector calibrations ideally at a 6 monthly interval and no longer than a 12 monthly interval.