1 Introduction

Nuclear cardiac imaging is a typical example of a field in which image quantitation plays an important role in data interpretation and patient diagnosis. Reproducible and reliable image quantitation relies on robust techniques and well-designed computational algorithms, which in great part depend on computer technology and various image-processing tools. However, the principal motivation for computer analysis is to evaluate an attribute of the image as a metric in an algorithmic manner, independent of observer bias or variability [1].

Tl-201 was one of the radionuclides that received initial attention in cardiac scintigraphy due to its properties analogous to those of potassium ions. It became commercially available in 1976 and was used in patients with an intermediate likelihood of coronary artery disease (CAD) and, subsequently, for risk stratification in patients with known or suspected CAD [2]. Image quantitation in myocardial perfusion imaging has passed through several stages to reach the state-of-the-art quantitation strategies seen today. Planar myocardial thallium-201 scintigraphy was the common mode of data acquisition before the advent of tomographic imaging. In that procedure, a number of planar views were acquired sequentially as stress and redistribution images, immediately (10 min), 2–4 h, and 6–24 h poststress, using mainly three planar views: anterior, 45° left anterior oblique, and 70° left anterior oblique.

Due to the planar nature of the images and the absence of correction techniques for attenuation, scatter, and resolution compensation, images were interpreted with significant loss of spatial and contrast resolution. These degrading factors, along with extracardiac tissue activities, served to reduce the overall accuracy of the test. While the images were subjectively interpreted for possible myocardial ischemia, this was limited by intra- and interobserver variability, and there was early recognition that automated techniques could reproducibly assist in image quantitation. One of the initial approaches devised to quantify planar myocardial images was based on generating linear intensity profiles of the early and late Tl-201 images, providing a temporal and spatial representation of the tracer distribution. The method simply depended on coregistering the two data sets and reducing the 2D images into horizontal count profiles [3]. Another approach used a count circumferential profile to quantify the tracer uptake based on radial lines drawn from the center of the heart toward the myocardial walls at certain angular intervals [4]. Both methods are shown in Fig. 15.1.

Fig. 15.1

Early methods for image quantitation in myocardial perfusion Tl-201 scintigraphy. (a) Quantitation based on the peak activity determined from a series of linear horizontal profiles drawn over the myocardium; (b) quantitation based on a number of circumferential radial profiles taken from a user-specified center and then constructed into a line profile

The advent of tomographic imaging and its introduction into routine nuclear medicine significantly improved image contrast by removing underlying and overlying undesired structures. This brought marked improvements in image quality, especially after approval of the Tc-99m-based myocardial tracers.

The introduction of Tc-99m-labeled compounds and the increase in the number of detector heads have also improved the statistical quality of the acquired images. Unlike Tl-201, Tc-99m-labeled tracers permit a longer time between injection and imaging. In addition, a relatively large dose can be administered, which translates into a higher count rate and improved lesion detectability. Tc-99m-based compounds also allow electrocardiograph (ECG)-gated myocardial perfusion imaging, with substantial improvement in count statistics and in distinguishing true perfusion defects from those induced by attenuation artifacts.

Further improvements to the diagnostic performance of scintigraphic myocardial perfusion imaging were achieved by adding x-ray computed tomography (CT) to single-photon emission computed tomography (SPECT) in hybrid SPECT/CT or PET/CT devices (discussed in Chap. 10). Anatomically guided perfusion interpretation has opened a new avenue for enhancing the diagnostic capability of SPECT imaging by providing more insight into the stenosed vessels and the affected myocardium.

SPECT and positron emission tomography (PET) are the two tomographic techniques that provide three-dimensional (3D) or four-dimensional (4D) information about heart function and perfusion/metabolism. Many factors have been associated with the development of nuclear cardiac imaging since its introduction, among them the availability of radiopharmaceuticals, fast computer processors, and imaging devices with good performance characteristics. These developments have progressed further in the recent past because of several features, which are summarized as follows:

  1.

    Advances in computer technology, including high-speed processors, storage media, and large-capacity memory chips, have facilitated the development of various image-processing tools, providing robust data analysis, reliable data quantitation, and better image display. Another remarkable outcome was the introduction of iterative reconstruction into the clinic, where reconstruction time had been a limiting factor due to the unavailability of powerful computer systems.

  2.

    The recognition of image-degrading factors and their impact on image quality and quantitative accuracy have motivated researchers to develop and introduce new correction strategies able to enhance the diagnostic performance of cardiac protocols.

  3.

    The practical implementation of resolution recovery in cardiac studies has been shown to significantly influence the acquisition time, the amount of injected dose, or both.

  4.

    New camera designs, with or without semiconductor technology, dedicated to cardiac imaging are a relatively new trend by which better image quality, patient convenience and comfort, as well as scanner throughput can be realized.

  5.

    The relatively recent introduction of SPECT/CT systems has improved the performance of attenuation correction and added diagnostic value to myocardial perfusion imaging by providing more insight into the anatomy of the coronary vessels, in addition to calcium scoring and other information.

  6.

    Molecular cardiac imaging has also become an interesting area of research and development and will exploit the potential diagnostic capabilities of radionuclide cardiac SPECT and PET tracers.

  7.

    This last feature has encouraged a number of research groups as well as industry to manufacture multimodality small-animal imaging devices of superb spatial resolution and adequate sensitivity that could help to identify and elucidate the molecular aspects of cardiovascular disorders.

2 Data Acquisition

SPECT myocardial perfusion imaging has been a well-established diagnostic technique in assessment of patients with suspected or known CAD. In cardiac SPECT studies, two data sets are usually acquired: stress and rest images. The former are obtained by exercising the patient using a treadmill or by a pharmacological stress agent. The radiopharmaceutical is injected at peak exercise to be an indicator of occluded vessels when the patient undergoes tomographic scanning. The rest study is performed after injection with the patient in complete resting conditions on the same or a different day. The imaging protocol differs among institutions such that rest and stress examinations can be performed on the same day using the same radionuclide, such as Tl-201 (stress/redistribution), or Tc-99m-labeled compounds (stress/rest or rest/stress). Another protocol involves performing the stress and rest studies on two different days. The other option is to inject the patient with two different tracers (rest Tl-201/stress Tc-99m); the imaging procedure is performed the same day using an appropriate energy window setting.

Data acquisition is carried out by a rotating gamma camera equipped with one, two, or three detectors encompassing a rotational arc of at least 180°. Data reconstruction is usually performed using the analytical filtered backprojection algorithm or iterative reconstruction, with a smoothing low-pass filter of appropriate cutoff frequency and order.

As mentioned, the cardiac images are subjectively interpreted based on visual assessment of tracer distribution within different myocardial segments and depiction of hypoperfusion extent and severity. Although this is the gold standard approach, it remains influenced by interobserver and intraobserver variability along with the expertise of the reading staff. To reduce this variability and standardize the assessment of tracer uptake by the various segments, a number of software programs have been developed to aid and act as a second observer in the reading session. These methods vary in their theoretical assumptions: geometric modeling of the left ventricle (LV); 2D versus 3D approaches; thresholding and segmentation; valve definition and apical sampling; degree of automation and user intervention; whether count based or geometry based; or a combination of these options.

Examples of the commercially available programs are Quantitative Perfusion and Gated SPECT (QPS/QGS, Cedars-Sinai Medical Center, Los Angeles, CA); Emory Cardiac Toolbox (ECTb, Emory University, Atlanta, GA); 4D-MSPECT, which was developed at the University of Michigan Medical Center; the Gated SPECT Cardiac Quantification (GSCQ, Yale, New Haven, CT) method; and others. These methods have been evaluated in the literature, and some have found widespread clinical acceptance among users for quantifying and displaying myocardial perfusion and functional parameters.

There are also some software tools developed to aid in image interpretation or to determine the quality of study interpretation. Some rely on artificial intelligence, such as neural networks and case-based approaches, to provide increased confidence to the reading physician. Expert systems were also developed to mimic human experts, relying on a knowledge base of heuristic rules to yield a computer-assisted patient diagnosis. In these approaches, the polar map or the reconstructed images are used as inputs for reading and quantifying the myocardial images.

3 Quantitative Methods

3.1 Quantitative Gated/Perfusion SPECT

The QGS/QPS method was introduced to sample, analyze, and quantify the myocardium using an ellipsoidal model [5, 6]. Data samples are extracted using equally spaced points in the longitudinal and latitudinal directions. Myocardial sampling is implemented by averaging the wall counts from the endocardial to the epicardial borders rather than using the maximal pixel count along the radial profile [7]. By fitting the counts along rays normal to the midmyocardial surface with asymmetric Gaussian functions, the endocardium and epicardium are estimated at a certain percentage (i.e., 65%) of the standard deviation (SD) of the Gaussian fit. The peak of the Gaussian function is used to locate the midmyocardial point. For outlining myocardial areas of poor tracer uptake, the SDs of a profile are combined with those of each of its four spatial neighboring profiles. Further refinement is then applied through the anatomical constraint of constant myocardial volume throughout the cardiac cycle [5]. This approach samples the myocardial points in a 3D ellipsoidal model through equally spaced points regardless of the heart size; therefore, homologous points can be extracted and pooled to generate normal limit values. Due to finite sampling, the collected points are scaled to represent the curvature of the myocardium from which they are extracted [5]. This approach was developed at Cedars-Sinai Medical Center as an integrated software package. An output display of the program is shown in Fig. 15.2.
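
To make the wall-detection step concrete, the following minimal sketch fits an asymmetric Gaussian to a single simulated radial count profile and places the surfaces at 65% of the fitted SDs, as described above. The profile data, ray geometry, and parameter names are illustrative assumptions, not the QGS implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(r, amp, mu, sig_in, sig_out):
    """Gaussian with different widths on the inner (r < mu) and outer sides."""
    sig = np.where(r < mu, sig_in, sig_out)
    return amp * np.exp(-0.5 * ((r - mu) / sig) ** 2)

# Simulated counts sampled along one ray normal to the midmyocardial surface
rng = np.random.default_rng(0)
r = np.linspace(0, 30, 61)                        # mm along the ray
counts = rng.poisson(asym_gaussian(r, 100, 15, 4, 5)).astype(float)

popt, _ = curve_fit(asym_gaussian, r, counts, p0=[counts.max(), 15, 4, 4])
amp, mu, sig_in, sig_out = popt

mid = mu                     # Gaussian peak locates the midmyocardial point
endo = mu - 0.65 * sig_in    # endocardium at 65% of the inner-side SD
epi = mu + 0.65 * sig_out    # epicardium at 65% of the outer-side SD
print(f"mid = {mid:.1f} mm, endo = {endo:.1f} mm, epi = {epi:.1f} mm")
```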

Fig. 15.2

Quantitative gated/perfusion SPECT method

3.2 Emory Cardiac Toolbox

The Emory Cardiac Toolbox (ECTb) method works in 3D space and uses the short-axis data set [8, 9]. The ECTb method uses Fourier analysis for wall-thickening estimation and detects a circumferential maximum count profile by applying an anatomically based model that accounts for wall thickening to generate theoretical endocardial and epicardial surfaces [8, 10]. The software package integrates myocardial perfusion and function in one application. The program is automated, with the possibility to change the short-axis radius and center. Data sampling is performed on the SPECT short-axis slices using a hybrid cylindrical-spherical coordinate system: cylindrical geometry is used to sample the middle and basal parts of the myocardium, while the myocardial apex is sampled based on spherical modeling. The center of the coordinate system is the LV long axis, and the search space is limited by the LV radius, apex, and base. The valve plane is defined by two intersecting planes: one perpendicular to the LV long axis in the lateral half of the LV and one angled plane in the septal half of the LV. The program uses eight frames per cardiac cycle in gated myocardial perfusion SPECT studies. In the case of contouring a perfusion defect, the algorithm forces the hypoperfused segment to form a smooth connection between adjacent noninfarcted portions of the wall, and because this segment is not thickening, it is pinned to its end-diastolic position [8].
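
The following sketch illustrates hybrid cylindrical-spherical maximum-count sampling on a toy short-axis volume, in the spirit of the scheme described above. The volume, LV center, radius, and sampling density are invented for illustration and do not reproduce the ECTb code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def max_count_profiles(vol, center, r_max, n_ang=60, n_rad=25):
    """vol is (z, y, x) with z = 0 at the apex; returns one maximum-count
    circumferential profile per sampled level (cylindrical, then spherical)."""
    cz, cy, cx = center
    ang = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
    rad = np.linspace(0, r_max, n_rad)
    profiles = []
    # Cylindrical part: rays cast within each mid/basal short-axis plane
    for z in range(int(cz), vol.shape[0]):
        yy = cy + rad[None, :] * np.sin(ang)[:, None]
        xx = cx + rad[None, :] * np.cos(ang)[:, None]
        zz = np.full_like(yy, z)
        rays = map_coordinates(vol, [zz, yy, xx], order=1)
        profiles.append(rays.max(axis=1))          # max count per angle
    # Spherical part: rays fanned from the center toward the apex
    for pol in np.linspace(0.15, np.pi / 2, 6, endpoint=False):
        zz = cz - rad[None, :] * np.cos(pol) + 0.0 * ang[:, None]
        yy = cy + rad[None, :] * np.sin(pol) * np.sin(ang)[:, None]
        xx = cx + rad[None, :] * np.sin(pol) * np.cos(ang)[:, None]
        rays = map_coordinates(vol, [zz, yy, xx], order=1)
        profiles.append(rays.max(axis=1))
    return np.array(profiles)

# Toy volume: a hollow box standing in for the LV wall
vol = np.zeros((32, 64, 64))
vol[6:28, 22:42, 22:42] = 1.0
vol[10:28, 26:38, 26:38] = 0.0                     # carve out the cavity
profs = max_count_profiles(vol, center=(14, 32, 32), r_max=18)
print(profs.shape)                                 # (levels, angles)
```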

3.3 4D-MSPECT

4D-MSPECT is a commercially available algorithm that was developed at the University of Michigan Medical Center [11, 12]. The algorithm works to process the data on the basis of a 2D gradient image from which the initial estimates of the ventricle are made. A series of 1D and 2D weighted splines are used to refine the endocardial and epicardial surface estimates. The 4D-MSPECT model also uses a cylindrical-spherical coordinate system for myocardial sampling. The former is used to sample the myocardium from the basal wall to the distal wall, and the latter is used to sample the apex. Weighted spline and data thresholding are used to refine surface estimates, and based on Gaussian fitting, myocardial wall position and thickness are estimated. It has the capability for manual processing when the automatic module fails to accurately delineate the myocardial boundaries. 4D-MSPECT has achieved good correlations with reference techniques in evaluating the myocardial functional parameters [13]. Unlike QGS and ECTb, 4D-MSPECT differs in defining the valve plane in the sense that it permits the mitral valve plane to move as much as 20 mm inward toward the apex during systole [14]. A snapshot of 4D-MSPECT is shown in Fig. 15.3.

Fig. 15.3

4D-MSPECT

3.4 pFAST Method

pFAST stands for Perfusion and Functional Analysis for Gated SPECT (Sapporo Medical University, Sapporo, Japan) [15, 16]. In this method, the center of gravity of each short-axis image is initially determined. A long-axis central line is then identified along all gated long-axis images. Spline curves and a threshold of 30% are used to define the LV base and the epicardial outlines for calculating the maximum circumferential profiles. The epicardial surface is defined as the outer point with 50% of peak activity, which indicates a definitely viable myocardial mass. Endocardial volume is estimated using a geometric technique. Further refinements are performed to precisely determine the endocardial points using Fourier approximations. Finally, the ejection fraction (EF) is determined from the standard EF formula.

3.5 MultiDim

MultiDim (Stanford University Medical School) is a 3D method based on calculating statistical parameters of count distribution moments from the short-axis image volume [17, 18]. The method requires some operator intervention for masking the LV and image thresholding. Masking is performed by manually fitting an ellipsoidal mask around the LV. Thresholding is performed by drawing a region of interest at the base of the LV cavity at end diastole and subtracting the mean value from each pixel [19]. Count sampling is carried out by radial profiles originating from the LV center using equally spaced longitudinal and latitudinal angles across the short-axis images. The endocardial wall is defined by the maxima of the first derivative of the squared activity profiles. Regional wall motion is derived from the phase and amplitude of the cyclic wave representing the temporal variation of the first moment of the count distribution, whereas regional thickening is derived from the phase and amplitude of changes in the second moment of the density distribution multiplied by the maximum density. The volumes are calculated from the endocardial surfaces for each time segment [18].

3.6 Gated SPECT Cardiac Quantification

Gated SPECT Cardiac Quantification (GSCQ) is another method; it is based on k-means cluster classification to separate the cardiac region from other extracardiac structures [20]. The myocardial surface boundaries are determined using hybrid count-geometric analysis for the calculation of the LV volumes and EF. The method uses thresholding and the nongated data to determine a cutoff value that serves to separate the LV volume. Further refinements and constraints are applied to remove the small remaining volumes within the image and to accurately define and obtain a clean long-axis LV binary image [21]. The long-axis images are resliced to obtain the most apical and basal slices in addition to the myocardial apex. The first apical slice is defined as the first short-axis slice containing the LV cavity, while the last basal slice is defined as the last short-axis slice containing the basal limit of the septum plus 10 mm toward the LV base. The algorithm models the apex as a semiellipsoid in 3D space [21].
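
As a toy illustration of the k-means step, the sketch below clusters voxel intensities into two groups to separate candidate cardiac voxels from background. The two-cluster setup, the simulated intensities, and the plain 1D clustering are assumptions for this sketch; GSCQ operates on reconstructed volumes with the further geometric constraints described above.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Plain k-means on a 1-D array of voxel intensities."""
    centers = np.percentile(values, np.linspace(10, 90, k))  # spread-out starts
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
background = rng.normal(20, 5, 5000)      # low-count voxels
myocardium = rng.normal(90, 15, 800)      # high-count LV wall voxels
vox = np.concatenate([background, myocardium])

labels, centers = kmeans_1d(vox)
lv_cluster = np.argmax(centers)           # brightest cluster ~ LV candidates
print(f"cluster means: {centers.round(1)}, "
      f"LV candidate voxels: {(labels == lv_cluster).sum()}")
```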

3.7 Left Ventricular Global Thickening Fraction

The left ventricular global thickening fraction (LVGTF) is a count-based method that relies on fractional myocardial thickening to derive the EF, thereby avoiding the step of calculating the LV volumes [22, 23]. It depends on detection of myocardial wall thickening during systolic contraction. The method relies heavily on the partial volume effect, which applies because the myocardial wall thickness is less than twice the spatial resolution of the imaging system. The pixel counts in end-diastolic and end-systolic images are used to quantify the myocardial thickening without edge detection or geometric measurements. The method then uses the systolic and diastolic counts, in addition to geometric assumptions, to derive a regional thickening fraction and hence to calculate the LV EF [22].

3.8 Layer of Maximum Count

The layer of maximum count (LMC) method is a different approach that uses prolate-spheroid geometry to sample the myocardium; it was developed to solve the problem of small hearts [24, 25]. In patients with small hearts, most of the currently available methods tend to underestimate the LV volumes and overestimate the EF. The midmyocardial surface is defined by the LMCs to determine the corresponding EF (i.e., EFmax). The LV EF is then calculated by performing a calibration between EFmax and the EF estimated from a reference technique, setting the intercept to zero to calculate the regression slope, which is then used to measure the EF in patients with small LVs [25–27]. The method has been evaluated in a population with small hearts versus other quantitative methods, using gated blood pool imaging and echocardiography as the reference nuclear and nonnuclear techniques, respectively. In the former comparison, the LMC method outperformed the other methods, showing moderate correlation but poor interchangeability with gated blood pool studies in patients with small LVs. In comparison to echocardiography, the same method showed a lower but still significant correlation for EF measurements in patients with normal LV size. A drawback of the method is its dependency on other accurate techniques to derive the calibration factor required to estimate the EF for small LVs.

3.9 Cardiac Function Method

The cardiac function method (CAFU; Exini Diagnostics, Sweden) is a nongeometric, model-based technique that uses an active shape algorithm [28]. Identification and delineation of the LV are based on a heart-shaped model, and through an iterative process the model is adjusted to optimize the fit with the image data. The algorithm uses 272 landmarks distributed in 17 layers from apex to base, with 16 landmarks in each layer [29]. These landmarks are also utilized to estimate myocardial wall motion and thickening. For wall motion, the normal distance from the landmark to the myocardial surface is measured at both end diastole and end systole, while thickening is calculated as a count ratio for the landmarks between the end-diastolic and end-systolic frames. The LV volume is calculated using the endocardial surface and the LV valve plane, with no constraint placed on basal wall motion [28, 29].

Most of the commercially available quantitative cardiac SPECT methods integrate myocardial perfusion and function in the same software package, and some have more quantitative and display features for multimodality and image fusion using SPECT, PET, and CT images. Quality assurance tools that allow the user to identify patient motion, artifacts, count density, gating problems, attenuation correction, LV segmentation and identification of myocardial boundaries, and other volumetric problems have also been embedded in these algorithms. Among those are raw data display, histogramming, valve plane fine-tuning, fusion controls, as well as measures of quality of gated SPECT studies. One of the display features of multimodality imaging is the possibility of aligning the CT vascular coronary tree on the 3D PET or SPECT functional data so that functional perfusion mapping can be visualized, with superimposition of coronary anatomy providing additional diagnostic information.

4 Quantification of Perfusion Abnormality

One of the earliest approaches introduced to quantify the distribution of myocardial activity in cardiac tomography was reported more than two decades ago [30]. This approach was based on mapping the 3D activity distribution within the myocardium onto a 2D polar map or “bull's-eye.” The polar map is constructed by modeling the myocardium with a cylindrical and spherical coordinate system, as mentioned earlier in this chapter. Sampling the counts from the cylindrical part is implemented by drawing radial profiles from the center of the short-axis image normal to the myocardial wall (36–60 radial profiles). The maximal pixel count is recorded for each profile and plotted versus the corresponding angle to produce count circumferential curves. The apical portion of the myocardium is sampled by vertical long-axis slices to minimize the effect of partial volume. Sampling of the apical portion of the myocardium is illustrated in Fig. 15.4 using different sampling approaches.
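
The sketch below folds a stack of circumferential profiles (apex to base) into a 2D bull's-eye array, one annulus per slice with the apex at the center. The synthetic profiles and the nearest-neighbor lookup are illustrative simplifications; clinical polar maps interpolate and normalize the counts.

```python
import numpy as np

def polar_map(profiles, size=128):
    """profiles: (n_slices, n_ang) max-count profiles, index 0 = apex."""
    n_sl, n_ang = profiles.shape
    out = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size] - size // 2
    r = np.hypot(yy, xx) / (size / 2)              # 0 at center, 1 at rim
    theta = np.mod(np.arctan2(yy, xx), 2 * np.pi)
    inside = r < 1.0
    sl_idx = np.minimum((r[inside] * n_sl).astype(int), n_sl - 1)
    ang_idx = np.minimum((theta[inside] / (2 * np.pi) * n_ang).astype(int),
                         n_ang - 1)
    out[inside] = profiles[sl_idx, ang_idx]        # one annulus per slice
    return out

profs = np.full((20, 60), 100.0)       # uniform uptake: 20 slices x 60 angles
profs[8:14, 10:25] = 45.0              # mid-ventricular "defect" for display
bullseye = polar_map(profs)
print(bullseye.shape, bullseye.min(), bullseye.max())
```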

Fig. 15.4

Various methods used for sampling the myocardial apex in quantitative cardiac SPECT. (c, d) One of the earlier methods, which samples the mid and basal myocardium in addition to spherical sampling of the myocardial apex

The count circumferential profiles are used to construct the polar map, which consists of concentric annuli representing the LV from the apex to the base. Furthermore, a scaling process is performed on the sampled myocardium so that the number of data points remains constant for all patients. However, the polar map distorts the heart shape, size, and geometry [31].

To determine the variability and normal limits of the tracer distribution in the myocardium, a normal database is generated from normal patients or patients with a low pretest likelihood (<0.5%) of CAD. The mean value and SD are calculated from the circumferential profiles of the normal data set, with determination of a threshold value for segmental abnormality.
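
A minimal sketch of this comparison follows: the patient profile is tested point by point against the normal-file mean minus a multiple of the SD. The 2.5 SD cutoff, the simulated normal studies, and the extent/severity summaries are illustrative assumptions, not a vendor's tuned thresholds.

```python
import numpy as np

rng = np.random.default_rng(2)
normals = rng.normal(100, 8, size=(30, 60))   # 30 normal studies, 60 angles
norm_mean = normals.mean(axis=0)
norm_sd = normals.std(axis=0, ddof=1)

patient = rng.normal(100, 8, 60)
patient[20:32] -= 35                          # simulated hypoperfused arc

abnormal = patient < norm_mean - 2.5 * norm_sd
extent = 100 * abnormal.mean()                # % of sampled points abnormal
severity = np.where(abnormal, (norm_mean - patient) / norm_sd, 0).sum()
print(f"defect extent: {extent:.0f}% of profile, "
      f"severity (SD units): {severity:.1f}")
```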

Polar maps provide quantitative measures of defect extent and severity. In QGS, myocardial sampling is not based on a circumferential profile drawn normally on the LV surface; the entire LV is modeled as a 3D structure with a standard number of equally spaced points regardless of the LV size [7]. In ECTb, the actual defect extent is calculated from the 3D activity distribution rather than from the polar map representation [9]. It is presented as a percentage of the abnormality with respect to the total myocardial volume, individual vascular territories, or actual mass of the hypoperfused myocardium.

To localize the extent of the defect and to pinpoint the location, myocardial segments below a defined threshold are colored black while maintaining the color of the normal ones [32]. Defect severity is expressed in units of SD below the normal mean by a measure called defect severity or total severity score and is displayed in a polar representation referred to as a defect severity map. On this map, severity is scaled by the number of SDs below the normal to a color-coding table so that the most normal region and most abnormal region are differently colored to easily identify the abnormality. Severity score also takes into account the extent and severity of the abnormality and is measured by the number of SDs below the mean of the entire extent of the abnormality [33].

Polar maps have been a simple tool to reduce the whole LV into a 2D image, which facilitates the interpretation process by looking at all segments at once. They also provide a measure of defect reversibility based on normalizing the rest images with respect to the stress images and a color-coding scheme. However, volume weighting and distance weighting serve to improve one feature over another: a volume-weighted map tends to distort the defect location but offers an accurate assessment of the defect size, whereas a distance-weighted map tends to distort the defect size at the cost of improving the accuracy of the defect location. It is therefore recommended not to depend solely on a polar map without paying attention to the tomographic slices [34]. Partial and significant reversibility can be determined based on certain percentages of the defect extent. Moreover, measurements of ischemic or scar fractions for a given perfusion defect can be calculated, in addition to assessment of myocardial viability [35].

4.1 Summed Perfusion Scores

Another semiquantitative approach used to quantify tracer uptake divides the myocardium into 20 segments or the currently recommended 17 segments [36]. The perfusion of each segment is scored according to a 5-point scoring system of 0–4 (0 = normal, 1 = equivocal reduction, 2 = definite but moderate reduction, 3 = severe reduction of tracer uptake, 4 = absent uptake of radioactivity). The global measure of perfusion is then determined by summing the regional scores of all segments in the stress and rest data. This scoring process results in a Summed Stress Score (SSS) and a Summed Rest Score (SRS). The difference between SSS and SRS is the Summed Difference Score (SDS), which is analogous to reversibility. High values of the SSS indicate large or severe defects, whereas high values of the SDS indicate reversibility, and low values indicate fixed or mostly fixed defects. The SRS is related to the amount of infarcted or hibernating myocardium [37] (see Table 15.1).
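
The following worked example computes the three scores for a 17-segment, 0–4 model; the segment scores are invented for illustration.

```python
import numpy as np

stress = np.array([0,0,1,2,3,4,0,0,0,2,3,0,0,1,0,0,2])  # stress scores
rest   = np.array([0,0,0,1,1,4,0,0,0,0,1,0,0,0,0,0,1])  # rest scores
assert stress.size == rest.size == 17

sss = stress.sum()      # Summed Stress Score
srs = rest.sum()        # Summed Rest Score
sds = sss - srs         # Summed Difference Score ~ reversibility
print(f"SSS={sss}, SRS={srs}, SDS={sds}")   # SSS=18, SRS=8, SDS=10
```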

Table 15.1 Summary of perfusion scores and percent abnormal myocardium in the 17- and 20-segment models (Adapted from Fuster V, O'Rourke RA, Walsh RA, Poole-Wilson P. Hurst's The Heart, 12th edn. 2007)

These global summed perfusion scores provide a summary measure of both defect extent and severity, and they can be calculated either visually or by computer-based methods. Their reproducibility and diagnostic performance have been reported in a number of studies, including automated calculations and those conducted by human observers [38]. The SSS is employed to stratify patients into different risk groups as follows: individuals with SSS < 4 are considered normal or nearly normal, those with scores of 4–8 are mildly abnormal, those with scores of 9–13 are moderately abnormal, and those with SSS > 13 are severely abnormal.

4.2 Percent Abnormality

Another global measure of perfusion abnormality is calculated by normalizing the summed scores to the maximal possible score. For the 17-segment model and 5-point scoring system, the maximal possible score is 17 × 4 = 68, whereas for the 20-segment model it is 20 × 4 = 80 (Table 15.1). This percentage measure is called the percent abnormal myocardium and is applicable to other scoring systems and segmental divisions. For example, a patient with a summed score of 17 in the 17-segment model has a percent abnormality of 17/68 = 25%, which bears diagnostic and prognostic information similar to that for a patient with a summed score of 20 in the 20-segment model (20/80 = 25%) [39]. Expressing the amount of ischemia as percent myocardium in this way provides intuitive implications that are not possible with the perfusion scoring system and is applicable to other segmental models and other methods that calculate the percentage of abnormal myocardium [40]. Figure 15.5 shows a comparison between the 17- and 20-segment models.
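
The normalization itself is a one-line computation, shown here for both segment models; the helper name is hypothetical.

```python
def percent_abnormal(summed_score, n_segments, max_per_segment=4):
    """Summed score normalized to the maximal possible score, in percent."""
    return 100 * summed_score / (n_segments * max_per_segment)

print(percent_abnormal(17, 17))   # 17-segment model: 17/68 -> 25.0
print(percent_abnormal(20, 20))   # 20-segment model: 20/80 -> 25.0
```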

Fig. 15.5

A comparison between the 17- and 20-segment models

4.3 Generation of Normal Limits

Different schemes were developed to generate normal databases to distinguish abnormal from normal patients in the quantification approaches mentioned. Some of these provide user-specified generation tools for normal limits that are incorporated in the software program with several options for myocardial radiotracer, patient gender, imaging protocol, processing parameters, and so on.

The Emory method is based on collecting a number of patients with a low likelihood of CAD (<0.5%) and deriving a composite pool from which the mean and SD of the normal limits are extracted. A patient study is then compared to the normal database to examine and assess the perfusion defects. This approach goes through a number of steps to optimize the threshold value, which is then used to quantify perfusion abnormality in a given patient study. It has specific inherent characteristics, making it dependent on the injected radiopharmaceutical, acquisition protocol, processing parameters, and population studied. It needs a number of normal patients (20–30) or patients with a low likelihood of CAD of both genders, another group of patients with significant variation in perfusion location and severity (a “pilot population”), and a validation group of patients with reported coronary angiography to assess the performance of the algorithm in both genders [41, 42]. Thus, approximately 150 patients are required to establish a normal database.

The group at Cedars-Sinai (Los Angeles, CA) developed quantification techniques based on various assumptions. One requires a number of normal patients along with another group of patients with a wide range of perfusion abnormalities. The threshold of abnormality is determined by an optimization step in which agreement between the computer-generated scores and the visual scores is maximized to obtain an individual segmental threshold. The large number of patients needed to represent a broad data set of segmental hypoperfusion, in addition to another group with normal perfusion or a low likelihood of disease, makes it relatively difficult to generate site-specific normal limits [6]. Further investigations introduced another global assessment that combines perfusion defect extent and severity into a continuous measure referred to as the total perfusion deficit (TPD). It provides an overall assessment of hypoperfusion either by vascular territory or for the entire myocardium. In this approach, the number of patients required for generation of normal limits is reduced by obviating the need for patients with abnormalities. Furthermore, no optimization step is required to derive a segmental threshold, and the technique is based on patients with a low likelihood of the disease [43].

The methods mentioned do not permit aligning the stress with the rest images in a specific geometric orientation, and comparison of a patient study is carried out for the stress and rest data separately. This perhaps limits the ability of the quantitation algorithm to precisely determine the spatial location of ischemia in stress and rest images. Furthermore, a comparison with normal database values does not account for intrapatient perfusion changes. Faber et al. have shown that, with accurate image alignment, changes of 10% and 15% could be detected with false-positive rates of 15% and 10%, respectively, concluding that the mean uptake values can show statistical significance if the difference is 10% or more in serial perfusion studies of individual patients [44]. On the other hand, a new measure of ischemia was developed by Slomka et al. [45] that coregisters the stress and rest data. The rest images are iteratively reoriented, resized, and normalized to provide the best fit with the stress scans. They used a new normalization technique based on a 10-parameter search criterion that allows determination of the amount of ischemia in stress and rest images without a normal database.

Note from this discussion that developers vary in their representation of defect extent and severity and differ in their definition and optimization of the threshold value on which segmental abnormality is determined [46, 47]. This has been examined by comparative studies conducted to look at the variations that exist among the quantitative cardiac SPECT methods and to investigate their diagnostic performance versus reference techniques. These reports included evaluation of the degree of automation, summed scores (SSS, SDS, SRS), and regional and total defect extent using receiver operating characteristic curves and appropriate correlation and agreement statistics. Some studies were also performed in comparison to coronary angiography; hence, the sensitivity, specificity, accuracy, and normalcy rates of the algorithms for detection of CAD were estimated. Some researchers also investigated whether these variations would persist if institution-specific normal databases were used. The results demonstrated that significant differences among the various methods do exist in estimating the myocardial perfusion parameters [46–48].

5 Quantification of Myocardial Function

5.1 Gated Cardiac SPECT

The introduction of ECG gating to SPECT myocardial perfusion imaging has substantially improved the diagnostic and prognostic information available in assessing patients with suspected or known CAD [49]. The improvement in count statistics through the use of Tc-99m-based tracers, multiple-detector systems, advances in computer technology, and the development of automated quantitative methods have made simultaneous acquisition of myocardial perfusion and ECG-gated imaging feasible. As a result, assessment of global and regional LV function together with perfusion quantification can be carried out on a routine basis. This in turn has provided a tremendous amount of clinically relevant and valuable information for decision making in patients with CAD [50]. Figure 15.6 summarizes the steps involved in the calculation of myocardial perfusion and functional parameters.

Fig. 15.6

Steps involved in acquiring, processing, and quantifying myocardial perfusion gated SPECT studies using the quantitative algorithms

Further diagnostic information can be obtained by coregistering CT angiography data and the metabolic/perfusion images using hybrid SPECT/CT or PET/CT systems. This allows the integration of a large amount of information that could not previously be obtained in a single imaging session. Global functional measures such as end-diastolic volume (EDV), end-systolic volume (ESV), and EF, in addition to regional parameters such as regional wall motion and wall thickening, can be evaluated by most of the commercially available quantitative gated SPECT methods.

5.2 Acquisition and Processing

Myocardial perfusion gated SPECT imaging is carried out using three ECG leads placed on the patient's chest and connected to an ECG trigger device. This helps the computer system identify the beginning of each cardiac cycle (the R wave) and thereby divide the temporal changes of the heart contraction over the R-R interval into small time intervals determined by the number of frames per cycle selected during acquisition setup. Either 8 or 16 frames per cardiac cycle is usually chosen: the former provides better count statistics, while the latter provides better temporal resolution. Figure 15.7 shows an acquisition with eight frames per cycle. It is also possible to use 32 frames per cardiac cycle to determine diastolic function; however, this comes with a significant reduction of the counts collected in each frame for the same acquisition time.
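
The sketch below illustrates the binning step: each detected event is assigned to one of n frames according to its time since the last R wave, normalized by the current R-R interval. The simulated trigger and event times, and the absence of beat rejection, are simplifying assumptions.

```python
import numpy as np

def bin_events(event_times, r_times, n_frames=8):
    """Return a frame index (0..n_frames-1) for each event, or -1 if the
    event falls outside a recorded R-R interval."""
    frames = np.full(event_times.shape, -1)
    for beat_start, beat_end in zip(r_times[:-1], r_times[1:]):
        in_beat = (event_times >= beat_start) & (event_times < beat_end)
        phase = (event_times[in_beat] - beat_start) / (beat_end - beat_start)
        frames[in_beat] = np.minimum((phase * n_frames).astype(int),
                                     n_frames - 1)
    return frames

rng = np.random.default_rng(3)
r_times = np.cumsum(rng.normal(0.85, 0.03, 100))      # ~70 bpm triggers, s
events = rng.uniform(r_times[0], r_times[-1], 20000)  # detected counts
frames = bin_events(events, r_times)
print(np.bincount(frames[frames >= 0]))               # counts per gate/frame
```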

Fig. 15.7

This diagram shows the R-R interval for one heart cycle and the corresponding change in blood volume. ECG gating provides a means of recording the volume change over the heart cycle by dividing the cycle into gates or frames (e.g., 8, or even 16 or 32)

The commercial methods provide several tools to process gated and ungated projection data to extract perfusion as well as functional information from gated myocardial perfusion SPECT studies. Functional information obtained from the reconstructed images includes the LV volumes (EDV and ESV), regional and global EF, myocardial wall motion, and wall thickening.

5.3 Volumes and EF Estimation

The underlying notion of ECG-gated myocardial perfusion SPECT is to obtain several pictures of the heart during its periodic contraction. The higher the accuracy in modeling and outlining the LV in these different phases, the better the reliability of the quantitative results. As mentioned, methods vary in their assumptions about LV geometry: some are based on an ellipsoidal, cylindrical-spherical, or prolate-spheroidal model, and others are purely count-based techniques. One common step is the segmentation process, by which the LV is identified and separated from other structures. Methods also vary in their segmentation of the LV, as the inclusion of extracardiac tissues can confound the quantitative results [51].

Once the LV has been segmented and the valve plane defined, together with determination of the myocardial base and apex, the myocardial boundaries can be estimated. Different approaches have been suggested and implemented to identify the endocardial and epicardial borders as well as the myocardial base and apex. Automated modes are often used to delineate the myocardial boundaries; however, in case of contouring errors, operator intervention can be helpful. This also depends on user expertise, in addition to interobserver variability. Automated quality control approaches have also been developed in some software programs to judge the quality of the LV segmentation as well as the valve plane definition [52]. These quality control algorithms can identify failed LV segmentations with high accuracy, leading to an improvement in perfusion quantitation.

Volume-based techniques calculate the ventricular volumes by constructing a time-volume curve using either 8 or 16 frames/cycle. The maximum and minimum points on the volume curve correspond to EDV and ESV, respectively. The EF can then be calculated as a percentage:

EF (%) = (EDV − ESV)/EDV × 100
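
Applied to an invented 8-frame time-volume curve, the formula reads as follows.

```python
import numpy as np

volumes = np.array([120., 105., 80., 58., 52., 70., 95., 112.])  # mL, 8 gates
edv = volumes.max()                 # end-diastolic volume (curve maximum)
esv = volumes.min()                 # end-systolic volume (curve minimum)
ef = (edv - esv) / edv * 100
print(f"EDV={edv:.0f} mL, ESV={esv:.0f} mL, EF={ef:.1f}%")  # EF ~ 56.7%
```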

Figure 15.8 shows the output display of the ECTb for stress and rest studies together with EDV, ESV, and EF calculations.

Fig. 15.8

Emory Cardiac Toolbox display for stress and rest studies, showing left ventricular volumes and EF in addition to a wall-thickening polar map

5.4 Regional Function

Assessment of regional myocardial function provides incremental diagnostic and prognostic information over myocardial perfusion parameters alone [53]. Myocardial wall motion and thickening are relatively nonuniform compared with myocardial perfusion. Wall motion is the excursion of the endocardium from end diastole to end systole. A 6-point scoring system is generally used to assess motion abnormality: 0 = normal, 1 = mildly hypokinetic, 2 = moderately hypokinetic, 3 = severely hypokinetic, 4 = akinetic, 5 = dyskinetic. Visual assessment of wall thickening is often based on the partial volume phenomenon, in which the intensity of the myocardial wall is proportional to the degree of wall thickening during cardiac contraction. A 4-point scoring system is used to assess wall thickening: 0 = normal, 1 = mildly reduced, 2 = moderately to severely reduced, 3 = no thickening.

Computer scoring for wall motion and thickening was also developed to reduce observer variability. It calculates the regional function on a segment-by-segment basis in a similar way to calculation of myocardial perfusion. In QGS, regional motion is measured as the distance (in millimeters) between a given endocardial point at end diastole and end systole perpendicular to the average midmyocardial surface between end diastole and end systole [54].

Myocardial thickening is calculated as the percentage increase in myocardial thickness and can be quantified by geometric-based, count-based, or combined geometric count-based methods [55]. The first detects the spatial positions of the endocardial and epicardial surfaces to calculate the myocardial thickness at both end diastole and end systole, whereas count-based techniques rely on the partial volume effect.

It should be noted that normal wall motion and thickening are not always concomitant since in some pathological conditions a discordance can take place, resulting in abnormal wall motion and preserved thickening or vice versa [56].

The European guidelines state that visual interpretation of myocardial wall motion and thickening remains, for the moment, the conventional tool for assessing myocardial contractility in myocardial perfusion SPECT, and that quantitative measures provided by software programs should not be used as the sole determinant [57].

5.5 Diastolic Function

In addition to determination of systolic function by myocardial perfusion gated SPECT, myocardial diastolic function can also be estimated; it is a useful clinical indicator of LV function, and diastolic dysfunction precedes systolic dysfunction in many cardiac diseases. Early diagnosis and appropriate therapy are advisable before further progression to diastolic heart failure and cardiac death [58]. Diastolic function can be evaluated by nuclear methods and by other radiographic techniques [59]. Parameters of diastolic function are the peak filling rate (PFR), which is a clinically useful parameter describing LV filling properties; the time to peak filling rate (TTFR); and the mean filling fraction (MFR/3), which is the mean filling rate over the first third of diastole.

Parameters of diastolic function require a significantly larger number of gating intervals than is often used. This higher temporal resolution is needed to accurately determine volume changes over short periods of time. The derivatives of the time-volume curve yield information about the rates of emptying and filling. Fourier fitting with three or four harmonics is often used to smooth the time-activity and derivative curves and thus reduce the statistical fluctuations of the acquired data.
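
The sketch below applies this idea to an invented 16-frame time-volume curve: the FFT is truncated to the first three harmonics, the smoothed curve is differentiated, and the peak filling rate is read off in EDV/s. Treating the second half of the cycle as diastole and measuring the time to peak filling from end systole are simplifying assumptions.

```python
import numpy as np

volumes = np.array([120, 112, 95, 75, 60, 53, 52, 56, 66, 80,
                    94, 104, 110, 114, 117, 119], dtype=float)  # mL, 16 gates
rr = 0.9                                    # R-R interval, seconds
t = np.arange(volumes.size) * rr / volumes.size

# Keep DC plus the first 3 harmonics to suppress statistical noise
spec = np.fft.rfft(volumes)
spec[4:] = 0
smooth = np.fft.irfft(spec, n=volumes.size)

dvdt = np.gradient(smooth, t)               # mL/s along the cycle
edv = smooth.max()
half = len(dvdt) // 2                       # assume diastole in second half
pfr = dvdt[half:].max() / edv               # peak filling rate in EDV/s
t_peak = t[half:][np.argmax(dvdt[half:])]
ttpf = t_peak - t[np.argmin(smooth)]        # time from end systole, seconds
print(f"PFR = {pfr:.2f} EDV/s, time to peak filling = {ttpf*1000:.0f} ms")
```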

Peak rates and average rates are usually measured in units of end-diastolic volumes per second (EDV/s). Per cardiac cycle, 12, 16, or 32 frames may be employed; however, better estimation is achieved with the highest possible framing rate. One study compared the diastolic and systolic functions using 32 frames versus 8 and 16 frames/cycle, taking gated blood pool imaging as a reference [60]. Accurate assessment of diastolic as well as systolic function was obtained when 32 frames/cycle were applied. Furthermore, lower systematic errors for both measures were found with the highest temporal sampling. In a population of 90 patients, Akincioglu et al. derived the normal limits for PFR and TTFR as 2.62 ± 0.46 EDV/s and 164.6 ± 21.7 ms, respectively, with abnormality thresholds of PFR < 1.71 EDV/s and TTFR > 216.7 ms, applying 16 frames/cycle; measurements were performed using the QGS software program [61].

5.6 Phase Analysis

Assessment of cardiac mechanical dyssynchrony is an important step for patients scheduled to undergo or who have undergone cardiac resynchronization therapy (CRT) [62]. CRT is used to improve heart function by restoring synchronous contraction in patients with LV dyssynchrony. Chen et al. [63] developed a tool for measuring the onset of mechanical contraction based on phase analysis of the cardiac cycle in gated myocardial SPECT imaging using the Fourier transform. A phase array is extracted from the 3D count distribution in the eight bins of the gated short-axis slices. This phase array conveys information about regional mechanical contraction in a 3D fashion, and a number of quantitative indices are derived from the phase histogram, such as the peak of the histogram, the SD of the phase distribution, and the phase histogram bandwidth (95% confidence interval). Phase histogram skewness and kurtosis can also be calculated, indicating the symmetry and peakedness of the distribution, respectively. These measures correspond to specific attributes of the histogram curve and in turn have clinical relevance in the overall assessment [9, 63] (see Fig. 12.26 in Chap. 12).
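
A minimal sketch of the first-harmonic phase computation follows: each sample's gated count curve is modeled as a cosine whose phase marks the onset of contraction, and histogram statistics are taken over all samples. The cosine count model, eight gating bins, and simulated onsets are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def contraction_phase(counts):
    """counts: (n_samples, n_bins) gated count curves -> phase in degrees."""
    h1 = np.fft.rfft(counts, axis=1)[:, 1]        # first Fourier harmonic
    return np.mod(-np.degrees(np.angle(h1)), 360) # onset-of-contraction phase

rng = np.random.default_rng(4)
bins = np.arange(8)                               # eight gating bins
onset = rng.normal(120, 15, 500)                  # true onsets, degrees
curves = 100 + 20 * np.cos(2 * np.pi * bins / 8 - np.radians(onset)[:, None])
phase = contraction_phase(curves + rng.normal(0, 2, curves.shape))

bw = np.percentile(phase, 97.5) - np.percentile(phase, 2.5)  # ~95% bandwidth
print(f"phase SD = {phase.std():.1f} deg, histogram bandwidth = {bw:.1f} deg")
```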

5.7 Tomographic ECG-Gating in Equilibrium Radionuclide Angiocardiography (ERNA)

In planar gated blood pool imaging, the anatomical geometry of the cardiac chambers limits a clear separation between the right ventricle (RV) and LV; overlapping atrial and ventricular structures obscure accurate outlining of the RV. Tomographic equilibrium radionuclide angiocardiography (ERNA), however, should be able to estimate the RV parameters more accurately than planar imaging. Adding the tomographic option to planar gated blood pool imaging allows better visualization of the cardiac chambers and depiction of contractile motion. In this instance, SPECT data can provide better separation of the LV and RV together with information about contractile function. ERNA can also provide estimates of the LV and RV volumes and EFs in addition to ventricular filling and emptying parameters [64]. A number of research studies have measured the normal limits for global and regional parameters of diastolic and systolic function using gated blood pool tomographic imaging [65, 66]. Figure 15.9 shows the many functional parameters that can be obtained from tomographic ERNA.

Fig. 15.9

Calculation of regional and global systolic and diastolic myocardial functional parameters for both RV and LV in gated blood pool SPECT using a count-based method. (From [65] with permission from Springer Science+Business Media.)

A comparison between four different methods (QBS, BP-SPECT, QUBE, and 4D-MSPECT) [67–70] revealed that all methods tended to underestimate the LV volumes (EDV and ESV), with a trend of greater underestimation as the volume of the LV increased; different trends were observed among the algorithms in estimating the RV volumes [71]. In LV EF estimation, most algorithms showed good correlation with the reference values, with no significant trends observed across the range of EFs studied. However, all methods showed greater overestimation with increasing RV EF [71].

The differences observed in the study just discussed [71] are consistent with others [72] and can be explained by several factors. The algorithmic assumptions vary among methods; these can be a fixed count threshold, derivative- or gradient-based edge detection, knowledge-based boundary detection, watershed voxel clustering, or neural network-based segmentation [73]. Other patient-related factors, such as definition of the pulmonary outflow tract, enlarged ventricles, and the complexity of RV geometry, contribute to suboptimal results of volume and EF estimation. Moreover, other technical and processing parameters are yet to be determined and practically optimized [74].

5.8 Transient Ischemic Dilation

Transient ischemic dilation (TID) of the LV is measured as the ratio of the LV cavity volume in the stress images to that in the rest images. This index is clinically significant in multivessel stenosis and indicates increased risk of adverse outcomes [75]. It can be calculated from both gated and ungated data sets. The normal limits are constrained by the protocol used, since in some circumstances the LV chamber appears different in size, especially when different radionuclides are used (Tc-99m- vs. Tl-201-based images) or the patient-detector distance varies greatly between the two studies. The myocardial walls in Tl-201 images appear thicker than in Tc-99m images, resulting in a smaller apparent cavity size in Tl-201 images. This should be accounted for in interpreting TID results. Normal limits and values of TID vary in the literature (1.012–1.40); sources of this variability include the radionuclides used (Tc-99m, Tl-201, or both), the type of stress (exercise vs. pharmacological stressors), the imaging protocol (single- or 2-day), the time of imaging, and other factors [76].

6 Factors Affecting Gated SPECT

Many variables were found to influence the performance of the quantitative gated SPECT methods in estimating the LV volumes and EF. These variables can be classified into acquisition, processing, and patient-related factors.

Selection of the matrix size, zoom factor, count density, framing rate (8 or 16 frames/cycle), angular sampling (number of projections), rotation arc (180° vs. 360°), collimators, radionuclide used (Tc-99m vs. Tl-201), and other factors were found to influence the performance of the estimation task [77–86]. Processing parameters such as the reconstruction algorithm (FBP vs. iterative reconstruction), photon attenuation, scatter, resolution recovery, filtering, cutoff value, and zooming during reconstruction have also been reported [87–90]. Other factors are related to the patient and have been studied in the literature, such as irregular heart rate, gating errors (e.g., T-wave elevation), patient motion, severe perfusion defects with difficulty in outlining the myocardial boundaries, small hearts (underestimation of volumes and overestimation of EF), and high extracardiac activity [91–94].

Nevertheless, a well-defined acquisition and processing protocol could serve to optimize the results of gated studies. Moreover, quality control software tools as well as visual assessment of patient raw data are also helpful in depicting count variation and patient motion and detecting rejected heart beats.

The variations shown by the quantitative perfusion software programs are also evident in estimating the LV volumes and EF [95–97]. Relatively wide limits of agreement were found among methods, in addition to systematic and random errors. As a result, users should be aware of the underlying assumptions of the method used in clinical practice and understand the sources of error and technical pitfalls. Although most of the methods showed good correlations with accurate reference techniques in cardiac imaging, achieving acceptable accuracy and reproducibility, interchangeability of values would be of limited clinical utility, and patient monitoring should adhere to a single software program.

7 Conclusions

Cardiac SPECT imaging provides a tremendous amount of information about myocardial perfusion and function. Many factors have been, and remain, key contributors to the development of cardiac scintigraphy, such as radiopharmaceuticals, instrumentation, and computer technology. The software packages developed to provide quantitative indices are helpful in patient diagnosis and have become important tools in nuclear cardiology laboratories. The quantitative parameters provided by these programs, together with their automated features, are unique among cardiac imaging modalities, and to a considerable extent this has made nuclear cardiac imaging a well-established diagnostic modality. Further investigations are warranted to explore the qualitative and quantitative capabilities of cardiac SPECT with the use of hybrid imaging techniques.

A number of points should be taken into consideration when using the myocardial perfusion quantitation methods in clinical practice:

  • These algorithms assist as a second observer in the reading session, and the visual assessment by an experienced reading physician must remain at the forefront. Even with fully automated methods, careful inspection should follow to check and verify the results of contour generation.

  • Quantitative cardiac SPECT methods were shown to differ significantly in their performance as well as in their degree of automation.

  • Normal limits for myocardial perfusion and function were also shown to differ among algorithms and tended to be gender specific.

  • Interchangeability of these algorithms is clinically limited, whether for patient monitoring or for the decision-making process.

  • Quantitative perfusion methods were designed to quantify the myocardial tracer uptake in a relative sense and do not provide an absolute measure of tracer distribution. Furthermore, they do not account for image-degrading factors such as photon attenuation, scatter, and resolution effects.

  • New approaches developed to evaluate perfusion defect abnormality should be extensively validated and assessed under a wide range of patient conditions with the available acquisition and processing protocols used in daily practice. Furthermore, validation studies should include not only phantom experiments but also patient data and should be compared to well-established and accurate techniques.

  • For proper implementation of the quantitative methods in clinical practice and for better data interpretation, users should be aware of the basic assumptions and concepts underlying those algorithms.