1 Introduction

The field of molecular imaging finds its roots in nuclear medicine, which since its inception has had a major focus on task-based optimization of image quality and on in vivo quantitative assessment of metabolic and physiological parameters [1]. This emphasis reflects the limited spatial resolution and high noise characteristics of SPECT and PET compared to high resolution structural imaging modalities (CT and MRI), which provide exquisite anatomical detail. The disparities between the performance characteristics of currently available scanners, and their potential degradation over time, can be subtle and difficult to detect through qualitative visual interpretation. This has motivated the development of objective and reproducible metrics to monitor changes in system performance, both for intercomparison studies and for quality assurance and quality control tasks. The use of molecular imaging to assess metabolic and physiological parameters linked to specific diseases has further motivated quantitative molecular imaging.

The quantitative potential of molecular imaging makes it possible to measure in vivo a range of physiological parameters, including organ function, tissue perfusion, and tracer biodistribution and kinetics, all of which necessitate accurate quantification. Quantitative analysis provides a direct link between the time-varying activity concentration in organs/tissues and relevant quantitative parameters representing the biological processes taking place in those same organs/tissues [2].

The rate of uptake of a specific tracer in a tissue, organ or organ system depends on many factors, including its rate of delivery, local biochemical reactions, physical half-life and biological clearance. Quantitative molecular imaging using SPECT/PET takes all these factors into account to noninvasively provide a numerical estimate of discrete physiological characteristics of tissues or organs. Various physiological processes can be quantified from such measures, for example the rate at which the brain or a tumor metabolizes glucose, referred to as the metabolic rate for glucose (MRGlc) and usually expressed in micromoles of glucose/100 g of tissue/min. These measures can then be correlated with clinical outcomes such as tumor evolution or response to therapy, thereby relating disease physiology to its progression. These quantitative estimates can also serve as early surrogate endpoints in preclinical therapy trials.

Recent advances in dedicated small-animal imaging instrumentation have enabled it to contribute unique information in biomedical research [3]. Basic research laboratories focusing on molecular imaging-based preclinical research demand multiple competencies, resources and trained personnel far beyond what is required to run clinical facilities. These laboratories usually consist of multidisciplinary teams working in close collaboration to address basic research questions through quantitative regional estimation of physiologic or pharmacokinetic parameters from dynamic radiotracer studies.

To take full advantage of the quantitative capabilities of PET imaging, subject-specific correction of background and physical degrading factors must be performed [4]. While most of these corrections are performed in clinical imaging using sophisticated computational models, compensation for these effects is hardly considered in preclinical imaging, where attention has focused mainly on the physical performance of SPECT/PET scanners, namely spatial resolution and sensitivity (see Chaps. 4 and 5), and on animal preparation (see Chap. 18). The major challenges to quantitative preclinical PET imaging, when the target is to quantify physiological or pharmacokinetic processes, can be categorized into five classes [5]:

  • Instrumentation and measurement factors: factors related to imaging system performance and data acquisition protocols;

  • Physical factors: those related to the physics of photon interaction with biologic tissues;

  • Reconstruction factors: issues related to assumptions made by image reconstruction algorithms;

  • Physiological factors: factors related to motion and other physiological issues;

  • Tracer kinetic factors: issues related to difficulties in developing and applying tracer kinetic models, especially at the voxel level (parametric imaging).

The above referenced issues (except instrumentation factors, which are addressed in Chaps. 4 and 5 of this volume) are discussed in some detail in the following sections.

2 Advances in Image Reconstruction Strategies

The basic principle of image reconstruction is that an object can be accurately reproduced from a set of its projections taken at different angles by an inversion procedure. The analytic solution to this inverse problem has been known for about a century thanks to the pioneering work of the Austrian mathematician J. Radon. In essence, two major classes of image reconstruction algorithms have emerged in PET: direct analytical methods and iterative methods. Until about two decades ago, the most widely used methods for image reconstruction in PET were direct analytical techniques because they are relatively fast and their derivation straightforward. However, the resulting image quality is limited by the oversimplified line-integral model of the acquired projection data. Alternatively, iterative reconstruction techniques are computationally much more intensive, but the resulting images demonstrate improvements (principally arising from more accurate statistical modeling of the system response) which have enabled them to replace analytic techniques not only in research settings but also in the clinic [6].

Time-dependent reconstruction can be handled either by considering series of independent ‘static’ reconstructions [7] or direct time-dependent 4D reconstructions [8]. The former remains the common approach to dynamic PET image reconstruction consisting of independent reconstruction of tomographic data within each dynamic frame. Following this step, one arrives at a set of dynamic images intended to specify the variation of activity over time throughout the reconstructed field-of-view (FOV). This is still the de facto standard approach applied in routine clinical and preclinical studies. As opposed to static imaging, dynamic PET reconstruction can provide additional very useful information, depending on the particular tracer and study design.

The foundations of image reconstruction are covered in detail in recent reviews [6, 7, 9] and textbooks [1, 10] that offer comprehensive coverage of image reconstruction techniques including appropriate representation of the object, measured data and the mathematical derivation of algorithms. There is also increasing interest in the use of advanced 4D image reconstruction strategies for research applications [8]. Therefore, this section only briefly summarizes novel developments in PET reconstruction algorithms, with particular emphasis on statistical iterative reconstruction techniques given their popularity, promise and wide adoption by the medical imaging community. Future directions for PET image reconstruction are also considered, addressing mainly the issues of improving the modeling of the data acquisition process and task-specific determination of the parameters to be estimated in image reconstruction [7].

2.1 Analytic Reconstruction Techniques

The inverse problem in the context of analytic reconstruction is expressed in a continuous framework, with the algorithmic realization implemented as a discrete approximation of the continuous solution. Following the notation used in [7], recall that direct analytic inversion procedures in emission tomography assume that a 2D parallel projection representing a set of lines of response (LORs), \( p\left(s,\widehat{u}\right) \), is equivalent to a set of line integrals through the radiotracer distribution f(r) (the 3D X-ray transform) [11]:

$$ p\left(s,\widehat{u}\right)={\int}_{-\infty}^{+\infty }f\left(s+{x}^{\prime}\,\widehat{u}\right)\, d{x}^{\prime } $$
(17.1)

where the 3D vector r in the imaging volume is decomposed into the 2D parallel projection position vector s = [y′ z′]T and the displacement x′ along the 1D orientation unit vector \( \widehat{u} \). This line-integral equation represents lines through the FOV, where each detector pair is regarded as an LOR i (specified by a displacement s from the centre of the FOV and an orientation \( \widehat{u}=\left(\varphi, \theta \right) \)), with the discrete data vector q replaced by the continuous function p.

The derivation of the image f given the projection data p is carried out through the inversion of Eq. (17.1) using the central section theorem [7, 12]. This theorem states that a central plane of the 3D Fourier transform F(k) of the 3D image f(r) is equal to the 2D Fourier transform P(k) of the 2D parallel projection data at the same orientation \( \widehat{u}=\left(\varphi, \theta \right) \). The equivalent in the 2D (slice-by-slice) image reconstruction case is that the 1D Fourier transform of a 1D parallel projection is equal to a single line through the 2D Fourier transform of f. One can observe that the superposition of these 1D Fourier transforms (one from each projection angle) builds up the 2D Fourier transform of f. However, this superposition introduces a 1/|ν| weighting of the contributions to the 2D Fourier transform, where ν denotes spatial frequency. The ramp filter, defined as the inverse of this weighting, i.e. |ν| in frequency space, is therefore used to balance these irregular contributions. Filtering the projections using the convolution operator followed by backprojection is often used as an alternative to image reconstruction in Fourier space. As such, the tracer distribution f is reconstructed from the acquired projection data p in two steps: (1) filtering, in which the projections are filtered by the ramp filter |ν|, and (2) backprojection, in which the intensity of each pixel is calculated from the contributions of the filtered projections.
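
As an illustration of these two steps, the following minimal sketch implements 2D slice-by-slice filtered backprojection under simplifying assumptions (parallel-beam geometry, nearest-neighbour interpolation during backprojection). The function names and the sinogram layout (one row per angle) are illustrative choices, not taken from the chapter.

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply the ramp filter |nu| to each 1D parallel projection (one row per angle)."""
    n = sinogram.shape[1]
    nu = np.abs(np.fft.fftfreq(n))                    # spatial frequencies
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * nu, axis=1))

def backproject(filtered, angles_deg, size):
    """Smear each filtered projection back across the image grid."""
    img = np.zeros((size, size))
    c = size // 2
    y, x = np.mgrid[:size, :size] - c
    for proj, ang in zip(filtered, np.deg2rad(angles_deg)):
        s = x * np.cos(ang) + y * np.sin(ang)         # displacement of each pixel along the projection
        idx = np.clip(np.round(s).astype(int) + c, 0, len(proj) - 1)
        img += proj[idx]                              # nearest-neighbour interpolation
    return img * np.pi / len(angles_deg)              # angular integration weight

# f_recon = backproject(ramp_filter(sinogram), angles_deg, size)
```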

A number of analytical reconstruction techniques were suggested in the literature, including simple backprojection, which produces a blurred version of the object to be reconstructed [13], backprojection followed by filtering (BPF) [14] and filtered backprojection (FBP)/convolution backprojection (CBP). In the context of 3D PET imaging, the 3D reprojection method (referred to as 3DRP) emerged as the most popular approach and has been commonly used in practice since its inception [15]. Given the considerable computational resources required by 3D reconstruction, various approximate techniques have been proposed to rebin the data from the oblique projections into 2D direct sinogram data sets to enable the application of 2D reconstruction techniques, thus decreasing the computation time. Fourier rebinning (FORE), in which oblique rays are binned to a transaxial slice using the frequency-distance relationship of the data in Fourier space [16], emerged as the most promising technique.

As mentioned earlier, despite the advantages of analytic reconstruction techniques (quick, simple, easy to implement…), their drawbacks (noise, streak artifacts, interference between regions of low and high tracer concentration…) motivated their replacement by iterative techniques in both clinical and research settings. Considering the current state of the art and recent progress in statistical reconstruction techniques, interest in analytic reconstruction approaches has substantially declined and, as such, this category of techniques will not be discussed further in this chapter.

2.2 Iterative Reconstruction Strategies

The main limitation of the analytic reconstruction techniques discussed above is their reliance on the line-integral model [Eq. (17.1)], which assumes that the measured projection data are perfectly consistent with the tracer distribution, a prerequisite that is certainly not fulfilled in practice owing to the presence of noise and the impact of other physical degrading factors [7]. The advantage of iterative techniques is that they not only accommodate more complex and realistic models of the PET data acquisition process, but also allow the use of non-orthogonal basis functions. Iterative methods are frequently used to solve optimization problems. The image reconstruction problem can be regarded as a special case in which the aim is to determine the ‘best’ approximation of the tracer distribution given a set of measured projection data.

Iterative reconstruction strategies have been around since the early days of tomographic imaging but emerged as prominent techniques and potential replacements for analytic techniques following the introduction of statistical reconstruction approaches more than three decades ago [17]. The three major issues that delayed the widespread adoption of iterative reconstruction techniques in the clinic, namely the large memory requirements owing to the size of the transition matrix, the additional computational complexity compared to single-pass analytic techniques, and the lack of a well-defined stopping criterion that can be used to objectively choose the number of iterations in a given scenario, are no longer a matter of concern given recent advances in computer technology. Still, the latter criterion remains a hot research topic attracting the interest of active research groups, since an excessive number of iterations can lead to an unacceptable noise level in the reconstructed images if not appropriately controlled [18]. In practice, a preset number of iterations is commonly used and, as such, iterative algorithms are not run to convergence.

Similar to analytic techniques, iterative algorithms make use of a backprojection operator to estimate the tracer distribution from the projection data. However, contrary to analytic methods, they also include a forward projection operator which is used to compute the projection data corresponding to a given tracer distribution and attenuation map. The projection/backprojection operators are applied multiple times by iterative algorithms depending on the selected number of iterations and, as such, their algorithmic implementation in terms of accuracy and computational efficiency is crucial to achieve the best possible performance.

An iterative reconstruction technique consists of two basic components: (1) the parameters to estimate (a set of voxel intensities representing the tracer distribution) and, (2) the system model describing the relationship between the tracer distribution and the mean of the measured projection data. It should be emphasized that it is the mean of the measured projection data which is usually modeled. Iterative algorithms provide the flexibility required to appropriately model and parameterize this mean. Generally, the system model, mapping from the parameters to the mean, is time-invariant [7].

Several iterative reconstruction strategies have been devised, the most popular being the expectation maximization (EM) algorithm, applied in PET as an iterative method to compute maximum likelihood (ML) estimates of the tracer distribution [17]. This approach assumes that the measured projection data are samples from a set of random variables whose probability density functions are linked to the actual tracer distribution according to a mathematical model of the data acquisition process.

The EM algorithm entails two steps [19]: (1) computation of the current projection estimate from the tracer distribution obtained at the preceding iteration using the forward projection operator according to a predefined system matrix \( \left({a}_{ij}\right) \), starting from an initial guess (usually a uniform cylinder) at the first iteration; (2) the new estimate \( {f}_j^{new} \) is obtained by multiplying the preceding estimate \( {f}_j^{old} \) by the backprojection of the ratio of the measured projections \( \left({p}_i\right) \) to the estimated ones, in such a way as to maximise the likelihood. The ensuing ML-EM update equation is therefore given by:

$$ {f}_j^{new}=\frac{f_j^{old}}{\sum_l{a}_{lj}}\ \sum_i{a}_{ij}\ \frac{p_i}{\sum_k{a}_{ik}\,{f}_k^{old}} $$
(17.2)

Based on an ordered-sets approach, an accelerated version of the EM algorithm, referred to as the ordered subsets EM (OSEM) algorithm, was proposed in 1994 [20]. This algorithm processes the projection data in subsets (blocks) within each iteration so as to speed up convergence by a factor roughly proportional to the number of subsets. Many studies have reported that OSEM generates images of comparable quality to those generated by the EM technique in a fraction of the processing time.
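
A compact sketch of the update in Eq. (17.2) is given below, with the projection data optionally split into ordered subsets so that the same function also illustrates OSEM. The dense NumPy system matrix is a toy stand-in for the sparse, scanner-specific matrices used in practice.

```python
import numpy as np

def mlem(A, p, n_iter=20, n_subsets=1):
    """ML-EM update of Eq. (17.2); n_subsets > 1 gives the OSEM variant.

    A : (n_LORs, n_voxels) system matrix a_ij (dense toy stand-in),
    p : measured projection data p_i.
    """
    f = np.ones(A.shape[1])                            # uniform initial estimate
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for sub in subsets:                            # one update per block of LORs
            As, ps = A[sub], p[sub].astype(float)
            expected = As @ f                          # forward projection: sum_k a_ik f_k
            ratio = np.divide(ps, expected,
                              out=np.zeros_like(ps), where=expected > 0)
            f *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return f
```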

Traditionally, PET images were reconstructed using analytic techniques (FBP) following correction for the various physical degrading factors (attenuation, scatter, randoms, etc.). An appealing feature of iterative techniques is that the physical model of the data acquisition process and scanner geometry can be incorporated into the reconstruction algorithm through the use of weights or penalties. Statistical reconstruction methods often incorporate corrections for photon attenuation and for degradation of spatial resolution (resolution recovery). Additional constraints and penalty functions can also be incorporated to reduce statistical noise or to ensure that the image has other desirable properties, thus allowing the algorithm to be tuned to the requirements of specific clinical protocols (task-specific reconstruction). There is extensive literature demonstrating substantial improvement in the image quality and quantitative accuracy of PET images reconstructed with iterative techniques, especially when applied to low-count projection data (poor statistics) typically encountered in oncologic and similar studies. Figure 17.1 compares an 18F-NaF skeletal rat PET study reconstructed using analytic (3DRP) and iterative (ML-EM) reconstruction. Note the significant improvement in image quality and spatial resolution when using iterative reconstruction. Clinical and preclinical scanner manufacturers have gradually improved their reconstruction software by modeling photon attenuation, scatter, random events, spatial resolution degradation and other factors within iterative reconstruction algorithms. This trend is expected to continue, and traditional analytic algorithms will eventually become obsolete.

Fig. 17.1

Comparison of maximum intensity projections of an 18F-NaF skeletal rat scan generated using analytic (3DRP) reconstruction (top) and iterative list-mode EM reconstruction (bottom). The iterative EM method benefits from improved modeling of the acquired PET data which significantly improves image quality and spatial resolution. Reprinted with permission from [7]

One of the main issues faced by iterative algorithms is the ill-conditioning of the inverse problem in PET image reconstruction. Regularization is often employed to counterbalance this, through appropriate techniques including post-reconstruction smoothing, early termination of the iterative process, or Bayesian priors which modify the ML objective into a maximum a posteriori (MAP) objective [7].

If there is no a priori knowledge about the tracer distribution, maximising the likelihood is equivalent to maximising the posterior. However, some a priori knowledge is always available, e.g. that the reconstructed image should not be too noisy [19]. A prior distribution favouring smooth solutions can be defined through a Markov random field or a Gibbs random field [21]. In a Markov random field, the probability of a voxel depends on the intensities of the voxels in its neighbourhood according to a Gibbs distribution. This category of techniques has been successfully used in small-animal PET imaging and implemented on some commercial scanners [22]. Anatomical information derived from MRI has also been used to tune the noise-suppressing prior in MAP-type algorithms by limiting smoothing to within the organ boundaries revealed by the anatomical data [23, 24]. If the limited spatial resolution of the PET scanner is modelled, this category of algorithms can produce strong resolution recovery near anatomical boundaries.
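
As a concrete illustration of a MAP objective, the sketch below applies Green's one-step-late (OSL) modification of the EM update with a quadratic smoothing prior on a 1D voxel chain. The choice of prior, its strength beta and the wrap-around neighbourhood are illustrative assumptions, not the Gibbs priors of the cited implementations.

```python
import numpy as np

def osl_map_em(A, p, beta=0.1, n_iter=20):
    """One-step-late MAP-EM with a quadratic smoothing prior (1D voxel chain).

    The prior penalizes deviation of each voxel from the mean of its two
    neighbours; its gradient is evaluated at the previous estimate
    ("one step late") and added to the sensitivity term in the denominator.
    """
    f = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        expected = A @ f
        ratio = np.divide(p.astype(float), expected,
                          out=np.zeros(A.shape[0]), where=expected > 0)
        dU = f - 0.5 * (np.roll(f, 1) + np.roll(f, -1))   # prior gradient (wrap-around edges)
        f *= (A.T @ ratio) / np.maximum(sens + beta * dU, 1e-12)
    return f
```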

Resolution recovery image reconstruction has received considerable attention during the last decade [25]. These techniques have been reported to improve both the noise properties and the spatial resolution of the reconstructed images, potentially resulting in more accurate quantification. This is achieved by more accurate modeling of the relationship between the image and the projection data within the system matrix. These techniques can be implemented either in image or in projection space, based on accurate characterization of the spatially invariant or variant scanner-specific point spread function (PSF). The system matrix, usually combining various physics- and instrumentation-related factors such as scanner geometry, positron range, parallax error, intercrystal scatter, and non-colinearity of the annihilation photons, can be estimated using analytic calculations [26], Monte Carlo modeling [27] or experimental measurements [28–31]. The latter has proved to be the preferred approach for deriving the spatially variant PSFs through accurate measurements of the spatial resolution characteristics of the PET scanner. For this purpose, a point source is commonly used to sample and parametrize the spatially varying PSF at different positions in the transaxial/axial FOV. Such algorithms have recently been implemented on clinical PET scanners and have proved useful for small-animal imaging [32].

Recent advances in the field focus on task-specific image reconstruction, incorporating the targeted objective within the reconstruction process so as to better suit the predefined task. This could be either the generation of high quality images with better spatial resolution and noise properties (usually required in clinical diagnostic imaging), or direct estimation of the kinetic parameters of interest (usually needed in preclinical research involving small-animal studies). Estimating the metabolic rate of glucose from [18F]-FDG PET scans, instead of reconstructing dynamic images of the time course of [18F]-FDG concentration in tissue, is one such example. PET images usually contain very complex noise distributions that need to be modeled for accurate tracer kinetic modeling. Strategies for direct estimation of kinetic parameters from the dynamic data set can simplify this task, since such methods make use of the measured data, which are known to follow a simple independent Poisson distribution [8]. This concept was used in the framework of the EM algorithm to estimate kinetic parameters by maximizing the Poisson log-likelihood of obtaining the measured dynamic data [33]. Such approaches have been further extended and revisited by various groups and are described in more detail in the above referenced review [8].

3 Scatter Modeling and Correction

With the introduction of commercial preclinical PET scanners, small-animal imaging is becoming readily accessible and increasingly popular. Since the choice of a particular system is dictated in most cases by technical specifications, special attention has to be paid to the methodologies followed when characterizing system performance. Because different methods can be used to assess the scatter fraction, reported differences may be methodological rather than reflecting any relevant difference in scanner performance. Standardization of the assessment of performance characteristics is thus highly desirable [34].

Little has been published on modeling the scatter component in small-animal PET scanners, owing to the relatively small scatter fraction when imaging rodents (compared to clinical imaging). The origin of scatter in small-animal imaging has not been well characterized but has been proposed to stem mainly from the gantry and environment rather than from the animal itself [35]. This view is supported by the fact that scatter correction usually does not involve correcting for scatter in the detector itself. Figure 17.2 illustrates the difference, in terms of origin and shape, between the object and detector scatter components.

Fig. 17.2

(a) Schematic diagram of the origin and shape of detector scatter component for a cylindrical multi-ring PET scanner geometry estimated from a measurement in air using a line source. (b) Schematic diagram of the origin and shape of object scatter component estimated from measurements in a cylindrical phantom using a centred line source. Both single and multiple scatter are illustrated

The scatter component of a prototype PET scanner based on avalanche photodiode (APD) readout of two layers of lutetium oxyorthosilicate (LSO) crystals with depth of interaction information, called the Munich-Avalanche-Diode-PET (MADPET), was assessed using Monte Carlo calculations [36]. In a more advanced version of this prototype (MADPET-II), the scatter fraction in a mouse-like cylindrical phantom (6 cm diameter, 7 cm height) containing a spherical source (5 mm diameter) placed at its center was 16.2 % when the cylinder was cold and increased to 37.7 % when the cylinder was radioactive, for a lower energy discriminator of 100 keV and no restriction on the acceptance angle [37]. One study reported a scatter fraction of 25–45 % in the rat brain using 11C-raclopride and an increase in distribution volume ratio of 3.5 % after scatter correction [38]. Monte Carlo simulations showed that small-animal PET scanners are also sensitive to random and scattered coincidences from radioactivity outside the FOV [39].

Important contributions to the field were made by Bentourkia et al. [40], who used multispectral data acquisition on the Sherbrooke small-animal avalanche photodiode PET scanner to fit the spatial distribution of the individual scatter components of the object, collimator and detector using simple mono-exponential functions. The scatter fraction of this scanner for rat imaging was estimated to be 33.8 %, with a dominant contribution from the single-scatter component (27 %), as assessed by Monte Carlo calculations [41]. The position-dependent scatter parameters of each scatter component are then used to design non-stationary scatter correction kernels for each point in the projection. These kernels are used in a non-stationary convolution-subtraction method which consecutively removes object, collimator and detector scatter from the projections [42]. This technique served as the basis for the implementation of a spatially variant convolution-subtraction scatter correction approach using a dual-exponential scatter kernel on the Hamamatsu SHR-7700 animal PET scanner [43].
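
The flavour of such kernel-based corrections can be conveyed with a stationary, single-kernel simplification: the sketch below iteratively estimates scatter as the convolution of the current trues estimate with a unit-area mono-exponential kernel and subtracts it from the measured projection. The kernel amplitude k and slope mu are placeholder values; the published methods fit position-dependent (non-stationary) kernels separately for object, collimator and detector scatter.

```python
import numpy as np

def convolution_subtraction(projection, k=0.15, mu=0.3, n_iter=3):
    """Stationary convolution-subtraction scatter correction (1D projection).

    Scatter is modelled as the trues estimate convolved with a unit-area
    mono-exponential kernel exp(-mu*|s|), scaled by the scatter fraction k,
    and subtracted; iterating refines the trues estimate.
    """
    s = np.arange(projection.size) - projection.size // 2
    kernel = np.exp(-mu * np.abs(s))
    kernel /= kernel.sum()                          # unit-area kernel
    trues = projection.astype(float)
    for _ in range(n_iter):
        scatter = k * np.convolve(trues, kernel, mode="same")
        trues = np.clip(projection - scatter, 0.0, None)
    return trues, scatter
```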

The scatter fraction (SF) and noise equivalent count rate (NECR) are usually measured using various discrete phantoms of different uniform sizes [35, 44, 45]. The idea of the NEMA protocol for scatter fraction estimation is to use a uniform cylindrical phantom with a line source inserted at a predefined radial displacement to give an estimate of the scatter fraction that is representative of the whole phantom [46]. For this purpose, the scatter fraction and count rate performance are determined using mouse- and rat-sized phantoms with an 18F line source insert. The mouse-sized phantom is 70 ± 0.5 mm long and 25 ± 0.5 mm in diameter with a cylindrical hole (3.2 mm diameter) drilled parallel to the central axis at a radial distance of 10 mm. The rat-sized phantom has a diameter of 50 ± 0.5 mm and a length of 150 ± 0.5 mm with a cylindrical hole (3.2 mm diameter) drilled parallel to the central axis at a radial distance of 17.5 mm. For instance, the scatter fraction for the X-PET™ subsystem of the FLEX Triumph™ PET/CT scanner was measured to be 7.9 % for the mouse-sized phantom, whereas a value of 21 % was reported for the rat-sized phantom [47]. These values were measured to be 19 % and 31 %, respectively, for the LabPET™-8 scanner [48].
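
Both figures of merit reduce to simple ratios of the trues (T), scattered (S) and random (R) coincidence rates. The following minimal sketch encodes the standard definitions SF = S/(T + S) and NECR = T²/(T + S + kR); the factor k depends on how randoms are estimated and is not specified by the text above.

```python
def scatter_fraction(trues, scattered):
    """NEMA-style scatter fraction SF = S / (T + S)."""
    return scattered / (trues + scattered)

def necr(trues, scattered, randoms, k=2):
    """Noise equivalent count rate, NECR = T^2 / (T + S + k*R).

    k = 2 when randoms are estimated by delayed-window subtraction,
    k = 1 when a noiseless randoms estimate is available.
    """
    return trues**2 / (trues + scattered + k * randoms)
```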

Monte Carlo simulation studies have shown that the optimum radial displacement of the line source for relevant assessment of the scatter fraction over a range of phantom sizes is ~3/4 of the phantom radius from the center [44], which is very close to the position recommended by the NEMA NU-4 standard for animal scanners [46], in contrast to the NU-2 standard for clinical whole-body scanners [49].

Exhaustive experimental measurements were also carried out to characterize the magnitude and origin of scattered radiation for the microPET II small-animal PET scanner [35]. It has been shown that for mouse scanning, the scatter from the gantry and room environment, as measured with a line source placed in air, dominates over object scatter. The environmental scatter fraction rapidly increases as the lower energy discriminator decreases and can exceed 30 % for an open energy window of 150–750 keV. The scatter fraction originating from the mouse phantom is very low (3–4 %) and does not change considerably when the lower energy discriminator is increased. The object scatter fraction for the rat phantom varies between 10 and 35 % for different energy windows and increases as the lower energy discriminator decreases.

Likewise, the measured scatter fractions for the mouse (rat) NEMA count rate phantom on the microPET-R4 and the microPET® FOCUS-F120 were 14 % (29 %) and 12.3 % (26.3 %), respectively, when using a lower energy discriminator of 250 keV [50]. If the discriminator is increased to 350 keV, the scatter fraction drops to 8.3 % (19.6 %) and 8.2 % (17.6 %), respectively. Scatter correction is thus more important for rat scanning, whereas a large energy window (i.e., 250–750 keV) would be more appropriate for small objects (e.g. mouse brain scanning) to increase system sensitivity.

Several reports have noted that within specific species, especially rats or small rabbits, there is substantial variation in body size, for example among littermates or in diabetic models put on high-calorie diets. Moreover, body shape is not uniform along the axial direction: the body cross-section of rodents, especially rats and larger species, increases from the head towards the pelvic region [51]. It has accordingly been suggested that a phantom representing a varying range of cross-sections and dimensions would be better suited for the assessment of these parameters, as for clinical PET systems [52]. Furthermore, it is nowadays common practice to increase the throughput of rodent PET studies by simultaneously scanning multiple rodents placed at different radial offsets in the scanner’s FOV [53].

More recently, Prasad et al. [51] reported on the design and development of a cone-shaped phantom for the measurement of object size-dependent SF and NECR, and used it to assess these parameters as a function of radial offset, object size and lower energy threshold for two small-animal PET scanners, namely the X-PET™ and the LabPET™-8. The optimized dimensions of the cone-shaped phantom were 158 mm (length), 20 mm (minimum diameter) and 70 mm (maximum diameter), with a taper angle of 9°. Depending on the radial offset from the centre of the central axial FOV (3–6 cm diameter), the SF for the cone-shaped phantom varied from 26.3 to 18.2 %, 18.6 to 13.1 % and 10.1 to 7.6 % for the X-PET™, whereas it varied from 34.4 to 26.9 %, 19.1 to 17.0 %, and 9.1 to 7.3 % for the LabPET™-8, for lower energy thresholds (LETs) of 250, 350 and 425 keV, respectively. The SF increases as the radial offset decreases, the LET decreases and the object size increases. Overall, the SF is higher for the LabPET™-8 than for the X-PET™ scanner. Figure 17.3 illustrates a Monte Carlo model of the LabPET™-8 small animal PET scanner with the cone-shaped phantom in the FOV (left), together with the SF for the FOVmouse portion of the cone-shaped phantom for the LabPET™-8 scanner using LETs of 250, 350 and 425 keV (right). The SF estimates are shown for a line source located at the center and at 10 and 15 mm radial offset [51].

Fig. 17.3

Monte Carlo simulation model of the LabPET™-8 small animal scanner (left). Representative example of the variation of the SF (in %) as a function of radial offset for the cone-shaped phantom using LETs of 250, 350 and 425 keV for the axial FOVmouse (right)

The same authors characterized the magnitude and spatial distribution of the scatter component in small-animal PET imaging when scanning single and multiple rodents simultaneously, and assessed the performance of model-based scatter correction under the same conditions [54]. The modelled scatter component for the LabPET™-8 scanner obtained using the single-scatter simulation (SSS) technique was compared to Monte Carlo simulation results. Good agreement was observed between the calculated and Monte Carlo simulated scatter profiles for single- and multiple-subject imaging. In the LabPET™-8 scanner, the detector covering material (kovar) contributed the largest share of scattered events, while the scatter contribution of the lead shielding was negligible. The increase in SF ranged between 25 % and 64 % when imaging multiple subjects (three to five) of different sizes simultaneously, in comparison to imaging a single subject.

Similar approaches have been undertaken for positron emission mammography (PEM) units, where high spatial resolution, contrast resolution and sensitivity are all required to meet the needs of early detection of breast tumors, thus avoiding biopsy intervention. In this context, a scatter correction method for a regularized list-mode ML-EM reconstruction algorithm was proposed [55]. The object scatter component is modeled as an additive Poisson random variable in the forward model of the reconstruction algorithm. The mean scatter sinogram, which only needs to be estimated once for each PEM configuration, is estimated using lengthy Monte Carlo simulations [54].

A more recent approach aiming at simultaneous correction for attenuation and scatter was suggested [56]. This is achieved by analytical assessment of the spatial distribution of scattered photons using both emission and transmission images combined with prior knowledge of the probability of Compton scattering and scanner detection efficiency. The authors reported improved performance compared to the model-based approach proposed by Watson [57].

4 Attenuation Compensation

Similar to scatter correction [58], little has been published on attenuation correction (AC) in small-animal PET imaging, owing to the low magnitude of attenuation factors when imaging rodents (compared to clinical imaging). The magnitude of the correction factors ranges from approximately 45 for a 40 cm diameter human subject down to about 1.6 for a 5 cm diameter rat and nearly 1.3 for a 3 cm diameter mouse [59]. This elucidates why the problem of photon attenuation has been overlooked in small-animal imaging, even in the third generation of preclinical PET scanners [60]. PET scanner calibration factors are usually determined with and without AC, given that AC is still not well established in small-animal imaging.
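
These magnitudes follow directly from the physics: since both annihilation photons must escape the subject, the attenuation along an LOR depends only on the total tissue path length D, giving a correction factor of exp(μD). A quick check with μ ≈ 0.096 cm⁻¹ (water at 511 keV) reproduces the values quoted above:

```python
import math

MU_WATER_511 = 0.096  # cm^-1, linear attenuation coefficient of water at 511 keV

for label, diameter_cm in [("mouse", 3), ("rat", 5), ("human", 40)]:
    factor = math.exp(MU_WATER_511 * diameter_cm)   # LOR crossing the full diameter
    print(f"{label:6s} ({diameter_cm:2d} cm): correction factor ~ {factor:.1f}")
# mouse ( 3 cm): ~1.3   rat ( 5 cm): ~1.6   human (40 cm): ~46
```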

Fahey et al. [61] showed that the use of transmission (TX)-based AC improved quantitative accuracy but also reduced precision, as indicated by the variability of the attenuation corrected data. This can be compensated by noise reduction schemes such as segmentation of the TX data. Another study compared several measured TX-based techniques for deriving the attenuation map on the microPET Focus 220 animal scanner [62], including coincidence mode with and without rod windowing, singles mode with two different TX sources (68Ge and 57Co), and post-injection TX scanning. Moreover, the efficiency of TX image segmentation and the propagation of TX bias and noise into the emission images were examined. It was concluded that 57Co-based AC provides the most accurate attenuation map, with the highest SNR. Single-photon TX scanning using 68Ge sources suffered from degradations resulting from object Compton scatter. Monte Carlo simulation studies also demonstrated that background contamination of the 68Ge singles-mode data due to intrinsic 176Lu radioactivity present in the detector crystals can be compensated using a simple technique [63]. Compensating for scatter improved the accuracy for a cylindrical phantom (10 cm diameter) but overcorrected for attenuation for a mouse phantom. Low-energy 57Co-based AC also resulted in low bias and noise in post-injection TX scanning for activities in the FOV of up to 20 MBq. Attenuation map segmentation was most successful using 57Co single-photon sources; however, the modest improvement in quantitative accuracy and SNR may not justify its use, particularly for small animals. More sophisticated techniques have also been developed using multiple TX point sources, each surrounded by a plastic scintillator coupled to a miniature photomultiplier tube so as to detect the energy the positron deposits before annihilation [64, 65]. The LOR joining the current source position and the detector position is identified through the pulse provided by the energy lost in the plastic scintillator, whereas the scanner’s conventional detectors provide the second pulse.

Combined anato-molecular PET/CT imaging also provides a priori subject-specific anatomical information that is needed to correct the PET data for photon attenuation and other physical effects. The potential use of small-animal CT for AC is now well established and is considered one of the applications of low-dose microCT imaging that can drive the further development of dual-modality small-animal PET/CT [66]. Similar to SPECT/CT [67, 68], the accuracy of CT-based attenuation correction (CT-AC) in preclinical imaging was demonstrated using phantom and animal studies, where the low-dose CT was suitable for both PET data correction and PET tracer localization [59]. The principle of CT-AC is shown in Fig. 17.4. Among the advantages of the technique are the low statistical noise and high-quality anatomical information, the small crosstalk between PET annihilation photons and low energy X-rays, and higher throughput imaging protocols. Noise analysis in phantom studies with the TX-based method showed that noise in the TX data increases the noise in the corrected PET emission data, whereas the CT-based method was accurate and resulted in less noisy images. For small-animal imaging, hardware image registration approaches that rely on custom-made imaging chambers, which can be rigidly and reproducibly mounted on separate PET and CT preclinical scanners [69], are a reasonable alternative to combined PET/CT designs [70–72]. Calculated AC was reported to provide similar correction to CT-AC for a cylindrical phantom and for a mouse in which the attenuation medium volume matches the PET emission source distribution [73]. However, it undercorrects for attenuation when the emission image outline underestimates the attenuation medium volume (unmatched source distribution and attenuation medium).

Fig. 17.4

Strategy for implementation of CT-based attenuation correction on a preclinical PET/CT scanner

One should note that accurate and robust conversion of CT numbers derived from the low-energy polyenergetic X-ray spectrum of a CT scanner to linear attenuation coefficients at 511 keV is essential for accurate implementation of CT-AC of PET data. Several conversion strategies have been reported in the literature, including segmentation [74], scaling [75], hybrid (segmentation/scaling) [76], bilinear or piece-wise scaling [77], quadratic polynomial mapping [78, 79], and dual-energy decomposition methods [80]. Energy-mapping methods are generally derived at a preset tube voltage and current. These methods are widely used and validated on clinical PET systems [81]; however, they still need to be thoroughly investigated on dedicated high resolution, small FOV scanners such as those used for small-animal imaging. PET images of a transverse slice of the NEMA NU 4-2008 image quality phantom reconstructed with and without attenuation correction are shown in Fig. 17.5, together with a horizontal line profile [79].
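
As an illustration of the bilinear (piece-wise) scaling approach, the sketch below maps CT numbers to 511 keV linear attenuation coefficients using a breakpoint at 0 HU. The breakpoint, slopes and bone coefficient are representative textbook values, not parameters calibrated for any specific scanner or for the cited methods.

```python
import numpy as np

MU_WATER_511 = 0.096  # cm^-1 at 511 keV
MU_BONE_511 = 0.17    # cm^-1 at 511 keV (cortical bone, illustrative)

def hu_to_mu511(hu):
    """Bilinear mapping of CT numbers (HU) to 511 keV attenuation coefficients.

    Below 0 HU tissue is treated as an air/water mixture, above 0 HU as a
    water/bone mixture; the two linear segments meet at water (0 HU).
    """
    hu = np.asarray(hu, dtype=float)
    soft = MU_WATER_511 * (hu + 1000.0) / 1000.0                      # air-water segment
    bone = MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0  # water-bone segment
    return np.where(hu < 0.0, np.clip(soft, 0.0, None), bone)
```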

Fig. 17.5

Transaxial slices showing (a) uncorrected and (b) CT-based attenuation corrected PET images of the NEMA NU 4-2008 image quality phantom acquired on the FLEX Triumph™ PET/CT scanner. (c) The corresponding horizontal profiles through the images shown in (a) and (b)

Despite the wide acceptance of CT-AC as a reliable technique for achieving more accurate quantification in high resolution preclinical PET imaging, further work is still needed to explore its full potential, in particular when combined with scatter and beam hardening correction of the cone-beam CT data [82]. The impact of X-ray scatter in the cone-beam CT subsystems of combined preclinical PET/CT systems has been investigated in a limited number of studies [83–85]. Most of the approaches estimate the scatter-to-primary ratio (SPR) from projections in the 3D cone-beam geometry using the beam-stop method or Monte Carlo simulations. Alternatively, analytical models have been derived to estimate first order X-ray scatter by approximating the Klein–Nishina formula so that the first order scatter fluence is expressed as a function of the primary photon fluence on the detector [86].

Following the introduction of hybrid small-animal PET/MRI systems [87–92] (see also Chap. 15), MRI-guided attenuation compensation has received a great deal of attention in the scientific literature [93–95]. This is a very active research topic that will certainly impact the future of hybrid PET/MRI technology. The major difficulty facing MRI-guided attenuation correction lies in the fact that the MRI signal or tissue intensity level is not directly related to electron density, which renders the conversion of MRI images to attenuation maps less obvious than for CT. One approach uses representative anatomical atlas registration, where an MRI atlas is registered to the subject’s MRI and prior knowledge of the atlas’ attenuation properties (for example through coregistration to a CT atlas) is used to yield a subject-specific attenuation map [96]. This is the basis of the method proposed by Chaudhari et al. [97] for mouse imaging, using the Digimouse atlas. The critical part of the algorithm is the registration procedure, which might fail in cases with large deformations [98, 99]. The other fundamental question that remains to be addressed is: does the global anatomy depicted by an atlas really predict an individual’s attenuation map? [93]

5 Partial Volume Effect Correction

The quantitative accuracy of PET is hampered by the limited spatial resolution of currently available scanners. The well-accepted criterion is that one can accurately quantify the activity concentration for sources with dimensions equal to or larger than two to three times the system’s spatial resolution expressed in terms of its full width at half maximum (FWHM) [100, 101]. Smaller sources only partly occupy this characteristic volume and, as such, their counts are spread over a volume larger than the physical size of the object owing to the limited spatial resolution of the imaging system. It should be emphasized that the total number of counts is conserved in the corresponding PET images. In this case, the resulting PET images reflect the total amount of activity within the object but not the actual activity concentration. This phenomenon is referred to as the partial volume effect (PVE) and can be corrected using one of the various strategies developed for this purpose [102].

In clinical PET imaging, partial volume errors are a major source of error impacting image quantification. In preclinical PET imaging, they are expected to be less severe owing to the higher spatial resolution of dedicated small-bore systems. Nevertheless, the quantitative accuracy of small-animal imaging still bears inherent limitations, especially for quantification of tracer uptake in small organs such as the mouse brain or heart [103]. It has been reported that dedicated preclinical PET scanners can provide accurate quantification to within 6 % for features larger than 10 mm, whereas only about 60 % of object contrast was retained for features as small as 4 mm [61].

The high cost of dedicated preclinical instrumentation, and the interest expressed by several groups in conducting preclinical research in facilities equipped only with commercial clinical scanners, motivated the use of clinical PET scanners for imaging laboratory animals. Moreover, various strategies were developed to scan multiple rodents simultaneously at different radial offsets in the scanner’s FOV to increase the throughput of PET scanners [104]. However, the limited spatial resolution of clinical scanners deteriorates image quality and hampers quantitative accuracy by enhancing the impact of partial volume errors in small-animal imaging. Despite these limitations, several investigators successfully carried out research experiments involving scanning laboratory animals on clinical PET scanners [105–107]. Many of these studies used a simple approach to partial volume correction based on the calculation of recovery coefficients (RCs), the ratios of measured to true activity concentration [108, 109]. In this approach, the correction is performed by dividing the uptake value in a specific region of interest (ROI) by a size-dependent RC. RCs are commonly calculated through experimental measurements using objects of known size, shape, activity concentration and location in the scanner’s FOV [110]. Many of these approaches accounted for both ‘spill-out’ (loss of activity to surrounding tissues) and ‘spill-in’ (contamination of activity from surrounding tissues). Few studies have reported on partial volume correction for preclinical studies involving small-animal PET scanners [103, 111]. Correction for partial volume, even using this simple approach, significantly improved both the accuracy of small-animal PET semi-quantitative data in rat studies and their correlation with tumour proliferation, except for largely necrotic tumours [111].
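
A minimal sketch of this RC-based correction is given below, assuming the RCs have been tabulated beforehand from phantom objects of known size; the optional spill-in term and the numerical values in the usage comment are illustrative.

```python
def pvc_recovery_coefficient(roi_uptake, rc, spill_in=0.0):
    """Recovery-coefficient partial volume correction.

    rc is the size-dependent recovery coefficient (measured/true, <= 1)
    tabulated from phantom measurements; an estimated spill-in contribution
    from surrounding tissue is removed before dividing by rc.
    """
    return (roi_uptake - spill_in) / rc

# e.g. a 4 mm feature retaining ~60 % of its contrast (RC ~ 0.6, see text):
# pvc_recovery_coefficient(roi_uptake=0.6, rc=0.6) -> 1.0
```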

More sophisticated approaches described in the literature were designed specifically to compensate for the partial volume effect in rat brain imaging [112], tumour imaging [113] and cardiovascular imaging [114–117], including correction for the partial volume effect when using an image-derived input function for kinetic modelling [118, 119]. Most of these studies demonstrated significant improvement in quantitative accuracy when partial volume correction is applied. It has also been demonstrated that image correction and reconstruction techniques, object size and location within the FOV have a strong influence on the resulting partial volume effect [120]. Among the above referenced approaches, methods using the wavelet transform to incorporate high resolution structural information (CT or MRI) into low resolution PET images seem promising and should be investigated further [121].

6 Issues Pertaining to Quantification and Kinetic Modelling

The kinetic modeling of PET data depends on the radiotracer used for imaging, the data acquisition protocol and the biological tissues or organs under study. Each radiotracer behaves differently in the body, and the same tracer can behave differently in different types of tissue. The general principles of kinetic modeling are extensively reviewed elsewhere and will not be discussed here [122, 123]. Basically, two approaches have been adopted for kinetic analysis: (1) fitting tracer-dependent models on a voxel-by-voxel basis to produce parametric images, and (2) grouping voxels representing homogeneous tissues into volumes of interest (VOIs). The former approach bears the inherent drawback of generating noisy time-activity curves (TACs) at the voxel level and, as such, it is difficult to fit the model to the data for short-lived radionuclides. To this end, a number of approaches have been suggested to tackle this challenge, including spatial noise reduction techniques, cluster analysis, spatially constrained weighted nonlinear least-squares methods and wavelet denoising approaches [124]. The latter approach is more robust because of the averaging of the voxels contained in each VOI, which yields data with better statistical properties. It also allows a significant reduction of computation time, since processing is limited to a predefined number of VOIs instead of a large number of voxels. Yet this approach has some fundamental limitations, particularly when the assumption of tissue homogeneity within a VOI is questionable.

In the context of small-animal imaging, kinetic modeling of tracers presents several issues and additional challenges compared to human studies that need to be addressed through research before the technique can be exploited to its full potential [125]. These include improved image correction, reconstruction and analysis techniques, and the use of blood plasma data coupled with advanced kinetic models. In addition to these technical concerns, two other issues still need to be carefully addressed: the impact of anesthesia on the physiological processes being studied [126] and the radiation dose delivered to the animals [127–129], particularly in longitudinal studies where the animal serves as its own control.

The data acquisition process in small-animal imaging involving kinetic modeling consists of the following steps [125]: (1) animal handling and preparation, (2) tracer injection, usually through the tail vein, (3) dynamic or list-mode PET data acquisition, and (4) direct blood sampling or acquisition of an early dynamic frame at the level of the heart if image-based derivation of the input function is sought. It is common practice to start the acquisition simultaneously with the bolus injection. Special attention has to be paid to dead-time correction, particularly at the beginning of the scan, to measure the relatively high initial activity accurately. Likewise, appropriate cross-calibration procedures between the PET scanner, dose calibrator and well counter should be performed, and the clocks of the different devices should be synchronized to limit decay correction errors. The total acquisition time has consequences for the size of the data sets, if acquired in list-mode format, and for the animal’s physiology. It is generally assumed that the physiological and metabolic processes under investigation remain unchanged during the course of the study, except where the aim of the study is to influence these processes (e.g. activation studies).

The input function is often required for kinetic modeling studies to determine the amount of the radiotracer in the blood (or plasma) delivered to tissue, and can be assessed from arterial blood samples. However, blood sampling is risky and, in the specific case of small animals, technically demanding; moreover, there may not be enough blood to withdraw, especially in repeated studies. Therefore, only a small fraction of the total blood volume is usually withdrawn to limit side effects. This obviously limits the number of samples, which can be withdrawn either manually or automatically by means of blood sampling devices in a discrete or continuous fashion. Novel automated microfluidic blood sampling devices allow taking only a few samples corresponding to a small fraction of the total blood volume [130, 131]. As mentioned above, alternative methods rely on extracting the input function directly from the images. In this case, the radioactivity in the arterial blood can be assessed by defining VOIs on the left atrium or the left ventricle (in cardiac studies) [132, 133]. However, special attention must be paid to correcting for spillover between blood and tissue (owing to the small size of the heart relative to the scanner’s spatial resolution) and for metabolites. Another alternative is to use a standard arterial input function derived from an automated slow bolus infusion of 18F-FDG, adjusted by a few measured blood samples [134]. Promising approaches for deriving the input function without blood sampling include reference tissue approaches [135, 136] and decomposition of images using factor analysis [137, 138] or independent component analysis [139]. Alternatively, beta-probes have also been used to determine the input function [140, 141]; however, complex surgery might be needed to estimate the input function accurately [142].
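
To make the modeling step concrete, the sketch below implements Patlak graphical analysis, a standard linearization for irreversibly trapped tracers such as 18F-FDG, given a tissue TAC and a plasma input function. The function is illustrative rather than the chapter's prescribed method; t*, the lumped constant (LC) and the units are assumptions to be set per study.

```python
import numpy as np

def patlak(t, c_tissue, c_plasma, t_star=10.0, lc=1.0, glucose=5.0):
    """Patlak graphical analysis of a dynamic 18F-FDG study.

    For an irreversibly trapped tracer, C_t(t)/C_p(t) plotted against
    int_0^t C_p(u) du / C_p(t) becomes linear beyond t*, with slope Ki;
    MRGlc = Ki * [glucose] / LC.  t*, LC and units are study-specific.
    """
    cp = np.maximum(c_plasma, 1e-12)                  # guard against division by zero
    integral = np.concatenate(([0.0], np.cumsum(
        np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))      # trapezoidal integral of C_p
    x, y = integral / cp, c_tissue / cp
    late = t >= t_star                                # fit the linear segment only
    ki, intercept = np.polyfit(x[late], y[late], 1)
    return ki, ki * glucose / lc                      # Ki and MRGlc
```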

Automatic quantitative analysis of molecular PET data is essential, as it has the potential to enhance consistency among interpreting observers regardless of their experience and to reduce variability across institutions in multicentre trials. For example, the development of tracer-specific small animal PET probabilistic atlases [143] correlated with anatomical (e.g. MRI) templates has enabled automated volume-of-interest or voxel-based analysis of small animal PET data with minimal end-user interaction [144]. One such software tool was developed by Kesner et al. [145] to enable assessment of the biodistribution of PET tracers from small animal PET data. This is achieved through non-rigid coregistration of a digital mouse phantom with the animal PET image, followed by automated calculation of tracer concentrations in 22 predefined VOIs representing the whole body and major organs. The development of advanced anatomical models, including both stylized and more realistic voxel-based mouse [146–149] and rat [150–153] models obtained from serial cryosections or dedicated high resolution small animal CT and MRI scanners, will certainly help to support ongoing research in this area [154, 155]. Figure 17.6 illustrates a representative slice of PET, CT, cryosection, Atlas and overlay images through the Digimouse model [147]. More recently, a methodology for fully automated atlas-guided analysis of small animal PET data through deformable registration to an anatomical mouse model was reported [156]. Representative image registration results between experimental mouse studies and the Digimouse atlas are shown in Fig. 17.7. Direct segmentation of functional PET images, alleviating the need for corresponding anatomical information or an atlas, has also been reported [157]. Such approaches are useful for the automated calculation of TACs for the various organs/tissues of interest, as required for kinetic modeling.

Fig. 17.6

Spatially registered (from left to right) PET, CT, cryosection, Atlas and overlay images for a coronal slice through the Digimouse model. Reprinted with permission from [147]

Fig. 17.7

Illustration of a representative deformable registration example between the Digimouse atlas and an experimental mouse study showing: (a) overlay of the Digimouse atlas onto the corresponding CT images, (b) actual 18F-FDG PET/CT mouse study, (c) the mouse study shown in (b) with overlay of the segmentation (seven organs) onto the CT image, (d) CT-to-CT registration of the Digimouse and the actual mouse study shown in (c), (e) overlay of the transformed segmentation (seven organs), using the registration parameters obtained in (d), onto the CT image, and (f) transformed PET/CT study using the registration parameters obtained in (d). Reprinted with permission from [156]

7 Summary and Future Directions

It is gratifying to see in overview the progress that quantitative analysis of small-animal PET data has made: from cumbersome manual techniques, through semi-automated approaches requiring multimodality imaging, towards fully automated atlas-guided analysis. Significant attention has been devoted to optimizing algorithmic design and computational performance and to balancing conflicting requirements. Approximate methods suitable for applications that do not require accurate quantitative measurements, and more sophisticated approaches for research applications where there is greater emphasis on quantitative accuracy, are both being pursued. Quantitative high resolution preclinical multimodality imaging will undoubtedly be an accurate and cost-effective tool for conducting basic research studies that further our understanding of complex diseases and physiological processes, and might also assist in drug discovery and many other applications relying on small-animal models of human disease.

Technical challenges remain for quantitative imaging, particularly in the areas of motion tracking and correction when scanning freely moving awake rodents [158–160], accurate image quantification when using exotic, non-conventional radionuclides [161, 162], and the development of more accurate kinetic models at the voxel level. As these challenges are met and experience is gained, quantitative small-animal molecular imaging will attract more interest and make a more profound impact on biomedical research.