
Learning Objectives
  • To understand the basic principle of PET quantification

  • To realize that uptake images represent summed images of several biological processes

  • To get an understanding of when and how PET data should be quantified

  • To realize that quantification requires attention to many data acquisition, data processing, and data analysis issues

  • To understand the need for an accurate arterial input function

  • To know when a reference tissue input function is sufficient

  • To understand the basic principles of tracer kinetic modelling

  • To understand the basic principles of parametric imaging

  • To understand the limitations of simplified semiquantitative methods

1 Characteristics of PET

The two most important characteristics of positron emission tomography (PET) are its high sensitivity (pico- to nano-molar range) and its ability to accurately measure the concentration of a positron-emitting radionuclide within the human body. In fact, PET was developed in the 1970s as a noninvasive in vivo method to measure regional physiology (originally blood flow, oxygen utilization, and glucose metabolism) in humans [1, 2]. Its high sensitivity facilitated the development of tracer studies of neuroreceptors in the 1980s. The high sensitivity of PET made it possible to use (tracer) amounts of labeled compounds that did not affect the receptors themselves (negligible receptor occupancy). At the same time, proper quantification remained important for correct interpretation of the signals captured in the image. Both sensitivity and quantitative accuracy are unique for PET and make it the method of choice for investigating molecular targets and interactions in vivo.

Key Learning Points

  • The two most important characteristics of PET are high sensitivity and quantitative accuracy.

  • PET is the method of choice for measuring molecular targets and interactions in vivo.

2 Issues in Quantification

Clinically, PET studies are usually performed using simple qualitative imaging. However, in many circumstances, quantifying the data provides additional, clinically relevant information (e.g., measuring absolute myocardial blood flow at rest and stress in a myocardial perfusion scan). The particular type of study (and associated scanning protocol) will depend on the actual research or clinical question that needs to be addressed. For example, for staging in oncology, it is important to identify and localize sites with increased [18F]FDG uptake, and, consequently, a simple static scanning protocol with visual (qualitative) interpretation is sufficient. In other cases, e.g., for monitoring response to therapy, full quantification using a dynamic scanning protocol and arterial sampling may be essential. Finally, for many applications, an intermediate semiquantitative method may be adequate. Nevertheless, all semiquantitative methods should first be validated by comparing results with those derived from a fully quantitative method.

Key Learning Point

  • The level of quantification required depends on the clinical or research question to be addressed.

3 Data Acquisition Issues

PET is based on the coincidence detection of the two high-energy annihilation photons resulting from the interaction of an emitted positron with an electron in tissue. Coincidence detection identifies the line of response (the line along which the annihilation took place), which is the basis for the quantitative nature of PET, primarily because the total path length of the two annihilation photons through tissue is then known. A separately acquired CT scan provides the attenuation information needed to correct the detected coincidences for attenuation within the tissue. Absolute measurement of radioactivity concentrations requires attention to detail, and the following data acquisition issues should be considered.

3.1 Normalization

Normalization of the scanner is needed to correct for differences in sensitivity between detector pairs (similar to performing a “flood” correction with a single-photon gamma camera). In general, the procedure prescribed by the manufacturer suffices. This procedure needs to be repeated occasionally depending on the possible drift of the scanner. Again, the manufacturer usually has a daily quality control procedure in place to detect changes that would compromise scanner performance.

3.2 Injected Activity

The injected activity should be in a range that can be handled by the scanner. It is clear that reconstructed images will be very noisy if the activity is too low, making it difficult to extract reliable (precise) radioactivity measurements at least at the voxel level. On the other hand, if the activity is too high, the signal may be dominated by random coincidences (quadratic increase with activity) rather than true coincidences (linear increase with activity). As the true coincidences are obtained by subtracting the random counts from the total counts [3], the true signal will become increasingly noisy, as it is obtained by subtracting two large numbers from each other. In addition, with very high activities, dead-time may become an issue, and the response of the scanner could even become nonlinear.
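As a rough illustration of this effect, the sketch below propagates Poisson noise through the randoms subtraction, assuming trues that scale linearly and randoms that scale quadratically with activity. All constants and count levels are arbitrary assumptions; real scanners should be characterized using their measured count-rate curves.

```python
import numpy as np

# Illustrative only: trues ~ linear, randoms ~ quadratic in activity.
# With a (Poisson) randoms estimate subtracted from the prompts,
# var(trues) ~ prompts + randoms, so the relative noise on the net
# trues grows as the activity increases.
activity = np.array([50, 100, 200, 400, 800], dtype=float)   # MBq (assumed)
trues = 2.0e3 * activity                                     # counts per frame (assumed)
randoms = 2.0 * activity**2                                  # counts per frame (assumed)
prompts = trues + randoms

rel_noise = np.sqrt(prompts + randoms) / trues               # COV of (prompts - randoms)
for a, rf, r in zip(activity, randoms / prompts, rel_noise):
    print(f"{a:5.0f} MBq: randoms fraction {rf:.2f}, relative noise on trues {100*r:.1f}%")
```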

3.3 Cross-Calibration

Fully quantitative PET studies usually require measurement of the arterial input function. Measurement of such an input function will be discussed later, but independent of those measurements themselves, the scanner needs to be cross-calibrated against a well-counter used for measuring manual blood samples. This is done by scanning a homogeneous phantom and measuring an aliquot of the radioactivity solution in the well-counter.
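A minimal sketch of how such a cross-calibration factor could be derived from a phantom experiment is shown below; the numbers are purely illustrative.

```python
# Uniform phantom scanned on the PET scanner; an aliquot of the same solution
# is measured in the well-counter (values below are illustrative assumptions).
scanner_kBq_per_mL = 12.1   # mean of a large VOI in the reconstructed phantom image
well_kBq_per_mL = 12.9      # aliquot concentration measured in the well-counter

cross_cal = well_kBq_per_mL / scanner_kBq_per_mL
print(f"cross-calibration factor (well-counter/scanner): {cross_cal:.3f}")
# Manual blood sample concentrations are divided by this factor (or image values
# multiplied by it) so that blood and image data are expressed in the same units.
```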

For semiquantitative studies the PET signal is often normalized to injected activity. This means that a cross-calibration between scanner and dosimeter is required. This cross-calibration needs to be carried out with care, as technically it is more demanding than the cross-calibration with a well-counter.

Key Learning Points

  • For accurate results it is important to check performance of the scanner on a regular basis.

  • The injected dose should be low enough such that the scanner response is linear and high enough to avoid excessive noise.

4 Data Processing Issues

Before data can be analyzed to extract quantitative biological information, the acquired data need to undergo several correction and processing steps. Each of these steps could affect the accuracy of the final results. The most common issues are listed below.

4.1 Decay and Dead-Time

Prior to or during reconstruction, acquired PET data need to be corrected for decay (if analysis is based on decay-corrected data), dead-time, random coincidences, and scattered coincidences. These corrections are routinely incorporated in the scanner software. Usually they are taken for granted, but for dynamic scanning protocols, it may be useful to check their accuracy, e.g., the accuracy of dead-time corrections for very short frames.

4.2 Random and Scattered Coincidences

Nowadays the correction for random coincidences is usually performed by subtracting the random signal from the total signal (see above). More challenging is the scatter correction, which accounts for about 30–50% of the total number of acquired counts. This correction is based on both measurements and calculations, e.g., by modelling the scatter distribution within the subject based on counts measured outside the subject [4]. Particularly vulnerable are areas with low tracer uptake, where an error in the scatter contribution has more impact on the accuracy of estimated radioactivity concentrations. This means that the accuracy of the scatter correction algorithm should really be checked if a region with low tracer uptake is important for the final outcome measures (e.g., reference tissue models, target-to-reference tissue ratios).

4.3 Image Reconstruction

An essential step in the processing of acquired PET data is their reconstruction into images displaying the distribution of radioactivity within the field of view. Originally, a filtered back projection (FBP) algorithm was used. This algorithm provides an analytical solution to the reconstruction problem. It is accurate but at the same time sensitive to noise (poor image quality for low count densities), sometimes even showing streak artifacts. State-of-the-art scanners are no longer equipped with the FBP algorithm, at least not as the standard choice. Nowadays, an iterative reconstruction algorithm will be the standard option for clinical studies. Iterative reconstructions provide significantly better image quality (hence their popularity), but they may be biased especially in high-contrast images where the optimal solution has not been reached within the limited (preset) number of iterations. It is important to check the validity of an iterative reconstruction algorithm, especially when an image-derived arterial input function is used [5].

4.4 Tissue Attenuation

During reconstruction acquired data need to be corrected for tissue attenuation. For (older) stand-alone PET scanners, this could be done by acquiring a transmission scan using an external positron-emitting source. In theory, this is the most accurate method to correct for tissue attenuation, as the external source allows for direct measurements of the attenuation of 511 keV photons, the main limitation being the number of counts that can be acquired within a reasonable scanning time.

In current state-of-the-art PET/CT scanners, correction for tissue attenuation is based on a (fast) low-dose CT scan [6]. Accuracy of this method has been well validated, although misregistration of the emission and transmission scans can occur if the CT scan is acquired during breath-hold and the emission scan during tidal respiration. In addition artifacts may occur due to metal implants.

Attenuation correction for PET/MRI is an entirely different issue. As it is not possible to image bone with standard MRI sequences, attenuation needs to be estimated in a different way. Two approaches have been followed. In the first method, attenuation coefficients are allocated to three or four different tissue types, which are defined by segmentation of standard MRI images [7]. This method is currently implemented on most commercial PET/MRI scanners. It is, however, well documented that this type of attenuation correction (which basically ignores bone) can lead to substantial errors, and it is therefore not suitable for quantitative PET studies unless serial measurements are performed (e.g., measuring response to therapy), where the error will be similar for successive scans. A more promising technique is the use of ultrashort echo time MRI sequences, which may be able to image bone directly. This field is still in development, and the best method has yet to be identified.

4.5 Patient Motion

An important source of error can be patient motion during scanning, especially when long dynamic scans are acquired. It is, therefore, important to monitor each patient carefully during scanning and to correct for motion immediately, when possible. This can, for example, be done by projecting laser beams onto the skin and marking the projected points at the start of the scan. In addition, it is important to retrospectively assess whether patient motion has occurred, i.e., before analyzing a scan, especially when volumes of interest are small. For a dynamic scan, it is then possible to co-register all dynamic frames to the same frame (if there is enough anatomical detail in the individual frames; otherwise a series of short frames needs to be added together first). It should be noted, however, that if there has been motion during a scan, there will also be a mismatch between the attenuation (transmission or low-dose CT) scan and the frames that were acquired after the motion occurred. This means that co-registration alone is not sufficient and that the attenuation correction also needs to be adjusted [8]. This is also the case when there is patient movement between attenuation and emission scans. In principle, it is also possible to monitor patient motion online using, for example, an external laser beam system, which provides a means to correct for motion during a frame rather than just between frames. Although such systems have been available for some years, they have not found wide application.

4.6 Volume of Interest Definition

Whether data are analyzed on a region of interest basis or at a voxel level, at some stage volumes of interest (VOI) need to be defined for comparing results between different patient groups, for evaluating changes over time, or simply for reporting results to the scientific community. If the structures of interest can easily be identified on the PET scan, such as tumors or metastatic lesions on an [18F]FDG scan, VOIs can be defined directly on the PET scan. The simplest way is to define VOIs manually. However, results will be very operator dependent, and test-retest reliability may suffer. A better option is to use a fixed-size VOI centered on the maximum value. This reduces operator variability but may be prone to errors when following effects of therapy in small lesions (which may get smaller, thereby imposing a variable partial volume effect). Another common option is to use a thresholding technique, in which all voxels with uptake above a certain percentage of the maximum (e.g., 50% or 70%) are included. To account for variable background levels, the threshold value can also be adjusted for the actual background level.
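The sketch below shows one common way to implement such a background-adjusted threshold VOI on an image array; the 50% fraction and the background handling are example choices, not a prescribed standard.

```python
import numpy as np

def threshold_voi(image, fraction=0.5, background=0.0):
    """Return a boolean VOI mask and its mean uptake.

    Voxels are included if their value exceeds
        background + fraction * (maximum - background),
    which reduces to a plain percentage-of-maximum threshold for background = 0.
    `image` is a NumPy array in kBq/mL (illustrative convention).
    """
    threshold = background + fraction * (image.max() - background)
    mask = image > threshold
    return mask, float(image[mask].mean())

# Example on a synthetic image: a "lesion" of value 10 in a background of 2
img = np.full((50, 50, 50), 2.0)
img[20:25, 20:25, 20:25] = 10.0
mask, mean_uptake = threshold_voi(img, fraction=0.5, background=2.0)
print(mask.sum(), "voxels, mean uptake", mean_uptake)
```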

If the structures of interest are not directly related to uptake in the PET scan, but to anatomical structures, a (low-dose) CT scan or, especially for brain studies, an MRI scan should be used. For brain studies it is common practice to use one of the VOI templates that have been published [9]. In that case, PET and MRI scans need to be co-registered, as templates usually are based on MRI. Again, co-registration accuracy for individual scans should be checked. Interestingly, often VOIs are defined for gray matter only, following a gray/white matter segmentation of the MRI scan. For multicenter studies and for studies comparing results with published data, it should be noted that this gray/white matter segmentation may depend on the MRI sequence. In addition, results from different scanners may not be the same even if the same sequence is used. This leads to some inter-study variability that should be taken into account when interpreting results.

4.7 Partial Volume Effects

Measured concentrations in small structures suffer from partial volume effects, i.e., the concentration in a structure smaller than twice the spatial resolution of the scanner will be underestimated. This is due to the fact that part of the activity within the structure will spread out of the boundaries of the structure because of the limited spatial resolution. Similarly, activity from outside such a structure will spill in. Consequently, measured concentrations may be underestimated or overestimated. Many different algorithms have been developed to correct for partial volume effects [10, 11], but there is no consensus on which is the most accurate one, and results may differ. In the brain, for example, partial volume corrections usually are based on gray/white matter segmentation of MRI scans. As mentioned above, this segmentation is scanner-dependent, and therefore partial volume corrections based on these segmentations will exaggerate interinstitutional differences. In addition, partial volume corrections may introduce bias, as they are usually based on regular (phantom) structures, while actual tissue structures are less regular. In general, it is good practice to analyze PET data with and without partial volume corrections. This could, for example, be a way to check whether observed differences between patient groups are due to partial volume effects or not.
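The one-dimensional sketch below illustrates spill-out and spill-in by convolving an assumed activity profile with a Gaussian point spread function; the structure size, contrast, and resolution are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

voxel = 1.0                                    # mm per sample (assumed)
profile = np.full(200, 1.0)                    # background activity = 1 (assumed)
profile[95:105] = 4.0                          # 10-mm structure with true value 4

fwhm = 6.0                                     # assumed scanner resolution (mm)
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel
measured = gaussian_filter1d(profile, sigma)   # profile after resolution blurring

true_mean = profile[95:105].mean()
measured_mean = measured[95:105].mean()        # mean within the true boundaries
print(f"true {true_mean:.2f}, measured {measured_mean:.2f}, "
      f"apparent recovery {measured_mean / true_mean:.2f}")
```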

In some cases, such as measuring myocardial blood flow using [15O]H2O, it is possible to include a partial volume correction term in the operational equation [12]. In other words, in fitting the data also a term accounting for partial volume is included. This leads to increased accuracy and there is no need to rely on anatomical imaging techniques.

Key Learning Points

  • Accuracy of various corrections should be checked under conditions that resemble actual scans.

  • Attenuation correction on PET/MRI scanners should be validated for each application.

  • Several methods exist to correct for patient motion.

  • For larger motions, an adjustment of the attenuation correction may also be required.

  • Reconstruction algorithms may have an effect on the final results, and therefore the same algorithm should be used throughout a study.

  • Definition of volumes of interest should also remain consistent (and reproducible) within a study.

  • Where appropriate, data should be corrected for partial volume effects, but even in those cases, data should also be analyzed without partial volume correction.

5 Data Analysis Issues

After acquiring and processing PET data, taking into account all issues mentioned above, the resulting images should be ready for analysis. At this stage, these images should provide an accurate estimation of the spatial distribution of radioactivity within the field of view of the scanner, nothing more and nothing less. However, the clinician will not really be interested in radioactivity measurements, but rather in the underlying biology reflected by these radioactivity concentrations. In other words, a final step has to be taken, extracting relevant biological information from these radioactivity measurements. To do so, two issues are relevant.

5.1 Arterial Input Function

Obviously, the actual uptake in tissue will depend on the injected activity. If in the same patient (at the same time) twice as much would have been injected, the tissue concentration would also have been twice as high. However, when a tracer is administered intravenously, it will circulate through the body. Exchange with tissue at the capillary level will depend not only on the arterial concentration at that time but also on residual tissue activity from exchange at earlier times. Over time, tracer will be cleared from the circulation by uptake in all tissues, by clearance from the body (e.g., through the kidneys), and by metabolism (of the tracer itself). This means that the arterial input can vary from subject to subject, between patient groups, and within one subject as a result of an intervention (e.g., a drug). For fully quantitative studies, it is therefore necessary to measure the arterial input function, i.e., the radioactivity concentration in arterial plasma as a function of time.

In general, a tracer is administered as an intravenous bolus injection. Arterial concentrations first peak and then decline continuously during the scan, usually with very rapid changes early after injection. The arterial input function can be measured by taking multiple manual arterial blood samples during the scan, initially with fast sampling to catch the peak of the curve and subsequently at lower frequency when changes in the arterial concentration become slower. Although quite labor intensive, fast manual sampling certainly is a feasible option. The main drawback in terms of accuracy is that timing errors are easily made: it is very difficult to keep track of the exact times at which arterial samples are actually taken, especially when unexpected problems occur, given that a sampling frequency of once per 5–10 s is initially required.

A better solution is the use of an automatic online blood sampler in which blood is withdrawn continuously from an artery (usually the radial artery) using a dedicated pump, with the radioactivity in the line monitored by an external detector [13]. Using this setup a time resolution of 1 s can easily be obtained. Apart from saving manpower (not only with respect to sampling itself but also when processing the samples), the advantages of an online system are better time resolution and no potential timing errors. A potential disadvantage of online blood withdrawal is the risk of obstruction due to blood clots. Care should be taken to keep the line open throughout the study period by regular flushing, e.g., immediately following manual (see below) samples. In addition, because of the distance between cannula and detection site, a correction for delay (compared with arrival at the tissue) should be made. It should be noted that a correction for delay should also be made in case of manual sampling as the transit time from the heart to the wrist will be longer than that from the heart to, for example, the brain. For manual samples this delay is ignored, as it is almost impossible to measure with the time resolution of manual samples. Depending on the application, it may also be necessary to correct for dispersion in the tubing [14], but for many tracers this is not really necessary as long as every effort is made to limit delay and dispersion by using as few connectors as possible and keeping the line as short and as thin (e.g., 1 mm inner diameter) as possible.

An alternative (less accurate) approach for measuring concentrations in blood is to include a large vascular structure (such as the aorta, carotid, or iliac artery) in the image and determine the total blood activity from a volume of interest set within the structure.

Most ligands are metabolized in the body (primarily in the liver). This means that for a radiotracer at least one of the metabolic products (metabolites) will be radioactive. These radioactive metabolites will also circulate through the body and could compromise the signal obtained from the parent radiotracer. For brain studies it is therefore good practice to use only tracers whose radioactive metabolites do not cross the blood-brain barrier. As the brain signal is then due to the parent tracer alone, it follows that the arterial plasma curve needs to be corrected for radioactive metabolites to extract the true arterial input function.

A metabolite-corrected arterial plasma input function can be derived from an online measured arterial whole blood curve by using information from a limited number of manual samples withdrawn during the scan at times where changes in the curve are not too rapid (i.e., after at least 5 min). Both whole blood and plasma radioactivity concentrations of these samples need to be measured together with metabolite fractions. The whole blood concentrations can be used to calibrate the curve obtained from the online samples. The plasma-to-whole blood radioactivity ratios of these samples are fitted to a mathematical function, usually an exponential. The whole blood curve should then be multiplied with this mathematical function to generate a (total) plasma curve as function of time. The measured plasma metabolite fractions are then used to derive the parent fraction for each sample, which is the fraction of total radioactivity in plasma that is due to the originally injected tracer. This fraction over time should also be fitted to a mathematical function, often a so-called Hill function [15]. Finally, the total plasma curve derived above should be multiplied with this mathematical function, resulting in the final metabolite-corrected arterial plasma input function.
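A minimal sketch of this processing chain is given below. The sampling times, curve shapes, the exponential form chosen for the plasma-to-whole-blood ratio, and the particular Hill parameterization are all assumptions for illustration; the appropriate functional forms are tracer dependent.

```python
import numpy as np
from scipy.optimize import curve_fit

# Online whole blood curve (1-s resolution) and manual samples; values are assumed.
t = np.arange(0.0, 3600.0, 1.0)                                          # seconds
whole_blood = np.interp(t, [0, 40, 60, 300, 3600], [0, 120, 80, 30, 8])  # kBq/mL

t_man = np.array([300, 600, 1200, 2400, 3600], dtype=float)              # manual samples (s)
plasma_over_wb = np.array([1.20, 1.25, 1.30, 1.33, 1.35])                # plasma/whole blood
parent_fraction = np.array([0.85, 0.70, 0.55, 0.40, 0.33])               # parent in plasma

# 1) Plasma-to-whole-blood ratio fitted to a simple function (here an exponential
#    approach to a plateau, one possible choice).
ratio = lambda x, a, b, c: a - b * np.exp(-c * x)
p_ratio, _ = curve_fit(ratio, t_man, plasma_over_wb, p0=[1.4, 0.3, 1e-3])

# 2) Parent fraction fitted to a Hill-type function (one common parameterization).
hill = lambda x, a, b, c: 1.0 - a * x**b / (c + x**b)
p_hill, _ = curve_fit(hill, t_man, parent_fraction, p0=[0.8, 2.0, 1e6], maxfev=20000)

# 3) Whole blood -> total plasma -> metabolite-corrected plasma input function.
plasma = whole_blood * ratio(t, *p_ratio)
plasma_input = plasma * hill(t, *p_hill)
```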

5.2 Tissue Uptake

The acquired PET data result in images displaying the distribution of radioactivity concentrations (uptake) in organs. Ideally, uptake is associated with a single biological process, but in reality it is not. In fact, it is the result of various processes, some of which may interact with each other. At any given time after injection, total uptake is the sum of intravascular activity, free tracer in tissue (not bound to anything), nonspecific binding (i.e., bound to tissue proteins), and specific binding (bound to the target of interest), all of which change over time. The problem with a single measurement is that one can only guess how large the contribution of each signal is. For example, a highly vascularized tumor will have a larger contribution from intravascular activity than a more necrotic tumor, but the vascular and tissue contributions cannot be separated using a single image. In addition, over time, the specific signal may increase at the cost of the free signal, and again the kinetics of that process cannot be derived from a single measurement. Therefore, dynamic scanning is required for fully quantitative PET studies.

By projecting VOIs onto a dynamic scan, tissue time-(radio)activity curves can be extracted reflecting uptake, retention, and clearance in those VOIs. Even such a time-activity curve, however, only represents total uptake in a VOI (tissue) as a function of time. Each time-activity curve is a direct result of (the history of) the arterial input function. A tracer kinetic model is needed to extract relevant quantitative biological, physiological, or molecular information from the measured tissue tracer concentrations.

Such a model is a mathematical description of the fate of the tracer in the tissue under study. In practice all tracer kinetic models are compartmental models in which the uptake of a tracer is distributed over a (limited) number of discrete compartments [16]. For most applications single-tissue or two-tissue compartmental models suffice. It should be noted that the various compartments do not necessarily reflect physical compartments. For example, nonspecific and specific bindings are different compartments in a compartmental model although they are spread out over the same physical space. The main purpose of the compartments is that they provide a convenient way to describe tracer kinetics.

6 Compartmental Models

6.1 Single-Tissue Compartmental Model

As indicated by the name, in a single-tissue compartmental model, tissue is represented by just a single compartment. This is a valid model for a perfusion tracer, which enters and clears the tissue in a flow-dependent manner without any significant molecular interactions (such as binding to a target) within that tissue. It can also be used for measuring uptake in tissue, including binding, if the interactions within the tissue cannot be kinetically separated.

The basic differential equation of a single-tissue compartmental model, illustrated in Fig. 10.1, is given by:

$$ \mathrm{d}{C}_{\mathrm{T}}(t)/\mathrm{d}t={K}_1\cdot {C}_{\mathrm{P}}(t)-{k}_2\cdot {C}_{\mathrm{T}}(t) $$
(10.1)

where CT and CP are concentrations as function of time t for tissue and metabolite-corrected arterial plasma, respectively, and K1 and k2 represent rate constants for influx into and efflux out of the tissue, respectively. In other words, the rate of change of the tissue concentration is the difference between influx and efflux.

Fig. 10.1

Schematic diagram of the single-tissue compartmental model. CP and CT represent arterial plasma and tissue concentrations, K1 and k2 represent influx and efflux rate constants, and VB represents blood volume within the PET region

Note that if radiolabeled metabolites were also to enter the tissue, another input function (i.e., representing radiolabeled metabolites in plasma) and additional rate constants describing the exchange of those metabolites would be needed, seriously complicating the overall equation. Therefore, it is assumed here that radiolabeled metabolites do not enter the tissue, but in practice this needs to be validated for each tracer.

Equation (10.1) describes tissue uptake and clearance. In contrast to in vitro measurements where tissues can be washed, in vivo PET measurements do not allow for separating tissue from blood. Measured VOI concentrations are based on activity in both tissue and blood (vessels). To account for this, Eq. (10.1) has to be extended as follows:

$$ {C}_{\mathrm{PET}}(t)=\left(1-{V}_{\mathrm{B}}\right)\cdot {C}_{\mathrm{T}}(t)+{V}_{\mathrm{B}}\cdot {C}_{\mathrm{A}}(t) $$
(10.2)

where CPET is the concentration as measured by the PET scanner, CA the concentration in arterial whole blood (this time not corrected for metabolites, as the metabolites will also circulate in the blood), and VB the fractional blood volume.

The solution of Eq. (10.1) is given by:

$$ {C}_{\mathrm{T}}(t)={K}_1\cdot {C}_{\mathrm{P}}(t)\otimes \exp \left(-{k}_2\cdot t\right) $$
(10.3)

where ⊗ represents a convolution operation. In other words, CT at a given time is not directly dependent on CP at that time, but on the history of CP. By substituting Eq. (10.3) into Eq. (10.2), the following solution for the measured PET concentration is obtained:

$$ {C}_{\mathrm{P}\mathrm{ET}}(t)=\left(1-{V}_{\mathrm{B}}\right)\cdot {K}_1\cdot {C}_{\mathrm{P}}(t)\otimes \exp \left(-{k}_2\cdot t\right)+{V}_{\mathrm{B}}\cdot {C}_{\mathrm{A}}(t) $$
(10.4)

in which the measured PET concentration is a function of two (measured) input functions, i.e., the metabolite-corrected arterial plasma curve and the non-corrected arterial whole blood curve. In addition, the equation contains the parameters K1, k2, and VB, characteristic of the tissue under study, which can be estimated by nonlinear regression analysis, i.e., by fitting the measured tissue time-activity curve, with both vascular curves as input functions, adjusting K1, k2, and VB until the fit converges.
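A minimal sketch of such a fit is given below, using an assumed plasma input function and whole blood curve on a uniform one-second grid; in practice the model would be averaged over the actual frame durations and the data points weighted as discussed in Sect. 10.7.1.

```python
import numpy as np
from scipy.optimize import curve_fit

dt = 1.0 / 60.0                                   # 1-s steps, expressed in minutes
t = np.arange(0.0, 60.0, dt)                      # 60-min scan (assumed)
cp = 100.0 * t * np.exp(-t / 2.0)                 # assumed metabolite-corrected plasma curve
ca = 1.1 * cp                                     # assumed whole blood curve

def one_tissue(t, K1, k2, VB):
    """Eq. (10.4): (1 - VB) * K1 * Cp (x) exp(-k2*t) + VB * Ca on the uniform grid."""
    conv = np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt
    return (1.0 - VB) * K1 * conv + VB * ca

rng = np.random.default_rng(0)
measured = one_tissue(t, 0.3, 0.1, 0.05) + rng.normal(0.0, 0.5, t.size)  # synthetic data

(K1, k2, VB), _ = curve_fit(one_tissue, t, measured, p0=[0.1, 0.05, 0.03],
                            bounds=([0, 0, 0], [2, 1, 1]))
print(f"K1 = {K1:.3f}, k2 = {k2:.3f}, VB = {VB:.3f}, VT = K1/k2 = {K1/k2:.2f}")
```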

The fitted rate constants K1 and k2 also allow for calculation of an important characteristic of tissue, the volume of distribution or partition coefficient, which is the ratio of tissue and plasma concentrations at equilibrium. The volume of distribution VT is defined as:

$$ {V}_{\mathrm{T}}={C}_{\mathrm{T}}/{C}_{\mathrm{P}} $$
(10.5)

where CT and CP now represent equilibrium (time-independent) tissue and arterial plasma concentrations, respectively. It essentially indicates what tissue concentration would be reached if the arterial plasma concentration were kept constant. As mentioned above, VT can also be calculated from K1 and k2, i.e., from a study in which no equilibrium is reached. This can be derived by imposing equilibrium conditions on Eq. (10.1). In that case, there is no change in any concentration, and the left-hand side of Eq. (10.1) equals 0:

$$ \mathrm{d}{C}_{\mathrm{T}}(t)/\mathrm{d}t={K}_1\cdot {C}_{\mathrm{P}}(t)-{k}_2\cdot {C}_{\mathrm{T}}(t)=0 $$
(10.6)

It then follows that:

$$ {V}_{\mathrm{T}}={K}_1/{k}_2 $$
(10.7)

6.2 Two-Tissue Compartmental Model

If the tracer undergoes (kinetically measurable) interactions within tissue, more compartments need to be added to properly describe tracer kinetics. The most used model is a two-tissue compartmental model, which is shown in Fig. 10.2. This is the typical model for, among others, receptor studies. A good ligand will bind only to a single type of receptor, but the PET signal is still the sum of free ligand, nonspecifically bound ligand (i.e., binding to various proteins), and ligand specifically bound to the receptor. In theory this even requires a three-tissue compartmental model, but in practice the free and nonspecific compartments are lumped together into a single non-displaceable compartment, as it is assumed that kinetics between free and nonspecific compartments are very fast.

Fig. 10.2

Schematic diagram of the two-tissue compartmental model. CP, CND, and CS represent arterial plasma, and non-displaceable and specific tissue concentrations, respectively. K1 to k4 are rate constants for the transport between compartments, and VB is the blood volume within the PET region

For a two-tissue compartmental model, the measured PET concentration is still given by Eq. (10.2). The difference with the single-tissue compartmental model is the expression of CT, which for the two-tissue compartmental model is given by:

$$ {C}_{\mathrm{T}}(t)={C}_{\mathrm{ND}}(t)+{C}_{\mathrm{S}}(t) $$
(10.8)

where CND(t) and CS(t) represent concentrations in the non-displaceable and specific compartments, respectively, both as a function of time. The kinetics of these two compartments are described by the following differential equations:

$$ \mathrm{d}{C}_{\mathrm{ND}}(t)/\mathrm{d}t={K}_1\cdot {C}_{\mathrm{P}}(t)-{k}_2\cdot {C}_{\mathrm{ND}}(t)-{k}_3\cdot {C}_{\mathrm{ND}}(t)+{k}_4\cdot {C}_{\mathrm{S}}(t) $$
(10.9)
$$ \mathrm{d}{C}_{\mathrm{S}}(t)/\mathrm{d}t={k}_3\cdot {C}_{\mathrm{ND}}(t)-{k}_4\cdot {C}_{\mathrm{S}}(t) $$
(10.10)

As for the single-tissue compartmental model, the solutions of both Eqs. (10.9) and (10.10) contain convolutions similar to the one shown in Eq. (10.3). These solutions can be substituted in Eq. (10.8), which in turn can be substituted in Eq. (10.2). The final nonlinear equation, now containing two convolutions [17], can again be fitted using nonlinear regression analysis, resulting in values for the four rate constants and VB.
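As a sketch, the forward model of this two-tissue configuration can also be written directly as the coupled differential equations (10.9) and (10.10) and integrated numerically; the resulting model function can then be passed to a nonlinear least-squares routine. The grid, units, and solver settings below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_tissue_pet(t, cp, ca, K1, k2, k3, k4, VB):
    """C_PET on the grid t (min) from Eqs. (10.2) and (10.8)-(10.10).

    cp: metabolite-corrected plasma curve, ca: whole blood curve, both sampled on t.
    """
    cp_of = lambda x: np.interp(x, t, cp)
    def rhs(x, y):
        c_nd, c_s = y
        dc_nd = K1 * cp_of(x) - (k2 + k3) * c_nd + k4 * c_s   # Eq. (10.9)
        dc_s = k3 * c_nd - k4 * c_s                           # Eq. (10.10)
        return [dc_nd, dc_s]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.25)
    c_t = sol.y[0] + sol.y[1]                                 # Eq. (10.8)
    return (1.0 - VB) * c_t + VB * ca                         # Eq. (10.2)

# Estimating (K1, k2, k3, k4, VB) then amounts to nonlinear regression of the
# measured tissue curve against this model, e.g., with scipy.optimize.least_squares.
```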

Fitting five parameters (K1, k2, k3, k4, VB) from a single time-activity curve is a difficult task, especially since nonlinear regression is sensitive to noise. In addition, there can be high correlation between parameters, especially between k2 and k3 (both representing efflux from the first compartment), which means that the precision of the fitted rate constants, also called micro-parameters, can be low. The rapidly increasing mathematical complexity with additional compartments (each compartment adds another convolution) means that attempts at using a three-tissue compartmental model have been rare and, in general, unsuccessful.

Even if the precision of parameters derived from the two-tissue compartmental model is low, it is still possible to obtain valuable tissue information by using combinations of parameters, so-called macro-parameters. The most interesting macro-parameter for receptor studies is the non-displaceable binding potential BPND (more precisely it is the binding potential relative to the non-displaceable compartment), which, for tracer experiments, is given by [18]:

$$ {\mathrm{BP}}_{\mathrm{ND}}={k}_3/{k}_4 $$
(10.11)

BPND is of special interest because of its direct relationship with parameters known in pharmacology. k3 and k4 are related to pharmacological parameters in the following way [19]:

$$ {k}_3={f}_{\mathrm{ND}}\cdot {k}_{\mathrm{on}}\cdot {B}_{\mathrm{avail}} $$
(10.12)
$$ {k}_4={k}_{\mathrm{off}} $$
(10.13)

where fND represents the free fraction in the non-displaceable compartment, Bavail represents the number of available receptors, and kon and koff represent the rate constants for association and dissociation of the ligand-receptor complex, respectively. It is tempting to use the pharmacological parameter Bmax (maximum number of receptors) in Eq. (10.12), but this is not correct as some receptors might be occupied by endogenous ligands, hence the use of Bavail. In pharmacology, the equilibrium dissociation constant Kd is defined by:

$$ {K}_{\mathrm{d}}={k}_{\mathrm{off}}/{k}_{\mathrm{on}} $$
(10.14)

Combining Eq. (10.11) with Eq. (10.14) results in:

$$ {\mathrm{BP}}_{\mathrm{ND}}={f}_{\mathrm{ND}}\cdot {B}_{\mathrm{avail}}/{K}_{\mathrm{d}} $$
(10.15)

Assuming that fND is constant, Eq. (10.15) illustrates that BPND is related both to the number of available receptors (Bavail) and to the affinity of the ligand for the receptor (Kd).

Reliability of estimated BPND values can be poor, especially when kinetics between both tissue compartments are fast, making it difficult to distinguish them from each other. In that case, a better outcome measure will be VT, which tends to have higher precision.

VT is defined by Eq. (10.5) and, at equilibrium, both dCND(t)/dt and dCS(t)/dt are 0. From Eq. (10.10) it then follows that:

$$ {C}_{\mathrm{S}}=\left({k}_3/{k}_4\right)\cdot {C}_{\mathrm{ND}} $$
(10.16)

where CS and CND are now independent of time. From Eqs. (10.5), (10.8), and (10.16), it follows that:

$$ {V}_{\mathrm{T}}={C}_{\mathrm{T}}/{C}_{\mathrm{P}}=\left({C}_{\mathrm{ND}}+{C}_{\mathrm{S}}\right)/{C}_{\mathrm{P}}=\left(1+{k}_3/{k}_4\right)\cdot {C}_{\mathrm{ND}}/{C}_{\mathrm{P}} $$
(10.17)

Next, from:

$$ \mathrm{d}{C}_{\mathrm{ND}}(t)/\mathrm{d}t+\mathrm{d}{C}_{\mathrm{S}}(t)/\mathrm{d}t={K}_1\cdot {C}_{\mathrm{P}}-{k}_2\cdot {C}_{\mathrm{ND}}=0 $$
(10.18)

it follows that:

$$ {C}_{\mathrm{ND}}=\left({K}_1/{k}_2\right)\cdot {C}_{\mathrm{P}} $$
(10.19)

Finally, substituting Eq. (10.19) into Eq. (10.17) results in:

$$ {V}_{\mathrm{T}}=\left({K}_1/{k}_2\right)\cdot \left(1+{k}_3/{k}_4\right)=\left({K}_1/{k}_2\right)\cdot \left(1+{\mathrm{BP}}_{\mathrm{ND}}\right) $$
(10.20)

which shows the relationship between VT and BPND. It can be seen that VT not only reflects specific binding (BPND) but also contains a non-displaceable component (K1/k2), which is primarily due to nonspecific binding.

In the brain BPND can still be calculated from VT if another region exists with a similar level of nonspecific binding, but without specific binding, i.e., a region that is devoid of the receptor being targeted. The volume of distribution VT′ of such a reference region is given by:

$$ {V}_{\mathrm{T}}^{\prime }={K}_1^{\prime }/{k}_2^{\prime } $$
(10.21)

where K1′ and k2′ are the rate constants for the reference tissue.

If the blood-brain barrier is symmetric (i.e., increased transport into the brain is matched by a similar increase in rate of transport out of the brain) and the level of nonspecific binding is constant across the brain, the following equality holds:

$$ {K}_1/{k}_2={K}_1^{\prime }/{k}_2^{\prime } $$
(10.22)

Combining Eq. (10.20) with Eqs. (10.21) and (10.22) results in [20]:

$$ {\mathrm{BP}}_{\mathrm{ND}}=\left({V}_{\mathrm{T}}-{K}_1/{k}_2\right)/\left({K}_1/{k}_2\right)=\left({V}_{\mathrm{T}}-{V}_{\mathrm{T}}^{\prime}\right)/{V}_{\mathrm{T}}^{\prime }={V}_{\mathrm{T}}/{V}_{\mathrm{T}}^{\prime }-1 $$
(10.23)

In general, this estimation of BPND is more stable than direct fitting of BPND, but it can only be performed if a reference tissue devoid of receptors does exist. In addition, the calculation is only valid if the blood-brain barrier is intact (assumption of constant K1/k2 over the brain).

6.3 Reference Tissue Models

The last paragraph of the previous section describes a means to calculate BPND indirectly from VT values derived from a two-tissue compartmental model, provided a reference tissue exists that does not express the receptor being studied. In general, this method works very well, but it should be realized that an arterial plasma input function is needed to fit for VT. Apart from being invasive, measuring a metabolite-corrected arterial plasma input function is labor intensive.

If a reference tissue exists, a better alternative to derive BPND is the use of a so-called reference tissue model. Figure 10.3 shows a schematic diagram of the full reference tissue model, which is directly related to the two-tissue plasma input model. Kinetics of the target tissue are still described by Eqs. (10.9) and (10.10). For the reference tissue, however, the specific compartment does not exist, and kinetics are described by a single-tissue compartmental model, analogous to Eq. (10.1):

$$ \mathrm{d}{C}_{\mathrm{R}}(t)/\mathrm{d}t={K}_1^{\prime}\cdot {C}_{\mathrm{P}}(t)-{k}_2^{\prime}\cdot {C}_{\mathrm{R}}(t) $$
(10.24)

where CR(t) represents the tissue concentration as function of time in the reference tissue.

Fig. 10.3

Schematic diagram of the full reference tissue model. ND and S represent non-displaceable and specific compartments, respectively. In the solution of this model, the target tissue concentration CT is expressed in terms of the reference tissue concentration CR with four fit parameters (R1 = K1/K1′, k2, k3, BPND)

By combining Eq. (10.24) with Eqs. (10.8)–(10.10), it is possible to express CT(t) as function of CR(t) rather than CP(t), resulting in an operational equation that enables fitting for R1 (=K1/K1′), k2, k3, and BPND [20]. As target tissue time-activity curves are now expressed relative to reference tissue time-activity curves, it is no longer possible to fit for K1, but rather for its value (R1) relative to K1′.

The main limitation of the full reference tissue model is correlation between parameters, especially between k2 and k3, even more so than for the two-tissue plasma input compartmental model. As a result, convergence tends to be slow, and care should be taken to avoid solutions that correspond to local minima. This typically involves multiple fits using different starting values in order to find the global minimum. Due to these limitations, use of the full reference tissue model has been limited.

To improve fitting stability and speed, the simplified reference tissue model (SRTM) was developed (Fig. 10.4). This model assumes that the exchange between non-displaceable and specific compartments is fast enough, so that the net effect of BPND can be represented by an apparent change in clearance from the tissue (i.e., k2). The resulting differential equation for the target tissue is given by [21]:

$$ \mathrm{d}{C}_{\mathrm{T}}(t)/\mathrm{d}t={K}_1\cdot {C}_{\mathrm{P}}(t)-{k}_2\cdot {C}_{\mathrm{T}}(t)/\left(1+{\mathrm{BP}}_{\mathrm{ND}}\right) $$
(10.25)

while kinetics of the reference tissue are still given by Eq. (10.24).

Fig. 10.4

Schematic diagram of the simplified reference tissue model. In this model, the non-displaceable binding potential BPND results in a reduced apparent k2 for the target tissue, after non-displaceable (ND) and specific (S) compartments are lumped together. In the solution of this model, the target tissue concentration CT is expressed in terms of the reference tissue concentration CR with three fit parameters (R1 = K1/K1′, k2, BPND)

The net result of this simplification is that only three parameters (R1, k2, BPND) need to be fitted rather than the four needed for the full reference tissue model. More importantly, this reduction in the number of parameters also results in a much simpler operational equation, containing only one rather than two convolutions [21]:

$$ {C}_{\mathrm{T}}(t)={R}_1\cdot {C}_{\mathrm{R}}(t)+{k}_2\cdot \left\{1-{R}_1/\left(1+{\mathrm{BP}}_{\mathrm{ND}}\right)\right\}\cdot {C}_{\mathrm{R}}(t)\otimes \exp \left\{-{k}_2\cdot t/\left(1+{\mathrm{BP}}_{\mathrm{ND}}\right)\right\} $$
(10.26)

This three-parameter model is much faster than the original four-parameter model, and, in general, results are very stable. More recently it has been shown that SRTM may remain valid even if the kinetics of specific binding are not as fast as assumed in its derivation. Nevertheless, use of SRTM and also of the full reference tissue model (FRTM) should always be validated against the optimal plasma input model. In particular, both FRTM and SRTM (and any method that relies on a reference tissue) assume that K1/k2 is constant across the brain, an assumption that is likely to be violated when the blood-brain barrier is damaged [22].

Key Learning Points

  • An arterial input function should be acquired using an automatic online blood sampler.

  • Additional manual samples are needed to derive a metabolite-corrected arterial plasma input function.

  • Fitting tissue data requires two input functions, a metabolite-corrected arterial plasma input function and a non-corrected arterial whole blood input function.

  • A tracer kinetic (compartmental) model is needed to extract specific biological information from tissue time-activity curves.

  • The number of compartments depends on the kinetic behavior of the tracer, but usually two-tissue compartments are sufficient.

  • Binding potential relative to the non-displaceable compartment BPND is the best parameter to characterize reversible binding to a target.

  • If BPND cannot be determined reliably, volume of distribution VT can be used, but it should be realized that VT contains a nonspecific component.

  • A reference tissue model can be used if (1) a region exists that is devoid of the binding site under study and (2) the level of non-displaceable uptake in target and reference tissues is the same.

7 Practical Modelling Issues

7.1 Weighting Factors

Quantification of (dynamic) PET studies requires fitting of tissue data using plasma or reference tissue input functions, resulting in estimates of various parameters (rate constants, fractional blood volume). In fitting a tissue time-activity curve, it should be realized that the precision of successive data points is not the same. Data are expressed as concentrations, but typically early frames are short (e.g., 5–10 s), while late frames can be as long as 10 min. In other words the number of acquired counts is much higher in the later frames, and, for similar concentrations, these later frames will have higher precision. On the other hand, especially for short physical half-lives, the tracer will decay, and if data are decay corrected, later frames will have lower precision. To account for both effects during fitting, each point of a time-activity curve should be weighted according to its precision.

Several weighting schemes have been proposed, and it seems that all of these schemes are better than not weighting at all (i.e., giving the same weight to all points), basically because all take into account differences in precision between data points [23]. The most logical scheme is based on total (true) counts per frame, and this is the scheme presented here.

Suppose A total counts (not corrected for decay) have been acquired in a frame of duration L. Then the total count rate R for that frame equals A/L. Total counts will be distributed according to Poisson statistics, so:

$$ \mathrm{Var}(A)=A $$
(10.27)
$$ \mathrm{SD}(A)=\sqrt{A} $$
(10.28)
$$ \mathrm{COV}(A)=\sqrt{A}/A=1/\sqrt{A} $$
(10.29)

where Var represents variance, SD standard deviation, and COV coefficient of variation. Assuming no error in frame duration L, the following equations can be derived:

$$ \mathrm{COV}(R)=1/\sqrt{A} $$
(10.30)
$$ \mathrm{SD}(R)=R/\sqrt{A} $$
(10.31)
$$ \mathrm{Var}(R)={R}^2/A $$
(10.32)
$$ \mathrm{Weight}(R)=A/{R}^2=L/R={L}^2/A $$
(10.33)

It should be noted that these weights apply to non-decay-corrected data. It is common, however, to work with time-activity curves that are corrected for decay, so that these curves only reflect biological processes. In case of decay correction, R is not equal to A/L, but needs to be modified to account for physical decay:

$$ R=f\cdot A/L $$
(10.34)

where f is the decay correction factor for the frame. This factor is given by:

$$ f=\lambda \cdot L/\left\{\exp \left(-\lambda \cdot {T}_{\mathrm{s}}\right)-\exp \left(-\lambda \cdot {T}_{\mathrm{e}}\right)\right\} $$
(10.35)

where Ts and Te are start and end times of the frame (note that L = Te − Ts) and λ is the decay constant of the radionuclide used. The corresponding weighting factors for decay-corrected data are then given by:

$$ \mathrm{Weight}(R)={L}^2/\left({f}^2\cdot A\right) $$
(10.36)
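The sketch below computes these decay correction factors and weights for an assumed framing scheme and assumed (true) counts per frame; the frame definitions, counts, and choice of radionuclide are illustrative only.

```python
import numpy as np

t_start = np.array([0, 10, 20, 30, 60, 120, 300, 600], dtype=float)     # s (assumed)
duration = np.array([10, 10, 10, 30, 60, 180, 300, 600], dtype=float)   # s (assumed)
counts = np.array([2e5, 8e5, 9e5, 2.5e6, 4e6, 9e6, 1.1e7, 1.5e7])       # true counts A (assumed)

lam = np.log(2.0) / (109.77 * 60.0)      # decay constant of fluorine-18 (1/s)
t_end = t_start + duration

# Decay correction factor per frame, Eq. (10.35)
f = lam * duration / (np.exp(-lam * t_start) - np.exp(-lam * t_end))

# Weights for decay-corrected frame data, Eq. (10.36): L^2 / (f^2 * A)
weights = duration**2 / (f**2 * counts)
print(np.round(weights / weights.max(), 3))   # normalized for display
```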

7.2 Model Selection Criteria

In Sect. 10.6 a number of compartmental models have been described. When analyzing patient data, however, it is important to stick to one model, as each model will have some bias, and this bias will differ between models. Nevertheless, bias should be limited as much as possible, and it is therefore essential to fit the data to the model that gives the best description of the kinetics of the tracer. This is not necessarily the model with the highest number of parameters, as at some stage adding more parameters will not change the quality of the fit significantly. The main problem with too many parameters is that they cannot be fitted with any degree of precision, i.e., the compartments they are associated with are not identifiable. Therefore, it is important to identify the optimal model by balancing the number of additional parameters against the improvement in the sum of squares of the fit.

There are several model selection criteria that can be used to select the optimal model for a certain tracer (and application). The most widely used one is the Akaike information criterion (AIC), which is given by [24]:

$$ \mathrm{AIC}=N\cdot \ln \left(\mathrm{SS}\right)+2P $$
(10.37)

where N is the number of frames, P the number of parameters, and SS the residual sum of squares.

The model that produces fits with the lowest AIC value is then considered to be the preferred (best) model. Equation (10.37) demonstrates that additional parameters are penalized, as they will increase the AIC value. In other words, additional parameters are only considered if they result in a sufficient decrease in the sum of squares of the fit.

It is good practice to use a few additional model selection criteria to check consistency in model selection, i.e., to make sure that the optimal model is indeed selected for further analysis of the data. Other selection criteria are, for example, the Schwarz criterion and the F-test. The Schwarz criterion is given by [25]:

$$ \mathrm{SC}=N\cdot \ln \left(\mathrm{SS}\right)+P\cdot \ln (N) $$
(10.38)

Again, the model that produces the lowest SC value is considered to be the optimal model. The F-test is given by [26]:

$$ F=\left\{\left({\mathrm{SS}}_1-{\mathrm{SS}}_2\right)/\left({P}_2-{P}_1\right)\right\}/\left\{{\mathrm{SS}}_2/\left(N-{P}_2\right)\right\} $$
(10.39)

where subscripts 1 and 2 represent the fits with the lower and higher number of parameters, respectively. Unfortunately, evaluating the result is more cumbersome than for the Akaike and Schwarz criteria, as an F-statistic table is required to assess significance.
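The sketch below evaluates all three criteria for an assumed pair of fits of the same time-activity curve with a single-tissue (three parameters, including VB) and a two-tissue (five parameters) model; the sums of squares and the number of frames are illustrative numbers only.

```python
import numpy as np
from scipy.stats import f as f_dist

def akaike(ss, n, p):          # Eq. (10.37)
    return n * np.log(ss) + 2 * p

def schwarz(ss, n, p):         # Eq. (10.38)
    return n * np.log(ss) + p * np.log(n)

def f_statistic(ss1, p1, ss2, p2, n):   # Eq. (10.39); model 2 has more parameters
    return ((ss1 - ss2) / (p2 - p1)) / (ss2 / (n - p2))

n = 24                          # number of frames (assumed)
ss_1t, p_1t = 1.8e3, 3          # single-tissue fit (K1, k2, VB) - assumed SS
ss_2t, p_2t = 0.9e3, 5          # two-tissue fit (K1..k4, VB) - assumed SS

print("AIC:", akaike(ss_1t, n, p_1t), akaike(ss_2t, n, p_2t))
print("SC :", schwarz(ss_1t, n, p_1t), schwarz(ss_2t, n, p_2t))
F = f_statistic(ss_1t, p_1t, ss_2t, p_2t, n)
print("F  :", F, "p =", f_dist.sf(F, p_2t - p_1t, n - p_2t))
```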

Key Learning Points

  • When fitting tissue time-activity curves, individual data points should be weighted to account for differences in precision between data points.

  • Identification of the optimal model should be based on an objective model selection criterion.

8 Parametric Imaging

8.1 Linearizations

The optimal tracer kinetic model is identified using nonlinear regression analysis of time-activity curves derived from predefined volumes of interest. As mentioned earlier, such a time-activity curve can contain contributions from several sources (compartments), e.g., free, nonspecifically bound, and specifically bound tracer. If only part of the VOI is abnormal, the resulting time-activity curve will be a mixture of normal and abnormal tissue. Apart from problems associated with tissue heterogeneity, this "dilution" with normal tissue may mean that the disease-related abnormality will not be detected. If the abnormality is related to one process only, for example, specific binding, while other processes (in this case free and nonspecific binding) are unchanged, it is questionable whether an abnormality in the original radioactivity images will be picked up. Clearly, it is preferable to perform tracer kinetic modelling at the voxel level in order to fully utilize the resolution of the scanner. Unfortunately, the main disadvantage of nonlinear regression is its sensitivity to noise. As a result, fitting at the voxel level would result in parametric images (i.e., images that directly represent the distribution of a certain parameter) with very high noise levels, and, consequently, it would still not be possible to detect subtle regional changes in such a parametric image. In addition, because of the high noise levels, fitting will be slow, as it will take time to reach convergence for each voxel.

Nevertheless various parametric imaging methods have been developed successfully. To avoid noise amplification, they are all based on some form of linearization of the original nonlinear operational equations.

8.2 Patlak and Logan Plots

The first (and best known) linearized method is the Patlak plot [27], which was originally developed for analysis of [18F]FDG data but which can be used for any tracer with irreversible uptake (for a two-tissue compartmental model, this means that k4 = 0), providing the net rate of influx Ki from plasma into the irreversible compartment. It can be shown that, after an equilibration period, the plot of CT/CP against ∫CP/CP becomes a straight line [27]:

$$ {C}_{\mathrm{T}}/{C}_{\mathrm{P}}={K}_i\cdot \int {C}_{\mathrm{P}}/{C}_{\mathrm{P}}+{V}_{\mathrm{i}} $$
(10.40)

where Vi is the initial volume of distribution (i.e., volume of distribution of free tracer) and, for a two-tissue compartmental model, Ki is given by:

$$ {K}_{\mathrm{i}}={K}_1\cdot {k}_3/\left({k}_2+{k}_3\right) $$
(10.41)

Ki reflects the net uptake in the irreversible compartment. K1 is the rate of transfer from plasma to the non-displaceable compartment (or in the case of [18F]FDG the free compartment). Once a molecule is in this first compartment, the chance of moving on to the second compartment is k3/(k2 + k3), and so the rate of net uptake in the second (irreversible) compartment is this ratio times K1. Note that for [18F]FDG, Ki is directly proportional to glucose metabolism, explaining why the method became well known. Using the Patlak plot on a voxel-by-voxel level, it is possible to generate images of glucose metabolism rather than [18F]FDG uptake images.
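A minimal sketch of the Patlak analysis for a single time-activity curve is given below; the equilibration time t*, the integration of the plasma curve from the frame mid-times, and the neglect of the plasma integral before the first sample are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def patlak(t_mid, ct, cp, t_star=20.0):
    """Graphical analysis of Eq. (10.40); returns (Ki, Vi).

    t_mid: frame mid-times (min), ct: tissue curve, cp: metabolite-corrected
    plasma curve sampled at the same times (in practice cp is usually measured
    on its own, finer, time grid and integrated on that grid).
    """
    int_cp = cumulative_trapezoid(cp, t_mid, initial=0.0)   # integral of Cp up to t
    x = int_cp / cp                                         # "normalized time" axis
    y = ct / cp
    use = t_mid >= t_star                                   # linear part of the plot
    ki, vi = np.polyfit(x[use], y[use], 1)                  # slope = Ki, intercept = Vi
    return ki, vi
```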

Many years later a comparable linearization was developed for reversible tracers. In this Logan plot [28], ∫CT/CT is plotted against ∫CP/CT, which, after an equilibration time and ignoring intravascular activity, results in a straight line with a slope that represents VT:

$$ \int {C}_{\mathrm{T}}/{C}_{\mathrm{T}}={V}_{\mathrm{T}}\cdot \int {C}_{\mathrm{P}}/{C}_{\mathrm{T}}+K $$
(10.42)

where K is a constant that depends on k2, k3, and k4.

Later the method was adapted to allow for a reference tissue input [29]. For reference tissue approaches, several more methods have been proposed, in particular various versions based on multi-linear regression analysis [30, 31].

All parametric methods mentioned above are based on a linearization of the operational equation, with subsequent linear regression analysis of the data (fitting a straight line) to obtain the desired outcome measures. The major advantage of this approach is that linear regression does not lead to noise amplification, resulting in high-quality images of the parameter of interest. A disadvantage is that bias may be introduced, as the fitted variables are not independent of each other. For example, CP occurs on both the left- and right-hand sides of Eq. (10.40). In addition, because of this interdependency of the X and Y variables, it is more difficult to calculate uncertainties in the estimated parameters. Finally, only macro-parameters (e.g., Ki or VT) can be obtained.

8.3 Basis Function Methods

Another, more general, approach is the basis function method, which in theory can be used for all compartmental models. The method has indeed been used for plasma input models [32], but it is best known for its implementation of SRTM, known as RPM [33]. Taking SRTM as an example, it can be seen that its operational equation, i.e., Eq. (10.26), contains both a linear term and a convolution term. In the basis function method, prior to fitting, the convolution term is first calculated for a series (usually 50–100) of so-called basis functions that cover the entire physiological range of the parameters involved in the convolution. For each basis function, Eq. (10.26) then becomes linear and can be fitted by linear regression. This is repeated for all basis functions, and the final parameter values are taken from the fit (basis function) that results in the lowest sum of squares.

RPM provides parameter estimates of R1, k2, and BPND for each voxel. From Eq. (10.22) it follows that:

$$ {R}_1={K}_1/{K}_1^{\prime }={k}_2/{k}_2^{\prime } $$
(10.43)

This can be rearranged to:

$$ {k}_2^{\prime }={k}_2/{R}_1 $$
(10.44)

It will be clear that each set of R1 and k2 values will provide another k2′ value. However, k2′ should be a constant as it represents k2 of the reference tissue. This provides a means to reduce the number of fit parameters from 3 to 2 [34], thereby further improving stability. After running RPM with three parameters, k2′ is fixed to the median value (not mean, as this would be sensitive to outliers), and subsequently a second run is performed with only two parameters (R1 and BPND).
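The sketch below implements a single-pass basis function fit of Eq. (10.26) for one time-activity curve, using the reparameterization θ1 = R1, θ2 = k2·(1 − R1/(1 + BPND)), and θ3 = k2/(1 + BPND); the θ3 grid is an assumed physiological range. In RPM the same fit is simply repeated for every voxel, optionally followed by a second pass with k2′ fixed as described above.

```python
import numpy as np

def srtm_basis_fit(t, ct, cr, theta3_grid=None):
    """Basis function fit of Eq. (10.26) on a uniform time grid t (min)."""
    dt = t[1] - t[0]
    if theta3_grid is None:
        theta3_grid = np.geomspace(0.01, 0.6, 100)            # 1/min, assumed range
    best = None
    for theta3 in theta3_grid:
        basis = np.convolve(cr, np.exp(-theta3 * t))[: t.size] * dt
        X = np.column_stack([cr, basis])                       # CT = th1*CR + th2*basis
        coef, *_ = np.linalg.lstsq(X, ct, rcond=None)          # linear regression
        resid = ct - X @ coef
        ss = float(resid @ resid)
        if best is None or ss < best[0]:
            best = (ss, coef[0], coef[1], theta3)
    _, th1, th2, th3 = best
    R1 = th1
    k2 = th2 + R1 * th3                                        # from the reparameterization
    bp = k2 / th3 - 1.0                                        # BPND
    return R1, k2, bp
```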

8.4 Validation

As mentioned above, all parametric methods are based on some form of linearization. This also means that each parametric method essentially is an approximation of the full compartmental model and that it may show nonlinear sensitivity to noise. Interestingly, it seems that no single parametric method is ideal for all tracers. In other words, for every new tracer (and application), results of all methods need to be compared at a regional level with those of the optimal compartmental model to identify the best method for parametric imaging of that tracer.

8.5 Semi-Quantification

Many studies that claim to be quantitative use in fact semiquantitative methods, such as tumor-to-blood ratios, standardized uptake values (SUV), or, in the brain, SUV ratios (SUVr).

SUV represents uptake in a volume of interest (or voxel) normalized to injected activity and body weight (alternatively, lean body mass and body mass index have also been used). This approach basically assumes that delivery to a tissue (arterial input function) is directly related to the injected activity, i.e., that not only the height but also the shape of the input function is predictable. For some tracers this may be valid, but for others there may be significant intersubject variation in plasma clearance. In addition, in intervention (drug) studies, the intervention itself may affect body uptake and clearance, so the arterial input function can be significantly different (especially in shape) before and after the intervention, even in the same subject [35].
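A minimal sketch of a body-weight-normalized SUV calculation is shown below; the numbers are illustrative, the injected activity is assumed to be decay-corrected to the scan time, and a tissue density of approximately 1 g/mL is assumed so that SUV is dimensionless.

```python
tissue_kBq_per_mL = 8.5      # mean uptake in the VOI at scan time (assumed)
injected_MBq = 350.0         # injected activity, decay-corrected to scan time (assumed)
body_weight_kg = 75.0        # (assumed)

# SUV = tissue concentration / (injected activity per gram of body weight)
suv = tissue_kBq_per_mL / (injected_MBq * 1000.0 / (body_weight_kg * 1000.0))
print(f"SUV = {suv:.1f}")
```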

In SUVr studies, SUV of the target tissue is normalized to SUV of a reference region, and differences in arterial plasma input function will have a smaller effect on the results, as they affect both target and reference tissues equally. However, SUVr applications assume that there is some kind of equilibrium between both tissues. This may not be the case if flow is affected in one or both of the regions, as equilibrium may be slower than assumed when validating the method [36]. In addition, the rate of clearance from the tissues may affect the bias [37].

Semiquantitative methods are attractive, as they are based on simple scanning protocols (static scans). However, just as for the parametric imaging methods listed above, they should be validated for each new application and even for the same application in a new patient group. In addition, simulations should be performed to characterize their sensitivity to possible violations of the underlying assumptions. Finally, because of intersubject variability in kinetic parameters that cannot be measured using SUV or SUVr themselves (which are based on total uptake), test-retest variability can be substantially higher than for fully quantitative methods. This means that in some applications (many) more patients need to be included in a trial [38]. Especially in drug trials, where changes in delivery and/or flow may occur, semiquantitative methods should only be used with the utmost care.

Key Learning Points

  • Parametric images are based on linearizations of underlying compartmental models.

  • Parametric imaging methods are fast (linear regression) and do not amplify noise, but they should be validated as bias may occur.

  • The best parametric imaging method is tracer-dependent.

  • Although semiquantitative methods such as SUV and SUVr are attractive, given that they are based on simple scanning protocols, they should be validated even more rigorously, as substantial bias may occur.

  • SUV and SUVr should be avoided in longitudinal studies where changes in tracer delivery can be expected.

9 Conclusions

Although PET is an intrinsically quantitative imaging modality, true quantification requires attention to detail in data acquisition, data processing, and data analysis. For fully quantitative studies, dynamic scanning is required together with compartmental analysis of the data, in the first instance using an arterial plasma input function. Only after proper validation, i.e., comparison with fully quantitative analyses and appropriate simulations, can other methods such as reference tissue models and parametric imaging methods be used.

Simplified scanning protocols and data analysis methods should only be used when tracer kinetics are fully understood. This applies to new tracers, but also to new applications (a different patient population or a different intervention) of an existing tracer. Otherwise it is possible that the simplification is invalid for addressing the specific research or clinical question. When fully quantitative studies have been performed, it is always possible to retrospectively apply a (validated) simplified analysis to the data, so that existing data are not lost. In contrast, if a simplified method later appears to be invalid, all data acquired with that method will be lost. Moreover, progress in the entire field may even be delayed, as others will base their research on incorrect results from those studies.

Clearly, the level of quantification depends on the clinical or research question at hand. As mentioned above, for staging in oncology even a qualitative approach will often suffice (finding metastases). Simplified quantitative and semiquantitative methods can be useful, provided they have been validated against fully quantitative methods and are still valid for the clinical or research question at hand. In practice, the challenge is to find the optimal balance between accuracy and precision.

Key Learning Points

  • For a new tracer and even for an existing tracer used in a new patient group or with a different intervention, simplified methods should only be used after rigorous validation against fully quantitative models.

  • The level of simplification depends on the clinical or research question to be addressed, aiming for an optimum trade-off between accuracy and precision.