
1 Introduction

Imaging in nuclear medicine has become an established part of standard medical practice for the recognition and treatment of a wide variety of medical disorders. This chapter gives a compact overview of the physics that governs such imaging techniques.

All imaging methods in nuclear medicine are based on radioactive nuclei attached to molecules of interest (radiopharmaceuticals). A specific set of selection criteria identifies suitable radiopharmaceuticals. The detectors in use are specifically tailored to capture the emitted radiation; they fall into three categories: scintillators, semiconductors, and light detectors. Depending on the examination, the detectors are assembled into diagnostic tools. In single-photon emission computed tomography (SPECT), each photon is detected and processed independently, and a set of collimators is used to determine the incoming direction of the incident photon. In positron emission tomography (PET), a pair of photons is emitted and captured for each valid event. Since the photons in PET are emitted in opposite directions, no mechanical collimation is needed. The line that connects the interactions of the two photons is called the line of response. A newer variety of PET device, time-of-flight (TOF) PET, has timing so accurate that the position of emission along the line of response can be estimated from the time difference between the two interactions. Image reconstruction is the final step of the imaging chain, transforming the results of the measurements into a three-dimensional distribution of radiotracer density.

It is impossible to convey the full complexity of the field in the allotted space; the reader is kindly referred to the references for further reading and in-depth understanding.

2 Radiation Used in Imaging in Nuclear Medicine

In nuclear medicine, radioactivity is used to image metabolic processes characteristic of certain medical conditions. The images provide insight into the function of the tissue, which makes this a functional imaging modality, complementary to anatomical imaging such as CT or plain MRI. The radioactivity is emitted by unstable nuclei chemically bound to a pharmaceutical specific to the imaged process. The isotopes used in such procedures are called radionuclides, and the compound is called a radiopharmaceutical.

When preparing radiopharmaceuticals [1], the following points have to be considered:

  • Radiation type. Gamma radiation with energy between 140 and 511 keV is ideally suited for medical applications, since a substantial part (up to 40 % for certain nuclides) passes through the body unmodified and is relatively simple to detect. Photons with lower energies, as well as alpha and beta particles, are predominantly absorbed in the body. Usage of more energetic photons is rare, since they present a challenge in detection.

  • Half-life. The activity A of a radioactive sample is the number of disintegrations per unit time. The activity drops exponentially with a nuclide-specific half-life t1/2:

$$ A(t) = A_0 \cdot \mathrm{e}^{-\frac{\ln 2}{t_{1/2}}\,t} = A_0 \cdot 2^{-t/t_{1/2}}, $$
(3.1)

where A0 is the initial activity. The isotopes are chosen so that their half-lives reflect the metabolic processes being investigated; in any case, half-lives between hours and days at most are suitable. Table 3.1 lists half-lives of radionuclides commonly used in nuclear medicine; a short numerical sketch of (3.1) follows this list.

Table 3.1 List of common radionuclides for imaging in nuclear medicine
  • Specific activity. The specific activity is the activity of the sample divided by its mass. For minimum interference, the specific activity injected should be as high as possible. Although the specific activity of radionuclides can be very high (≈GBq/pmol), the specific activity of radiopharmaceuticals is much lower due to chemical contamination of the sample with nonradioactive isotopes of the radionuclide.

  • Purity. Purity is the fraction of the activity originating from the desired compound. Purity should be high, since the remaining radioactivity will (a) expose the patient to an unnecessary radiation dose and (b) add noise to detection. A notorious example is the well-controlled contamination of 99mTc with its parent 99Mo during extraction from the generator.

  • Chemical properties. The radiopharmaceutical must mimic all relevant properties of the equivalent pharmaceutical, and elaborate procedures have been established to this end. The common methods are direct replacement, where radioactive isotopes are used in place of their stable counterparts, like 11C for 12C in 11C-choline; creation of analogs, most notably 18F-FDG as a glucose analog; and chelation, e.g., binding of 99mTc in non-active parts of a complex substance.

  • Production. Radioactive sources are produced in nuclear reactors, through chemical purification of fission products or through activation, and, on a smaller scale, in cyclotrons. For convenience, generators, i.e., long-lived parents of the nuclides of interest (99Mo for 99mTc or 82Sr for 82Rb), are shipped from remote nuclear reactors to medical facilities. Conversely, cyclotrons are commonly installed at the facilities themselves to provide on-site access to short-lived PET radionuclides (Table 3.1).
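
As a numerical illustration of (3.1), the following minimal Python sketch computes the remaining activity after a delay. The function name and the example of a 500 MBq 18F dose are illustrative assumptions, not values from the text.

```python
def activity(a0_mbq: float, t_hours: float, half_life_hours: float) -> float:
    """Remaining activity after time t, Eq. (3.1): A(t) = A0 * 2^(-t / t_half)."""
    return a0_mbq * 2.0 ** (-t_hours / half_life_hours)

# Illustrative example: 500 MBq of 18F (t_1/2 ~ 110 min) after a 1-hour delay.
print(activity(500.0, 1.0, 110.0 / 60.0))   # ~343 MBq remain
```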

3 Detection of Radiation

Detectors for nuclear medicine are optimized for detecting γ radiation in the 140–511 keV energy range. In this section we give an overview of the most common interaction types of photons in this energy range and list the sensitive materials used for their detection.

3.1 Interactions of Photons with Matter

The passage of photons through matter is marked by discrete, point-like interactions. As a consequence, some photons can pass through obstacles unchanged; it is only their number that diminishes. For an incident current j0 of photons on a target of thickness d, the outbound current j will be

$$ j = j_0 \cdot \mathrm{e}^{-\mu \cdot d}, $$
(3.2)

where μ is the material-specific attenuation coefficient.
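
A minimal sketch of (3.2) and of the half-value thickness used later in the chapter; the water attenuation coefficient at 511 keV quoted in Sect. 5.3 serves as an illustrative input, and the function names are ours.

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of photons leaving a slab without interacting, Eq. (3.2): j/j0 = exp(-mu*d)."""
    return math.exp(-mu_per_cm * thickness_cm)

def half_value_thickness_cm(mu_per_cm: float) -> float:
    """Thickness that attenuates half of the incident photons: d_1/2 = ln(2)/mu."""
    return math.log(2.0) / mu_per_cm

# Illustrative input: water at 511 keV, mu ~ 0.1 cm^-1 (see Sect. 5.3).
print(transmitted_fraction(0.1, 10.0))   # ~0.37 of photons survive 10 cm of water
print(half_value_thickness_cm(0.1))      # ~6.9 cm
```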

Figure 3.1 illustrates the interactions that dominate the specified energy range: Compton scattering (CS) and the photoelectric effect (PE), each contributing a partial attenuation coefficient to the total attenuation coefficient μ = μC + μpe.

Fig. 3.1 Schematic representation of the most common interactions for photons used in medical imaging: photoelectric effect (left) and Compton scattering (right)

In the photoelectric effect, the photon is completely absorbed in the electric field of the target nucleus (see Fig. 3.1, left). The energy of the photon is transferred to an electron from the atomic shell of the target atom. The emitted electron is referred to as the photoelectron, and its kinetic energy is equal to the energy of the incident photon minus the electron's binding energy. Since the nucleus is involved in the interaction, the probability for PE is higher for electrons in strongly bound shells close to the nucleus. When the energy of the incoming photons exceeds the binding energy of those shells, a sharp rise in the partial attenuation coefficient μpe can be observed in Fig. 3.2. The most energetic of those edges corresponds to the K-shell transitions, where electrons from the innermost shell (1s) of the target atoms are excited. Above the K threshold, the attenuation coefficient scales as

$$ \mu_{\mathrm{pe}} \propto \frac{\rho \cdot Z}{A}\,\frac{Z^{4-5}}{E^{2.5-3.5}} $$
(3.3)

with ρ, A, and Z the density, mass number, and atomic number of the target material, and E the energy of the incoming photon. The exponents for Z and E vary within the energy range of photons used in imaging. Note the strong Z dependence and the sharp drop with increasing photon energy.

Fig. 3.2 Attenuation coefficient for photons in Si (left) and BGO (right). Note the logarithmic scales

Compton scattering is an elastic scattering of a photon off an electron in the atomic shell. The energy transfer between the particles is sufficient to eject the electron, called a Compton electron, from the atom. Since the other particles, including the nucleus, can be considered spectators only, a relation between the scattering angle θ of the photon and the transferred energy Ee can be derived:

$$ 2\,\sin^2\frac{\theta}{2} = \frac{m_{\mathrm{e}} c^2\, E_{\mathrm{e}}}{E \cdot (E - E_{\mathrm{e}})}. $$
(3.4)

The scattering angle is not quantized but is a continuous variable. As a consequence, the spectrum of the emitted electrons is a continuum with a sharp drop, the Compton edge, corresponding to the maximum energy transfer at backscattering. The partial attenuation coefficient μC is proportional to

$$ \mu_{\mathrm{C}} \propto \frac{\rho \cdot Z}{A} $$
(3.5)

and has, in contrast to μpe, only a weak dependence on Z and little variation with energy over the given range.

Figure 3.2 illustrates both partial and total attenuation coefficients in two materials, one with a low Z (Si, left) and one with a high Z (BGO, right), which span the range of sensor and detector materials used in nuclear medicine. The strong PE energy dependence divides the photon interactions into two regions. Denoting by Eq the energy where μC(Eq) = μpe(Eq), photons with energy below Eq will predominantly be photoelectrically absorbed, while above Eq they will predominantly undergo CS, with a narrow region around Eq where both processes are equally probable. Below Eq, because of the Z dependence of μpe, the attenuation coefficients of different media differ enormously: at 30 keV, Si with μ = 3 cm−1 and BGO with μ = 170 cm−1 differ by almost two orders of magnitude. Above Eq, μ predominantly scales with ρ, since Z/A ≈ 1/2 for most materials, and the variation is confined to less than one order of magnitude.
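
As a numerical aside, the maximum energy transfer to the electron (the Compton edge) follows from (3.4) at θ = 180°. A minimal sketch, with the electron rest energy taken as 511 keV; the function name is ours.

```python
def compton_edge_kev(photon_kev: float, m_e_c2_kev: float = 511.0) -> float:
    """Maximum energy transferred to the electron (backscattering), from Eq. (3.4)."""
    return 2.0 * photon_kev**2 / (m_e_c2_kev + 2.0 * photon_kev)

print(compton_edge_kev(140.0))   # ~50 keV for 99mTc photons
print(compton_edge_kev(511.0))   # ~341 keV for annihilation photons
```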

3.2 Sensitive Materials: Scintillators

Scintillators are materials that convert ionization into visible light [2]. For the detection of gamma radiation, the signal is produced in two stages: the incident photon undergoes one of the interactions mentioned above to create an electron with substantial kinetic energy, be it a Compton or a photoelectron, and the electron's energy is then converted to a countable number of visible-light photons. Table 3.2 lists some properties of common scintillators used in nuclear medicine. The most commonly used scintillator material is NaI, being relatively inexpensive and easy to shape. For PET, detection efficiency is crucial, and devices are based on BGO (chemically Bi4Ge3O12) or L(Y)SO. The name L(Y)SO covers two closely related crystals: the proprietary LSO (Lu2SiO5) and the non-proprietary LYSO, which contains an admixture of about 10 % YSO (Y2SiO5). Both have nearly identical properties and will be treated as a single substance throughout the text.

Table 3.2 Properties of common scintillators in nuclear medicine [3], for LaBr3 see [4]

The important parameters of scintillators are (see Table 3.2):

  • Photon yield. The number of visible photons created per keV of kinetic energy of the electron. Higher numbers mean larger signals, enabling better energy and timing resolution. Note the advantage of recently engineered materials like L(Y)SO or LaBr3 over BGO.

  • Detection efficiency. The efficiency of a material can be expressed through the half-value thickness (d1/2 = ln(2)/μ), the thickness of material required for half of the incoming photon flux to interact. It should be considered that only photons interacting through PE contribute to the image (Sect. 4.3); it is therefore desirable that the Compton scattering probability be small compared to that of the photoelectric effect. Table 3.2 states d1/2 and the ratio CS/PE for the two most common photon energies, 140 keV of 99mTc and 511 keV of annihilation photons. To maximize efficiency, materials composed of high-Z elements (83Bi, 71Lu, and 53I) are used, which benefit from the strong Z4–5 dependence of μpe. Since μ is also proportional to ρ, these materials are relatively dense, from 3.67 g/cm3 for NaI to 7.3 g/cm3 for L(Y)SO. At the low end of the energy range, almost all materials have sufficiently low values of d1/2 and CS/PE, so the other properties become crucial for material selection. At the high end, the efficiency becomes crucial; however, the reign of BGO has been overthrown by the superior light yield and lightning-fast decay time of L(Y)SO.

  • Decay time. Average duration of light emission after the interaction. A short decay time implies large prompt signals and improves timing resolution. Note the excellent values for L(Y)SO and LaBr3.

Hardly any material is perfect, and the final choice is governed by the requirements of each specific application.

3.3 Light Detectors

Scintillators require extremely sensitive photodetectors able to detect single photons. The most common devices are photomultiplier tubes (PMTs), composed of a photocathode and a series of multiplication dynodes (8–12 stages are common). At the photocathode, the visible photons are converted (back) to photoelectrons, with the quantum efficiency (QE) measuring the conversion rate of photons to photoelectrons emitted towards the first dynode. For standard PMTs, QE ranges between 1 and 3 photoelectrons per ten incident photons, depending on the photocathode composition. Subsequent dynodes multiply the signal to about 106 times the original photoelectron count. Since the electron multiplication is done in vacuum, PMTs are in general bulky and fragile and very sensitive to external influences, even that of the terrestrial magnetic field.

Silicon-based light detectors were introduced to imaging in nuclear medicine to circumvent some of those difficulties. Figure 3.3 illustrates the principle of operation. The device is a reverse-biased diode with a graded dopant concentration (n+pπp+). The uneven doping concentration is reflected in a nonhomogeneous electric field, which is strong in the multiplication region and moderate in the active region. Light enters through the bottom entrance window and produces charge carriers in the active region of the device. The moderate field separates the carriers; the electrons are steered towards the high-field region and multiplied. Since the charge is multiplied within the crystal, no evacuated container is needed, and ordinary magnetic fields will not break the multiplication chain. The same charge-containment argument makes the QE of silicon-based devices larger than that of PMTs, with common values of 4 or more carrier pairs per 10 visible photons.

Fig. 3.3 Schematic representation of APD operation

Based on the multiplication factor, two types of devices are constructed:

  • In avalanche photodiodes (APDs) [5], the multiplication factors are between 100 and 1,000, preserving the proportionality of the collected electron current to the incoming light flux. This multiplication factor is at least an order of magnitude smaller than in PMTs, which is normally not a problem; its strong dependence on temperature, applied voltage, and device nonuniformity, however, is. The timing resolution depends on the multiplication and is also slightly worse than in PMTs.

  • The silicon photomultipliers (SiPMs) [6] operate in Geiger mode, each photon interaction generating a complete discharge. To preserve proportionality, the device is split into tiny cells, squares with sides between 25 and 100 μm. Proportionality is broken when multiple photons interact in a single cell and are counted as a single photon. The dynamic range is limited by the number of cells per device; however, the saturation effects are tolerable, since the devices are generally small (1–10 mm2), to limit their capacitance and allow decent timing, and are finely segmented, up to a few thousand cells per device. Present devices are extremely sensitive to changes in operating conditions, most notably the applied reverse voltage and the ambient temperature. Furthermore, the small cell size implies complex readout electronics.

3.4 Sensitive Materials: Solid-State Detectors

Solid-state detectors are modified semiconductor devices. Most sensors are reverse-biased diodes, segmented to the desired resolution. The reverse bias depletes the sensor of thermally generated carriers. When a photon strikes the detector and creates a Compton or a photoelectron, the energy of the emitted electron is converted into a countable number of electron–hole pairs. The electron–hole pairs are separated by the applied reverse bias and collected at the respective electrodes.

In general, the number of pairs created in semiconductors, given in Table 3.3, exceeds the yield of scintillators (Table 3.2), giving excellent energy resolution. The diodes can be made almost arbitrarily small, and resolutions down to 1 μm have been measured [7]. Solid-state detectors are currently found in applications where energy or spatial resolution is crucial (e.g., small animal imaging, Compton cameras, high-precision scanners [8, 9]). The relatively long charge collection times and the available manufacturing technologies favor thinner sensors. To match scintillator efficiencies, multiple layers of segmented sensors have to be used, increasing the channel count and device complexity.

Table 3.3 Properties of semiconductors being introduced in nuclear medicine: material; yield, given as the number of carriers per keV energy loss of the initial electron; half-value thickness d1/2 and ratio CS/PE of Compton scattering to photoelectric effect probabilities, both at 140 and 511 keV photon energy; and approximate collection times (which depend on operating conditions and material thickness)

4 Planar Imaging and SPECT

The most popular radiotracer, 99mTc, emits a single photon with an energy of 140 keV per decay. To determine the origin of the registered photon, Anger [10] developed the mechanically collimated camera shown in Fig. 3.4. Present detectors are improved versions of the same principle, with multiple pinholes instead of a single one and better scintillators and light detectors.

Fig. 3.4 Schematic drawing of the Anger camera. From [10]

The sensitive part of the camera consists of a flat scintillator block of NaI or a similar material (see Table 3.2), measuring roughly 0.5 × 0.5 m2 with a thickness of 1–2 cm, corresponding to approximately ten half-value thicknesses for 99mTc radiation. One of the flat faces of the crystal is completely covered by a relatively small number (≈50) of circular PMTs arranged in a hexagonal pattern, illustrated in Fig. 3.5. The opposite side faces the object through a mechanical collimator that limits the direction of the incoming photons to (nearly) perpendicular to the face of the camera.

Fig. 3.5 Illustration of the hexagonal arrangement of circular PMTs on an Anger camera

The camera registers a planar image, a projection of the source distribution. In analogy to the photographic procedure, such an image is called a scintigraph and the process scintigraphy. Scintigraphs are used directly in bone scintigraphy, where the planar image determines the presence and approximate location of remote disease.

Multiple views give volumetric information on the source distribution, and rotation of a single camera or imaging with multi-headed cameras is common. The procedure is called single-photon emission computed tomography (SPECT), since a computer aids in reconstructing the volumetric source distribution from the detected views. The most frequent examinations with SPECT are studies of myocardial perfusion with a dual-head system.

We will derive basic properties of scintigraphy and SPECT, specifically spatial resolution and efficiency, based on a study of a single planar imaging device.

4.1 Spatial Resolution

The spatial sensitivity of the camera comes from the distribution of light among the coupled PMTs. A very common algorithm is the centroid or center-of-gravity algorithm. The position where the photon impacts the camera plane is estimated by the two-dimensional quantity r:

$$ \mathbf{r} = \frac{\sum_{i=1}^{N} s_i \cdot \mathbf{r}_i}{\sum_{i=1}^{N} s_i}, $$
(3.6)

where N is the total number of PMTs, ri is the projection of the center of the ith PMT on the face of the camera, and si is the signal (amount of light) collected in the ith PMT. The position of the interaction in the direction perpendicular to the camera face is rarely estimated, leading to depth-of-interaction (DOI) artifacts.
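
As a numerical illustration of (3.6), a minimal Python sketch; the PMT layout, the signal values, and the function name are illustrative assumptions, not part of any real camera geometry.

```python
import numpy as np

def centroid_position(signals, pmt_centers):
    """Center-of-gravity estimate of the impact point, Eq. (3.6).

    signals     : PMT signals s_i (arbitrary units)
    pmt_centers : (N, 2) array of PMT center projections r_i on the camera face [mm]
    """
    s = np.asarray(signals, dtype=float)
    r = np.asarray(pmt_centers, dtype=float)
    return (s[:, None] * r).sum(axis=0) / s.sum()

# Toy example: three PMTs on a line, most of the light collected by the middle one.
print(centroid_position([10, 80, 30], [[-50, 0], [0, 0], [50, 0]]))   # ~[8.3, 0] mm
```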

This basic strategy is further refined by:

  • Omitting PMTs with signals below a certain threshold, since they only contribute noise.

  • Aligning gains of individual PMTs.

  • Post-correcting with a known response matrix. Prior to imaging, a source is moved in a rectangular grid over the camera face to record the distribution of signals among the PMTs as a function of the source location. During the measurement, the distribution of signals among the PMTs is statistically compared to the collection of pre-calibrated responses. Several minimization techniques are available [11, 12] to find the most likely candidate, and the position associated with that candidate is the estimated impact position.

The inherent resolution of an Anger camera without a collimator is 3–4 mm FWHM. Surprisingly, this resolution is achieved with PMTs as large as 5–10 cm. Such large tubes suffice because the signal is shared among several PMTs. For that to happen, the scintillator should have a small attenuation coefficient μ for the light it produces. Modern production techniques allow reasonable control of this parameter through crystal growth, edge treatment, and geometry.

The delta method of approximate variance propagation can be used to estimate the variance of the centroid estimator r:

$$ \mathrm{Var}(\mathbf{r}) = \sum_{i=1}^{N} \left( \frac{\partial \mathbf{r}}{\partial s_i} \right)^{2} \mathrm{Var}(s_i). $$
(3.7)

The variance of the signal will be dominated by light collection statistics [3], giving Var(si) = si. The partial derivative can be calculated to yield

$$ \frac{\partial \mathbf{r}}{\partial s_i} = \frac{1}{S}\,(\mathbf{r} - \mathbf{r}_i), $$
(3.8)

where S is the total signal, \( S = \sum\nolimits_i s_i \). Since S is used here to describe the statistical variation, it makes sense to evaluate it at the weakest link in the signal-generation chain, i.e., at the conversion of visible light to photoelectrons on the photocathode. In the following, S is therefore the number of photoelectrons created in a given photon interaction.

If the majority of the signal is collected by M PMTs at an average distance w = |r − ri| from the interaction point, the variance is simply

$$ \mathrm{Var}(\mathbf{r}) = \frac{M}{S}\,w^{2}. $$
(3.9)

The following discussion will be limited to two cases (see Fig. 3.5):

  • Case one, where the interaction point lies between two PMTs

  • Case two, where the interaction point lies directly under a given PMT

In the first case, the two nearest tubes collect most of the signal. Therefore, the distance w corresponds to the radius of the phototube, M equals 2, and S is the number of photoelectrons generated in all tubes. For a 140 keV interaction in NaI viewed by a tube with a QE of 0.1, S is 532 photoelectrons, as can be verified using the numbers in Table 3.2. Then, even for a tube of diameter d = 5 cm, the resolution will be

$$ \delta_{\mathbf{r}} = 2.35\,\sqrt{\mathrm{Var}(\mathbf{r})} = 2.35\,\sqrt{\frac{M}{S}\,\frac{d^{2}}{4}} = 3.6\ \mathrm{mm}, $$
(3.10)

where a factor of 2.35 was used to convert the square root of the variance to FWHM of the distribution.

In case two, most of the signal is collected by the central PMT, which, having w = 0, does not contribute to the variance. All variation comes from the six nearest neighbors, which collect a certain portion of the signal. This portion is determined by the product of the light attenuation μ and the PMT spacing/diameter d:

  • If the crystal is opaque, μ·d ≫ 1, and there is no signal on the nearest neighbors. As a benefit, there is no additional statistical variation of the position estimate, which always corresponds to the center of the tube giving the signal. Unfortunately, the ability to determine positions much finer than the tube dimension is also lost: a 5 cm tube will yield a 5 cm resolution.

  • If the crystal is very transparent, there will be signals not only from the nearest neighbors but also from tubes further out. Although the signal variation is dominated by statistical noise, other components (electronic noise, drifts in scintillator response and operating parameters) start to accumulate once more tubes contribute to the signal. The actual implementation will of course depend on the ability of the system to control these additional noise sources, but the most economical choice is to let only the nearest six tubes contribute. In this case, M in (3.9) is set to 6, w to the tube diameter d, and S to 532, and the variance has to be scaled by the portion of the signal collected on the six nearest tubes (typical values are between 10 and 20 %). The resolution will be

$$ \delta_{\mathbf{r}} = 2.35 \times 0.15 \times \sqrt{\frac{M}{S}\,d^{2}} = 4.8\ \mathrm{mm}. $$
(3.11)

The simplified model correctly accounts for resolutions better than the tube dimensions but overestimates the positioning errors. Since the actual device resolution is dominated by the errors imposed by the mechanical collimation, the details of the positioning algorithms in scintillators can be safely ignored.
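
A numerical check of case one under the stated assumptions (NaI yield from Table 3.2, QE = 0.1, 5 cm tubes); the function name is ours, and case two is omitted since it additionally depends on the assumed light-sharing fraction.

```python
import math

def centroid_fwhm_mm(m_tubes: int, photoelectrons: float, w_mm: float) -> float:
    """FWHM of the centroid estimate, Eqs. (3.9)-(3.10): 2.35 * sqrt(M/S) * w."""
    return 2.35 * math.sqrt(m_tubes / photoelectrons) * w_mm

# Case one: two 5-cm tubes share the light, w = tube radius = 25 mm,
# S = 140 keV * 38 photons/keV * QE 0.1 ~ 532 photoelectrons.
print(centroid_fwhm_mm(2, 532, 25.0))   # ~3.6 mm, as in Eq. (3.10)
```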

4.2 Mechanical Collimation

Mechanical collimation is a scheme that determines the direction of the interacting photons from the known placement of mechanical obstacles in the photons' path. For example, in the Anger camera, all photon paths towards the detector, save for a hole with diameter 2r, are blocked by a thick lead block. The most important parameters of the collimator are (illustrated for the Anger camera):

  • The spatial resolution, Rcoll, is the FWHM of the projection of a single point source in the object onto the face of the sensor. For a single circular pinhole, a point source at a distance b will draw a circle with diameter \( R_{\mathrm{coll}} = 2\cdot r\cdot \frac{b+a}{b} \) onto a detector located at a distance a behind the collimator.

  • The efficiency, g, is the fraction of photons directed towards the collimator that reach the detector; for a single pinhole, \( g = \pi r^2 / W \), the ratio of the hole area to the area W of the collimator.

Mechanical collimators suffer from a resolution–efficiency trade-off. For a pinhole, both are related to the hole diameter, and eliminating r, the trade-off is expressed as

$$ g\propto R_{\mathrm{ coll}}^2. $$
(3.12)

Relation (3.12) governs mechanical collimation in general, giving rise to multiple collimator species, listed in Table 3.4: a high-efficiency collimator sacrifices resolution for efficiency, and a high-resolution collimator does the opposite. Still, very few of the incident photons are detected even for moderate spatial resolutions, and the performance of a mechanically collimated camera is driven by the collimator. Thicker collimators with harsher trade-offs have to be used for the more penetrating gamma radiation beyond 140 keV, resulting in a fourfold drop in sensitivity (compare the MEHS collimator to its LEHS counterpart in Table 3.4).
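
As an illustration of the single-pinhole formulas above, a minimal sketch; the hole radius, distances, and collimator area are arbitrary example values, not entries from Table 3.4.

```python
import math

def pinhole_resolution_mm(r_mm: float, a_mm: float, b_mm: float) -> float:
    """R_coll = 2 * r * (b + a) / b: source at distance b, detector at distance a."""
    return 2.0 * r_mm * (b_mm + a_mm) / b_mm

def pinhole_efficiency(r_mm: float, w_mm2: float) -> float:
    """g = pi * r^2 / W: hole area over collimator area W."""
    return math.pi * r_mm**2 / w_mm2

# Example: 2 mm hole diameter, source 150 mm in front, detector 100 mm behind,
# 500 x 500 mm^2 collimator face.
print(pinhole_resolution_mm(1.0, 100.0, 150.0))   # ~3.3 mm
print(pinhole_efficiency(1.0, 500.0 * 500.0))     # ~1.3e-5
```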

Table 3.4 Performance of typical collimators

4.3 Energy Resolution and Its Impact on Scattering Removal

Energy resolution of the detector is important for event classification. Consider Fig. 3.6: three types of events can be detected in the sensor. We can deduce the position of the source from the last two, but not from type 1 events, since in that case the directional information of the photon was lost to Compton scattering inside the object. Since this scattering also reduced the energy of photon #1, such events can be recognized by analyzing the photon energy. A common strategy is to require the energy of the detected photon to be equal to the energy of the initial photon, i.e., to select type 2 events only. This also disqualifies events of type 3, which are by themselves perfectly legitimate for source position estimation.

Fig. 3.6 Illustration of different types of events in photon detection

The energy resolution of the detector determines the effectiveness of object-scatter removal. Consider a photon with energy E undergoing the photoelectric effect in a scintillator with yield Y coupled to a PMT with quantum efficiency QE. The number of photoelectrons will be

$$ S = \mathrm{ QE} \cdot Y \cdot E. $$
(3.13)

For a typical Anger camera made of NaI and a 140 keV interaction, S = 532 photoelectrons with a Poisson-type variance σ2 = S. The relative energy resolution is \( 2.35/\sqrt{S} \approx 10\,\% \) FWHM, resulting in a typical admissible energy window of 14 %. A better yield Y gives a \( \sqrt{Y} \) times better resolution, and a correspondingly narrower energy window can be used to reject scatter and background radiation.
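
A minimal numerical check of (3.13) and of the resolution estimate above; the NaI yield and QE = 0.1 are the illustrative values already used in the text.

```python
import math

def energy_resolution_fwhm(energy_kev: float, yield_per_kev: float, qe: float) -> float:
    """Relative FWHM energy resolution from photoelectron statistics, Eq. (3.13)."""
    s = qe * yield_per_kev * energy_kev     # photoelectrons, S = QE * Y * E
    return 2.35 / math.sqrt(s)

# NaI (~38 photons/keV), QE = 0.1, 140 keV photon: S = 532 photoelectrons.
print(energy_resolution_fwhm(140.0, 38.0, 0.1))   # ~0.10, i.e. ~10 % FWHM
```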

5 PET

Positron emission tomography exploits electronic collimation of short-lived positron emitters, such as those listed in Table 3.1. The detectors are placed in a stationary ring around the object, with a typical ring diameter of D = 80 cm and thickness h ≈ 20 cm. The ring is subdivided into identical detector elements, blocks, which have a high level of autonomy. Figure 3.7 shows a photograph of such a block, consisting of a segmented BGO scintillator block coupled to a set of four PMTs.

Fig. 3.7 Photograph of a BGO module for a PET

The scintillator is segmented into tightly packed bars, about 20–30 mm deep along the expected impact direction and with perpendicular dimensions of 4–6 mm. Each block contains between 64 and 144 bars that form a face measuring 5 × 5 cm2. The longitudinal cuts between the bars do not extend along their full length and stop just before the PMT interface, to allow light sharing between neighboring PMTs. A unique signature of signal sharing among the PMTs can be recognized for an interaction in each of the bars.

Between 100 and 200 blocks are assembled into a full PET ring. Each block provides a trigger signal based on the sum of the signals of all PMTs in the block, and the system trigger combines the single triggers from all blocks in a complex trigger logic circuit.

The following subsections describe basic properties of PET devices: electronic collimation, spatial resolution, efficiency, and timing resolution (including the principle of TOF-PET). For more details on TOF-PET, see also Sect. 5.4.1.

5.1 Electronic Collimation

The principle of electronic collimation in PET is illustrated in Fig. 3.8. The radionuclide decays, emitting a positron, which annihilates close to the point of emission. The annihilation creates a pair of photons of equal energy (511 keV), traveling in (almost) exactly opposite directions. Once both photons are detected, a line of response (LOR) connecting the two interactions, and consequently containing the annihilation point, is constructed. The intersection of many LORs determines the position of the source.

Fig. 3.8 Illustration of the PET operation concept

5.2 Spatial Resolution

Assuming a Gaussian distribution of each contribution, the system spatial resolution Rsys is

$$ R_{\mathrm{sys}} \approx \sqrt{R_{\mathrm{det}}^{2} + R_{\mathrm{range}}^{2} + R_{180^{\circ}}^{2}} $$
(3.14)

with contributions from the detectors (Rdet), the positron range (Rrange), and photon acolinearity (R180°).

The contribution Rrange, illustrated in Fig. 3.8, reflects the distribution of annihilation points around the positron emission point. The distribution is cusp shaped, with a sharp initial drop and long tails. Typical values of the FWHM and the FWTM (full width at one-tenth of the maximum) of the positron range for common PET radionuclides lie between 0.1 and 1.1 mm and between 1.0 and 4.0 mm, respectively, depending on the radionuclide.

Acolinearity derives from the fact that positrons annihilate while still moving, so the photons have to share the residual momentum and do not, in practice, always fly in exactly opposite directions. The resulting error on the estimated annihilation position is R180° ≈ 0.0022·D [13], with D the ring diameter.

In a typical scanner for 18F imaging, the collimation effects R range and R 180° add up to

$$ R_{\mathrm{range}}^{2} + R_{180^{\circ}}^{2} = \left(0.1\right)^{2} + \left(0.0022 \times 800\right)^{2}\ \mathrm{mm}^{2} = \left(1.76\ \mathrm{mm}\right)^{2}. $$
(3.15)

The contribution of the detector resolution to the system resolution is proportional to the resolution of each detector, which is roughly equal to the size of the crystal segmentation. Combining Rrange and R180° with bar sizes of 4 or 6 mm, the total resolution Rsys equals 4.37 or 6.25 mm, respectively. For small animal imaging (microPET [14]), bar sizes down to 1 mm are used, which, combined with the acolinearity contribution (scaled down due to the smaller ring diameter), provides resolutions of about 1 mm FWHM. Even submillimeter resolution has been achieved in solid-state detector PET prototypes [15], exploiting the excellent resolution properties of silicon detectors.
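
A short sketch of the quadrature sum (3.14), reproducing the numbers above; the 18F positron range of 0.1 mm FWHM and the 800 mm ring diameter are the example inputs from the text, and the function name is ours.

```python
import math

def pet_system_resolution_mm(det_mm: float, positron_range_mm: float,
                             ring_diameter_mm: float) -> float:
    """Eq. (3.14) with the acolinearity term R_180 = 0.0022 * D."""
    acolinearity_mm = 0.0022 * ring_diameter_mm
    return math.sqrt(det_mm**2 + positron_range_mm**2 + acolinearity_mm**2)

print(pet_system_resolution_mm(4.0, 0.1, 800.0))   # ~4.37 mm for 4 mm bars
print(pet_system_resolution_mm(6.0, 0.1, 800.0))   # ~6.25 mm for 6 mm bars
```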

5.3 Efficiency

The detection efficiency g is the portion of emitted photons that contribute to the final image; it combines the geometric and detector efficiencies, reduced by absorption in the object.

The geometric efficiency corresponds to the solid angle subtended by the detector at the given annihilation position. The ring arrangement of the PET detector assures full geometric efficiency when both photons are emitted in the plane of the ring, irrespective of the position of the annihilation. As soon as photons directed off-plane are included, the efficiency varies with the annihilation position and the ring geometry. To estimate the scale of the efficiency, assume an annihilation occurring at the center of the plane that bisects the ring. The solid angle subtended by the ring is then equal for both photons and can be expressed as

$$ P_{\mathrm{s}} = \frac{\Omega_{\mathrm{s}}}{4\pi} = \frac{h \cdot (D/2)}{2 \cdot (D/2)^{2}} = \frac{h}{D}. $$
(3.16)

The efficiency will drop for both off-center and off-plane annihilations.

The detector efficiency is related to the portion of photons incident on the detector that give a valid interaction. For each of the two photons, the detection probability η is determined by the half-value thickness d1/2 of the sensitive material and by the length u of the photon path within the sensor as \( \eta = 1 - 2^{-u/d_{1/2}} \) (Table 3.2). Typical scintillator detectors have u/d1/2 ≈ 3, and an η of 90 % or more is common. This efficiency is further restricted by the criterion that only photons with energy falling in a selected window are accepted, in order to reject photons scattered in the object. This implies that photons undergoing Compton scattering in the detector are also rejected. Indeed, the ratio of Compton scattering to photoelectric absorption at 511 keV is substantial (see CS/PE, Table 3.2), and consequently the probability of a valid interaction of each of the photons, ε, is equal to η reduced by a factor between 2 (BGO) and 3 (LYSO). Since both photons must give a valid interaction to yield a useful line of response, the total detection efficiency of the ring scales with ε2.

The portion of emitted photons that are not scattered in the body is given by Q = e−μ·T, where T is the sum of the tissue lengths traversed by the two photons and μ is the total attenuation coefficient μpe + μC. Taking a typical distance T = 10 cm and μ(511 keV) = 0.1 cm−1 for water as the main body constituent, Q ≈ 0.36 is expected.

From the arguments above, we can set the scale of the total detection efficiency g:

$$ g = \left( \frac{h}{D} \right) \varepsilon^{2}\, \mathrm{e}^{-\mu \cdot T} \approx 1\,\%. $$
(3.17)

The detection efficiency in PET is approximately two orders of magnitude larger than in SPECT (compare Table 3.4).
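
A rough, order-of-magnitude sketch of (3.17); all inputs (u/d1/2 ≈ 3, a photoelectric fraction of about one half as for BGO, 10 cm of water with μ ≈ 0.1 cm−1) are the illustrative values quoted in this section, and the function name is ours.

```python
import math

def pet_efficiency(h_cm: float, d_cm: float, u_over_d_half: float,
                   pe_fraction: float, mu_per_cm: float, t_cm: float) -> float:
    """Order-of-magnitude estimate of Eq. (3.17): g = (h/D) * eps^2 * exp(-mu*T)."""
    eta = 1.0 - 2.0 ** (-u_over_d_half)   # single-photon interaction probability
    eps = eta * pe_fraction               # keep only full-energy (photoelectric) events
    return (h_cm / d_cm) * eps**2 * math.exp(-mu_per_cm * t_cm)

# 20 cm thick ring of 80 cm diameter, u/d_1/2 ~ 3, BGO-like PE fraction, 10 cm of water.
print(pet_efficiency(20.0, 80.0, 3.0, 0.5, 0.1, 10.0))   # ~0.02, i.e. on the percent scale
```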

5.4 Timing Resolution in PET, TOF-PET

The timing resolution of a PET detector corresponds to the variation in the delay td between the arrivals of the trigger signals from the detector elements hit by the two photons. The timing resolution is important for separating events caused by a single annihilation from overlaps of multiple annihilation events; the latter case is illustrated in Fig. 3.9, where an obviously erroneous line of response is assigned to detected photons coming from uncorrelated annihilations.

Fig. 3.9 Illustration of an improper line of response connecting interactions of subsequent emissions

The synchrony of a pair of interactions in two detector elements is judged by the relative delay of the signal arrivals from the two elements. A common filtering method is to select events where td falls within a timing window of duration Δt. The contamination ψ of the sample with random coincidences depends on the probabilities of a true event, Ptrue, and of a random event, Prandom, to occur: ψ = Prandom/(Prandom + Ptrue). In determining either probability, a valid single interaction starting the timing-window stopwatch is assumed, and the conditional probability that a second interaction matches the corresponding event type is calculated.

For a true event, the second interaction is caused by the second photon created in the same annihilation as the first interaction. Due to the ring detector geometry, detection of the first photon sets the second photon on a detector-impinging track with a high probability Pg ≈ 1. Still, the effects of the limited detector efficiency and energy window, factorized in ε, and of object scattering, factorized in Q, must be considered. On top of that, an imperfect timing resolution further reduces Ptrue. To illustrate the dependence of Ptrue on Δt, assume that the cumulative delay distribution P(td), giving the probability that the signal from the second detector module arrived within td after the first interaction, rises linearly up to a resolution parameter σt and equals 1 afterwards. In fact, a P(td) shaped as an error function would be more appropriate, but the general features of such a function are implicitly contained in this simplified picture. The shape of Ptrue, following that of P(td), is shown in Fig. 3.10, saturating at PgεQ for long timing windows.

Fig. 3.10 Simple sketch of the random and true coincidence ratio versus the timing window

For a random event, the second interaction is caused by either of the photons created in the annihilation of the next emitted positron. The interactions in the ring will occur at a rate of 2·A·Pr·ε·Q, where

  • The activity A of the observed object determines the average rate of positron emissions.

  • The geometric efficiency P r is the probability that either of the photons is directed towards the ring, which is, due to the ring geometry, very similar to P s (3.16).

  • Parameters ε and Q are the single-sided detector efficiency and probability of penetration of the photon through the body, respectively.

The waiting time between random interactions in the ring will be exponentially distributed, and the probability Prandom is just the probability that an interaction occurs within the interval of duration Δt opened by the first interaction:

$$ P_{\mathrm{random}} = 1 - \mathrm{e}^{-2 \cdot A \cdot P_{\mathrm{r}} \cdot \varepsilon \cdot Q \cdot \Delta t}. $$
(3.18)

Figure 3.10 illustrates the described dependencies. The effect of the filtering is clear: a short Δt favors true events as long as the initial slope of Ptrue exceeds that of Prandom, requiring \( \frac{P_{\mathrm{g}}}{\sigma_{\mathrm{t}}} > 2\cdot A\cdot P_{\mathrm{r}} \). This poses a limit on the required timing resolution for a given activity:

$$ \sigma_{\mathrm{t}} < \frac{1}{A}\,\frac{P_{\mathrm{g}}}{2 \cdot P_{\mathrm{r}}}. $$
(3.19)

Roughly speaking, the timing resolution should be better than 1/A, the average time between emissions, relaxed by a factor Pg/(2·Pr). For the most common case of Pg of almost 1 and Pr on the scale of h/D (see Sect. 3.5.3) for a ring geometry, the relief factor amounts to a couple of units. A PET scanner can thus tolerate source activities of as much as 100 MBq with a timing window of 12 ns and a contamination ψ of 10 % or less. The contamination is reduced further by limiting the set of detector elements that are enabled to accept a second interaction to those geometrically opposite the element signaling the first interaction.

Improving the timing resolution beyond that required by (3.19) has other benefits in addition to the reduced contamination ψ. Consider the time-of-flight (TOF) PET principle, illustrated in Fig. 3.11. The photon directed towards the left travels a distance of D/2 − Δx before interacting, the one towards the right D/2 + Δx. This difference translates into a delay of \( t_{\mathrm{d}} = 2\cdot \Delta x/c \) between the interactions, with c the speed of light. A detector pair with a resolution of σt will be able to separate sources \( \delta x = c\cdot \sigma_{\mathrm{t}}/2 \) apart based on timing information alone. The timing resolution of scintillators scales as [16]

$$ \sigma_{\mathrm{t}} \propto \sqrt{\frac{\tau_{\mathrm{d}}}{S}}, $$
(3.20)

where S is the photoelectron count of (3.13). Looking at Table 3.2, we see that LYSO (Y = 32, τd = 40 ns) will be approximately five times better than BGO (Y = 8, τd = 300 ns). A resolution of σt = 500 ps, possible in L(Y)SO, gives δx ≈ 7 cm. This is much worse than the spatial resolution of 4.37 mm estimated above. Therefore, the timing information is not used directly in source reconstruction, but it can be useful in noise control. Consider an image reconstructed from non-TOF data: all points along the line of response must be assigned equal probability. In TOF, however, once td is measured, different probabilities can be assigned to different segments along the LOR (see the histogram illustrated in Fig. 3.11). The net effect is as much as twice the information content per detected photon pair in a typical TOF-PET system compared to a non-TOF-PET system [17]. This gives either a twice-improved image for a fixed event count or, conversely, half the events required for a given image quality.
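
A minimal sketch combining the TOF relation δx = c·σt/2 with the timing-window limit (3.19); the default Pr = 0.25 is an assumption based on h/D for the ring geometry quoted above, and the function names are ours.

```python
C_MM_PER_NS = 299.8   # speed of light in mm/ns

def tof_position_resolution_mm(sigma_t_ns: float) -> float:
    """Position resolution along the LOR: delta_x = c * sigma_t / 2."""
    return C_MM_PER_NS * sigma_t_ns / 2.0

def max_timing_resolution_ns(activity_bq: float, p_g: float = 1.0, p_r: float = 0.25) -> float:
    """Upper bound on sigma_t from Eq. (3.19): sigma_t < (1/A) * P_g / (2 * P_r)."""
    return 1e9 / activity_bq * p_g / (2.0 * p_r)

print(tof_position_resolution_mm(0.5))   # ~75 mm for a 500 ps resolution
print(max_timing_resolution_ns(100e6))   # ~20 ns limit at 100 MBq
```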

Fig. 3.11 Illustration of time-of-flight PET. Both detectors measure the time of interaction; based on the measured delay, a probability profile along the line of response is created, assigning different weights to different positions

6 Image Reconstruction

Finally, it is the task of image reconstruction techniques to recover the actual volumetric distribution of the radiation sources from the collected measurements. Images in the standard sense can be obtained either by projecting or by slicing this distribution along a certain object axis, with source densities represented by an appropriate selection of shades. The process is rather simple for scintigraphs: they are by themselves projections onto the detector plane, and images are obtained by segmenting the detector plane into bins and counting the interactions in each bin. A true volumetric measurement technique such as SPECT or PET requires a complete measurement, that is, a view of the object from all sides. The object is then segmented into cubes called voxels, and, based on the measurements, each voxel is assigned a certain number of sources. Planar images are formed as slices of this volumetric histogram. The process of obtaining source intensities from the measurements is simple in principle but complicated in practice by the small signal-to-noise ratio. Here, the signal is the count of good events, and the noise is contributed by mis-collimated events, such as unrecognized object scattering, collimator penetration, random coincidences, events from background radiation, etc. This makes image reconstruction a crucial step in the detection process, and complex reconstruction techniques featuring efficient noise suppression have been developed. These exceed the scope of this chapter and will be discussed later in this book.
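
To make the binning step for scintigraphs concrete, a minimal sketch; the camera size, bin size, and the Gaussian toy source are arbitrary assumptions, not a real acquisition.

```python
import numpy as np

def scintigraph(xs_mm, ys_mm, size_mm=500.0, bin_mm=5.0):
    """Bin planar interaction positions into a 2D count image (a scintigraph)."""
    n = int(size_mm / bin_mm)
    edges = np.linspace(-size_mm / 2, size_mm / 2, n + 1)
    image, _, _ = np.histogram2d(xs_mm, ys_mm, bins=[edges, edges])
    return image

# Toy example: 10^5 events from a Gaussian "hot spot" at the camera center.
rng = np.random.default_rng(0)
events = rng.normal(0.0, 20.0, size=(100_000, 2))
print(scintigraph(events[:, 0], events[:, 1]).max())   # counts in the hottest bin
```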