1.1 Molecular Medicine Has Arrived

“Modern medicine” has morphed into “Molecular Medicine,” and it has taken on this new identity in part due to the capabilities of imaging. Where imaging once meant the view through the microscope in the 1800s, by the early 1900s it became the view on the silver halide photographic plate. The advent of highly sensitive photographic emulsions and the manufacture of uniform films gave science the ability to perform autoradiographs of the distribution of radiopharmaceuticals in pathologic anatomy, adding a physiologic “photo” onto the classic artwork of Frank Netter. This new “view” provided a way to discern pathology—an “outside-in” view—of internal structures. Autoradiography has changed over the years and is no longer simply histology slides with silver grains (see the chapter on Autoradiography; Chap. 5).

I place the advent of “modern” imaging at the hands of Hal Anger, who exploited the known NaI crystal detectors for recording ionizing radiation to capture gamma photons from emitting radionuclides. He coupled a packed array of photomultiplier tubes to a large NaI crystal (approx. 10–15 cm diameter) and covered the crystal with lead shielding machined with a designed array of small holes (collimation). The result was a way to take a 3-D object with radioactivity dispersed throughout its volume and acquire an image showing that object as a 2-D projection (i.e., a typical photograph): an electronic map of oscilloscope events (positional scintillations recorded on photographic film using a camera with an open aperture) was collected over a period of time, and the aggregate of those events created an “image” of the distribution. Today, with hardware advancements and computational systems that were unimagined in the mid-1950s, we utilize novel chemistry and synthetic methods to create radiopharmaceuticals tagged with specific radionuclides which allow us to examine pathophysiology in three dimensions and over time. By using even more regions of the electromagnetic spectrum, from infrared (optical imaging) through radiowaves (MRI, fMRI, and MRS imaging; see Chaps. 11 and 12), we can diagnose disease and evaluate treatment success for, to name a few, cardiac disease, Alzheimer’s disease, diabetes, cancer, and infectious diseases, among many other pathologies, and now with exceptional resolution in both space and time.

Alzheimer’s disease was first characterized in the early 1900s through examination of postmortem brain tissues for the presence of amyloid. Today we can view amyloid deposition in vivo with noninvasive imaging platforms and novel molecular probes that have affinity for amyloid. We also have an alternative: looking for a pharmacodynamic effect of amyloid deposition, such as reduced glucose metabolism. Knowledge of either the extent of amyloid deposition itself or its manifestation as reduced cognition or glucose utilization may potentially guide treatment strategies and also allow us to measure their effectiveness.

Molecular medicine is now the exploitation of patho-specific “biomarkers” to detect disease. Physicians can now “find, fight, and follow” disease (i.e., diagnose, treat, and measure the success of treatment) using imaging, and demand for novel disease biomarkers is high. This chapter introduces the reader to the history and significance of selected innovative physical and molecular biomarker probes and the instruments that now use these probes. The reader gains an appreciation of the cleverness and the scientific simplicity that imaging has brought to the toolbox of the medical sciences in the fight against disease.

1.2 Molecular Medicine: What Is an “Image”

The intent of this book is to introduce the reader to the use of imaging in drug and biologics development by impressing upon the reader the wide array of tools now at their doorstep. Modern medicine has advanced to heights our scientific predecessors, even those of 20 years ago, could not foresee except in science fiction. Looking into the human body in three dimensions, with clarity and spectacular resolution, we can now watch the brain “think” with imaging tools such as functional magnetic resonance imaging (fMRI), see whether a cognitive deficit is regionally defined by glucose utilization, measure cardiac variables such as blood flow (angiography) or metabolism (glucose or palmitate utilization), localize infection, and characterize cancer structure, metabolism, and viability (CT; PET with thymidine analogs; MR, respectively), and so much more. In some ways, however, the imaging field has been like the “blind men and the elephant,” where one imaging modality defines one part of “the picture” and another modality defines another. In this scientific review we attempt to cover many of the modalities of medical imaging which have grown in clinical and nonclinical acceptance as both notional (exploratory) and definitive (regulatory-body-accepted) laboratory tools to add to the drug development laboratory repertoire.

Biomarkers include the physical and chemical system responses to disease or injury, such as the expression of a protein or the up- (or down-) regulation of a receptor. Biomarkers include, for example, biologics that are used to monitor and direct therapy in chronic diseases such as cancer (e.g., carcinoembryonic antigen, CEA), diabetes (hemoglobin A1c), and autoimmune disease (rheumatoid factor). LaBaer (2005) outlines eight “biomarker rules”; because this review focuses on imaging biomarkers, emphasis is placed on his Rules #3, #4, and #8, which read, respectively: #3: “Consider the target and control populations carefully”, #4: “focus on developing a sensitive and specific test …”, and #8: “remember these samples come from people—and mice (and other animal models) are not people”.

In “imaging” we do not refer to the probe as the “biomarker.” Imaging biomarkers are the product or “view” as seen through an imaging system, with or without the use of a bioprobe, to define the expression, or “mark”, of disease or injury. Biomarkers include changes in anatomy (i.e., bone density or tissue fluid balance), drug actions, receptor binding, genomic expression or adduct formation (DNA/RNA adducts), metabolite formation (metabolomics and proteomics), changes in enzyme rates, urinary and blood-related adducts (i.e., hemoglobin and albumin), organ and tissue uptake, cascade initiation, clearance of an initiating drug as well as the inhibition or acceleration of secondary biologic expressions, and more. Indeed, biologic measures that can be validated as part of a drug effect or system response can be considered, with regulatory scrutiny, a “biomarker.” The time course, identity, and relative abundance (signal) of biomarkers must be fully characterized in order to use them as surrogate investigative measures in research or diagnosis (LaBaer 2005; Colburn 1995, 1997; Daniels and Hughes 1997; De Gruttola et al. 1997; Deyton 1996; Ellenberg and Hamilton 1989; Frank and Hargreaves 2003; DeMeyer and Shapiro 2003; Bocan 2010; Fleming and DeMets 1995). Bocan (2010) is a fine introduction to imaging biomarkers and the imaging platform technologies that are discussed in this volume.

A typical biomarker is not generally a step function, i.e., “ON” and then “OFF”, but may be treated as one when defined as a family of events that together are assigned a threshold limit for “ON” or “OFF”. A urinary metabolite may appear rapidly as a biologic response, increase over time, and decline, while urinary adducts, as a continuum of the response which can last for months before declining, may appear late. The presence of both, as a family of events, is a “biomarker”. Biomarkers are best described and validated in terms of their pharmacokinetics, with expression variables such as the time to first appearance (Ta), which depends on the sensitivity of the assays; the slope of marker expression (the rate to full expression, which can be dose-dependent and nonlinear); Tmax (time to maximum expression); Cmax (maximal concentration at the target or site of measurement); and washout (loss of effect or clearance of the biomarker, both expressed as a decay half-life). The reappearance of a biomarker upon repeated challenge may be limited due to receptor saturation (maximal binding), catabolism and resynthesis of the receptor as well as reinsertion into the receptor site, or depletion of expression (consumption rates and delay of replacement rates). Biomarkers must be measurable, their kinetics understood, and the link to their source validated.
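
To make these kinetic descriptors concrete, the sketch below estimates Ta, Tmax, Cmax, and a washout half-life from a hypothetical biomarker time course; the sample values, the sensitivity threshold, and the two-point half-life estimate are illustrative assumptions rather than data from this chapter.

```python
import math

# Hypothetical biomarker time course: (time in hours, signal in arbitrary units).
# The values are illustrative assumptions, not measured data.
time_course = [(0, 0.0), (1, 4.0), (2, 9.5), (4, 12.0), (8, 7.0), (24, 1.5)]

# Cmax / Tmax: maximal signal and the time at which it occurs.
t_max, c_max = max(time_course, key=lambda point: point[1])

# Crude terminal half-life ("washout") from the last two points,
# assuming simple first-order (mono-exponential) decline.
(t1, c1), (t2, c2) = time_course[-2], time_course[-1]
k_elim = math.log(c1 / c2) / (t2 - t1)      # first-order decay constant, 1/h
half_life = math.log(2) / k_elim            # decay half-life, h

# Time of first appearance (Ta), taken here as the first sample above an
# assumed assay sensitivity threshold.
threshold = 0.5
t_appear = next(t for t, c in time_course if c > threshold)

print(f"Tmax = {t_max} h, Cmax = {c_max}, t1/2 = {half_life:.1f} h, Ta = {t_appear} h")
```

In practice the terminal slope would be fitted over several points of the log-linear decline rather than only two, but the descriptors themselves are the same ones listed above.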

Imaging is a scientific discipline in which one can utilize the full electromagnetic spectrum, from high energy gamma rays to low energy radiowaves, and mathematically transform the detected energy frequencies into a readable format—an “image”—to describe a natural phenomenon. In some respects, imaging is simply a “pattern recognition” tool used to describe pathologies or anatomical features through a translation process. An example of an image pattern is a “heat map” of selected genes which, as an aggregate expression, form a recognizable (sensitivity-thresholded) “pattern” representing “ON” and “OFF” genetic switches (Fig. 1.1). Figure 1.1, taken from one of my previous publications (Moyer and Barrett 2009), is such an example. Here peripheral blood (PB) metagene color profiles describe different gene expression patterns when the blood is exposed to chemotherapy or radiation therapy. Genetic array maps are used to show the expression levels of a family of specific genes in peripheral blood taken from healthy subjects, where a gene is “ON” (yellow to red) or “OFF” (dark blue to light blue). Exposure of the PB to chemotherapy or radiation elicits specifically different patterns of metagene change, suggesting shared and unique genetic responses to the stressors. The graphics on the right are “leave-one-out” depictions (mathematical transforms) of the heat map arrays (the “image”) and provide, in this case, a way to demonstrate which specific gene expression patterns belong to chemotherapy-treated versus radiation-treated PB. The heat map “image” is thus a tool to translate a biologic event or a physiologic change following a stimulus (drug or biologic action).

Fig. 1.1

Metagene expression patterns (left side images) displayed as “heat maps” where rows are specific peripheral blood (PB) samples and columns are specific genes. In chemotherapy or radiation therapy these genes display expression changes. The genetic expression maps for pre- and post-irradiation and pre- and post-chemotherapy are compared versus an untreated (“healthy”) “normal” heat map. The graphic (right) is a “leave-one-out” analysis, a mathematical representation of the five treatment groups. Thus the heat map “images” can provide discrete translatable data allowing interpretation of genetic changes. (Courtesy: John Chute, Duke Univ; data from Dressman et al. 2007 and Meadows et al. 2008; Reproduced from Bioanalysis, May 2009, Vol. 1, No. 2, Pages 321–356 with permission of Future Science Ltd)

Imaging platforms are unique instruments which exploit the electromagnetic spectrum to capture the varying temporal and quantitative measures created by the pharmacodynamics of biomarkers. Imaging, and the mathematical transformation of images, can expand the physiologic limits of our eyes to see outside the confines of the visible spectrum. Different wavelengths of energy can be used to describe different things, not unlike the way a set of drill bits can be used to probe a target (a building board) so an objective (insertion of a screw) can be achieved. Conventional microscopy uses visible light energies but, to achieve the higher resolution of electron microscopy, one uses much shorter wavelengths (comparable to x-ray wavelengths), which can “see” smaller objects via more defined diffraction at structural angles where the larger visible light wavelengths are impeded. Electron micrographs thus use high energy “light” to visualize structures that a lower energy “light” probe is incapable of discerning.

Selecting the right imaging system and probe (i.e., wavelength) is a universal tool and goes beyond biology, with applications in virtually all scientific disciplines. An example taken from the space sciences is depicted in Fig. 1.2. The figures show the amorphous Crab Nebula as viewed using different wavelengths (energies) of “light” (where “light” is the electromagnetic spectrum). When viewed using broad-coverage wavelengths of light, the structure of the nebula appears simply amorphous. Use of the Chandra x-ray telescope, however, reveals unique detail not seen at higher (gamma) or lower (microwave, infrared) energies. Viewing at the x-ray wavelength defines the structure as a swirling and axially-oriented vortex. The x-ray wavelength even allows viewing over time to see “motion” within the structure. The information derived from any imaging platform thus depends on the system, and each system will reveal different content.

Fig. 1.2

False color renditions of the Crab Nebula viewed using different wavelengths of the electromagnetic spectrum. While portions of the electromagnetic spectrum reveal only amorphous structure, x-ray imaging uniquely reveals a vortex and physical structural detail. Each image is from the NASA Astronomy Picture of the Day web site http://antwrp.gsfc.nasa.gov/apod/astropix.html and assembled by the author for this depiction (Reproduced from Bioanalysis, May 2009, Vol. 1, No. 2, Pages 321–356 with permission of Future Science Ltd)

1.2.1 Development of Imaging Sciences

Medical imaging has had a rich history in the twentieth century; in particular, the miniaturization of electronic systems, powerful and rapid computational systems, and advances in molecular structure and synthetic capabilities have each driven the development of imaging platforms. Specific examples of achievement over the past century include:

  • 1900–1920s: x-rays employed for imaging and applied as a “health elixir”

  • 1930s: Radioiodine thyroid functional uptake; x-ray “shoe fittings”; Tc-99m discovery (a “man-made” isotope with a low energy of 140 keV and a short half-life of 6 h, easily capable of leaving the body and yet being absorbed by a NaI crystal); gamma counting

  • 1940s: Geiger counter use for flow renograms and cell labeling; C-14 and other biologically relevant isotopes useful in photosynthesis studies; tissue autoradiography (grain-counting histology); the Manhattan Project and the handling of nuclear materials

  • 1950s: Radioisotope chemistry: Tc-99m conjugations; C-14 photosynthetic and metabolic studies; larger array detectors; accelerator sources of radionuclides

  • 1960s: The Anger Camera; chelation chemistry; concept of positron camera; digital computing begins; computational programming, CT systems (Hounsfield units)

  • 1970s–1980s: Trace metals (i.e., mercury in tuna fish) spawned novel isotope work; development of Mabs; novel animal models—SCID technologies; concepts of computational 3D imaging (tomography) allowing for PET/SPECT/CT/MR/US

  • 1980s: Computational systems refinement/complexity; high capacity/rapid computer systems; PET synthetic chemistry; new positron radiotracers; high Tesla MR systems; PCR and molecular probes; recombinant proteins; knockout/in animals

  • 1990s: Biomarkers: PCR defines pathologies; PEGylated proteins, peptides, Mabs and fragments, glycans, new chelation methods; exploitation of cancer biomarkers; image-guided radiotherapy; platform miniaturizations for animal studies

  • 2000s: Molecular/optical imaging; Quantum dots; use of knockout mice

  • 2010s and the future: Hybrid imaging technologies (MR-PET-CT); carbon nanotube probes and delivery systems and nanotechnology for drug delivery—and more

An imaging “biomarker” is intended to be a direct measure of a pharmacodynamic “effect” from a specific pathology or physiologic function. Images, as described earlier, are simply patterns. Conventional radiographic images, like photographs, are analog density patterns expressed as two-dimensional views (2-D; x vs. y) of three-dimensional objects (3-D; x, y, and z directions; i.e., “bread slice” technology, or tomography), plus a time rate-of-change component. Modern imaging can collect either analog or digital data and, with the use of computer systems, display and quantify regions of interest from an image to display or solve for rate constants. We can “see” (interpret) information from the 2-D displays, but digital 3-D displays improve pattern recognition by removing confounding overlaid information (noise), and using digital picture element (pixel) values one can quantify the biomarker response and calculate statistical certainty from an image pattern.
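
As a minimal sketch of quantifying a biomarker response from digital pixel values, the example below computes a mean, a standard deviation, and a target-to-background ratio for two regions of interest in a synthetic image; the array, the artificial "hot" focus, and the ROI coordinates are invented purely for illustration.

```python
import numpy as np

# Synthetic 2-D "image" of pixel values (e.g., counts or intensity); invented data.
image = np.random.default_rng(0).poisson(lam=20, size=(128, 128)).astype(float)
image[40:60, 40:60] += 80  # an artificial "hot" focus standing in for biomarker uptake

# Regions of interest defined as row/column slices (hypothetical coordinates).
target_roi = image[40:60, 40:60]
background_roi = image[90:110, 90:110]

target_mean, target_sd = target_roi.mean(), target_roi.std(ddof=1)
background_mean = background_roi.mean()

# A simple target-to-background ratio, one common way to express "signal" on an image.
tbr = target_mean / background_mean
print(f"target = {target_mean:.1f} +/- {target_sd:.1f}, "
      f"background = {background_mean:.1f}, T/B = {tbr:.2f}")
```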

As with the metagene genetic array “heat map” image described earlier, a thermal image of a biologic system is also a “heat map”. Infections, inflammatory mediators, burns, or other causes can initiate changes in localized blood flow which can be detected by infrared sensors. Thermography is an old military imaging technique for finding heat sources on the battlefield; here the technology is refined with smaller sensors to see fine differences on the skin surface. Thermographic imaging has been adopted clinically because of its simplicity, its use of various color displays (user preferences), and its large pixel arrays displaying emission rates (thermal emission in the long-wavelength infrared, roughly 8–14 μm for skin temperatures), and it generally provides a quick clinical interpretation of inflammatory pathologies, for example, via increased skin blood flow. Thermography has been used in the evaluation of burns, but there are limits in resolution with increased depth of the blood flow foci. A patient’s localized increase in blood flow can be indicative of widely different sources of injury, as can be seen in Fig. 1.3.
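
For orientation on the wavelengths involved, Wien's displacement law places the peak thermal emission of skin near body temperature (about 310 K) in the long-wavelength infrared:

```latex
% Wien's displacement law applied to skin near body temperature (~310 K)
\lambda_{\max} = \frac{b}{T} \approx \frac{2898\ \mu\mathrm{m\,K}}{310\ \mathrm{K}} \approx 9.3\ \mu\mathrm{m}
```

This is why clinical thermal cameras operate in the long-wavelength infrared band rather than in the visible or near-infrared.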

Fig. 1.3

Thermographic images—a pattern recognition “image” where increased blood flow is mapped. (a) Infrared (IR) thermogram of a patient suspected of a left-side stress fracture of the fibula. This was not evident on a radiograph, but digital infrared thermal imaging (DITI) showed sufficient evidence of local pathology to justify a scintigraphic (nuclear medicine) image, which diagnosed a stress fracture. (b) Inflammatory pattern in the neck region that correlated with a diagnosis of digastric lymph node inflammation with swelling. With permission: Meditherm®; web site: http://www.meditherm.com/thermography_default.htm

Biomarkers may be divided into four principal categories: (1) predictive, (2) prognostic, (3) diagnostic, and (4) dosimetric (adopted from Okunieff et al. 2008). “Predictive biomarkers” are those that indicate the likelihood of a future event or a known change in physiologic status and are available before a drug or action is applied to a target. High cholesterol is a predictive indicator of future heart disease. “Prognostic biomarkers” include those that foretell a future event and are available at the time of symptoms (pathology) or following a drug intervention. Prolonged elevation of plasma glucose after a meal is such a marker and indicates a prognosis of diabetes. This differs from “predictive” in that there is less involvement of a future event, as the condition already exists. “Diagnostic biomarkers” are those that are common in clinical practice and available at the time of symptoms (pathology), or useful markers to follow a drug or biologic’s action on a target. These types of markers facilitate a clinical decision to treat or not to treat. The use of ultrasound during a cardiac stress test is an imaging platform to assist in viewing cardiac wall motion irregularities. “Dosimetric biomarkers” are those that represent outcomes of pharmacologic, radiologic, or other intervention (positive or negative over- or under-expression of a biomarker) in response to an event or stimulus. The drug-induced cardiac stress test is again an example, where we could apply a pharmacologic agent instead of a treadmill. Adenosine, dipyridamole (Persantine), and dobutamine are the most widely available pharmacologic agents for cardiac stress testing. Table 1.1 describes the four categories of biomarkers and is an adaptation of work by Okunieff (2008) to describe these categories with respect to imaging.

Table 1.1 Types of biomarkers and how they are used in imaging

Biomarkers which are useful in imaging reflect on a variety of targets, targeted tissues, and/or biological properties and characteristics as seen in the following:

  • Metabolism—measure glucose utilization of tissues (brain, heart, tumors) using F-18 fluorodeoxyglucose (FDG)—obtain standardized uptake values (SUV)

  • Growth—DNA turnover measured via thymidine analog uptake, e.g., F-18 FLT

  • Organ or tumor size—CT/MR: anatomical, RECIST; PET/SPECT: functional domain of a tumor (living tissue vs. living plus necrotic combined)

  • Vasculature—MR (blood flow), MR and CT angiography, SPECT/PET for platelet adherence, clot formation, and inflammatory responses

  • Markers/receptors—Mabs, peptides, aptamers, cell surface properties

  • Density differences—CT for mass density, MR for water content and mobility

  • Physical reflectivity—Ultrasound and bubble technologies for measuring flow and defining structures via edge detection

  • Heat—thermal imaging where tumor tissue exhibits higher caloric consumption and emission of waste heat

  • Hypoxia—xenon lung ventilation studies, and functional MR blood flow (fMRI)

  • Gene triggers/promoter sites—optical tracers may be quantum dots or fluorescent probes or, for research purposes, genetically modified mice where a luciferase gene is inserted adjacent to a promoter site

  • Cell trafficking and cell surface properties—glycan cell surface decoration and speciation, labeled cells can traffic differently and have different elimination kinetics due to infection; stem cell therapies can be followed to observe engraftment.

In theory, for a biomarker to serve as an effective end point or substitute for the clinical outcome, effects of intervention on the biomarker must reliably predict the overall effect on the clinical outcome. In practice, this requirement frequently fails (DeMeyer and Shapiro 2003). Any drug applied as a therapeutic also has the possibility that it may affect a clinical outcome by unintended, unanticipated, unrecognized, and potentially saturable mechanisms of action that operate independently of the disease process. Fleming and DeMets (1995) provide several examples of how the selection of a biomarker may be incorrect and may actually be indicative of unrelated pathways and outcomes. Biomarker response can be confusing in the milieu of all the other potentially confounding biologic activities which, in turn, can alter biomarker responses. True biomarkers must be elicited by the intervention on a disease and then be reflective of the clinical outcome. Eckelman (2003) published a detailed treatise on biomarker behavior by imaging knockout mice (genetically altered mice, a specific type of animal model generated for improved target specificity). Knockout mice were shown to have high utility by decreasing the “biologic noise” early in the drug discovery phases of nonclinical experiments and could be seen to reduce time, animal numbers, and cost by avoiding “classical” (but inherently more “noisy”) pharmacologic models.

Smith et al. (2003) provided a treatise on “Biomarkers in Imaging: Realizing Radiology’s Future.” Their sense of the science was that imaging of biomarkers was a definitive way to shorten the “bench to bedside” timeline. Imaging biomarkers may have enormous potential to shorten the drug development timeline, but they should never be considered a solution to every preclinical or clinical question. They also noted that imaging could be a successful tool in determining the PK and/or PD of new drug candidates. Imaging could be used to validate binding (measure rate constants) with respect to known cellular or organ-specific disease targets. Measurement of the effect of a formulation change on in vivo distribution over time, specific targeting efficiencies, the time to Cmax at the target, metabolism/catabolism rates, and even elimination rates would all represent huge cost savings if they could be determined in the smaller number of animals needed to capture all these endpoints. Imaging also affords multiple views over time, such as in cancer therapy (i.e., CT RECIST (see footnote 1) and MRI measurements), or when the target is elusive or moving, i.e., heart motion and blood flow (US flow measures, cardiac imaging), or when the target is not anatomy but rather simply a function, i.e., thinking (regional brain blood flow with fMRI; PET correlations). Table 1.2 is adapted from their paper and describes, from my perspective, how imaging biomarkers can be used across various imaging platforms to improve the drug development timeline.

Table 1.2 Role of imaging biomarkers in drug development and clinical medicine

Table 1.3 describes the parameters that currently represent the path of drug development and the impact that an imaging biomarker may have on that parameter (adapted from Smith et al. 2003).

Table 1.3 Biomarkers in imaging: traditional end points in drug development and the effect of use of an imaging biomarker

1.2.2 Selected Imaging Modalities, Biomarkers, and Drug Development

1.2.2.1 Computed Tomography

Computed tomography (CT) is one of the most widely used medical imaging technologies, used to detect density differences and for in vivo edge discrimination. The history of CT is really that of three histories (see footnote 2): (1) the history of tomography itself, (2) the development of the algorithms used to reconstruct the image, and (3) the development of high-speed digital computers. CT measures the density of objects and has units of density called Hounsfield units (see footnote 3). The initial prototype CT scanner used in clinical trials had some less than inspiring specifications: scan time: 9 days; reconstruction: 2.5 h; printing the image: 2 h; resolution: 80 × 80 (voxel elements for a 20 cm × 20 cm 2-D slice). The modern device is a rotating x-ray source surrounding an object (patient), where the instrument projects a fan beam of x-rays onto a set of detectors on the opposing side of the object. A series of x-ray projections around the object, together with the resulting maps of absorption of the incident beam and of the reduction in x-rays reaching the detectors (attenuation), is reconstructed to represent the interior of the object.
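
As a brief illustration of the Hounsfield scale itself, the conversion from a reconstructed linear attenuation coefficient to HU is a linear rescaling that pins water at 0 HU and air at -1000 HU; the attenuation values below are nominal figures assumed for illustration only.

```python
def to_hounsfield(mu, mu_water, mu_air=0.0):
    """Convert a linear attenuation coefficient to Hounsfield units (HU).

    By definition, water maps to 0 HU and air maps to -1000 HU.
    """
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

# Nominal attenuation coefficients (cm^-1) at a typical diagnostic energy,
# used here only as illustrative assumptions.
mu_water, mu_air, mu_bone = 0.19, 0.0, 0.38

print(to_hounsfield(mu_water, mu_water, mu_air))  # 0 HU by construction
print(to_hounsfield(mu_air, mu_water, mu_air))    # -1000 HU by construction
print(to_hounsfield(mu_bone, mu_water, mu_air))   # dense bone: on the order of +1000 HU
```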

CT scans can, for example, measure bone density in assessing an osteoporosis patient. Therapeutic intervention with a drug to enhance bone formation or prevent bone loss can be measured over time using CT where the change in Hounsfield units (HU) of density serves as a biomarker of treatment success or failure. Also, detection of lung tumors or mammary tumors using CT represents a contrast change in regional HU relative to the air-filled lung regions or less dense fatty tissues of the breast. Radiotherapy, as incident beam radiation to treat the breast or lung tumors, will increase the water content of the lung tissues affected by the absorbed energy and lead to less contrast by CT over time until the tumors are killed and resolved. Changing to another imaging biomarker such as metabolic markers or perfusion markers can improve clinical observations during treatment.

CT, as well as standard x-ray imaging, can utilize contrast media to resolve objects such as tumors and vascular structures. The HU changes associated with the distribution of a contrast agent within a tumor or vascular system (i.e., carotid arteriography, deep leg venography, or coronary angiography) can be definitive in diagnosis. The utility of contrast agents is their added improvement in target-to-nontarget ratio (increased statistical discrimination) which can improve a clinical biomarker. While CT is a well-adopted imaging modality in both the clinical setting and the nonclinical laboratory, it is not of high interest other than as a marker of density in animal test systems and overlay in multi-modality imaging, i.e., with PET, SPECT, and optical probes. We will not have a chapter on CT except as a co-investigative tool for the imaging platforms discussed in this book.

It is important for the reader to recognize that some imaging tools like CT have become so much a part of the clinical setting that their inherent risks are lost or ignored in everyday use. CT in the USA has grown to the point where it has become a standard clinical tool and is often seen as technically required for protection against physician liability. As such, CT has become a primary diagnostic at many institutions, and it is used in excess, with radiation dose complications due to overuse and accidents. Physicians have been known to ignore the concomitant dosimetry in favor of a “more precise” or “higher resolution” image. The New York Times in June 2011 published an article on the overuse of CT scans in hospitals across the USA, reporting on the rate of duplicate chest CT scans (refer to: http://www.nytimes.com/2011/06/18/health/18radiation.html?pagewanted=all&_r=0). The link includes an interactive map (shown as Fig. 1.4) where one can select individual hospitals and view their respective chest CT scans per annum and how many are repeated: http://www.nytimes.com/interactive/2011/06/17/us/hospital-ct-scans.html?ref=health. The dose of x-rays from a chest CT is not insignificant, and doubling the patient’s exposure and adding to their cumulative lifetime dose is a serious consideration.

Fig. 1.4

Static image of an interactive map, from an article in the New York Times, of the United States showing each hospital in the USA with a color code for the rate of duplication of CT scans. Blue dots mark hospitals with duplication rates below 20 %, yellow 20–40 %, and red above 40 % per annum. Note that the predominant regional location for red is mostly in the middle and southern states (Source: Center for Medicare and Medicaid Services; see link in text)

It is imperative in this context that the Reader understand that there are inherent benefits as well as risks for each and every imaging modality described in this text. Each modality, its purpose, and its risks must be considered in the nonclinical as well as the clinical environment. The primary clinical risk for CT is ionizing radiation, but other imaging signal systems carry other kinds of risks, including magnetic field effects, infrared heating, DNA intercalation, or delivery of a biomarker at a sufficiently high focus to initiate injury or delayed risk of injury. Users must always understand the risks to the animal handlers and any other imaging personnel.

1.2.2.2 Nuclear Medicine and Radiotracer Technologies

1.2.2.2.1 Autoradiography

One of the original biologic images was the 1896 image of the hand of Roentgen’s wife, with its delineation of the bone structure and the ring on her finger. The discovery that radiation could produce an image on photographic film led to many applications of contact radiography, which matured into the modern slice technologies of CT, SPECT, PET, and MR and, at the histologic scale, into a more useful tool, autoradiography, to show the biodistribution of radionuclides and radiolabeled biochemical entities.

One of the first techniques employed using radioactive materials was contact radiography. The technique added significantly to the field as an adaptive technology for histology (Caro and Tubergen 1962). Radiolabeled substances and their distribution following injection, or another mode of entry, can be “seen” using autoradiography. Radioactive tissue can be prepared as a histologic slide and immersed in a fine silver halide photographic emulsion. Exposure of the tissue section allows silver grain formation over time; development of the slide, like a photo, and inspection of the slide by microscopy following standard histologic staining can then reveal the distribution of the radioisotope in the tissue. With the success of histologic plus radiotracer imaging, whole animal (body) autoradiography (WBA) was developed as a macro tool to map the tissue kinetics and biodistributions of drugs in whole animals. Highly engineered cryomicrotome devices for whole animal sectioning were developed to avoid “chatter” (change in section thickness) and provide uniform 20–50 μm sections over sample (carcass) distances of 10–30 cm (depending on the device), allowing the sectioning of small mammals such as the Cynomolgus or Marmoset nonhuman primate. The devices, as refrigeration units, also sublimate thin sections overnight, and the tissue section can then be placed in intimate contact with high resolution x-ray films or highly sensitive electronic photo-plates in direct contact systems (for beta emitting isotopes) such as the phosphorimager systems (Johnston et al. 1990; Solon 2002, 2007; Solon and Kraus 2002; http://www.perkinelmer.com/Catalog/Family/ID/Cyclone%20Plus%20Phosphor%20Imagers). Readers are directed to the chapter on Autoradiography, which provides a look at the modern analytical assays of this technique, which still utilize whole body sections to quantitate regional distributions of molecular entities but with advanced analytical tools such as MALDI (matrix-assisted laser desorption/ionization mass spectrometric imaging) and NIMS methodologies (Solon et al. 2010). For a practical web-based review of autoradiography, especially as it relates to F-18 FDG PET vs. C-14 FDG imaging/quantitation, the Reader is directed to a full review provided through Loats, Inc., as “Application Notes—Metabolic Autoradiography”: http://www.loats.com/AppNotes-2DeoxyGlucose.PDF.

As indicated in a previous section, formulation changes can often lead to surprising changes in the biodistribution of drugs. One such example was a cytokine I was working with which was formulated in sodium dodecyl sulfate (20 %) for solubility; the cytokine exhibited a pulmonary toxicity which was interpreted as a “normal” response for the cell stimulations that were expected. Upon radiolabeling the cytokine and examining the immediate post-dosing distribution using whole body autoradiography (WBARG) of the radiolabel, it was observed that the protein was aggregating in vivo and that the aggregates were accumulating as punctate islands of radioactivity in the spleen (Fig. 1.5). Upon reformulation in a glycine buffer at the appropriate concentration to reduce in vivo aggregation, the spleen appeared uniform in thin-section whole body autoradiography films, toxicity resulting from localized cytokine stimulation was reduced significantly, and there was a noted improvement in efficacy due to the resultant increase in the AUC and Cmax parameter estimates (data not shown), which allowed a reduction in the effective dose and widened the cytokine’s therapeutic index (TI).

Fig. 1.5

WBARG of a rat showing two serial adjacent sections (separated by about 300 μm) 4 h after administration of a cytokine radiolabeled with I-125. The image depicts the localized uptake (dark grains) in the spleen, demonstrating RES clearance of protein aggregates. The cytokine was administered 4 h prior to the animal being euthanized and frozen for cryostat sectioning. The images show the aggregation of the protein and clearance via the splenic islands (white pulp, by image–anatomy correlation of superimposed histology and image). The formulation effect of promoting in vivo aggregation led to RES clearance and focal stimulation at sites of aggregated protein, which led to unwanted toxicities. Reformulation promoted a more uniform biodistribution and allowed for a reduction in the clinical dose for the same level of efficacy in an animal cancer model (personal archives)

1.2.2.2.2 Planar, SPECT, and PET Imaging

Nuclear imaging has become a standard noninvasive clinical tool to examine functional anatomy and functional processes. In the 1950s Hal Anger developed several of the first nuclear medicine imaging systems at Donner Laboratory at the University of California Berkeley (Wagner 2003). One of the first was the rectilinear scanner, a moving bed that traveled over an array of 64 photomultiplier tubes (four rows of 16 tubes) to provide a two dimensional “whole body” image (Fig. 1.6). An Am-241 photon source over the bed was used to provide a silhouette image of the object being scanned to allow for spatial identification of radioactivity in the image silhouette. He later created the first gamma “camera” (now known as an Anger camera) which was a large NaI crystal with an array of photomultiplier tubes. The array produced a 2-D map of counts from radioactivity impinging on the crystal and the camera was able to discern a planar image.

Fig. 1.6

Imaging laboratory at Donner Laboratory, University of California at Berkeley (circa 1972), where Hal Anger developed these two devices. (a) To the left is an original Anger camera and to the right is a whole body scanner. (b) An early WBS image (isotope unknown) with an Am-241 source suspended above the patient serving to create a patient silhouette, allowing regional 2-D localization of the isotope in the body (images from the author’s archives)

Radioisotopes with gamma energies appropriate for absorption by 1 cm thick NaI detectors were sought, as these had the appropriate efficiencies for use with the devices. Energies below 50 keV had low capacity to escape deep tissues to interact with the detector, and energies in excess of 250 keV were too energetic and would have low detector efficiencies (ability to create a scintillation event in the crystal). The 140 keV gamma-emitting Tc-99m, discovered by Nobel laureates Segre and Seaborg in 1939 (a daughter isotope of Mo-99 decay), met the energy and half-life (6 h) needs of the Anger camera, and discoveries of ways to attach (chelate) Tc-99m to drugs, proteins, peptides, and other chemical entities had a dramatic effect in opening the door for nuclear medicine development.

The development of CT in the 1960s provided computational methods to reconstruct images from projections and created the notion of imaging tomographic sections—“bread slices”—through the body (Friedland and Thurber 1996). These computational methods, together with the known physical properties of positron-emitting isotopes (positive nuclear emissions which annihilate upon meeting an electron, resulting in the formation of two opposing 511 keV gamma rays emitted 180° apart), led to Positron Emission Tomography (PET). It was only a matter of time, and truly inventive reconstruction mathematics, to take single photon emission isotopes, like Tc-99m, Tl-201, and many others, and create a way to do 3-D imaging, which we term Single Photon Emission Computed Tomography (SPECT). Three major textbooks on SPECT and PET imaging technologies have been published recently (Christian et al. 2004; Valk et al. 2003; Phelps 2004).

In PET, it is the emission of the 511 keV gammas that provides (1) sufficiently high energy to have low tissue absorption, (2) a physical reduction in scatter by counting only coincident events in a ring detector system, and (3) construction of a linear chord for each pair of coincident photons; the progressive overlay of these ring events creates a defined distribution of chords representing the 3-D location of the radioactivity in the object (patient or animal). PET and SPECT systems differ as depicted in Fig. 1.7. For educational purposes the reader is directed to a web-based collection of PET and SPECT imaging studies where there are details on the way 3-D images are collected, processed, and interpreted clinically (see footnote 4).
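
To make the "linear chord" idea concrete, the sketch below computes the line of response (LOR) joining two coincident detections on an idealized ring and expresses it as an orientation and a radial offset, the form in which such chords are commonly accumulated (as a sinogram) before reconstruction; the ring radius and detector positions are arbitrary assumptions, not the geometry of any particular scanner.

```python
import math

def line_of_response(det1, det2):
    """Given two coincident detector positions (x, y) on a PET ring, return
    (theta, s): the orientation of the chord's normal and the chord's signed
    perpendicular distance from the scanner center."""
    dx, dy = det2[0] - det1[0], det2[1] - det1[1]
    length = math.hypot(dx, dy)
    nx, ny = -dy / length, dx / length      # unit normal to the chord
    s = nx * det1[0] + ny * det1[1]         # signed distance of the chord from the origin
    theta = math.atan2(ny, nx)              # orientation of that normal
    return theta, s

# Two detectors on a ring of radius 40 cm (positions chosen arbitrarily for illustration).
radius = 40.0
det_a = (radius * math.cos(math.radians(30)), radius * math.sin(math.radians(30)))
det_b = (radius * math.cos(math.radians(200)), radius * math.sin(math.radians(200)))

theta, s = line_of_response(det_a, det_b)
print(f"LOR orientation = {math.degrees(theta):.1f} deg, offset = {s:.1f} cm")
```

Accumulating many such (theta, s) pairs, each weighted by its coincidence counts, is what provides the projection data that the reconstruction mathematics turn into the 3-D activity distribution.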

Fig. 1.7

PET and SPECT imaging systems. The PET system is depicted in the left image where a ring of detectors creates the array of positron coincident chords which are reconstructed into the PET “image”; The SPECT system is a single, double, or triple head (detector) camera where single photon tracks are collected. In both cases a reference scan is first obtained to determine the image’s attenuation correction coefficients for each projected angle. (Reproduced from Bioanalysis, May 2009, Vol. 1, No. 2, Pages 321–356 with permission of Future Science Ltd.)

1.2.2.2.3 PET and SPECT in the Molecular Imaging of Cancer

Imaging for the detection of cancer, from its first use with conventional chest x-ray and mammography, has been propelled by major advances in instrumentation and in the molecular mechanisms of cancer. Imaging instrumentation, together with electronic noise and scatter correction algorithms, has significantly improved resolution and data fidelity through computational advances, and the introduction of molecular probes has exceeded expectations and opened many new avenues of research. The Journal of Nuclear Medicine in 2008 dedicated an entire issue to the molecular imaging of cancer (J Nucl Med, Suppl 2, 2008), and it introduced, as the first article in the issue, the dramatic changes in nuclear imaging instrumentation (Pichler et al. 2008; Vastenhouw and Beekman 2007). Advances in PET and SPECT imaging, novel molecular probes, and the understanding of genetic diseases, biomarker expression/metabolism, etc., have all advanced the art and been instrumental in the defense of regulatory approval for several new therapeutics. A major treatise on the employment of molecular probes in PET/SPECT oncologic drug development was published in 2005 and covers essentially all areas of oncologic interest that may be investigated via nuclear medicine approaches (Kelloff et al. 2005).

Modern cancer imaging employs a wide variety of radiolabeled (for PET and SPECT) or contrast-labeled (for CT and MR) probes to image phenotypic expression of biomarkers. These probes include peptides, Mabs and Fab’ fragments, aptamers, cell markers, growth markers, lipids, angiogenesis markers, metastatic markers, hypoxia markers, and several others. New imaging tools with higher resolution, and the mathematics of image processing employed in planar, SPECT, and PET modalities and in MR and optical imaging platforms, have revolutionized cancer medicine.

Three dimensional images of objects as small as the nude mouse are now possible using microSPECT (and microPET systems) and small regions can be resolved using projections of axial slices (coronal, sagittal, and transverse) (Vastenhouw and Beekman 2007). Both the MicroPET and MicroSPECT systems are marketed with the capability of doing CT imaging for the same specific slice to provide density correction (isotope attenuation correction) which facilitates image statistics correction for small animals. The Reader is directed to our chapter on cancer imaging which covers a wide variety of small animal systems applicable to the nonclinical drug development laboratory setting.

1.2.2.2.4 Drug and Biomarker Kinetics Using Nuclear Imaging

Imaging, particularly nuclear medicine and radiotracer technologies, provides a view of pharmacokinetic behavior of biomarkers as one can map the distribution, transit, targeting, and elimination of radiolabeled drugs. Nuclear medicine is unique in imaging as it can provide both anatomical and functional measures of biology. The “functions” that can be described using nuclear imaging include several important pharmacokinetic and pharmacodynamic expressions such as:

  • The “input function”: IV, IM, SQ, oral, nasal, lymphatic; input to organ systems

  • The “transit function”: gastrointestinal absorption and transit time, lymphatic flow, blood flow, mucosal transit

  • The “distribution function”: receptor-based elimination from the blood; oil–water coefficients, blood brain barrier, and other stops

  • The “binding function”: receptor affinity (Kd value), the “on-off rate”

  • The “time to effect” and/or “time to toxicity” pharmacodynamic functions

  • The “degradation function”: enzyme kinetics, pH effect, metabolism

  • The “elimination function”: renal, hepatic, biliary, ventilation, sweat, e.g., geriatric differences (impaired renal or hepatic clearance); effects of concurrent medications (drug–drug interactions); pediatric (body surface area relationships)

  • The “allometric function”: pharmacokinetic parameter estimates based upon allometric scaling using body surface area, heart rate, etc.

  • The integral of all these functions is what can be called the “drug signature”

Three mathematical methods are commonly practiced in the analysis of F-18 FDG drug kinetic behavior from images. These include the Logan Plot (Logan et al. 1990), the SUV (standardized uptake value) (Christian et al. 2004; Phelps et al. 1979; Ferl et al. 2007; Krohn et al. 2007; Huang 2000; Keyes 1995), and the Reference Tissue Method (Sandella et al. 1998). The Logan Plot defines the kinetics of radiolabeled compounds in a compartmental system where the compartments are described in terms of a set of first-order, constant-coefficient, ordinary differential equations. The standardized uptake value, or SUV, is a common method to define tumor metabolic rate for glucose in relative terms. It does not require an input function as the Logan Plot does. It relies upon uptake in a region of interest (tumor) standardized to the total dose distribution (nontarget). The SUV method defines a region's uptake against a reference (no uptake) tissue (brain = cerebellum; lung tumor = other lung region; heart = skeletal muscle). The SUV is typically determined at one time point after a defined clearance period but can be done dynamically. There are drawbacks to this method, as described by Huang (2000) and Keyes (1995). The third methodology is the Simplified Reference Tissue Method, or SRTM. This is a common method employing a reference ligand kinetic model and does not require an input function. The model assumes one compartment and requires a starting (reference) ligand kinetic model (example: erythrocyte uptake). The SRTM differs from the SUV in that it does not correct for nonspecific binding and typically uses the cerebellum as a reference (baseline) region-of-interest (ROI) for the input function term.
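
A minimal sketch of the body-weight-normalized SUV calculation described above; the activity concentration, injected dose, and body weight are illustrative numbers, and decay correction is assumed to have already been applied.

```python
def standardized_uptake_value(roi_activity_kbq_per_ml, injected_dose_mbq, body_weight_g):
    """Body-weight-normalized SUV: tissue activity concentration divided by the
    injected dose per gram of body weight (assuming ~1 g/mL tissue density)."""
    injected_dose_kbq = injected_dose_mbq * 1000.0
    return roi_activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)

# Illustrative values: a tumor ROI of 30 kBq/mL after a 370 MBq F-18 FDG
# injection in a 70 kg subject.
suv = standardized_uptake_value(30.0, 370.0, 70000.0)
print(f"SUV = {suv:.1f}")   # about 5.7 for these assumed numbers
```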

Each of these mathematical methods of image analysis has applications, with modifications, to a variety of biomarker candidates. Readers are encouraged to read the chapters on Oncology and Allometrics, which touch on these processes. Dose selection allometrics may not be important in the use of radiotracers, simply because these are typically given at tracer (sub-pharmacologic) doses, but rather in the use of the drugs under study using imaging, where molar excess concentrations may be used in safety studies (or in the case of MR contrast agents), target saturation could occur, or differences in receptor occupancy arising from the species specificity of the drug or biologic under study may be involved.
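
As a hedged sketch of the body-surface-area scaling referred to in the "allometric function" bullet above, the example below converts an animal dose to a human-equivalent dose using the commonly tabulated Km factors (body weight divided by body surface area); the Km values are the standard published defaults and the example dose is invented.

```python
# Standard Km factors (body weight in kg divided by body surface area in m^2),
# as commonly tabulated for dose conversion; treat these as nominal defaults.
KM_FACTORS = {"mouse": 3, "rat": 6, "monkey": 12, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal dose (mg/kg) to a human-equivalent dose (mg/kg)
    by the ratio of Km factors (body-surface-area scaling)."""
    return animal_dose_mg_per_kg * KM_FACTORS[species] / KM_FACTORS["human"]

# Illustrative example: a 10 mg/kg dose in the rat.
print(f"HED = {human_equivalent_dose(10.0, 'rat'):.2f} mg/kg")  # about 1.6 mg/kg
```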

1.2.2.2.5 Imaging the Neuroendocrine System, Neuroanatomy, and Function

PET is currently the preferred technology for state-of-the-art brain metabolism imaging, CT is the preferred modality for general anatomic imaging, and MRI is the current standard for brain imaging of specific water-mediated signals. CT, while used extensively as a first-pass imaging system, is limited to anatomical injury assessment, where displacement is measured across a range of densities from bone to air. Tissue contrast may identify clinically relevant targets such as distortion of structure by a tumor. The initial anatomic information aids further imaging using PET or MRI systems. Sossi (2007) provides an excellent review of PET brain imaging describing new detectors, reconstruction algorithms, and the use of PET in movement disorders and Alzheimer’s disease.

Brain damage was observed in the early 1980s in drug abusers who used a crudely synthesized meperidine analog (“synthetic heroin”) contaminated with a byproduct known as MPTP (Schober 2004). This byproduct is a neurotoxin to the dopamine centers of the brain and left drug abuse victims with an irreversible loss of their dopamine production (substantia nigra) and early Parkinson’s symptomology. Figure 1.8 depicts a nonhuman primate with unilateral destruction of the substantia nigra using MPTP, with the brain imaged 2 weeks later using C-11 β-CFT (see footnote 5). Movement disorders such as Parkinson’s disease are studied using new radiotracer dopamine analogs. Two new biomarkers of dopaminergic neurologic diseases include C-11 β-CFT ([C-11]2-carbomethoxy-3-(4-fluorophenyl)-tropane) and C-11 tetrabenazine. A mouse model for PET imaging of MPTP-induced degeneration of dopaminergic neurons has been successfully developed using the radiotracer 18F-DTBZ, a tetrabenazine analog, and allows for evaluation of new Parkinson’s disease therapies (Toomey et al. 2012).

Fig. 1.8

Images of a nonhuman primate subjected to the neurotoxin MPTP which has a severe toxic effect of destruction of the substantia nigra in the mid-brain (6.5 kg Rhesus macaque). MicroPET P4 images at 2 weeks post-lesion induction using C-11 β-CFT shows 90 % denervation of the dopamine transporter in the substantia nigra; the normal contralateral side shows specific binding of the C-11 β-CFT. (Reproduced from Bioanalysis, May 2009, Vol. 1, No. 2, Pages 321–356 with permission of Future Science Ltd.)

1.2.2.2.6 Imaging Alzheimer’s Disease

Alzheimer’s disease (AD) is a dementia characterized by the accumulation of Aβ-amyloid, which builds up over time, forces denervation through separation of cortical neurons, and impedes vascular flow. Amyloid deposition and confirmation of Alzheimer’s dementia had classically been detected only at death, with confirmatory histologic staining of the brain using thioflavin and Congo red stains. Original imaging with F-18 FDG PET to detect regional brain usage of glucose helped define the actual regions of the brain affected first by amyloid deposition. Researchers at the University of Pittsburgh developed a thioflavin-like radiotracer called Pittsburgh Compound-B labeled with C-11 (11C-PIB). C-11 PIB specifically binds to fibrillar amyloid-beta (Aβ) plaques (Mintun et al. 2006) and can help discriminate AD from frontotemporal lobar degeneration (FTLD), a non-Aβ amyloid form of adult dementia (Rabinovici et al. 2007). As mentioned earlier in this chapter, a new imaging agent for detection of amyloid with a longer half-life isotope, F-18 (approx. 2 h vs. 20 min for C-11), was recently approved by the FDA (AMYViD; a Lilly and AVID Pharmaceuticals venture). This new imaging agent is now part of the Lilly therapeutic drug venture for AD, testing their monoclonal Ab, Solanezumab, in the clinical setting using AMYViD as a POS diagnostic imaging agent. This is in line with the earlier statement of “Find, Fight and Follow” as a paradigm of developing diagnostics that can help with therapeutic efficacy assessments.

Advances in the development of RNA aptamers have enabled their use as biomarkers for imaging and the detection of AD (Ylera et al. 2002). High-affinity RNA aptamers against βA4(1–40) have been isolated from a combinatorial library of ∼10^15 different molecules. The apparent dissociation constants (Kd) of these aptamers for Aβ-amyloid are 29–48 nM, which is quite acceptable for nuclear imaging agents, especially if, at those nanomolar concentrations, the agent does not elicit activation. Heiss and Herholz (2006) provide a review of brain receptor imaging and describe the increasing number of potential probes for neurologic biomarkers. In the field of autism, Williams and Minshew (2007) describe the impact that imaging could have on the study of autism and the potential to develop therapeutics to relieve this disease’s increasing social and financial impact. Esposito et al. (2008) describe their efforts in measuring neuroinflammation in AD using C-11 arachidonic acid and PET imaging. Imaging and molecular medicine come even closer together in the article by Diehn et al. (2008), who provide evidence that surrogate imaging probes may be tied to specific gene expression in brain cancers. Agdeppa and Spilker (2009) have provided a major review on imaging agent development and give an excellent overview of microdosing (as described above) and “theranostics”, that is, compounds which are diagnostics at a low dose but become therapeutics at higher doses.

1.2.2.2.7 Imaging Infection and Inflammation and Imaging in BSL-3/4 Environs

Occult infection remains a difficult target for imaging. The biomarkers associated with infection, such as C-reactive protein and cytokine expression, are all too nonspecific to serve as biomarkers in the sense we have described earlier. The detection of appendicitis remains problematic. White blood cells or platelets labeled with radioisotopes of In-111 or Tc-99m have been modestly successful, but the cells must be harvested from the patient, isolated, radiolabeled, and reinjected, and sufficient time must then be allowed for targeting as well as clearance (Arndt et al. 1993). A specific biomarker for definitive identification of infection and inflammation is yet to be developed; however, In-111 WBCs, F-18 FDG, and F-18 FLT (fluoro-thymidine) are routinely used in clinical research as first-pass tests.

The first use of a radiolabeled aptamer specifically for imaging inflammation was by Charlton et al. (1997). The aptamer was created to target human neutrophil elastase and was labeled with Tc-99m. The aptamer was able to image neutrophils in a rat inflammation model with a peak target-to-background ratio of approximately 4 at 2 h postinjection. Aptamer uptake was compared against conventional IgG methods, which have decidedly slower clearance; the IgG agents demonstrated two- to threefold greater absolute uptake than the aptamer.

Bioluminescence applied to imaging of infection and inflammation is a major new avenue of nonclinical development and is covered in full in Chap. 9 by Hana Golding and Marina Zaitseva from the NIH and FDA on optical probes. The Reader is also invited to read Chap. 10 by Lauren Keith et al., from Ft. Detrick MD, where they have created an imaging theater with specially modified PET/SPECT/CT and MR imaging systems and added high-level safety procedures to study dangerous infectious organisms in the BSL-3 and BSL-4 environments.

Cell-labeling techniques have expanded well beyond the In-111-oxine labeling of white blood cells (Arndt et al. 1993; Sinha et al. 2004) and now techniques in bioluminescence and fluorescence allow for tumor angiography as well as confocal microscopy and imaging of vascular flow patterns, transient adherence and tumbling of cells in the vasculature in situ (Michalet et al. 2005). There is an outstanding review covering magnetic resonance techniques (esp. using paramagnetic T2* relaxation) and special probes like SPIO particles where one can resolve to 50 μm in live animals (Muja and Bulte 2009). Also the Reader is encouraged to review the work of Thurner and Sundgren (2008) for imaging of and diagnostics for slow virus infections.

1.2.2.2.8 Cardiac and Atherosclerosis Imaging

Nuclear medicine imaging of the heart and measurement of cardiac performance have been major goals of modern medicine. Biomarkers of cardiac imaging began with K-40 studies in the middle of the last century (History of nuclear medicine discoveries: http://www.thealaragroup.com/amh/historicalmomentsinnuclearmedicine.doc, Society of Nuclear Medicine History: http://interactive.snm.org/index.cfm?PageID=1107&RPID=10). PET imaging has employed a potassium analog, Rb-82, a 75-s half-life positron emitter which is generator produced but has practical limitations (Santana et al. 2007). The advent of SPECT imaging brought another potassium analog, Tl-201, into clinical practice (Valk et al. 2003; Machac et al. 2006). PET imaging of cardiac sugar metabolism with F-18 FDG began in the 1980s and is now a clinical imaging standard. Newer tracers for metabolic imaging of the heart, such as I-123 BMIPP, a fatty acid substrate (radioiodine-labeled 15-(p-iodophenyl)-3-(R,S)-methyl pentadecanoic acid), more accurately test the heart, as fatty acids are the primary fuel for cardiac muscle (Fukushima et al. 2008). Lack of uptake of I-123 BMIPP is a clinical sign of cardiac injury. Research into radiolabeled biomarkers for vascular disease, such as aneurysms, atherosclerosis, lipid accumulation in the vessel walls, and “unstable plaque,” is underway at both large and small pharmaceutical companies. Radiolabeled lipids and lipid–DNA complexes have been investigated (Yamada et al. 1998; Niven et al. 2000). Platelet anomalies and deep vein thrombosis are explored using SPECT agents such as Tc-99m GPIIbIIIa peptide antagonists (Bates et al. 2003; Taillefer et al. 2000). An excellent resource on SPECT and PET cardiac imaging is available as a web-accessible document from the American Society of Nuclear Cardiology (see footnote 6). The University of Kansas also has a web-accessible set of images with examples of F-18 FDG in full 3-D axial plane displays used for clinical diagnosis, and they display gated studies of cardiac performance as animations (see footnote 7).

The ideal biomarker for myocardial perfusion must not interact with cardiac medications or pharmacological vasodilator stress agents, and it must show myocardial avidity that provides a high heart-to-background ratio. The biomarker should have a high extraction fraction, with uptake directly proportional to myocardial blood flow across the physiological range as well as under exercise- or pathology-induced stress, and it should not redistribute out of the heart (target loss) during the imaging period. No biomarker currently provides all of these attributes. The four biomarkers of cardiac performance currently used in clinical practice are Tc-99m Sestamibi (ion channel imaging), Tc-99m Tetrofosmin (ion channel imaging), Tl-201 (as a K analog), and F-18 FDG (glucose metabolism). As mentioned earlier, I-123 BMIPP is a myocardial fatty acid metabolism imaging agent now approved worldwide for the assessment of “ischemic memory” using SPECT (Koyama et al. 2011). “Ischemic memory” is the phenomenon whereby, even weeks after recovery from an acute myocardial ischemic event, areas of heart muscle still show diagnostic anomalies of fatty acid metabolism and remain at risk of further infarction even after successful reperfusion (Mouchizuki et al. 2002).
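For orientation, the flow-proportionality attribute can be stated with the classic Renkin–Crone description of first-pass extraction; this is a standard physiology relation offered as background, not a formula drawn from the studies cited above:

\[
E = 1 - e^{-PS/F}, \qquad \text{myocardial uptake} \propto E \cdot F ,
\]

where \(E\) is the first-pass extraction fraction, \(PS\) the permeability–surface-area product of the tracer, and \(F\) the myocardial blood flow. Because \(E\) falls as \(F\) rises, tracers with a low \(PS\) progressively underestimate flow at stress (“roll-off”), which is why a high extraction fraction across the physiologic and stress range appears on the list of ideal attributes.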

Novel SPECT designs (U-SPECT and D-SPECT) have begun a revolution in high-resolution SPECT systems for cardiac imaging (Vastenhouw and Beekman 2007; Gambhir et al. 2009). The D-SPECT camera technology uses nine collimated detector columns arranged in a curved configuration that conforms to the shape of the left side of the patient’s chest. These newer designs have yet to be miniaturized for preclinical use. Potentially, both the U-SPECT and D-SPECT molecular imaging technologies will be integrated with other existing modalities, including CT, MRI, and ultrasound systems, to provide high-definition fused images.

1.2.2.2.9 Biomarkers of Tumor Hypoxia

Tumor hypoxia (a necrotic center, attenuated by drug-induced apoptosis) has been a challenging biomarker for imaging (Krohn et al. 2008; Blankenburg 2008; Hiller et al. 2006). Reactive oxygen species (ROS) arising from anaerobic metabolism and radiation treatment, together with the generally reduced permeation of drugs into tumors with anoxic regions, are the principal obstacles to finding good probes. Gradient perfusion from the vascular supply (normoxic regions) to sites of active metabolism is highly regulated by tumor angiogenesis, apoptosis, and vascular integrity. Acute hypoxia must be reversed or the tissue moves toward apoptosis or cell death. Chronic hypoxia leads to adaptive, “survival-directed” genomic changes and altered metabolic behavior aimed at escaping the hypoxic environment. Hypoxia is fast becoming as important a biomarker target in solid tumors as glycolysis, angiogenesis, apoptosis, or distant metastases.

Identification of hypoxia has implications in many medical settings. The goal of tumor therapy must include characterization of the tumor’s metabolic state, not simply its detection and sizing. Tumors show increased radiation sensitivity in the presence of oxygen, a phenomenon quantified by the oxygen enhancement ratio (OER). Radiation therapy is best performed under oxygenated conditions, as tumors are typically more radioresistant under hypoxic conditions, and many tumors exhibit central necrosis due to hypoxia (Skarsgard and Harrison 1991; Verheij 2008). Tirapazamine, a hypoxic cytotoxin, is commonly used as a potentiator of radiotherapy in combination with the common chemotherapeutic cisplatin. To survive, hypoxic cells undergo genetic modifications that adapt them to the stress of hypoxia, including generation of mutant p53, a shift toward glycolysis, and induction of HIF-1. Understanding tumor heterogeneity with respect to regional hypoxia allows a more successful irradiation plan, including more precise dose escalation to hypoxic (radioresistant) regions, and a better outcome for the patient becomes possible. Tumor stage, grade, and size poorly predict hypoxia, so nuclear or other imaging modalities are an important part of therapy planning.
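For reference, the oxygen enhancement ratio is conventionally defined as a ratio of doses that produce the same biological effect; this is the standard radiobiology definition, stated here for orientation rather than taken from the works cited above:

\[
\mathrm{OER} = \left.\frac{D_{\text{hypoxic}}}{D_{\text{oxygenated}}}\right|_{\text{equal biological effect}} .
\]

With OER values of roughly 2.5–3 typically reported for sparsely ionizing radiation, a hypoxic subvolume needs on the order of two and a half to three times the dose required by a well-oxygenated region to achieve the same cell kill, which is the rationale for escalating dose to imaged hypoxic regions.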

Krohn et al. (2008) have described the F-18-labeled nitroimidazole F-18 FMISO as the tracer of choice for imaging tumor hypoxia. Other nitroimidazoles include FAZA, FETA, FETNIM, EF3, EF5, and IAZA. The methylthiosemicarbazone ATSM, labeled with Cu-64 (a longer-lived PET radionuclide with a 0.53-day half-life), is a newer imaging agent for hypoxia (Anderson and Ferdani 2009; Mankoff et al. 2008; Obata et al. 2003). Cu-64 ATSM is selective for hypoxic tissues because redox trapping is enhanced in hypoxic cells: the agent accumulates avidly and is retained in hypoxic areas within tumors, whereas it washes freely out of normoxic cells, providing contrast within the tumor body. Anderson and Ferdani (2009) have published a detailed treatise on Cu-64 PET agents.

MRI (directly or with contrast agents) is also an alternative for imaging hypoxia. Using fMRI and the BOLD signal, which distinguishes paramagnetic deoxyhemoglobin from oxyhemoglobin, one can see regional hypoxia in the tumor (Evelhoch et al. 2000; Ferris et al. 2011). O2-sensitive contrast agents such as perfluorotributylamine, hexafluorobenzene, hexamethyldisiloxane, and trifluoroethoxy-MISO can be used for MR imaging of hypoxic tumors. Lactate, a metabolic waste product of hypoxia, can be detected by MR spectroscopy as well as by NIR and bioluminescent probes, and ESR (the electron spin resonance line width is sensitive to O2) can delineate poorly oxygenated areas.

Hypoxia is an important aspect of stroke, myocardial hypoxia (stunned myocardium), diabetes, infection, arthritis, transplantation hypoxia, and other conditions. F-18 FMISO data analysis requires only a single image at approximately 2 h after intravenous injection, and uptake is not generally limited by blood flow. F-18 FMISO uptake is similar in most normal tissues and, unlike FDG, no arterial input function sampling or metabolite analysis is required for quantitation. Synthesis of F-18 FMISO can be accomplished in high yield via modification of the FDG Box technology. Figure 1.9 shows a glioma imaged with F-18 FMISO, with the corresponding FDG image shown for comparison. The glioma structure is viewed using MR, and the regional metabolic differences between the two biomarkers (glycolysis and hypoxia) are evident, with F-18 FMISO uptake sitting within the borders of the FDG image.

Fig. 1.9
figure 00019

Glioma imaging with MRI, F-18 FMISO, and F-18 FDG. Left: anatomy by MRI; contrast-enhanced MRI of the glioma reveals the tumor location. Middle: PET image of F-18 FMISO uptake in the tumor’s hypoxic center. Right: PET image of F-18 FDG showing uptake in the tumor periphery (the region of active growth). The hypoxic, FMISO-avid region is concentric with, and lies within, the region of FDG uptake, and both surround the tumor’s necrotic core, which takes up neither tracer. (Reproduced from Bioanalysis, May 2009, Vol. 1, No. 2, pp. 321–356, with permission of Future Science Ltd)
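Because F-18 FMISO quantitation reduces to a single late static image, analysis is often expressed as a simple tissue-to-blood ratio. The sketch below is illustrative only: the voxel values are invented, and the 1.2 threshold is an assumption borrowed from common practice rather than a value given in this chapter.

```python
import numpy as np

def hypoxic_fraction(tumor_voxels_kBq_ml: np.ndarray,
                     blood_kBq_ml: float,
                     tbr_threshold: float = 1.2) -> float:
    """Fraction of tumor voxels whose tissue-to-blood ratio (TBR) exceeds a chosen
    threshold on a single ~2 h post-injection F-18 FMISO image."""
    tbr = tumor_voxels_kBq_ml / blood_kBq_ml
    return float(np.mean(tbr > tbr_threshold))

# Hypothetical activity concentrations, for illustration only
tumor = np.array([4.1, 5.3, 6.8, 3.2, 7.5, 4.9])  # kBq/mL in tumor ROI voxels
blood = 4.0                                        # kBq/mL in a venous blood sample

print(f"Hypoxic fraction of the ROI: {hypoxic_fraction(tumor, blood):.0%}")
```

The same ratio computed voxel by voxel yields the kind of hypoxia map shown in the FMISO panel of Fig. 1.9.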

1.2.2.3 Magnetic Resonance Imaging, Functional MRI, and Mass Spectroscopy Imaging

Magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) are not new players in imaging, but they are new in terms of biomarkers. MR is basically an imaging tool that utilizes radiowave emission in response to the action of an imposed magnetic field, B0, on a biologic system. An excellent textbook on MR imaging and spectroscopy, MR in Practice, has been published by Westbrook et al. (2005). MR is almost universally an anatomical tool, but it is also capable of measuring metabolites using resonance spectroscopy (MRS). MR does not measure anatomy in the same sense that CT does by measuring density; MR measures the state of water. Ice and liquid water are quite different in “structural” terms: liquid water, with its high molecular mobility, has a T2 relaxation time of roughly 3,000 ms, while ice has a T2 relaxation time of 0.019 ms, a difference of more than five orders of magnitude. Tumors likewise show characteristic water mobility, expressed as the apparent diffusion coefficient (ADC), and with sufficient field strength an MR imaging system can discriminate the water mobility of muscle (ADC = 0.54) from that of tumors (ADC = 0.74). Normal tissues range from 0.595 to 0.237 for brain and intestine, respectively; thus, if a tumor has a sufficiently different “water structure” (ADC value), it may be discriminated from normal tissues. Indeed, we can think of MR as a way to measure changes in water mobility as an effect of radiation, chemotherapy, or drug therapy. ADC maps are parametric images of the apparent diffusion coefficients derived from diffusion-weighted images. The term “apparent” refers to the dependence of these coefficients on factors other than pure molecular mobility. ADC maps, also called diffusion maps, represent a distinct biomarker of successful chemotherapy. They can also serve as a biomarker image of stroke, since the changes in brain ADC that accompany the absence of flow are readily detected with this technique, which can also show recovery of the affected neural area if intervention is successful (Schlaug et al. 1997; Chenevert et al. 1997; Morse et al. 2007; Ross et al. 2003). Other applications include cardiac apoptosis (Hiller et al. 2006) and, as described earlier in this chapter, cell labeling and tracking in animal models of disease using paramagnetic probes and novel iron-tagged and gadolinium-tagged contrast agents (Muja and Bulte 2009).
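Since much of the preceding discussion rests on ADC values, it may help to recall how an ADC map is computed voxel by voxel from diffusion-weighted acquisitions. The sketch below uses the standard two-point monoexponential model with invented signal intensities; it is a minimal illustration, not a reconstruction of any vendor’s implementation.

```python
import numpy as np

def adc_map(s_low: np.ndarray, s_high: np.ndarray,
            b_low: float = 0.0, b_high: float = 1000.0) -> np.ndarray:
    """Apparent diffusion coefficient (mm^2/s) from two diffusion-weighted images,
    using the monoexponential model S(b) = S0 * exp(-b * ADC)."""
    return np.log(s_low / s_high) / (b_high - b_low)

# Hypothetical signal intensities (arbitrary units) for three voxels
s_b0    = np.array([1000.0, 1000.0, 1000.0])  # b = 0 s/mm^2 image
s_b1000 = np.array([ 350.0,  480.0,  220.0])  # b = 1000 s/mm^2 image

for adc in adc_map(s_b0, s_b1000):
    print(f"ADC = {adc:.2e} mm^2/s")
```

Acute stroke (restricted diffusion) drives ADC down, whereas successful cytotoxic therapy, which lyses cells and frees water, drives it up; serial ADC maps exploit exactly this arithmetic.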

The Reader is asked to read the chapters on Oncology Imaging and on Autoradiography, where MALDI is discussed, and also the chapter on MRS Imaging, in which the physics of MR is further explained and the application of MRS to imaging is fully covered by Venter and his colleagues.

1.2.2.3.1 Functional MRI and Blood Flow in the Brain and Lungs

While anatomical MR images are viewed as high resolution, a lower-resolution MR tool, functional MR imaging (fMRI), has been developed for brain blood flow imaging. fMRI measures blood flow (BF) and blood oxygenation (BO), where increased flow and increased oxygenation are interpreted as neurologic activation, i.e., the region is “stimulated” as evidenced by BF enrichment. This technique has particular implications as a biomarker tool in psychiatry, stroke (Pineiro et al. 2002), and neuropharmacology, and there is an excellent review of animal studies by Ferris et al. (2011).

fMRI scans display changes in BF using the phenomenon of oxygen enhancement of the local water (blood) signal. In essence, “thinking” (use of a brain region) increases blood flow to that region of the brain, and the increased BF suggests neurologic activation. A test that stimulates recall of a memory, for example, may increase BF to the hippocampus, a brain region known to be important for memory. Measurement of these BF changes thus has a role in the assessment of cognition and may help in the diagnosis of memory loss, AD, or neurologic deficits from other causes.

Hyperpolarized gases are another tool for imaging with MR. Dugas et al. (2004) describe hyperpolarized helium-3 (3He) imaging in mice to investigate the flow of gases into and out of the lung. The murine lung field is indeed small and requires the high resolution of MR coupled with appropriate mechanical ventilation during imaging (Hedlund and Johnson 2002). With respect to MR, lung imaging presents two major challenges: (1) lungs are typically low in water content (use CT if you want to find consolidations) and (2) the air–tissue interfaces of bronchioles and arterioles reduce sensitivity owing to variations in magnetic susceptibility (causing short T2 and T2* relaxation times; see Chap. 11 for the physics of MR). Hyperpolarized helium-3 introduced directly into the gas-exchange regions of the lung allows high-quality imaging, even in small mammals. What is imaged is the ADC (apparent diffusion coefficient; see Chap. 11), which is greater than in controls by as much as 25 % in elastase-induced murine emphysema models, a sufficient “delta” to determine the efficacy of a drug or biologic in reducing such a change. This approach has been used in emphysema models in the pig, dog, guinea pig, rat, and mouse. Human studies evaluating lung gas exchange with this technique have been performed clinically and, as described previously in this chapter, system miniaturization now allows imaging of smaller animals.

1.2.2.3.2 Ultrasound Imaging

Ultrasound imaging (sonography) is a technique that utilizes sound waves to exploit properties of tissues such as edges or discontinuities in density (Riess 2003; Wirtzfeld et al. 2005). In some respects it is similar to CT in what it can detect, but with decidedly poorer resolution. Advances in instrumentation, image reconstruction, and the use of contrast agents (microbubbles) can aid in the discrimination of edges and the detection of anomalies in surrounding tissues. The technique can detect “biomarkers” of pathology such as blood clots (Cogo et al. 1998), kidney stones, and tumors of the breast (calcium grains in breast DCIS) and abdomen, and it can be used to assess ventricular wall thickness and wall motion. While this imaging modality has many clinical indications, it has only limited small-animal drug development applications and will not be covered in this book, where we focus on imaging platforms with demonstrated translational capability that are more likely to contribute to product licensure or approval. Readers are encouraged to seek more information on such topics as fluorocarbon-based injectables (Riess 2003), cardiac wall motion (Nagel et al. 1999), and a transgenic prostate cancer mouse model (Wirtzfeld et al. 2005). Nagel et al. have provided an excellent review of MR imaging of ischemia-induced wall motion abnormalities. They used high-dose dobutamine stress MRI (DSMR) and compared their outcomes against dobutamine stress echocardiography (DSE). They found that DSMR yielded significantly higher diagnostic accuracy than DSE; while this was not unexpected, DSE nevertheless remained diagnostically useful.

Several different antibody- or peptide-targeted microbubbles have been successfully tested for visualizing receptors overexpressed on tumor blood vessels and on atherosclerotic plaques (Kiessling et al. 2012). Kiessling et al. reported that the first molecularly targeted microbubble formulation to diagnose and localize prostate cancer was entering a Phase 0 trial. Other potential uses of microbubble technologies include thrombolytic therapy and enhancement of drug delivery across biologic barriers (e.g., the blood–brain barrier).

1.2.2.3.3 Optical Tomography, Quantum Dots and Luminescence Imaging

Optical imaging systems are the current “vogue” in the nonclinical animal imaging laboratory: the systems are simple, they provide relatively good resolution in murine models, and the technique has benefited from major advancements in the chemical fidelity of test agents, which have improved light persistence and reduced image noise.

One of the problems with the luminescent or fluorophore probes commonly used in histology and microscopy is their rapid decay of light emission following an excitation pulse. Colloidal semiconductors, or quantum dots (QDs), are single-crystal nanoparticles whose size and shape can be closely controlled (Medintz et al. 2005; Michalet et al. 2005; So et al. 2006; Frangioni 2006). Their size controls their absorption and emission, and they have been designed to show prolonged light-emission decay times. When a QD is linked to a biomolecule, it can be used as a probe in a tissue section or in vivo with a light-capture imaging platform. The probe emits a characteristic wavelength of light upon absorbance of an excitation pulse (light/laser or other source). The photon signal decay of a QD internalized in a cell is markedly more persistent than that of the same, but unconjugated, fluorophore: the QD signal can last well beyond 180 s, versus near-complete decay of emitted light by 60 s for the internalized fluorophore (Lopez 2003; Wu et al. 2003).

QDs can be synthesized from a variety of semiconductor materials, e.g., CdSe, CdS, CdTe, InAs, PbSe, and more. A detailed review of QD technology, synthesis, and light-emitting properties was published in 2005 by Michalet et al. QDs have a unique size-dependent property: they release light at specific wavelengths, including in the near infrared (>700 nm), when excited by incident light from a laser tuned to another wavelength. QDs are being used in multiple probes for tissue microscopy, especially confocal microscopy, to image structures deep in tissue or in flow, and they are now being explored for in vivo imaging of surface structures and tumors in nude mice. QD imaging does suffer from strong background autofluorescence, self-absorption, and significant light loss through tissue scatter of the emitted photons. Figure 1.10 depicts a cartoon of a QD in tissue with incident light excitation and the resultant emission scatter; a graph of the excitation energy spectrum versus the unique emission wavelength of the QD is also shown. The tunable emission color of a QD is determined principally by nanoparticle size, which typically ranges from 2 to 9.5 nm. The emitted light ranges from 400 to 1,350 nm, with each emission peak having a full width at half maximum of about 30–50 nm.

Fig. 1.10
figure 000110

Tunable quantum dots (QDs) for optical imaging. (a) An idealized quantum dot that may uniquely target a biologic entity can be tagged with a reporter or receptor-binding entity (e.g., DNA, iRNA, proteins, Mabs, peptides, or other binding entities). The QD requires external illumination to produce an emission line. (b) The incident excitation spectrum (sloping line) and the resulting unique photopeak of the QD (30–50 nm FWHM). (c) Examples of the emission colors, which depend on the QD core size (2–9.5 nm); emission wavelengths range from 400 to 1,300 nm. (Courtesy: Jinghong Rao, Stanford University, Molecular Imaging Laboratory, Stanford, CA; with permission)
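The size tunability illustrated in panel (c) follows from quantum confinement. A commonly quoted first-order description is the particle-in-a-sphere (Brus) approximation, reproduced here only for orientation and not taken from the figure source:

\[
E_{\text{emit}}(R) \;\approx\; E_{g,\text{bulk}} \;+\; \frac{\hbar^{2}\pi^{2}}{2R^{2}}\left(\frac{1}{m_{e}^{*}}+\frac{1}{m_{h}^{*}}\right) \;-\; \frac{1.8\,e^{2}}{4\pi\varepsilon\varepsilon_{0}R},
\]

where \(R\) is the nanocrystal radius and \(m_{e}^{*}\), \(m_{h}^{*}\) are the effective electron and hole masses. The confinement term scales as \(1/R^{2}\), so shrinking the core from roughly 9.5 nm toward 2 nm pushes the emission toward shorter (bluer) wavelengths, consistent with the color series in the figure.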

Little incident light is available for QD excitation at nonsuperficial locations. Newer QD designs are trifunctional: they carry a light-emitting tag such as luciferase (LUC8), a high-affinity probe for localizing the QD complex, and thereby a means of auto-excitation. Once the QD complex binds its receptor, the short separation distance allows the bioluminescent complex to excite the associated QD; the QD becomes the energy acceptor of the donor luciferase light (rather than the luciferase being visualized externally) and is activated to emit its own signature wavelength. Trifunctional structures thus allow QDs to work without the need for an external (ex vivo) excitation source (laser). This new technology is called BRET, or Bioluminescence Resonance Energy Transfer, and it has excellent potential to allow deep-tissue in vivo excitation of QDs with detection via wavelength-specific laparoscopy or other techniques. BRET, or self-illuminating QDs, is described in Fig. 1.11 (So et al. 2006; Frangioni 2006).

Fig. 1.11
figure 000111

BRET technologies are potentially useful for obtaining higher energy QD emission than from luciferase itself. Deep in vivo tissue excitation of QDs from non-external laser excitation is the main advantage of BRET QD probes. (Courtesy: Jinghong Rao, Stanford University, Molecular Imaging Laboratory, Stanford, CA; with permission)

1.3 Summary

This brief introductory chapter has hopefully set the stage for the Reader to venture into each of the upcoming chapters, which cover specific aspects of imaging platforms in the drug discovery and development laboratory. Appendix 1 of this chapter attempts to organize a wide variety of imaging applications, the platforms involved, and selected authors. The Reader should also visit the references listed at the end of this chapter (and the references of every chapter); although they are cited for specific reasons in the body of this chapter, they contain many additional and important imaging suggestions and procedures that the Reader may find informative. The last chapter of this book will hopefully address any regulatory questions readers may have, and we invite all readers to query any of our authors. We have discussed several examples of ways to image specific biologic processes as well as the use of selected biomarkers, but it is an impossible task to list or discuss all the possibilities and ideas being generated about the uses of imaging, as they are published faster than this book can be. In some ways this book will be nearly outdated by the time it is published; this text will therefore be published in print and electronically to allow these technologies to be updated. Imagination is the only limit at this time. The editors and authors of this book recognize the value of imaging in drug and biologics development and also understand that no single imaging platform tells the whole story of a drug or biologic, and no single text can either. The Editors have created a resource, and we are hopeful you will find important and practical imaging ideas that move your product(s) toward approval.

The following key points are for you to remember as you explore the chapters of this book; any new imaging platform, or any new imaging probe, should be examined under these critically important caveats or precepts:

  • Biomarkers assessed by imaging essentially fall into four categories: (1) predictive, (2) prognostic, (3) diagnostic, and (4) dosimetric.

  • Useful imaging targets or biomarkers need to be “causal”, i.e. they must be mechanistically related, plausible, and proximal to a disease endpoint to provide accurate, confirmatory, and supportive evidence of a therapeutic intervention’s efficacy.

  • Biomarkers exploited in imaging can represent biochemical/anatomical/pathological process(es) or represent specific pharmacological response(s) to therapeutic intervention.

  • Biomarkers can serve as surrogate markers or replace a conventional clinical endpoint for efficacy and/or toxicity if the linkage is validated and the relation defensible.

  • Biomarkers can accelerate drug development and decision making if used appropriately in the right models and species and under appropriately controlled conditions.

  • Biomarkers can provide a mechanistic bridge between preclinical study outcomes and clinical trial results.

  • From a regulatory perspective, a biomarker must be validated prior to its utilization in a proposed drug development plan.

  • The Prentice criteria (Prentice 1989; Fleming and DeMets 1995; Campbell 2006), a unifying statistical approach to surrogate marker validation, must be satisfied (Wagner et al. 2006): a surrogate for a true endpoint must yield a valid test of the null hypothesis of no association between treatment and the true response. This criterion essentially requires the surrogate variable to “capture” any relationship between the treatment and the true endpoint (see the sketch following this list).

  • It is becoming clear that future clinical endpoints will not be univariate (single outcomes or single biomarker reads) but rather composite endpoints analyzed with a multivariate approach.

  • Clinical medicine needs better ways to measure individual responses in pivotal clinical trials rather than relying on simple mean analysis; biomarker probes may prove beneficial here.

  • Biomarker use in medical practice may help create individualized, i.e., personalized, medicine, and the regulatory process is a critical consideration (Woodcock 1997).

  • Imaging is a remarkably diverse means of measuring biomarkers. Physical properties, metabolism, pharmacokinetic and pharmacodynamic responses, and physiologic status can all now be measured with high resolution, minimal intervention, and high predictive clinical value.

  • Lastly, nonclinical and clinical imaging platforms may not, and likely will not, provide equivalent outcomes. One must consider that “imaging equivalence” across species in drug and biologic development is a measure of the imaging physics, the product and probe chemistry, and the translational fidelity of the species.
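As flagged in the Prentice-criteria bullet above, the operational requirement is often written in conditional-distribution form; the statement below is a standard paraphrase offered for orientation, not a quotation from the cited papers:

\[
f(T \mid S, Z) = f(T \mid S), \qquad f(S \mid Z) \neq f(S), \qquad f(T \mid Z) \neq f(T), \qquad f(T \mid S) \neq f(T),
\]

where \(Z\) is the treatment assignment, \(S\) the surrogate (e.g., an imaging biomarker read), and \(T\) the true clinical endpoint. The treatment must demonstrably affect both the surrogate and the true endpoint, the surrogate must be prognostic for the true endpoint, and, once the surrogate is known, the treatment must carry no additional information about the true endpoint; that is, the surrogate fully “captures” the treatment effect.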