Abstract
Medical imaging consists of a wide range of modalities, each with its respective advantages and limitations. Image fusion technologies may allow the advantage of one imaging modality to overcome the limitations of another. In this chapter, the current state of hardware and software fusion technologies is discussed. We also present our experience with fusion technology for multi-modality imaging and consider future applications of the technology in the clinical setting.
Introduction
Medical imaging is of vital importance in modern medicine and of special interest in diagnostic medicine. Current medical imaging consists of a wide range of instrumentation including but not limited to computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography (US), fluoroscopy, plain film radiography, positron emission tomography (PET), and single-photon emission computed tomography (SPECT). Each of these imaging modalities has certain advantages and limitations.
The clear advantage of one imaging system is often the limitation of another. Fusion of these imaging technologies may therefore provide an approach for maximizing imaging information, as illustrated in Fig. 28.1.
We will first briefly consider the advantages and limitations of common medical imaging technologies. Since the 1970s, CT usage has increased rapidly, and it is estimated that today more than 72 million scans are performed annually in the USA alone [1]. Although CT imaging is user-friendly and allows rapid image acquisition, concerns have been voiced recently that repetitive usage and the associated radiation exposure may increase lifetime cancer risk [2]. Additionally, the contrast agents required for some studies can lead to renal impairment, and hence renal function should always be considered in imaging modality selection.
MR imaging provides excellent soft-tissue contrast and is free of ionizing radiation. MRI utilization is growing rapidly, and today approximately 30 million scans are performed annually in the USA [3]. However, the availability of MRI systems is limited and imaging costs are high compared to those of CT. The major contraindications to MRI are metallic implants in the body such as aneurysm clips, cochlear devices, spinal nerve stimulators, pacemakers, implantable cardioverter-defibrillators (ICDs), and deep brain stimulators. Patients are frequently imaged using gadolinium-based contrast agents. Gadolinium chelates are extravascular MRI contrast agents that are cleared by the renal system. Individuals with severe renal impairment may therefore experience adverse effects with the use of these contrast agents, and hence patients are clinically screened for kidney function by estimating the glomerular filtration rate (GFR).
PET is a functional imaging modality in which a radionuclide tracer such as fluorodeoxyglucose (18F-FDG) is intravenously injected, after which the location of the tracer throughout the body can be determined for diagnostic purposes [4]. PET imaging has been very successful, especially in oncology [5]. One concern regarding PET/CT imaging studies is the level of radiation exposure, which can be on the order of 10 to more than 30 mSv per scan, depending on imaging protocols [6]. Moreover, at present the availability of PET imaging is limited and scanning costs are high. One of the major technical limitations of PET imaging is its poor resolution compared with other imaging modalities. The fusion of PET/CT systems alleviated the limitation of image resolution in part and also increased availability, to approximately 2,000 installed systems in 2010 [7].
Ultrasound imaging is among the safest of imaging modalities, with several advantages including portability and excellent temporal resolution. Moreover, US is a widely available, versatile medical imaging technology. US has no known long-term side effects, and the modality can be used to image soft tissue, vasculature, blood flow, muscle, and bone surfaces (Fig. 28.2). US technology is based on sound waves that are generated and detected by piezoelectric elements inside the US transducer. Sound waves travel in a beam originating from the transducer, which also acts as the receiver for the reflected and scattered echo signals that are subsequently converted into an image. Clinical US imaging systems utilize acoustic waves in the low MHz range, resulting in limited tissue penetration depth and an inability to penetrate bone. US imaging is, however, more operator dependent than CT or MR imaging, and hence sonographer training and experience are important determinants of image quality. Another limitation of ultrasound imaging is "lateral dropout," the loss of detail in the lateral segments of the image (i.e., the segments away from the center of the ultrasound beam), which is a consideration in assessing atherosclerosis [8].
Image Fusion
Image fusion presents the clinician and researcher with the opportunity to merge content from multiple imaging modalities, thereby possibly alleviating the limitations of individual scanning technologies (Fig. 28.3). Image fusion can be implemented at the hardware or the software level. Table 28.1 lists clinically available image fusion systems such as PET/CT [4, 9] and the recently available PET/MRI systems [10, 11]. Although image fusion is commonly understood as an inter-modality technology, it can also be applied within a single modality, for instance, to analyze pre- and postoperative imaging data.
Table 28.2 summarizes the important characteristics of the major imaging technologies. Broadly speaking, image fusion between imaging modalities can be classified into two categories: hardware fusion and software fusion.
Hardware Image Fusion
In the hardware-based approach to image fusion, data are acquired simultaneously by different imaging modalities. The advantage of these systems is that the resulting imaging data are inherently co-registered and data fusion is performed in real time. The main disadvantage is the larger and more complex equipment that may be required.
Ultrasonography
Compound imaging, the fusion of information obtained through multiple scans of the same anatomy or region of interest, has been one of the main approaches to improving the information obtained through ultrasound-based imaging. Compound imaging systems fuse images from separate sequential scans of the area of interest [24–28]. Jeong and Kwon [29] obtained US scans of human breast tissue using two opposing array transducers. Although these efforts improved image quality and resolution, imaging was performed sequentially and hence resulted in long scan times or suboptimal co-registration (i.e., the ability to obtain information in the same plane). The simultaneous use of multiple transducers might exacerbate the known US limitation of lateral dropout, leading to potential loss of information.
The issue of lateral dropout in US imaging has been addressed by fusing images obtained from different angles using a technique known as spatial compounding which has been an active field of ultrasound research aimed at reducing intensity variations due to interference patterns from tissue echoes known as speckles [28]. Jespersen et al. [27] developed a scanning method known as multi-angle compound imaging (MACI) that uses a linear phased array to create iteratively a beam at one of n-angles at a time producing a set of acquisitions from different angles. MACI averages all n-images resulting in better tissue contrast and reduced speckle noise. The basic concept of MACI was extended by Behar et al. [28] to improve lateral resolution and speckle contrast by simultaneous image acquisition using three laterally separated transducers with only one acting as transmitter resulting in a compound image. Spatial compounding and MACI reduce speckle and improve tissue contrast at the cost of a reduced image frame rate.
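The averaging at the heart of spatial compounding can be illustrated with a short sketch. The data below are synthetic and the implementation is a conceptual illustration only; real MACI systems steer the beam electronically and co-register frames before averaging:

```python
import numpy as np

def compound(frames):
    """Average co-registered frames acquired at different steering angles.

    Speckle is largely uncorrelated between angles, so averaging n frames
    reduces speckle variance roughly by a factor of n while preserving
    the underlying tissue signal.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Toy demonstration: the same "tissue" pattern with independent speckle noise.
rng = np.random.default_rng(0)
tissue = np.ones((64, 64))
frames = [tissue + 0.3 * rng.standard_normal(tissue.shape) for _ in range(9)]
compounded = compound(frames)

# Noise standard deviation drops by about sqrt(9) = 3.
print(np.std(frames[0] - tissue))    # ~0.3
print(np.std(compounded - tissue))   # ~0.1
```

The trade-off noted above is visible here as well: acquiring nine angled frames per compounded image divides the effective frame rate by nine.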
PET/CT/MRI
PET/CT is a very successful imaging technology that combines functional imaging with anatomical imaging [9]. Although technical designs and specifications differ among vendors, the PET detectors and CT components are mounted typically inside a single gantry resulting in a co-registered data acquisition of both modalities [30, 31]. The major CT components such as detectors, readout electronics, and the X-ray tube are mounted on a rotating ring. The PET detectors and electronics are mounted on a separate ring, partial ring, or are integrated in a stationary setup, depending on vendor specifications. Technical details on PET/CT scanner instrumentation can be found in an article by Alessio et al. [32].
In 2011 the US Food and Drug Administration approved the first commercial PET/MRI system for the US market. PET/MRI is a promising technology, given its capability to simultaneously image function with increased soft-tissue contrast at a significantly lower radiation burden [33, 34]. PET/MRI can reduce ionizing radiation by approximately 70% compared with state-of-the-art PET/CT systems. Hence, this new technology may be of particular interest for vulnerable populations such as children and individuals receiving multiple scans.
Software Image Fusion
The software-based approach to image fusion is routinely used for the co-registration of images obtained with two or more modalities (Fig. 28.4). The process of image fusion involves several algorithmic steps that can vary substantially in number depending on the application at hand. Image fusion techniques integrate algorithms from the broad areas of computer vision and object recognition [35, 36]. Common problems that arise in image fusion are due to inherent differences in the underlying imaging technologies. For instance, in-plane resolution is quite different between US and MRI images. The image fusion algorithms need to be able to tackle these differences in a robust fashion. Figure 28.5 shows the typical steps used in software-based image fusion techniques.
One of the most important steps in the image fusion workflow is the co-registration of the multi-modality imaging data. This aspect requires the identification of anatomical landmarks that are present in all of the source images. Depending on the approach, these landmarks, which are commonly referred to as fiducial markers, need to be extracted automatically or manually. Clinically, semiautomatic approaches have proven superior since automatic algorithms may perform poorly in the presence of imaging artifacts. Subsequently, these fiducial markers are used to extract features which in turn play a vital role in the quality of the final image fusion output.
Features are a compact representation of an image's content and of central importance in software-based image fusion systems. In the case of image fusion, the complexity of features can extend from the simple coordinates of a set of fiducial markers to wavelet descriptors of selected regions of interest, or can include a set of features combining texture, structure, shape, motion, or entropy information [37–43]. The construction of features is of central importance in image analysis and image registration algorithms. It is beyond the scope of this chapter to discuss feature extraction in detail; we therefore focus on the conceptual aspects, and the interested reader is referred to the specialized literature on feature computation [35, 41, 44–52]. An important property of features for image fusion applications is invariance. Invariant features remain unchanged when the image content is transformed according to a group action, i.e., the features obtained for an unaltered and a transformed image are mapped to the same point in feature space. A simple example is the color histogram of an image, which remains identical under any permutation of the image pixels. However, a slight change in illumination, i.e., a change in the actual pixel values, may alter a simple color histogram significantly. The concept of invariance considerably simplifies semi-automatic or fully automatic image co-registration: instead of comparing images across all transformed instances, only one comparison has to be performed.
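The histogram example can be made concrete with a small sketch using synthetic data (not drawn from any of the cited systems): permuting pixel positions leaves the histogram unchanged, while a small intensity shift alters it:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(32, 32))

# A gray-level histogram is invariant under any permutation of pixels.
hist, _ = np.histogram(image, bins=256, range=(0, 256))
permuted = rng.permutation(image.ravel()).reshape(image.shape)
hist_perm, _ = np.histogram(permuted, bins=256, range=(0, 256))
assert np.array_equal(hist, hist_perm)   # same point in feature space

# A small illumination shift, however, changes the histogram markedly.
brightened = np.clip(image + 10, 0, 255)
hist_bright, _ = np.histogram(brightened, bins=256, range=(0, 256))
assert not np.array_equal(hist, hist_bright)
```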
The next step in the workflow outlined in Fig. 28.5 is the actual registration algorithm, which uses the features extracted from both modalities to establish correspondence between the imaging data sets. The most commonly used registration approach assumes a rigid scenario, meaning that a set of rotations, translations, and perhaps a uniform scaling operation brings the imaging data into correspondence. These operations are also known as a similarity transformation, which has 4 degrees of freedom (DoF) in the case of a two-dimensional image: translation in the x and y directions (2 DoF), in-plane rotation (1 DoF), and uniform scaling (1 DoF). To solve for the similarity transformation, a total of 2 point correspondences is sufficient. In the general 3D case, such as a volumetric US or MRI study, the similarity transformation has a total of 7 DoF (3 translation, 3 rotation, 1 scaling) and requires at least 3 fiducial markers to obtain a co-registration between the data sets. The rigid registration approach works well for many applications. However, in cases where a patient's anatomy changes, for example between scans taken before and after a surgical or endovascular intervention, the rigid approach delivers suboptimal results. In these cases image fusion can be attempted using nonrigid registration algorithms. There is a broad spectrum of nonrigid registration methods that, in essence, elastically deform the content of one image to match the content of another. In general, nonrigid registration algorithms are complex and time consuming, as the number of degrees of freedom can be very large [53, 54]. To alleviate this limitation, nonrigid registration algorithms commonly incorporate expert knowledge of the underlying anatomical change, which improves performance and accuracy.
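The 7-DoF similarity registration described above has a well-known closed-form least-squares solution (Umeyama's method) given at least three non-collinear fiducial pairs. The sketch below, with illustrative test points rather than data from any cited system, recovers a known rotation, scale, and translation:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (rotation R, scale s, translation t)
    mapping 3D fiducials src -> dst, following Umeyama's closed-form solution.
    Needs at least 3 non-collinear point pairs (7 DoF in 3D)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1                          # guard against reflections
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)     # variance of source points
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return R, s, t

# Sanity check: four fiducials mapped by a known similarity transform.
rng = np.random.default_rng(2)
pts = rng.standard_normal((4, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
mapped = 1.5 * pts @ Rz.T + np.array([2.0, -1.0, 0.5])
R, s, t = similarity_transform(pts, mapped)
print(np.allclose(s, 1.5), np.allclose(R, Rz), np.allclose(t, [2.0, -1.0, 0.5]))
```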
US-CT and US-MRI Fusion
Ultrasound is a real-time, cost-effective, and widely available imaging technology that can be fused with other modalities such as CT or MRI. US is of special interest for intraprocedural and postoperative imaging because of its noninvasive, real-time nature; most other modalities would require pauses for image acquisition and review, prolonging the procedure. Crocetti et al. [55] examined in a recent study the feasibility of a commercial multimodality fusion imaging system (Virtual Navigator System, Esaote SpA, Genoa, Italy) for real-time fusion of preprocedure CT scans with intraprocedure US. The study was conducted ex vivo using calf livers prepared with radiopaque internal targets to simulate liver lesions. The acquired CT scans were then fused with real-time US images, resulting in a mean registration error of 3.0 ± 0.1 mm.
Nakano et al. [56] used a commercially available image fusion system (Real-time Virtual Sonography, Hitachi Medical, Tokyo, Japan) to perform breast imaging. The system was tested in 51 patients who presented with 63 lesions. Patients underwent MR imaging on a 1.5 T imager followed by a sonographic evaluation of the same lesions. Lesion size measured by real-time virtual sonography and MRI was similar (r = 0.848, p < 0.001). Similarly, positioning errors for the sagittal and transverse planes and relative depth from the skin were small (6.9 mm, 7.7 mm, and 2.8 mm).
Wein et al. [57] developed an automatic CT–US registration framework for diagnostic imaging. Liver and kidney CT and US scans from 25 patients were fused to assess the registration errors of the proposed algorithm. One expert defined ground-truth data by manually locating fiducial landmarks (lesions) in both imaging modalities. Registration errors were then compared between the automatic algorithm and the fiducial point-based registration method. The point-based method using manually identified lesions yielded more accurate results than the automatic method, with respective fiducial registration errors of 5.0 mm and 9.5 mm. However, the point-based method required up to 10 min to identify fiducial markers, whereas the automatic method required approximately 40 s. Although the automatic method is not readily usable in the clinical setting, it could provide a means to reduce the time needed to fuse CT and US data sets.
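The fiducial registration errors quoted in these studies are typically computed as a root-mean-square distance over corresponding landmarks after registration. A minimal sketch, with hypothetical landmark coordinates in millimeters:

```python
import numpy as np

def fiducial_registration_error(fixed, moving_registered):
    """Root-mean-square distance (mm) between corresponding fiducials
    after registration."""
    d = np.asarray(fixed, float) - np.asarray(moving_registered, float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Hypothetical example: three landmarks with residual misalignment of a few mm.
fixed = np.array([[10.0, 20.0, 5.0], [40.0, 22.0, 8.0], [25.0, 60.0, 4.0]])
registered = fixed + np.array([[1.0, -2.0, 0.5],
                               [0.5,  1.0, -1.0],
                               [-1.5, 0.0, 2.0]])
print(round(fiducial_registration_error(fixed, registered), 2))  # → 2.14
```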
In another study, Caskey et al. [58] developed an US–CT fusion system with the capability to combine real-time US images with pre-acquired CT images. The system was tested using Met-1 tumors in the fat pads of 12 female mice. The CT data were used to identify the Hounsfield units of the tumor which in turn were validated histologically. The US and CT data were fused using fiducial markers with an accuracy of approximately 1 mm.
Clinical Research Applications
As multi-modality imaging becomes more prevalent, attention will be directed toward systematic, reproducible methods for inter-modality comparison of image sets for a given biological system. The fusion imaging techniques discussed above allow for such comparisons to be conducted.
With regard to ultrasound imaging, one can envision fusion protocols that use baseline ultrasound scans as a reference for true serial comparisons (i.e., ultrasound–ultrasound comparisons) when monitoring cardiac function and wall-motion abnormalities. Fusion with more detailed structural scans such as MRI or CT may also allow functional information from real-time ultrasound to be overlaid. In the subsequent sections, we describe our experience with ex vivo and in vivo imaging toward the development of these protocols, using a commercially available ultrasound system capable of fusion imaging.
Figure 28.6 illustrates the four main image fusion steps after uploading data from a secondary imaging modality:
1. Locking the plane (Fig. 28.6a)

2. Presenting markers and orientation, zooming out/in to view the area of interest (Fig. 28.6b)

3. Co-registration: choosing at least three reference points on each imaging modality (Fig. 28.6c)

4. Displaying the fused images in cross-sectional and longitudinal views (Fig. 28.6d–f)
In Fig. 28.6, the carotid bifurcation and calcified plaques were used as intrinsic landmarks for co-registration. Below, we describe in detail the various considerations for both in vivo and ex vivo fusion.
Registration of Landmarks
As previously stated, one of the main challenges with inter-modality fusion of image sets is linking recognizable image landmarks through co-registration. Techniques for hardware- and software-based co-registration have been described in depth in the previous sections. Image landmarks, or fiducial markers, chosen for co-registration can be intrinsic or extrinsic to the biological system of interest.
The simplest co-registration techniques with intrinsic landmarks use anatomic features [59]. Anatomy common to a given biological system (e.g., carotid bifurcations) also allows for standardization of in vivo imaging protocols. When extrinsic landmarks are introduced into a biological system, compatibility of the chosen markers between various imaging modalities should be considered. For ex vivo imaging, spatial features of these landmarks aid with identifying spatial orientation within an imaging modality, since the native anatomic orientation (e.g., left–right, anterior–posterior, cranial–caudal) may no longer be present. We encountered these issues in our ex vivo experiments with carotid endarterectomy tissue specimens and have found plastic intravenous 3-way stopcocks to be adequate 3D markers for this system (Fig. 28.7).
Preparation for Inter-Modality Imaging
Maintaining the spatial orientation of the biological system of interest has become so important for inter-modality comparisons that entire medical imaging subfields have been developed to address this issue [59]. Again, hardware and software processing techniques have been covered in detail in the sections above. Steps may still be taken in preparing a given biological system to standardize quality assessment protocols for inter-modality comparisons and to assist with such comparisons.
In the ex vivo setting, soft tissues such as vessels are easily deformable and thus do not maintain their shape. Furthermore, ultrasound imaging requires a medium through which the sound beam can travel, ideally one with acoustic properties resembling those of soft tissue. Agarose gels maintain the spatial arrangement of embedded tissues and can be mixed to have the desired acoustic properties. Additionally, agarose has minimal chemical interaction with biological tissue. Such a gel has been used for ultrasound imaging of carotid endarterectomy tissues ex vivo [60]. Furthermore, these constructs are compatible with other imaging modalities such as MRI and CT and have also been used in imaging phantoms for these modalities. We have used an agarose gel medium (3% mass/volume [g/mL]) for our studies [61].
Carotid endarterectomy tissues were obtained 1–3 h after surgical resection and preserved in phosphate-buffered saline/50 % glycerol at −20 °C to maintain the ultrastructural properties of the tissue. Prior to use, specimens were dialyzed for 24 h against phosphate-buffered saline to remove the glycerol and then embedded in an agarose mixture. The low-melting agarose mixture was created by heating to 60 °C and then degassing under vacuum. Degassing minimized the presence of bubbles in the mixture, which are hyperechogenic on ultrasound and interfere with ultrasound imaging. Each tissue was suspended with an intravenous plastic 3-way stopcock as an extrinsic fiducial marker (Fig. 28.8).
For MRI, tissues and markers were suspended in empty 50 mL tubes, then filled with agarose and placed in a specially constructed MRI compatible holder (Fig. 28.9) [61]. After MRI each tissue was extruded intact as an agarose cylinder from the tube and then transferred to a plastic box (10.5 × 2.5 × 4.0 cm), and additional degassed molten agarose was added to form an agarose bed (Fig. 28.10). This procedure was performed to provide adequate surface contact for the ultrasound probe.
For in vivo experiments, we observed and recorded the subject’s head position for the MRI scan. Having the subject reproduce this positioning for the 3D ultrasound scan helps minimize MRI-US co-registration issues. In theory, extrinsic landmarks may be employed for in vivo multi-modality imaging. However, such landmarks would either be limited to the skin surface, interfering with the contact by the ultrasound probe, or be necessarily invasive, requiring needle insertion of the marker. Other external reference systems, such as the Meijer’s arc, may also be considered. However, their reliability for co-registration should be compared with that of the above markers.
Three-Dimensional Ultrasound Imaging
MR Imaging
For in vivo experiments involving subjects with known carotid atherosclerosis, individuals were positioned supine in a Signa Excite 3.0 T MRI scanner (GE Healthcare, Wauwatosa, WI). The carotid arteries were imaged using a 6-cm phased array 4-channel carotid coil (Pathway Med Tech, Redmond, WA). A standard 3-plane localizer was used to identify the carotid arteries. Subsequently, a 2D time of flight (TOF) sequence was applied as a localizer to identify both the right and left common carotid bifurcation (flow divider) and to obtain high quality blood flow and vessel wall imaging. Three 2D fast spin echo scans were acquired using proton-density weighting, T2-weighting, and T1-weighting. The longitudinal coverage of this set of images was centered at the carotid bifurcation and covered a large part of the carotid artery below and above the bifurcation.
For ex vivo experiments, carotid endarterectomy tissue specimens were imaged using the same 3.0 T scanner and phased array coil as for the subjects. Serial axial proton-density weighted (PDW), T1-weighted, and T2-weighted images were acquired (2-mm slice thickness, matrix 512 × 512, field of view 100 × 100 mm) using a fast-spin echo sequence, providing 10–31 slices with an in-plane resolution of ∼0.195 mm. Correction algorithms adjusted for magnetic field strength gradients across the sample image. More recently, we also imaged the carotid tissue specimens using a 3.0 T Siemens Verio system with a 32-channel head coil. The CEA samples were imaged using a turbo spin echo sequence with the following parameters: repetition time (TR) = 3010 ms; echo time (TE) = 6.1 ms; number of averages = 4; slice thickness = 2 mm; echo train length = 7; pixel bandwidth = 521 Hz/pixel; flip angle = 123°; x–y pixel spacing = 0.364 mm; and number of slices = 63.
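The in-plane resolution reported above follows directly from the field of view and the acquisition matrix:

```python
# In-plane pixel spacing = field of view / matrix size.
fov_mm, matrix = 100.0, 512
pixel_mm = fov_mm / matrix
print(round(pixel_mm, 3))  # → 0.195, matching the reported ~0.195 mm
```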
3D Ultrasound Imaging
3D ultrasound images can be acquired using probes (a) with motors set to move elements over a certain length, (b) with two-dimensional arrays of transducer elements, or (c) with freehand techniques. With free-hand techniques, the probe is moved in a direction perpendicular to the scan plane or rotated in place, and internal or external reference systems are used to combine a series of B-mode ultrasound images acquired during the sweep. Internal reference systems may consist of (a) timing through clocks or (b) image processing algorithms that recognize large movement in the scan plane. These internal reference systems are beyond the scope of this chapter, but the interested reader is directed to relevant citations for more detail [63]. External reference systems have also been used. Some ultrasound systems have included accelerometers with the transducer probe, capable of detecting probe motion and direction [63]. Others have acquired position information through electromagnetic transmit–receive setups.
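A minimal sketch of freehand 3D reconstruction: each tracked B-mode frame is mapped into a voxel volume using the probe pose reported by the position sensor. The nearest-neighbor binning and the synthetic linear sweep below are illustrative assumptions, not a vendor's algorithm:

```python
import numpy as np

def insert_frame(volume, frame, pose, spacing_mm):
    """Scatter one tracked B-mode frame into a 3D voxel volume.

    `pose` is the 4x4 probe-to-world transform reported by the position
    sensor; image pixels (u, v) are lifted to 3D points in the scan plane
    and binned into the nearest voxel (real systems interpolate and
    compound overlapping samples)."""
    h, w = frame.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pixel coordinates in mm within the image plane (z = 0 in probe frame).
    pts = np.stack([u.ravel() * spacing_mm, v.ravel() * spacing_mm,
                    np.zeros(u.size), np.ones(u.size)])
    world = pose @ pts                        # 4 x N homogeneous points
    idx = np.round(world[:3] / spacing_mm).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[:, None]), axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]

# Hypothetical sweep: frames acquired 1 mm apart along the elevation axis.
vol = np.zeros((32, 32, 16))
for k in range(16):
    pose = np.eye(4)
    pose[2, 3] = k * 1.0                      # probe advanced k mm
    insert_frame(vol, np.full((32, 32), k, float), pose, spacing_mm=1.0)
print(vol[5, 5, :4])                          # voxels filled slice by slice
```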
For our experiments, we have used one such commercially available position-sensing system as the external reference. The system consists of a mid-range DC magnetic transmitter that generates a weak magnetic field. Sensor arrays attached to the two-dimensional vascular ultrasound probe sense this field and transmit the information to built-in circuitry, from which the position and orientation of the ultrasound probe can be derived. The system can also be used independently of a three-dimensional image data set to generate a three-dimensional reference "map" or image set.
After calibration through a semiautomated co-registration process, real-time B-mode ultrasound images were mapped to the co-registered three-dimensional image sets in a manner similar to that used by Global Positioning System (GPS) navigation devices to map the device location onto stored maps. For our experiments, we used a plane-point co-registration technique. First, the corresponding image set (e.g., MRI scans) for the imaged biological system was loaded into the ultrasound system, which performed multiplanar reconstruction of the image data set to form a three-dimensional "map." The reconstruction was used to navigate to a landmark of interest (e.g., an extrinsic landmark such as a plastic marker for ex vivo experiments or the chin of a volunteer for in vivo experiments). Then, the plane for the loaded three-dimensional image set was locked. The corresponding plane on the live B-mode ultrasound scan was found and locked on the system, completing the plane-lock step. The system then tracked the motion of the live ultrasound scan during freehand navigation through the loaded three-dimensional image set. Next, an intrinsic landmark (e.g., a calcification in tissue or the carotid bifurcation) was located, and the three-dimensional image set was rotated in-plane to align the corresponding images. Finally, the intrinsic landmark was marked with a point on both the live ultrasound scan and the loaded image set to refine the real-time co-registration.
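Once the plane is locked, the remaining co-registration reduces to an in-plane rotation plus the translation fixed by a single landmark correspondence. The sketch below, with a hypothetical angle and landmark coordinates, illustrates this plane-point idea; it is not the commercial system's implementation:

```python
import numpy as np

def in_plane_registration(theta, landmark_us, landmark_mri):
    """After the plane-lock step, co-registration reduces to an in-plane
    rotation (set by the operator) plus the translation that maps the
    marked landmark on the live ultrasound frame onto the same landmark
    in the loaded 3D image set."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.asarray(landmark_mri, float) - R @ np.asarray(landmark_us, float)
    return R, t

def to_mri(point_us, R, t):
    """Map a point from the live ultrasound plane into the MRI plane."""
    return R @ np.asarray(point_us, float) + t

# Hypothetical landmark: a calcification at (12, 30) mm on the live scan
# that appears at (25, 18) mm in the MRI plane after a 90-degree rotation.
R, t = in_plane_registration(np.pi / 2, [12.0, 30.0], [25.0, 18.0])
print(to_mri([12.0, 30.0], R, t))  # maps exactly onto the MRI landmark
```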
Ultrasound Imaging Fusion Experiments
For ex vivo experiments, we have validated the system for three-dimensional co-registration against manual co-registration based on anatomic landmarks in 13 carotid endarterectomy tissue specimens [64]. We found that, on average, 13.92 (standard error [SE] 1.95) MRI slices, each 2 mm thick, and 265.77 (SE 28.58) ultrasound frames were needed to image the tissue samples, translating to 19.66 (SE 2.07) ultrasound scan frames per MRI slice [64]. There was excellent inter-reader agreement between the semi-automated GPS-like system and two different readers (intraclass correlation coefficients [ICC] > 0.99) for 33 landmarks (Fig. 28.11) [64]. Further experiments with additional specimens have suggested modestly better agreement between ultrasound and volumes measured by water displacement (ICC 0.85) than between MRI and water volumes (ICC 0.81) [65].
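The agreement statistics quoted here are intraclass correlation coefficients. A simplified one-way ICC can be sketched as follows; the readings are synthetic, and the cited studies may have used a different ICC variant:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects intraclass correlation, ICC(1,1), for an
    n_targets x n_readers matrix of measurements."""
    Y = np.asarray(ratings, float)
    n, k = Y.shape
    grand = Y.mean()
    # Between-target and within-target mean squares from one-way ANOVA.
    ms_between = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical landmark distances (mm) from two readers; near-identical
# readings yield an ICC close to 1.
reader_a = np.array([4.1, 7.9, 12.2, 5.5, 9.0])
reader_b = np.array([4.0, 8.0, 12.0, 5.6, 9.1])
print(round(icc_oneway(np.column_stack([reader_a, reader_b])), 3))
```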
We have also examined whether the in vivo repeatability of carotid intima-media thickness (CIMT) measurements can be improved using the same system. CIMT measures using the Meijer's arc and the GPS-like system were 0.61 (SE 0.03) mm and 0.63 (SE 0.03) mm at the initial visit, respectively [64]. At the second visit (∼2 days apart), CIMT measures were 0.64 (SE 0.03) mm and 0.64 (SE 0.04) mm, respectively. There was good agreement between the two methods (ICC 0.92). Overall, we found greater repeatability of CIMT measures with the registered images (ICC 0.91) than with the Meijer's arc (ICC 0.84) (Fig. 28.12) [64]. However, because of the 3D map, the image quality was not as good as with regular 2D approaches.
We have also used the same co-registration technique for our in vivo study of carotid atherosclerosis, using carotid bifurcations as the intrinsic landmark (Fig. 28.13).
So far, our three-dimensional evaluations have been limited to structural measurements of thicknesses and volumes. Future work may include more complex analyses such as physiologic functional assessment. In fact, others have begun work to incorporate color Doppler information from echocardiography with cine cardiac MRI, paving the way for four-dimensional co-registration [66]. One group extracted color Doppler blood flow from transthoracic echocardiograms in ten volunteers, co-registered the flow information with corresponding cardiac MR images in time and space using the mitral annular root and the root of the aortic valve as intrinsic landmarks, and fused the two data sets into one image set. Registration quality was assessed in this study using the variation in the distance of a landmark between echocardiograms and cardiac MRI [66]. The fusion process was conducted off-line using in-house algorithms, but one can already envision fusion of real-time ultrasonography with cine cardiac MRI using electrocardiographic gating with existing technologies [66]. Challenges in algorithm development and temporal resolution would need to be overcome to bring these advanced analyses to research, and eventually clinical, settings.
Clinical Applications and the Future
Clearly, the capability to fuse imaging technologies will have significant value in both diagnostic and therapeutic medicine. With respect to ultrasound, its major advantages of portability, safety, and real-time functional and anatomical information can be harnessed to complement the excellent anatomic tomographic information provided by MRI and CT.
Ultrasound-based fusion is already beginning to show value in both the clinical and research arenas. Ultrasound on its own is already used to guide biopsies; however, its limitations (dropout, tissue penetration) have to be circumvented by the physician. Although this is clearly possible, it is not optimal. The addition of the excellent anatomical information provided by CT or MRI gives the physician greater confidence and accuracy in performing the procedure. For example, during a biopsy, images can be fused and ultrasound used to guide the biopsy in real time; with image overlays, the physician can simultaneously consult the CT or MRI scan to confirm the correct anatomical location. These approaches have already been shown to improve prostate biopsies [67]. Similarly, ultrasound-guided radiofrequency ablation can be used to treat some tumors. However, ultrasound dropout, isoechoic tumors, prior treatment, and other factors may make it difficult for ultrasound alone to adequately identify these tumors for therapy. Fusion with CT/MRI may allow increased accuracy in identifying them [55, 68]. In a recent animal study, ultrasound–ultrasound fusion was applied to improve the delivery of radiofrequency ablative therapy: gold pellets were inserted into the renal parenchyma in a canine model, 3D ultrasound images were obtained and used to plan the ablative therapies, and the therapy was then delivered under real-time 2D ultrasound registered to the 3D data set, which improved the overall accuracy of therapy delivery [69]. From a cardiovascular imaging perspective, in addition to the improved anatomical information, fusion could be an excellent educational tool when trying to resolve artifactual findings.
Conversely, ultrasound can complement other modalities. The functional information obtainable with ultrasound can be fused with the anatomical information of MRI/CT to allow better interpretation of an MRI finding. For example, once a stent is placed, MR images in the region of the stent may be of limited value and functional information may be difficult to obtain; fusing the images allows ultrasound to supply functional (Doppler-based) information for that region. Figure 28.14 shows an example in which an MRA of the middle cerebral artery was fused with transcranial Doppler ultrasound to assess how the stent was functioning. In such cases, fusion imaging may obviate the need for a follow-up angiogram (Fig. 28.14).
Several other potential uses of ultrasound–ultrasound fusion (similar to our efforts to fuse CIMT data and improve reliability) can be envisioned, including monitoring abdominal aortic aneurysms for expansion, tracking cardiac chamber dimensions, and following changes in lumen dimensions in arterial segments. Such fusion may allow meaningful serial comparisons and a better appreciation of the progression or stability of disease. Clearly, there are innumerable clinical situations in which fusion could be of use.
However, one must be cognizant of the limitations as well [64]. The time required for a procedure will clearly increase. Further, there is a learning curve: the advantages offered by fusion technology are limited by the accuracy of the registration the sonographer performs. On some occasions a small registration error may not matter; in others (such as during a biopsy or ablative therapy), any error may be critical. Finally, when imaging a dynamic structure such as the heart, changes in heart rate and rhythm between studies may affect the ability to fuse the images without additional processing.
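The registration-accuracy concern raised above can be made concrete with the fiducial registration error (FRE), the RMS residual distance between transformed fiducials and their counterparts, which is a routine sanity check before trusting a fusion overlay. A minimal numpy sketch follows; the function name and the fiducial coordinates are our own illustration.

```python
import numpy as np

def fiducial_registration_error(src, dst, R, t):
    """RMS distance (mm) between source fiducials mapped by (R, t) and
    their counterparts in the target modality."""
    mapped = np.asarray(src, float) @ np.asarray(R, float).T + np.asarray(t, float)
    resid = mapped - np.asarray(dst, float)
    return float(np.sqrt(np.mean(np.sum(resid**2, axis=1))))

# Hypothetical fiducials (mm) from a CT volume and an ultrasound sweep,
# where a 1 mm systematic offset was left uncorrected.
ct_fids = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
us_fids = ct_fids + np.array([0.0, 0.0, 1.0])

# Suppose the sonographer accepted the identity registration:
fre = fiducial_registration_error(ct_fids, us_fids, np.eye(3), np.zeros(3))
print(f"FRE = {fre:.2f} mm")   # the uncorrected 1 mm offset shows up directly
```

Whether a given FRE is acceptable depends on the task: a millimeter may be negligible when orienting a diagnostic overlay but unacceptable when targeting a small lesion for biopsy or ablation.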
Conclusion
Image fusion, be it hardware- or software-based, has the potential to improve imaging-based assistance in diagnostics and therapeutics. It may be of particular value to ultrasound imaging, given ultrasound's clear differences (vis-à-vis advantages and disadvantages) from other imaging modalities. Although additional work is required, this developing application offers great promise.
References
Berrington de Gonzalez A, Mahesh M, Kim KP, Bhargavan M, Lewis R, Mettler F, Land C (2009) Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med 169(22):2071–2077
Brenner DJ, Hall EJ (2007) Computed tomography–an increasing source of radiation exposure. N Engl J Med 357(22):2277–2284
IMV (2008) Benchmark report: MRI 2007. IMV Medical Information Division, Des Plaines, IL
Ben-Haim S, Israel O (2006) PET/CT for atherosclerotic plaque imaging. Q J Nucl Med Mol Imaging 50(1):53–60
Hillner BE, Siegel BA, Hanna L, Shields AF, Duan F, Gareen IF, Quinn B, Coleman RE (2012) Impact of 18F-FDG PET used after initial treatment of cancer: comparison of the national oncologic PET registry 2006 and 2009 cohorts. J Nucl Med 53(5):831–837
Huang B, Law MW, Khong PL (2009) Whole-body PET/CT scanning: estimation of radiation dose and cancer risk. Radiology 251(1):166–174
Buck AK, Herrmann K, Stargardt T, Dechow T, Krause BJ, Schreyogg J (2010) Economic evaluation of PET and PET/CT in oncology: evidence and methodologic approaches. J Nucl Med 51(3):401–412
Nambi V, Chambless L, Folsom AR, He M, Hu Y, Mosley T, Volcik K, Boerwinkle E, Ballantyne CM (2010) Carotid intima-media thickness and presence or absence of plaque improves prediction of coronary heart disease risk: the ARIC (atherosclerosis risk in communities) study. J Am Coll Cardiol 55(15):1600–1607
Townsend DW, Beyer T (2002) A combined PET/CT scanner: the path to true image fusion. Br J Radiol 75(Spec No):S24–S30
Calcagno C, Cornily JC, Hyafil F, Rudd JH, Briley-Saebo KC, Mani V, Goldschlager G, Machac J, Fuster V, Fayad ZA (2008) Detection of neovessels in atherosclerotic plaques of rabbits using dynamic contrast enhanced MRI and 18F-FDG PET. Arterioscler Thromb Vasc Biol 28(7):1311–1317
Nishioka T, Shiga T, Shirato H, Tsukamoto E, Tsuchiya K, Kato T, Ohmori K, Yamazaki A, Aoyama H, Hashimoto S, Chang TC, Miyasaka K (2002) Image fusion between 18FDG-PET and MRI/CT for radiotherapy planning of oropharyngeal and nasopharyngeal carcinomas. Int J Radiat Oncol Biol Phys 53(4):1051–1057
Flohr TG, McCollough CH, Bruder H, Petersilka M, Gruber K, Suss C, Grasruck M, Stierstorfer K, Krauss B, Raupach R, Primak AN, Kuttner A, Achenbach S, Becker C, Kopp A, Ohnesorge BM (2006) First performance evaluation of a dual-source CT (DSCT) system. Eur Radiol 16(2):256–268
Kellman P, Chefd’hotel C, Lorenz CH, Mancini C, Arai AE, McVeigh ER (2009) High spatial and temporal resolution cardiac cine MRI from retrospective reconstruction of data acquired in real time using motion correction and resorting. Magn Reson Med 62(6):1557–1564
Christodoulou AG, Brinegar C, Haldar JP, Zhang H, Wu YJ, Foley LM, Hitchens T, Ye Q, Ho C, Liang ZP (2010) High-resolution cardiac MRI using partially separable functions and weighted spatial smoothness regularization. Conf Proc IEEE Eng Med Biol Soc 2010:871–874
Hansen MS, Sorensen TS, Arai AE, Kellman P (2012) Retrospective reconstruction of high temporal resolution cine images from real-time MRI using iterative motion correction. Magn Reson Med 68(3):741–750
Krishnamurthy R, Pednekar A, Cheong B, Muthupillai R (2010) High temporal resolution SSFP cine MRI for estimation of left ventricular diastolic parameters. J Magn Reson Imaging 31(4):872–880
Poepping TL, Nikolov HN, Rankin RN, Lee M, Holdsworth DW (2002) An in vitro system for Doppler ultrasound flow studies in the stenosed carotid artery bifurcation. Ultrasound Med Biol 28(4):495–506
Bardo DM, Brown P (2008) Cardiac multidetector computed tomography: basic physics of image acquisition and clinical applications. Curr Cardiol Rev 4(3):231–243
Choe YH, Choo KS, Jeon ES, Gwon HC, Choi JH, Park JE (2008) Comparison of MDCT and MRI in the detection and sizing of acute and chronic myocardial infarcts. Eur J Radiol 66(2):292–299
Duivenvoorden R, de Groot E, Elsen BM, Lameris JS, van der Geest RJ, Stroes ES, Kastelein JJ, Nederveen AJ (2009) In vivo quantification of carotid artery wall dimensions: 3.0-Tesla MRI versus B-mode ultrasound imaging. Circ Cardiovasc Imaging 2(3):235–242
Bachmann R, Nassenstein I, Kooijman H, Dittrich R, Stehling C, Kugel H, Niederstadt T, Kuhlenbaumer G, Ringelstein EB, Kramer S, Heindel W (2007) High-resolution magnetic resonance imaging (MRI) at 3.0 Tesla in the short-term follow-up of patients with proven cervical artery dissection. Invest Radiol 42(6):460–466
Qiao Y, Steinman DA, Qin Q, Etesami M, Schar M, Astor BC, Wasserman BA (2011) Intracranial arterial wall imaging using three-dimensional high isotropic resolution black blood MRI at 3.0 Tesla. J Magn Reson Imaging 34(1):22–30
O’Leary DH, Bots ML (2010) Imaging of atherosclerosis: carotid intima-media thickness. Eur Heart J 31(14):1682–1689
Adam D, Beilin-Nissan S, Friedman Z, Behar V (2006) The combined effect of spatial compounding and nonlinear filtering on the speckle reduction in ultrasound images. Ultrasonics 44(2):166–181
Bercoff J, Montaldo G, Loupas T, Savery D, Meziere F, Fink M, Tanter M (2011) Ultrafast compound Doppler imaging: providing full blood flow characterization. IEEE Trans Ultrason Ferroelectr Freq Control 58(1):134–147
Vogt M, Ermert H (2008) Limited-angle spatial compound imaging of skin with high-frequency ultrasound (20 MHz). IEEE Trans Ultrason Ferroelectr Freq Control 55(9):1975–1983
Jespersen SK, Wilhjelm JE, Sillesen H (1998) Multi-angle compound imaging. Ultrason Imaging 20(2):81–102
Behar V, Adam D, Friedman Z (2003) A new method of spatial compounding imaging. Ultrasonics 41(5):377–384
Jeong MK, Kwon SJ (2010) Multimode ultrasound breast imaging using a new array transducer configuration. Ultrasound Med Biol 36(4):637–646
Townsend DW, Beyer T, Blodgett TM (2003) PET/CT scanners: a hardware approach to image fusion. Semin Nucl Med 33(3):193–204
Townsend DW, Carney JP, Yap JT, Hall NC (2004) PET/CT today and tomorrow. J Nucl Med 45(Suppl 1):4S–14S
Alessio AM, Kinahan PE, Cheng PM, Vesselle H, Karp JS (2004) PET/CT scanner instrumentation, challenges, and solutions. Radiol Clin North Am 42(6):1017–1032, vii
Lee WW, Marinelli B, van der Laan AM, Sena BF, Gorbatov R, Leuschner F, Dutta P, Iwamoto Y, Ueno T, Begieneman MP, Niessen HW, Piek JJ, Vinegoni C, Pittet MJ, Swirski FK, Tawakol A, Di Carli M, Weissleder R, Nahrendorf M (2012) PET/MRI of inflammation in myocardial infarction. J Am Coll Cardiol 59(2):153–163
Zaidi H, Del Guerra A (2011) An outlook on future design of hybrid PET/MRI systems. Med Phys 38(10):5667–5689
Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, Cambridge
Faugeras O (1993) Three-dimensional computer vision: a geometric viewpoint. MIT Press, Cambridge, MA
Alexander SK, Azencott R, Papadakis M (2007) Isotropic multiresolution analysis for 3D-textures and applications in cardiovascular imaging. Paper presented at the proceedings of SPIE wavelets XII
Arivazhagan S, Ganesan L (2003) Texture classification using wavelet transform. Pattern Recogn Lett 24(9–10):1513–1521
Ali A, Farag A, El-Baz A (2007) Graph cuts framework for kidney segmentation with prior shape constraints. Med Image Comput Comput Assist Interv 10(Pt 1):384–392
Belongie S, Malik J, Puzicha J (2002) Shape matching and object recognition using shape contexts. IEEE Trans Pattern Anal Mach Intell 24(4):509–522
Brunner G, Chittajallu DR, Kurkure U, Kakadiaris IA (2010) Patch-cuts: a graph-based image segmentation method using patch features and spatial relations. Paper presented at the proceedings of the IAPR conference on British machine vision conference (BMVC), September 2010
Boykov Y, Kolmogorov V, Cremers D, Delong A (2006) An integral solution to surface evolution PDEs via geo-cuts. Paper presented at the proceedings of the 9th European conference on computer vision
Yu W, Yan P, Sinusas AJ, Thiele K, Duncan JS (2006) Towards pointwise motion tracking in echocardiographic image sequences – comparing the reliability of different features for speckle tracking. Med Image Anal 10(4):495–508
Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybernet 3:610–621
Shotton J, Johnson M, Cipolla R (2008) Semantic texton forests for image categorization and segmentation. Paper presented at the proceedings of the IEEE conference on computer vision and pattern recognition (CVPR 2008)
Bishop CM (2006) Pattern recognition and machine learning (information science and statistics). Springer, New York
Boykov Y, Veksler O (2006) Graph cuts in vision and graphics: theories and applications. In: Paragios N, Chen Y, Faugeras O (eds) Handbook of mathematical models in computer vision. Springer, New York
Gonzalez RC, Woods RC (2008) Digital image processing, 3rd edn. Prentice Hall, Upper Saddle River, NJ
Jain AK (1989) Fundamentals of digital image processing. Prentice-Hall, Englewood Cliffs
Petrou M, Sevilla PG (2006) Image processing: dealing with texture. Wiley, Chichester
Pratt WK (1991) Digital image processing. Wiley, New York
Sonka M, Hlavac V, Boyle R (1999) Image processing, analysis and machine vision, 2nd edn. Brooks/Cole, Pacific Grove
Rueckert D, Sonoda L, Hayes C, Hill D, Leach M, Hawkes D (1999) Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 18(8):712–721
Yang J, Blum RS, Williams JP, Sun Y, Xu C (2006) Non-rigid image registration using geometric features and local salient region features. Paper presented at the proceedings of the 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR '06), Washington, DC
Crocetti L, Lencioni R, Debeni S, See TC, Pina CD, Bartolozzi C (2008) Targeting liver lesions for radiofrequency ablation: an experimental feasibility study using a CT-US fusion imaging system. Invest Radiol 43(1):33–39
Nakano S, Yoshida M, Fujii K, Yorozuya K, Kousaka J, Mouri Y, Fukutomi T, Ohshima Y, Kimura J, Ishiguchi T (2012) Real-time virtual sonography, a coordinated sonography and MRI system that uses magnetic navigation, improves the sonographic identification of enhancing lesions on breast MRI. Ultrasound Med Biol 38(1):42–49
Wein W, Brunke S, Khamene A, Callstrom MR, Navab N (2008) Automatic CT-ultrasound registration for diagnostic imaging and image-guided intervention. Med Image Anal 12(5):577–585
Caskey CF, Hlawitschka M, Qin S, Mahakian LM, Cardiff RD, Boone JM, Ferrara KW (2011) An open environment CT-US fusion for tissue segmentation during interventional guidance. PLoS One 6(11):e27372
Maintz JB, Viergever MA (1998) A survey of medical image registration. Med Image Anal 2(1):1–36
Wilhjelm JE, Jensen MS, Gammelmark KL, Sahl B, Martinsen K, Hansen JU, Brandt T, Jespersen SK, Falk E, Fredfeldt KE, Sillesen H (2004) A method to create reference maps for evaluation of ultrasound images of carotid atherosclerotic plaque. Ultrasound Med Biol 30(9):1119–1131
Culjat MO, Goldenberg D, Tewari P, Singh RS (2010) A review of tissue substitutes for ultrasound imaging. Ultrasound Med Biol 36(6):861–873
Choudhary S, Higgins C, Chen I, Reardon M, Lawrie G, Gr V, Karmonik C, Via D, Morrisett J (2006) Quantitation and localization of matrix metalloproteinases and their inhibitors in human carotid endarterectomy tissues. Arterioscler Thromb Vasc Biol 26(10):2351–2358
Prager RW, Ijaz UZ, Gee AH, Treece GM (2010) Three-dimensional ultrasound imaging. Proc Inst Mech Eng H 224(2):193–223
Yang EY, Polsani VR, Washburn MJ, Zang W, Hall AL, Virani SS, Hodge MD, Parker D, Kerwin WS, Lawrie GM, Garami Z, Ballantyne CM, Morrisett JD, Nambi V (2011) Real-time co-registration using novel ultrasound technology: ex vivo validation and in vivo applications. J Am Soc Echocardiogr 24(7):720–728
Kumar A, Yang EY, Brunner G, Murray TO, Garami Z, Volpi J, Virani SS, Washburn MJ, Zang W, Lawrie GM, Kabir R, Ballantyne CM, Morrisett JD, Nambi V (2011) Plaque volume of carotid endarterectomy (CEA) specimens measured using a novel 3-dimensional ultrasound (3DUS) technology. Circulation 124(21S):A14405
Wang C, Chen M, Zhao JM, Liu Y (2011) Fusion of color Doppler and magnetic resonance images of the heart. J Digit Imaging 24(6):1024–1030
Hadaschik BA, Kuru TH, Tulea C, Rieker P, Popeneciu IV, Simpfendorfer T, Huber J, Zogal P, Teber D, Pahernik S, Roethke M, Zamecnik P, Roth W, Sakas G, Schlemmer HP, Hohenfellner M (2011) A novel stereotactic prostate biopsy system integrating pre-interventional magnetic resonance imaging and live ultrasound fusion. J Urol 186(6):2214–2220
Sandulescu DL, Dumitrescu D, Rogoveanu I, Saftoiu A (2011) Hybrid ultrasound imaging techniques (fusion imaging). World J Gastroenterol 17(1):49–52
Hung AJ, Ma Y, Zehnder P, Nakamoto M, Gill IS, Ukimura O (2012) Percutaneous radiofrequency ablation of virtual tumours in canine kidney using global positioning system-like technology. BJU Int 109(9):1398–1403
Acknowledgments
1. We thank the SMART RISK study (ClinicalTrials.gov Identifier: NCT00860184) for allowing us to use images generated from the study.
2. We thank the Gulf Coast Medical Foundation, whose grant awarded to one of the authors supported part of the research described and images generated.
3. We thank Drs. Ching-Hsuan Tung and Xukui Wang for micro-CT data acquisition.
4. We thank Dr. John Volpi and Rasadul Kabir for their kind assistance with the ultrasound imaging.
Support
This work was partly supported by the following grants: NIH/NHLBI T32 HL007812 training grant, American Heart Association South Central Affiliate Postdoctoral Fellowship grant, NIH/NHLBI K23 HL096893 grant, NIH/NHLBI HL63090, and the Gulf Coast Medical Foundation. Dr. Nambi was supported by an NIH/NHLBI K23 HL096893 grant. Dr. Nambi has a research collaboration with GE. Clinical trial NCT00860184 is sponsored by VPDiagnostics, Inc. (Seattle, WA) and supported by NIH/NHLBI R44 HL070576. The views expressed in this chapter are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs.
Copyright information
© 2014 Springer Science+Business Media, LLC
About this chapter
Brunner, G., Yang, E.Y., Morrisett, J.D., Garami, Z., Nambi, V. (2014). Image Fusion Technology. In: Saba, L., Sanches, J., Pedro, L., Suri, J. (eds) Multi-Modality Atherosclerosis Imaging and Diagnosis. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-7425-8_28
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-7424-1
Online ISBN: 978-1-4614-7425-8