Abstract
The advent of technology has taken us so far that we can hardly imagine any field functioning without it, whether business, education, media and communication, or aerospace. It is no surprise that health care has become one of the most advanced arenas for technology and its applications. We are now in an era where medical professionals use applications to speed up diagnosis, treatment, surgical procedures, and recovery in order to provide better services to the public. One of the most interesting aspects is medical image processing, which has come a long way from requiring constant human intervention to today's applications that accurately predict the cause and location of tumors or abnormalities from ultrasound, MRI, PET, CT, and X-ray data. There is a buzz in the medical arena that technologies will replace some health-care jobs in the near future. Until then, let us start by understanding the current state of affairs between technology in the biomedical image processing field and its applications.
1 Medical Imaging
1.1 Introduction
Medical imaging acts as a portal to view the internal parts of the body and is used to assist medical diagnosis and analysis. It creates a visual representation that helps medical professionals understand the current functioning, or a previous ailment, of organs, tissues, or any interior section of the body. Biomedical images present a wide range of patterns and structures, such as bones and muscles, from which diseases can be diagnosed.
1.2 History
Medical imaging became prevalent after the discovery of the X-ray in 1895 by Wilhelm Conrad Roentgen, a professor at the University of Wuerzburg in Germany. While working in his lab, he discovered rays that penetrated soft tissue but were blocked by denser material, thereby revealing the internal parts of the human body. He received the Nobel Prize in 1901 (Bradley 2008). The discovery of the X-ray opened up new avenues of research into the structure of the human body, particularly in surgery and medical diagnosis. Within a few months of the discovery, experts in Europe and the United States had started to use radiographs to guide medical professionals, especially in wartime to locate bullets in wounded soldiers (Fig. 20.1).
X-ray tomography started becoming popular around 1900. Slices of anatomical information, called "tomograms," were used to view internal body structures. Other tomographic techniques in frequent use are computerized axial tomography (CAT), or computed tomography (CT), scanning, which arrived around the 1970s, and magnetic resonance imaging (MRI), which does not use X-rays (Bradley 2008; Toennies 2012) (Fig. 20.2).
In the 1950s, nuclear medicine gained prominence. Today positron emission tomography, or "PET," scanning is a well-known nuclear medicine technique; its tracers emit positrons rather than gamma rays directly (Bradley 2008). Depending on the focus of the scan, patients are given injections of contrast material (agents) to enhance the visibility of certain tissues or blood vessels.
The benefits of these techniques are (Toennies 2012; http://www.imaginis.com/faq/history-of-medical-diagnosis-and-diagnostic-imaging):
-
A smaller quantity of X-ray radiation is required.
-
Improved quality of images.
-
Easy to store the medical data as images for analysis.
-
Improved diagnosis through proper analysis techniques.
1.3 Types of Medical Imaging Modalities
Technology has made it easy to obtain medical images without invasive procedures. There is a huge range of imaging modalities that can be used to classify the types of medical images and the techniques considered in biomedical image analysis.
-
(a)
Radiography
This imaging modality uses X-rays, gamma rays, and other radiation to view internal structures (Carroll 2014). X-ray beams are projected onto the object, which absorbs part of the radiation, revealing its structure, composition, and density.
The study of anatomy using radiographic information is known commonly as radiographic anatomy (James and Dasarathy 2014).
Sub-classification of radiography:
-
Projectional radiographs (commonly known as X-ray) produce 2D images. They are normally used for detecting diseases in the lungs, stomach, intestines, etc. (Radiology – acute indications 2017; Radiographic Standard Operating Protocols (PDF) 2015) (Fig. 20.3).
-
Fluoroscopy – X-rays used at a low dose for image-guided surgery, providing a live visual display of the internal working of organs (Wang and Blackburn 2000; Last Image Hold Feature 2010) (Fig. 20.4).
-
(b)
Computed Tomography (CT or CAT Scan)
A computed tomography (CT) scan, or computerized axial tomography (CAT) scan, combines many X-ray projections, which are computer processed. Projections taken from different angles and sections are reconstructed by the computer into the final scanned images. These images are obtained without having to cut open the patient’s body (CT Scan 2018; Shrimpton et al. 2011) (Fig. 20.5).
-
(c)
Magnetic Resonance Imaging
An MRI scan is a technique, applied by radiologists, that uses the magnetism of a huge magnetic coil drum together with reconstruction algorithms to image body structures (Bradley 2008). The MRI scanner is a circular drum that contains a magnet and a sliding table, as shown in Fig. 20.6. The patient is placed on a movable bed that is inserted into the magnet. The magnet creates a strong magnetic field, and the resulting signals are reconstructed by Fourier transformation, sometimes with compressed sensing concepts, to produce the final image that we all know as an MRI (Zhu 2003).
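The Fourier reconstruction step can be illustrated with a minimal NumPy sketch (an illustrative toy, not a clinical pipeline): a synthetic phantom stands in for the patient, fully sampled k-space is simulated with a forward FFT, and the inverse Fourier transform recovers the image. The names `make_phantom` and `reconstruct` are hypothetical.

```python
import numpy as np

def make_phantom(n=64):
    """Synthetic phantom: a bright disc on a dark background."""
    y, x = np.mgrid[0:n, 0:n]
    return ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2).astype(float)

def reconstruct(kspace):
    """Inverse 2D FFT of fully sampled k-space recovers the image."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

phantom = make_phantom()
# simulate acquisition: the scanner effectively samples spatial frequencies
kspace = np.fft.fftshift(np.fft.fft2(phantom))
image = reconstruct(kspace)
print(np.allclose(image, phantom))  # full sampling: exact up to rounding
```

With full sampling the reconstruction is exact; compressed sensing techniques mentioned above address the case where only part of k-space is acquired.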
-
(d)
Nuclear Molecular Imaging
Nuclear medicine, when used in diagnostic imaging, is commonly referred to as molecular imaging; it uses the properties of particles emitted from radioactive material to diagnose or treat various diseases. Figure 20.7 shows a whole-body nuclear medicine image, which is used to diagnose bone-related conditions such as fractures, infections, and other abnormalities (https://en.wikipedia.org/wiki/Nuclear_medicine).
They are sub-classified as:
-
SPECT – gamma-camera images taken from different projection angles are combined to reconstruct a 3D image (https://en.wikipedia.org/wiki/Nuclear_medicine).
-
PET – a positron-emitting radionuclide is introduced into the body; the emitted positrons interact with electrons in the tissue, producing gamma rays that are detected to form the PET image (Bailey et al. 2005). It is commonly used in neurology, cardiology, musculoskeletal imaging, etc. (Carlson 2012) (Fig. 20.8).
-
(e)
Ultrasound
Ultrasound is a sound-wave-based diagnostic imaging technique. The sound waves have very high frequencies, well above the range of human hearing (Novelline 1997). A famous application is obstetric scanning, that is, ultrasound scanning of pregnant women to view the development of the fetus. It is also used to view internal body structures such as organs, bones, and muscles (DistanceDoc and MedRecorder 2011; Ultrasound Imaging of the Pelvis 2008) (Fig. 20.9).
-
(f)
Functional Near-Infrared Spectroscopy
Functional near-infrared spectroscopy is a non-invasive optical imaging technique. It uses low-level near-infrared light to observe the internal working of the brain, inferring activity from changes in blood flow (http://researchimaging.pitt.edu/content/near-infrared-spectroscopy-nirs-brain-imaging-laboratory; Coyle et al. 2007) (Fig. 20.10).
-
(g)
Magnetic Particle Imaging
Magnetic particle imaging is an imaging technique used for tracking superparamagnetic iron oxide nanoparticles. It is highly sensitive, and the depth of a structure can be analyzed (https://en.wikipedia.org/wiki/Magnetic_particle_imaging). This technique has been used for research in areas such as cardiovascular performance, neuroperfusion, and cell movement tracking (Weizenecker et al. 2009; Yu et al. 2017) (Fig. 20.11).
1.4 Comparison of Medical Imaging
A few imaging techniques have been in use for a long time, and their benefits persist, which makes them the best known across the medical field. Table 20.1 compares some of these techniques with regard to the common properties that make the resulting images useful.
2 Medical Image Analysis
2.1 Introduction
Analysis of medical images is an integral part of diagnosis, treatment, procedures, and so on. These analyses are carried out by medical professionals to help predict, or take action with regard to, the patient’s health. Since these images are obtained non-invasively and can be stored, they form an important part of the electronic health record (EHR) for future reference (Toennies 2012).
Reasons for carrying out medical image analysis are as follows (https://www.doc.ic.ac.uk/~jce317/history-medical-imaging.html):
-
Clinical study – to detect patterns or structure in images that could describe or prove hypotheses of the study. These are used for scientific analyses and future case study analyses for educational institutions for training budding medical professionals.
-
Diagnosis – to diagnose chronic illnesses or diseases by detecting tumors or other patterns. Doctors or experts in their field identify the medical conditions of patients.
-
Treatment planning – after diagnosis comes the treatment of the illness. Decisions must be made about the course of action for the disease, which could involve drugs or medical procedures. Planning the treatment requires serious research into previous health conditions or allergies, which can be obtained from the patient’s medical image history.
-
Computer-aided surgeries – advancement in technology has given doctors automated assistance in various areas of health care, from diagnosis and treatment to surgery and post-surgery care. Such systems are used as guidance tools for surgery. Doctors have even started performing remote operations, which could save millions of lives (Toennies 2012; http://www.imaginis.com/faq/history-of-medical-diagnosis-and-diagnostic-imaging).
Modern radiologists have various tasks to perform during the diagnosis process. Medical image data is not only about reading the image; other aspects contribute toward the analysis, as shown in Fig. 20.12.
Image processing has a wide range of applications, especially in the medical area. Visual images have contributed to various medical analyses (Goel et al. 2016; Rao and Rao n.d.). A few of the applications are mentioned below:
-
Tumor detection
-
Fracture detection
-
Structural disabilities
-
Cancer detection
-
Heart defects and diseases
-
Tuberculosis
-
Birth defects
-
Neurological functioning
2.2 Image Pre-processing
The medical field majorly deals with data problems like understanding, acquiring, accessing, denoising, cleaning, and analysis of data as shown in Fig. 20.13 (Image from J. Galeotti, class material from “Methods in Medical Image Analysis”, Carnegie Mellon University 2018).
Image data experts have been trying to extract information based on content and textual description, but image feature extraction has been the key point (Scholl et al. 2011; Deserno et al. 2009). Feature analyses range from the entire image to specific localized section to some structural-based approaches (Scholl et al. 2011; Long et al. 2009; Tagare et al. 1997).
Medical image pre-processing has a series of steps that need to be taken into consideration as shown in Fig. 20.14 (Goel et al. 2016).
2.3 Challenges in Medical Image Analysis
There are a number of specific challenges in medical image processing (Thirumaran and Shylaja 2014) (Fig. 20.15). They are:
-
Pre-processing of image using image enhancement and restoration for best quality of image data
-
Automated and accurate image segmentation of features of interest (region of interest)
-
Automated and accurate image registration and fusion of multiple images
-
Classification of image features or properties
-
Simulation software that can be used to rehearse and plan procedures, evaluate access strategies, and plan treatments.
-
The latest is visualization of the environment in which image-guided procedures take place, or 3D reconstruction of the working human body.
Medical image analysis has several key tasks, which will be explained in detail in subsequent sections of this document:
-
Classification
-
Segmentation
-
Registration
-
Deep learning (DL)-based analysis
2.4 Conclusion
Images play an important role in health care. Technological advancement in medical imaging has helped doctors gain insight into the human body without having to cut it open (Goel et al. 2016; Binh 2010) and to achieve the best possible diagnosis, treatment, and surgical outcomes via analysis of images obtained after noise removal and at high resolution (Tsui et al. 2012).
3 Medical Image Classification
3.1 Introduction
Recently, the combination of machine learning (ML) and medicine has developed rapidly and become a popular and active research topic. Medical image classification thus plays a significant role in computer-aided diagnosis (Lai and Deng 2018). The main concern for researchers in this area is how to extract features from medical images and classify them with a suitable model so as to accurately identify the parts of a patient’s body affected by a specific disease (Aberle et al. 2010).
The main purpose of image classification in the medical field is to specify the parts of the human body affected by disease, rather than merely to achieve a high accuracy score; in this chapter we therefore discuss the various medical image classification techniques in detail (Miranda et al. 2016).
3.2 Overview of Image Classification Techniques
The image classification process is divided into three stages, namely, pre-processing, feature extraction and feature selection, and classification. After the pre-processing stage, feature extraction methods analyze the images to extract the features most appropriate for the classification process; feature selection methods then select the most correlated features to reduce the dimensionality of the data, which can be effective in improving the performance of the classification methods (Miranda et al. 2016; Lashari and Ibrahim 2013) (Fig. 20.16).
Some of the main feature selection techniques are:
-
Genetic algorithms-based optimization
Genetic algorithms are powerful methods based on natural selection (Kaushik et al. 2013). The technique has some disadvantages, which can cause deflection in medical image segmentation (Cao et al. 2017).
-
Linear discriminant analysis
It is a dimensionality reduction technique whose goal is to preserve most of the information in the features, without eliminating any data, in order to separate the different classes as much as possible (Dhawan 2008; Sharma 2015).
-
Principal component analysis (PCA)
Another dimensionality reduction method is PCA, a transformation method that reduces a number of correlated variables to a smaller number of uncorrelated variables in a new subspace (Ashour and Salem 2015).
PCA is therefore an applicable technique in medical image processing that can be used in feature extraction, feature selection, image segmentation, and image registration (Ashour and Salem 2015). However, PCA is not efficient at selecting features if the input images contain noise (Dhawan 2008) (Fig. 20.17).
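The PCA step described above can be sketched in a few lines of NumPy, computing the principal axes via eigendecomposition of the covariance matrix. The feature matrix `X` (one row per image, one column per feature) is a hypothetical stand-in, not a real medical dataset.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)          # feature covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]                  # top principal axes
    return X_centered @ components                  # reduced features

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                      # 100 "images", 10 features
X[:, 0] *= 10                                       # one dominant direction
Z = pca(X, n_components=2)
print(Z.shape)                                      # (100, 2)
```

The first projected column captures the most variance, which is why PCA is useful before classification when many features are correlated.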
After the feature extraction and feature selection steps are complete, image classification begins.
This part provides a brief explanation of some of the classification techniques which are more applied to classify and detect abnormalities in medical images (Lashari and Ibrahim 2013).
-
(a)
Neural Network Classification
A neural network (NN) is a computational model that has an important impact on classification using supervised and unsupervised learning techniques. Neural network models have several advantages. First, they are non-linear models and are therefore flexible enough to model complicated real-world applications. Second, they are universal function approximators, able to approximate any function to the desired accuracy. Third, a neural network adapts itself to the input data without requiring hand-crafted characteristics; in other words, it is a self-adaptive model (Lashari and Ibrahim 2013; https://pdfs.semanticscholar.org/1ba9/d67c80b6a762c11b9d519367e9e13a9c5c4f.pdf).
-
(b)
Support Vector Machine (SVM)
SVM is a machine learning model that uses different algorithms to analyze data with the aim of reaching an efficient classification outcome (Sharma 2015). The model is a binary classifier that finds the separating line with maximum margin between two classes. However, SVM has disadvantages: it requires longer training times and does not handle discrete features well (Lashari and Ibrahim 2013; Ehteshami 2017).
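As a rough illustration of the maximum-margin idea (not of any particular medical pipeline or library), the following sketch trains a linear SVM by subgradient descent on the regularized hinge loss, using synthetic 2-D data; the parameters `lam` and `lr` are illustrative choices.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via hinge-loss subgradient descent; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                           # margin violators
        if mask.any():
            grad_w = lam * w - (y[mask][:, None] * X[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())                            # separable toy data
```

On this well-separated toy data the classifier reaches full training accuracy; real medical features are rarely this clean, which is where kernels and soft margins matter.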
-
(c)
Statistical Classification Methods
These models are based on supervised and unsupervised approaches. The supervised learning method can be accomplished using Bayesian decision theory, since it is based on statistical classification and probabilistic methods; for instance, nearest-neighbor and Bayesian models are among the most practical classifiers. In addition, supervised methods require labeled data in addition to training and test data (Table 20.2).
The unsupervised learning technique classifies the data by partitioning the feature space, as in K-means (Miranda et al. 2016; https://pdfs.semanticscholar.org/1ba9/d67c80b6a762c11b9d519367e9e13a9c5c4f.pdf; Dhawan and Dai 2008).
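The K-means partitioning mentioned above can be sketched as Lloyd's algorithm on synthetic 1-D intensity values; the deterministic initialization over the data range is a simplification for this sketch, not a general recommendation.

```python
import numpy as np

def kmeans_1d(X, k, iters=20):
    """Minimal Lloyd's algorithm for 1-D data."""
    # deterministic initialization for the sketch: centers spread over range
    centers = np.linspace(X.min(), X.max(), k)
    for _ in range(iters):
        # assignment step: nearest center for each sample
        labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
        # update step: move each center to the mean of its cluster
        centers = np.array([X[labels == j].mean() for j in range(k)])
    return labels, centers

rng = np.random.default_rng(0)
# two synthetic "intensity" populations, e.g. dark vs bright pixels
X = np.concatenate([rng.normal(10, 1, 20), rng.normal(200, 1, 20)])
labels, centers = kmeans_1d(X, k=2)
print(np.sort(centers))                  # centers settle near 10 and 200
```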
3.3 Medical Image Classification Challenges
-
The first challenge is the variety of features in medical images, which makes building a training dataset difficult; as a result, classification performance decreases.
-
The second challenge concerns the size of medical image datasets. Since these datasets are very small, extracting and selecting enough valid information from them is not easy (Lai and Deng 2018).
3.4 Conclusion
The study of medical image classification can benefit both computer-aided diagnosis and teaching in the medical field. Recent research in this area can help analyze and diagnose diseases rapidly. This chapter has provided an overview of image classification methods and algorithms that use medical images of the human body to distinguish images showing disease from those that do not (Miranda et al. 2016).
4 Medical Image Segmentation
4.1 Introduction
Today, with the growing usage of computed tomography (CT), magnetic resonance (MR), X-ray, digital mammography, and other imaging modalities, manual analysis of these images is no longer feasible; therefore digital image processing and computer algorithms, such as image segmentation methods, play an important role in diagnosing diseases and advancing biomedical research, especially in medical imaging applications (Petitjean and Dacher 2011; Pham et al. 2000). Image segmentation is a process that divides an image into homogeneous sub-regions sharing characteristics such as color, depth, and intensity (Withey and Koles 2007).
For instance, MR images, which provide high-resolution three-dimensional (3D) data, are among the most common applications of image segmentation techniques. Image segmentation can analyze both 2D and 3D images; the main difference is that it processes pixels in 2D and voxels in 3D (Despotovi 2015) (Fig. 20.18).
In the following section, some important methods of medical image segmentation are reviewed in order to introduce the segmentation process and its importance in analyzing medical images accurately for diagnosing disease.
4.2 Review of Medical Image Segmentation Techniques
As mentioned before, segmentation is a technique that provides wide diagnostic insight in the medical field. Applying it to medical images can automate the detection of image boundaries, cell counting, the measurement of human organs, and many other tasks (https://www5.cs.fau.de/research/groups/medical-image-segmentation/).
In this part, some of the medical image segmentation methods are provided:
-
(a)
Intensity-based segmentation method
In this method, pixels in 2D images and voxels in 3D images are classified based on their intensity. For instance, brain MR images contain three tissue types, namely cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM), which can be distinguished by their intensities using this method (Despotovi 2015) (Fig. 20.19).
-
(b)
Thresholding segmentation method
This technique thresholds the intensity histogram of grayscale images and is one of the oldest and simplest methods in medical image segmentation; it can be categorized as an intensity-based method. In other words, when thresholding segmentation is applied to medical images, the intensity histogram is used to distinguish intensity values and separate the different classes. The result of applying the thresholding method to an abdomen CT image is illustrated in Fig. 20.20 (Despotovi 2015; Aggarwal 2010).
Thresholding method includes multiple groups, such as (Despotovi 2015):
-
Local threshold which is dependent on the position in the image
-
Adaptive thresholding
-
Global or single thresholding
-
Multi-thresholding
Thresholding is an efficient and fast technique; however, it has some limitations. First, choosing an appropriate threshold value for different medical images is difficult, and second, in low-contrast images it processes distributed classes of pixels rather than connected areas, so a connectivity algorithm must be applied before the thresholding process (Despotovi 2015; Sahoo et al. 1988).
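The difficulty of choosing a threshold value can be eased with simple automatic schemes. Below is a minimal sketch of global thresholding on a synthetic image (a bright square on a dark background, standing in for a scan), using the iterative midpoint scheme often attributed to Ridler and Calvard: the threshold is repeatedly reset to the midpoint of the two class means.

```python
import numpy as np

def iterative_threshold(img, tol=0.5, max_iter=100):
    """Iteratively set the threshold to the midpoint of the class means."""
    t = img.mean()                              # initial guess
    for _ in range(max_iter):
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t_new

rng = np.random.default_rng(0)
# synthetic "scan": dark background (~50) with a bright region (~200)
img = rng.normal(50, 10, (64, 64))
img[20:40, 20:40] = rng.normal(200, 10, (20, 20))
t = iterative_threshold(img)
mask = img > t                                  # binary segmentation
print(round(t, 1), mask[25, 25], mask[0, 0])
```

The threshold settles between the two intensity populations, so the bright region is segmented from the background.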
-
(c)
Region growing segmentation method
The major purpose of this method is to form a region for segmentation based on the homogeneity of pixels or voxels (Withey and Koles 2007); it too can be categorized as intensity-based segmentation. Region growing first requires an operator to manually select a seed point; then, after testing the similarity of neighboring pixels or voxels, the region continues to grow until heterogeneous pixels are reached (Withey and Koles 2007; Despotovi 2015).
Region growing achieves impressive results in segmenting medical images; in brain MR images, for example, it can segment brain tumors and brain vessels successfully (Despotovi 2015; Passat et al. 2005; Haralick and Shapiro 1985). However, the technique has some disadvantages. First, it is sensitive to noise, which can affect the segmented region and disconnect it from related regions. Second, initializing the seed points by hand is difficult and time-consuming, because the operator must select a seed point for each distinct region (Kaur and Singh 2011) (Fig. 20.21).
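The seed-based procedure described above can be sketched as a breadth-first flood fill in which a neighbor joins the region when its intensity lies within a tolerance of the seed value; the tolerance `tol` and 4-connectivity are illustrative choices, and real implementations use richer homogeneity tests.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed` over pixels within `tol` of the seed value."""
    region = np.zeros(img.shape, dtype=bool)
    seed_val = img[seed]
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                    and not region[nr, nc]
                    and abs(img[nr, nc] - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

img = np.full((50, 50), 40.0)
img[10:30, 10:30] = 180.0                 # bright synthetic "lesion"
mask = region_grow(img, seed=(15, 15))    # seed chosen inside the lesion
print(mask.sum())                         # covers the 20x20 bright block
```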
-
(d)
Edge detection segmentation method
The edge detection algorithm is one of the common methods used in medical image segmentation. The basic idea of this technique is to detect the boundaries, or any discontinuity, in the image (Upadhyay and Kashyap 2016). This method comprises different algorithms (Aggarwal 2010):
-
Hough transform based
-
Edge relaxation
-
Border detection method
This method has some limitations: it is sensitive to noise, and lines that are not true edges may appear in the outcome, which affects the final result; in practice, segmentation is often completed by combining edge detection with region growing techniques (Aggarwal 2010) (Fig. 20.22).
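The boundary-detection idea can be sketched with the classical Sobel gradient operators: edges appear where the local intensity gradient magnitude is large. The synthetic step-edge image and the gradient threshold below are illustrative assumptions.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2D correlation with edge padding (fine for a 3x3 kernel)."""
    k = kernel.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + 2 * k + 1, c:c + 2 * k + 1] * kernel)
    return out

def sobel_edges(img, thresh=100.0):
    """Mark pixels whose Sobel gradient magnitude exceeds `thresh`."""
    gx = convolve2d(img, np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]))
    gy = convolve2d(img, np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]))
    return np.hypot(gx, gy) > thresh

img = np.zeros((32, 32))
img[:, 16:] = 255.0                       # vertical step edge at column 16
edges = sobel_edges(img)
print(edges[:, 15:17].all(), edges[:, :14].any())
```

The detector fires exactly along the step and nowhere in the flat regions; on noisy medical images, smoothing is usually applied first for the reasons noted above.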
-
(e)
Classification segmentation method
Classifier methods, known as statistical pattern recognition, are segmentation techniques that partition an image into regions of feature space by labeling them.
This method is divided into supervised and unsupervised learning. Supervised classification is time-consuming because images must first be segmented manually to provide training examples before other images can be segmented automatically; supervised methods therefore need training, test, and labeled datasets. Moreover, another disadvantage of supervised classification is that using the same training dataset for widely varying images reduces the quality of the result (Pham et al. 2000; Withey and Koles 2007; Anbeek et al. 2005). Examples of supervised classification methods are K-nearest neighbor and Bayesian classifiers (Withey and Koles 2007).
Unsupervised classification is a type of statistical clustering, often using the expectation-maximization algorithm (Pham et al. 2000).
4.3 Conclusion
In this chapter, we briefly discussed some segmentation techniques, along with their limitations and disadvantages for medical images. In conclusion, image segmentation is one of the most challenging areas of image processing; it can be highly effective for computer-aided diagnosis and for identifying diseases in medical images, especially 3D images. It is expected that these algorithms will become ever more practical in the biomedical field, achieving faster and higher-quality diagnosis of diseases from 3D medical images by reconstructing and visualizing the anatomical structures (Despotovi 2015).
5 Medical Image Registration
5.1 Introduction
In this section, we discuss the domain of image registration. We provide the elements of this field, current developments in this direction, the applications, and future scope of this topic. Medical image registration is one of the key elements in analysis of medical data. It is essential for mapping medical images and data to the correct physio-structural components of the body, for relating changes in captured data across spatio-temporal plane, and for generating an atlas of human structural biology. Registration consists of matching points of interest between source and destination images, transformation of source data to the target data, and optimization of the result to best suit the application. We next detail each of the abovementioned three aspects of the registration process and review the related literature.
5.2 Transformations (Deformations)
As previously mentioned, a typical registration process consists of three key steps: correspondence, transformation, and optimization. Although correspondence can be considered the first step of the process, it is often dictated by the transformation process. In fact, the registration process can be summed up in the following formula for the optimal transformation:
ϕ* = arg min_ϕ [ Ψ(D, S ∘ ϕ) + R(ϕ) ]
Here, Ψ denotes the quality of the transformation process. It is usually a quantitative measure of the transformation, but it can also be a qualitative measure when manual validation of the transformation process is performed; this is the case when medical professionals exert their own choice over the registration result to enhance the automated transformation. Automated transformations mostly rely on a quantitative transformation measure. D and S denote the destination and the source image, respectively, the aim being to transform the source image into the destination. The transformation is effected by ϕ, which acts on the domain of pixels, W, of the source image. A reward function, R, also known as a regularization term, is used to balance the transformation process. The transformation process, on which we focus next, is often referred to as the deformation process. Transformation processes have distinctive properties: registration problems are mostly ill-posed according to Hadamard's definition (Hadamard 1923). From the perspective of real-time processing, the number of transformation parameters should be kept as small as possible. Likewise, the constraint should ensure that as little displacement as possible is incurred in the transformation process.
We now discuss the variety of transformations available and presented in recent literature.
The transformation process can be categorized into physical model-based approaches, geometric methods, and knowledge-based paradigms. Note that these categories are not independent but intersect significantly; the aim of the categorization is simply to impose a logical structure on the transformation field.
Physical model-based approaches use models such as the Navier-Cauchy elastic model, applied in a planar or hierarchical fashion (Bajcsy and Kovačič 1989; Gee and Bajcsy 1999). The physical models can be linear (Leow et al. 2005; He and Christensen 2003) or non-linear (Rabbitt et al. 1995; Pennec et al. 2005) in nature. They include elastic as well as diffusion models.
The next approach is geometric-based transformation. It includes radial basis, elastic body, and piecewise affine models. Radial basis functions use kernels to find the transformation from the source to the target image: the distance between source and target is optimally mapped via the kernel, thereby generating the transformation parameters. A typical kernelized formulation is expressed as:
Γ(x) = Σ_i ϑ_i F(x, p_i)
where Γ() denotes the transformation operation, ϑi is the weight of the ith parameter of the kernel transformation, pi is the ith parameter of the kernel, and F() is the kernel used. Kernels such as the radial basis function and the thin-plate spline have been used frequently in the literature (Zagorchev and Goshtasby 2006; Yang et al. 2011; Bookstein 1989; Bookstein 1991). Recent advancements in this field include tensor-based deformation, wherein each dimension is deformed using tensor transformations (Declerck et al. 1997; Rueckert et al. 1999). Another variation in this direction, often pursued in the literature, is the use of piecewise affine models (Hellier et al. 2001).
The final approach that is presented in literature is the knowledge-based approach. In many situations, further information is available on the registration process. This can be in terms of the statistical variability of the transformations (Cootes et al. 1995), availability of biophysical models (like tumor growth models) (Clatz et al. 2005; Hogea et al. 2007), and biomechanical models of human organs (Bharatha et al. 2001; Hensel et al. 2007).
5.3 Matching (Correspondence)
The matching process consists of two aspects, namely location matching and interpolation. Interpolation is typically used within each iteration of the registration process, wherein we find the most plausible value of the transformation based on a neighborhood kernel. The key aspect of matching, however, is location matching: matching locations between the source and destination (or target) images is what we primarily consider as matching. If we register a functional image to a structural image, for example matching a PET image or fMRI data (functional MR image) to a CT image, we need to map the functional image to the structural image. In such a case, the functional image may first be mapped to a template image, which provides structural bearings; it is then related to the structural image.
Registration of two functional images along similar lines entails two such intermediate mappings, one for each functional image. Thus, for matching, either intrinsic or extrinsic markers are used. Extrinsic marker methods rely on human intervention to relate the markers, while intrinsic ones use various algorithmic approaches, including feature and intensity measures. We now discuss some of the key works related to image location matching.
The most commonly used matching methods are geometric in nature. Geometric methods relate two images by minimizing a geometric criterion at landmark locations in the image. Geometric matching involves finding points of interest, creating correspondence between suitable points, creating transformations from the suitable points instead of using correspondence, and finally the joint use of correspondence and transformation. Key-point detection procedures include the famous Harris point detectors (Triggs 2004; Mikolajczyk and Schmid 2004), invariant feature detectors (Mikolajczyk et al. 2005), multiscale approaches (Kadir and Brady 2001), and histogram-oriented approaches such as SURF and SIFT (Mikolajczyk and Schmid 2005; Morel and Yu 2009). Using the detected feature points, correspondence-based matching can be applied; such methods use descriptor distances as well as geometric constraints (Cheung and Hamarneh 2009; Ni et al. 2008; Torresani et al. 2008). The Gaussian mixture model (Jian and Vemuri 2011) provides an approach to transformation matching. The above methods are cleverly woven together by the famous iterative closest point algorithm (Besl and McKay 1992).
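The iterative closest point idea can be sketched for 2-D point sets by alternating nearest-neighbor correspondence with a closed-form least-squares rigid fit (the Kabsch/Procrustes solution via SVD). The grid landmarks and the small perturbation below are chosen so this toy example converges; real anatomical point sets are noisier.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def icp(src, dst, iters=10):
    cur = src.copy()
    for _ in range(iters):
        # correspondence: nearest destination point for each source point
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        # transformation: best rigid fit to the current correspondences
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# toy data: a 5x5 grid of landmarks, slightly rotated and shifted
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
dst = np.stack([xs.ravel(), ys.ravel()], axis=1)
theta = 0.02
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = dst @ R_true.T + np.array([0.05, 0.02])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())                # tiny residual after ICP
```

Because the perturbation is small relative to the landmark spacing, every nearest-neighbor match is correct and the rigid fit recovers the true transform in one iteration.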
The geometric techniques are extended to the spatio-temporal setting to incorporate matching within intra-modal, multimodal, and temporal data. In this situation, care is taken to constrain the matching processes above to suit the variability of the data. The key criterion in such situations is the cross-correlation coefficient, using either intensity or attribute features (Kim and Fessler 2004). Information-theoretic approaches are also used in the literature, with mutual information being the primary measure (Viola and Wells III 1997).
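To make these two similarity measures concrete, the following sketch computes a normalized cross-correlation coefficient and a histogram-based mutual information estimate between two equally shaped images. This is a simplified stand-in for the estimators used in the cited works, not their implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped images
    (+1 for identical images, -1 for an inverted copy)."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) between two images:
    I(A;B) = sum p(a,b) * log(p(a,b) / (p(a) p(b)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                          # skip empty histogram cells
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```

Mutual information is the measure of choice for multimodal registration precisely because, unlike cross-correlation, it does not assume a linear relationship between the intensities of the two images.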
5.4 Optimization
The optimization framework is the core of the registration process. Through it, we choose the transformation parameters that minimize a cost function, typically a dissimilarity measure between the source and target images. An implicit factor that plays a key role in the optimization process is computational efficiency, together with the rate of convergence. As more and more registration algorithms are required to run in near real time, this implicit constraint becomes highly significant. Based on the nature of the variables, the optimization process can be broken into continuous parameter optimization, discrete parameter optimization, or hybrid optimization. We discuss these algorithms next.
As mentioned previously, the optimization process may be separated into continuous, discrete, and hybrid forms. The continuous approach assumes the optimization parameters to be real-valued, which allows the objective function to be differentiable. Such methods therefore iteratively update the parameters toward the optimum in incremental steps. The typical update rule is provided in the following equation:

θ_(t+1) = θ_t + α_t f(θ_t)
The above equation shows the update of the continuous parameter using the step size α and the update function f, which is often the derivative function. This framework has evolved into multiple forms. They include the gradient descent approach (Klein et al. 2007; Moré and Thuente 1994), which is the closest to the original formulation. Conjugate gradient methods, on the other hand, have better convergence rates than gradient descent; they exploit knowledge of the previous gradient direction to generate a new direction of descent based jointly on the previous direction and the current derivative (Fletcher and Reeves 1964; Polyak 1969; Hestenes and Stiefel 1952; Hager and Zhang 2006). Other similar approaches include Powell's conjugate direction method (Maes et al. 1997), Gauss-Newton (Ashburner and Friston 2011; Haber and Modersitzki 2007; Modersitzki 2008), Levenberg-Marquardt (L-M) (Kybic and Unser 2003; Wu et al. 2000; Gefen et al. 2003), and stochastic gradient descent.
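The continuous update rule can be sketched on a toy 1-D registration problem: estimating the translation between two signals by descending a sum-of-squared-differences cost, with the derivative approximated by central differences. The function names and the toy setting are ours, for illustration only:

```python
import numpy as np

def ssd(theta, src, x, target):
    """Sum-of-squared-differences cost of translating `src` by theta."""
    shifted = np.interp(x, x + theta, src)   # src evaluated at x - theta
    return float(((shifted - target) ** 2).sum())

def gradient_descent_shift(src, target, x, theta0=0.0, alpha=0.01, iters=200):
    """theta <- theta - alpha * dC/dtheta, gradient by central difference."""
    theta, h = theta0, 1e-4
    for _ in range(iters):
        grad = (ssd(theta + h, src, x, target)
                - ssd(theta - h, src, x, target)) / (2 * h)
        theta -= alpha * grad
    return theta
```

Given a Gaussian bump and a copy of it translated by 0.5, the loop recovers the shift; the step size α trades convergence speed against stability, exactly the implicit constraint discussed above.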
Gauss-Newton (G-N) works by optimizing the differential of the sum-of-squared-error term. This differential is referred to in the literature as the Jacobian, and the search direction is denoted by the following equation:

Δθ = −[J(θ)^T J(θ)]^(−1) ∇(θ)
where J(θ) denotes the Jacobian operation, ∇(θ) denotes the delta derivative, and T is the transpose operator. The L-M technique modifies the G-N method by adding a weighting term to the Jacobian product above. The modified formulation is shown by the following equation:

Δθ = −[J(θ)^T J(θ) + λ I]^(−1) ∇(θ)

where the damping weight λ interpolates between the G-N direction (λ → 0) and plain gradient descent (large λ).
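The damped Gauss-Newton step described above can be sketched as follows. This toy Levenberg-Marquardt loop (with a simplified λI damping term and a crude accept/reject schedule of our own choosing, for illustration) fits a small nonlinear least-squares problem:

```python
import numpy as np

def lm_fit(f, jac, theta0, y, lam=1e-3, iters=50):
    """Levenberg-Marquardt sketch: damped Gauss-Newton steps
    step = -(J^T J + lam*I)^-1 J^T r, with lam adapted per iteration.
    Setting lam = 0 recovers the plain Gauss-Newton direction."""
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = f(theta) - y                      # residual vector
        J = jac(theta)                        # Jacobian of the residuals
        A = J.T @ J + lam * np.eye(len(theta))
        step = -np.linalg.solve(A, J.T @ r)
        if ((f(theta + step) - y) ** 2).sum() < (r ** 2).sum():
            theta, lam = theta + step, lam * 0.5   # accept: trust G-N more
        else:
            lam *= 2.0                              # reject: damp harder
    return theta
```

As a usage example, fitting y = a·exp(−b·t) recovers (a, b) from a poor initial guess; the residual Jacobian has columns exp(−b·t) and −a·t·exp(−b·t).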
All the previously described approaches rely on being able to compute the gradient, which can be very demanding due to the vastness of the data source. In such cases, the gradient is approximated by a stochastic version, as in stochastic gradient descent.
The second set of methods assumes the optimal parameters belong to a set of discrete values. One such approach uses Markov random fields, which are probabilistic graphical models: graphs G = {v, e} consisting of vertices and edges. The max-flow min-cut principle (Ford and Fulkerson 1956) is the key formulation and the fundamental approach for graph segmentation. The alpha-expansion approach of Boykov (Greig et al. 1989; Boykov et al. 2001) performs extensive label checks for registration. Belief propagation (Frey and MacKay 1997; Murphy et al. 1999) is another technique, wherein local messages are exchanged between nodes and then backtracked to recover the best solution. Linear programming (Komodakis and Tziritas 2007; Komodakis et al. 2008) has also been used in the literature to solve discrete optimization problems.
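The max-flow min-cut principle itself can be illustrated with a compact Edmonds-Karp implementation (Ford-Fulkerson with breadth-first augmenting paths). Production graph-cut solvers use far faster specialized algorithms; this sketch only shows the principle that graph-based labeling methods build on:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph.
    By the max-flow min-cut theorem, the returned value equals the
    minimum cut separating s from t: the quantity that graph-cut
    labeling methods minimize."""
    # residual capacities, including zero-capacity reverse edges
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:                   # no augmenting path left
            return flow
        # find the bottleneck along the path, then push that much flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck
```

On a small source/sink graph, the returned flow equals the capacity of the cheapest cut separating s from t, which in segmentation corresponds to the optimal binary labeling.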
The last set of approaches uses hybrid or miscellaneous methods such as greedy learning (Liu et al. 2004; Xue et al. 2004), neural algorithms, and evolutionary methods (Hansen and Ostermeier 2001). We discuss DL methods in a separate section. These hybrid methods are heuristic or meta-heuristic in nature. Greedy methods rely on choosing the best conditional solution in each iteration without enforcing a combinatorial check on the optimality of the solution. Evolutionary techniques, on the other hand, use genetic operators to create mutations of the parameters and choose the mutation with the best survivability.
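A minimal (1+1) evolution strategy illustrates the mutate-and-select idea. The step-size adaptation below is a rough 1/5th-success-rule sketch of our own, not a tuned implementation such as the CMA-ES of Hansen and Ostermeier (2001):

```python
import numpy as np

def one_plus_one_es(cost, theta0, sigma=0.5, iters=500, seed=0):
    """(1+1) evolution strategy sketch: mutate the current parameter
    vector with Gaussian noise and keep the mutant only if it survives,
    i.e. improves the cost. Sigma is adapted so that roughly one in
    five mutations succeeds."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    best = cost(theta)
    for _ in range(iters):
        mutant = theta + sigma * rng.standard_normal(theta.shape)
        c = cost(mutant)
        if c < best:                      # selection: the fitter survives
            theta, best = mutant, c
            sigma *= 1.22                 # success: widen the search
        else:
            sigma *= 0.95                 # failure: narrow the search
    return theta, best
```

On a simple quadratic cost, the strategy converges geometrically toward the optimum without ever computing a gradient, which is the appeal of such methods for non-differentiable registration costs.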
5.5 Brain Registration
Brain registration occupies a special place in the domain of registration techniques, for multiple reasons. The brain is by far the most complicated organ, with an enormous number of neural connections and pathways, as well as volumetric multidimensional network connections and functional linkages. Registration is further complicated by the large anatomical variation between populations and by deformations due to diseases such as Alzheimer's disease. Several attempts have been made to generate brain templates or atlases. They include the Talairach atlas (Talairach and Tournoux 1988), developed from a physical brain. Digital brain atlases have been developed from physical models (Kruggel and Yves von Cramon 1999; Nowinski and Thirunavuukarasuu 2001; Roland and Zilles 1994; Roland et al. 1997) at Harvard and in Montreal. To encompass the diversity of intermodal and intra-modal brain variability, probabilistic atlases were developed by considering distributions of brain landmarks together with their intensity and other features; the International Consortium for Brain Mapping has led efforts in this direction (Mazziotta 2002; Mazziotta et al. 1995). Deformable brain atlases have been another direction of research, wherein non-rigid registration allows the brain map to adapt between subjects. This approach is also suitable for longitudinal brain studies (Thompson et al. 2000; Ganser et al. 2004; Woods 2003). Its utility, however, depends heavily on the registration technique used.
5.6 Conclusion
In this review we revisited the need for image registration, particularly from the perspective of medical image analysis. We reviewed the different components of the registration process, namely, the correspondence problem, the transformation between source and target domains, and finally the optimization algorithms that drive the transformation. We reviewed the cutting-edge techniques for each of the three components and discussed their pros and cons. Figure 20.23 (Gholipur 2007) shows the flowchart of the registration process, detailing the source and target images, the transformation, correspondence (and interpolation), and optimization processes, and the iterative loop.
We further elaborated the image registration process for the brain. Due to the uniqueness of the brain, registration can happen between structural (anatomical) and functional image data. The brain atlas has been a unique approach to brain registration, and we reviewed the cutting-edge works in this domain. Figure 20.24 (Gholipur 2007) compares the approaches available in this domain.
As detailed throughout the paper, most techniques in image registration require significant medical domain knowledge as well as signal processing expertise, making the domain significantly challenging. A recent advancement is the use of deep learning and deep artificial networks to auto-train the registration process parameters; this line of work is nascent and has significant future potential. Another aspect of the registration process is the validation of its results. Currently, manual validation by domain experts is the norm. Such validation requires domain expertise, is laborious, and in many cases has limited availability. In the future, DL is anticipated to provide support in this direction as well.
6 Deep Learning
6.1 Introduction
Deep learning is based on artificial neural networks, which are modeled on the basic workings of biological brain networks. Deep learning is a part of machine learning within artificial intelligence (AI) that learns patterns from unsupervised or supervised data, using complex algorithms arranged in layers, to improve the future prediction and recognition of new patterns.
Applications of deep learning have become a trend in the current state of the art of almost every field, for example, pattern recognition in images and speech, image and art restoration, language processing, news-feed generation, and classification.
6.2 Concepts of Machine Learning, Deep Learning, and Artificial Intelligence
There is often confusion about the basic concepts of artificial intelligence (AI), machine learning, and deep learning because they are inter-related and inter-connected. According to experts like Calum McClelland, Director of Big Data at Leverege:
-
AI – “AI involves machines that can perform tasks that are characteristic of human intelligence.”
-
ML – “Machine learning is simply a way of achieving AI.”
-
DL – “Deep learning is one of many approaches to machine learning.” (Fig. 20.25)
Machine learning algorithm can be classified as supervised, semi-supervised, or unsupervised learning:
-
Supervised learning – (classification and regression problem) (https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/) – label of the data is known.
-
Semi-supervised learning – combination of supervised and unsupervised learning in the data.
-
Unsupervised learning (clustering) – label of the data is unknown.
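The distinction can be made concrete with a toy example: on the same two-blob dataset, a supervised nearest-centroid classifier uses the known labels, while unsupervised 2-means clustering must recover the groups without them. All data and names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# two Gaussian blobs: class 0 near (0, 0), class 1 near (5, 5)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(5.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: labels are known, so learn one centroid per class
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])
def classify(p):
    """Assign a point to the class of its nearest learned centroid."""
    return int(np.argmin(((centroids - p) ** 2).sum(axis=1)))

# Unsupervised: labels unknown; recover the groups with 2-means
centers = X[[0, -1]].copy()               # crude initialization
for _ in range(10):
    assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
```

The supervised classifier needs the labels y to exist; the clustering loop discovers the same grouping purely from the geometry of the data.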
6.3 Deep Learning Architectures
Neural networks were inspired by the biological workings of the brain. The overall concept is that multiple machine learning operations are combined at different layers, each processing complex input data for a specific purpose, so as to extract useful information or predictions.
A deep neural network hierarchically combines multiple layers of neurons that capture increasingly important features. The network learns and stores information from the training data and makes predictions on test or unseen data. Hence deep learning is very popular in the computer vision and medical imaging areas (LeCun 2013; Razzak et al. n.d.) (Fig. 20.26).
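A minimal sketch of such a layered network is shown below: a fully connected net whose hidden layers apply affine maps followed by ReLU nonlinearities, with a softmax over classes at the output. The weights here are random placeholders, for illustration only, standing in for parameters a real network would learn from training data:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: the standard hidden-layer nonlinearity."""
    return np.maximum(z, 0.0)

def forward(x, layers):
    """Forward pass of a small fully connected deep network: each hidden
    layer applies an affine map followed by ReLU; the last layer uses a
    softmax to emit class probabilities."""
    a = x
    for W, b in layers[:-1]:
        a = relu(a @ W + b)               # hidden layers extract features
    W, b = layers[-1]
    logits = a @ W + b
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

# hypothetical 2-4-3 network with fixed random weights (illustration only)
rng = np.random.default_rng(42)
layers = [(rng.standard_normal((2, 4)), np.zeros(4)),
          (rng.standard_normal((4, 3)), np.zeros(3))]
probs = forward(np.array([0.5, -1.0]), layers)
```

Stacking more such layers, or replacing the affine maps with convolutions, yields the architectures compared in Table 20.3.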
Some of the popular deep learning algorithms are compared in Table 20.3 such as Convolutional Neural Networks (LeCun 2013), Deep Neural Network, Deep Belief Network, Deep Autoencoder, Deep Boltzmann Machine, Recurrent Neural Network (Roell 2017), and Generative Adversarial Network.
6.4 Research Trends
Deep learning is trending, with a lot of research going on in the health-care area. In terms of medical images, all the tasks discussed in this paper, such as classification, segmentation, and registration, can be performed at larger scale using deep learning techniques. Some applications of deep learning in medical imaging are mentioned below:
-
(a)
Disorder classification
Classification of disease is a basic requirement and demands accuracy. Table 20.4 lists a few studies carried out for various parts of the body (Razzak et al. n.d.; Ker et al. 2017).
-
(b)
Tumor detection and segmentation
Lesion/tumor detection and segmentation have been very important research areas. Deep learning algorithms are now capable of flagging diseases that could be missed by doctors, providing a second check on the diagnosis. A few studies have been conducted in recent times (Razzak et al. n.d.; Ker et al. 2017) (Table 20.5).
-
(c)
Robotics surgery (autonomous)
The Da Vinci robot revolutionized surgical avenues. The device acts as robotic limbs for the surgeon. Accuracy is critical in these situations: fine detail and confined spaces are prone to human error, which machines can reduce (Faggella 2018; https://www.youtube.com/watch?v=0XdC1HUp-rU) (Fig. 20.27).
Medical instruments for tracking, detection, and surgery are common across the health-care arena, and the medical images they produce provide opportunities for applying deep learning (Table 20.6).
-
(d)
Virtual reality for visualization
Major research companies have started exploring 3D technologies such as augmented reality and virtual reality (VR) visualization of the human body to best equip doctors, medical professionals, and medical students to provide the most personalized services. This opens a new avenue for detailed understanding, training, and education in dealing with difficult situations (Fig. 20.28).
Recent research trends aim to provide innovative tools and technologies that cater to VR needs using deep learning techniques. A few related research papers and articles are listed in Table 20.7.
6.5 Challenges of Deep Learning
Apart from the data issues of medical images, deep learning brings its own challenges:
-
Black-box problem – even though the structure of a neural network is well defined, it is a huge combination of machine learning algorithms for ingesting data, recognizing patterns, building predictive models, and interpreting results, so it is hard to explain why a trained network makes a particular prediction. The selection of algorithms also varies with each dataset and problem statement.
-
Overfitting – a model trained on the training data may not generalize to unseen test data; performance varies between the two splits, and large test errors can lead to bad predictions.
-
Optimization of hyperparameters – choosing the right combination of hyperparameters is critical to obtaining an optimal model. These hyperparameters vary from model to model and from dataset to dataset.
-
High-performance hardware – deep learning requires high-performance hardware to support the huge datasets and the various algorithms used internally. Even with a high-performance system, training can take days.
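The overfitting challenge above can be made concrete with a toy experiment: fitting polynomials of low and high degree to the same noisy samples of a known function. All values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
x_train, x_test = rng.uniform(-1, 1, 20), rng.uniform(-1, 1, 20)
f = lambda x: np.sin(3 * x)                       # "true" signal
y_train = f(x_train) + rng.normal(0, 0.1, 20)     # noisy training split
y_test = f(x_test) + rng.normal(0, 0.1, 20)       # noisy held-out split

def errors(degree):
    """Fit a polynomial of the given degree on the training split and
    return (training MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(((np.polyval(coeffs, x) - y) ** 2).mean())
    return mse(x_train, y_train), mse(x_test, y_test)
```

With the high-degree fit, the training error drops toward zero because the model has enough capacity to absorb the noise, while the test error stays at or above the noise floor: the classic overfitting signature that also afflicts over-parameterized deep networks trained on small medical datasets.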
6.6 Conclusion
Deep learning paves the way for doctors and medical professionals to provide more accurate, faster, and cheaper diagnosis and treatment. Even with the tradeoffs and challenges mentioned above, its benefits dominate, and the future of medical imaging is trending toward deep learning.
References
Aberle D, El-Saden S, Abbona P, Gomez A, Motamedi K, Ragavendra N, Bassett L, Seeger L, Brown M, Brown K, Bui AAT, Kangarloo H (2010) A primer on imaging anatomy and physiology. In: Medical imaging informatics. Springer, New York, pp 17–53
Anbeek P, Vincken KL, van Bochove GS, van Osch MJP, van der Grond J (2005) Probabilistic segmentation of brain tissue in MR imaging. NeuroImage 27:795–804
Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S (2016) Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans Med Imag 35(5):1207–1216
Ashburner J, Friston KJ (2011) Diffeomorphic registration using geodesic shooting and Gauss-Newton optimisation. NeuroImage 55(3):954–967
Bailey DL, Townsend DW, Valk PE, Maisey MN (2005) Positron-emission tomography: basic sciences. Springer, Secaucus, NJ. ISBN 1-85233-798-2
Bajcsy R, Kovačič S (1989) Multiresolution elastic matching. Comput Vis Graph Image Process 46(1):1–21
Berahim M, Samsudin NA, Nathan SS (2018) A review: image analysis techniques to improve labeling accuracy of medical image classification. Int Conf Soft Comput Data Min 2018:298–307
Besl PJ, McKay ND (Feb. 1992) A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 14(2):239–256
Bharatha A, Hirose M, Hata N, Warfield SK, Ferrant M, Zou KH, Suarez-Santana E, Ruiz-Alzola J, D’Amico A, Cormack RA, Kikinis R, Jolesz FA, Tempany CMC (2001) Evaluation of three-dimensional finite element-based deformable registration of pre and intraoperative prostate imaging. Med Phys 28(12):2551–2560
Binh NT, Khare A (2010) Adaptive complex wavelet technique for medical image denoising. In Proceedings of third international conference on development of biomedical engineering, 195–198, Vietnam, January 11–14, 2010.
Bookstein FL (Jun. 1989) Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans Pattern Anal Mach Intell 11(6):567–585
Bookstein FL (1991) Thin-plate splines and the atlas problem for biomedical images. Proc Int Conf Inf Process Med Imag:326–342
Boykov Y, Veksler O, Zabih R (Nov. 2001) Fast approximate energy minimization via graph cuts. IEEE Trans Pattern Anal Mach Intell 23(11):1222–1239
Bradley WG (2008) History of medical imaging. Proc Am Philos Soc 152(3):349–361
Cao X, Miao J, Xiao Y (2017) Medical image segmentation of improved genetic algorithm research based on dictionary learning. World J Eng Technol (5):90–96
Carlson N (2012) “Physiology of behavior”, methods and strategies of research, 11th edn. Pearson, London, p 151. ISBN 0205239390
Carroll QB (2014) Radiography in the digital age, 2nd edn. Charles C Thomas, Springfield, p 9. ISBN 9780398080976
Cheung W, Hamarneh G (2009) n-SIFT: N-dimensional scale invariant feature transform. IEEE Trans Imag Process 18(9):2012–2021
Chung K, Scholten ET, Oudkerk M, De Jong PA, Prokop M, Van Ginneken B (2015) Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box. Med Image Anal 26(1):195–202
Clatz O, Sermesant M, Bondiau P-Y, Delingette H, Warfield SK, Malandain G, Ayache N (Oct. 2005) Realistic simulation of the 3-d growth of brain tumors in MR images coupling diffusion with biomechanical deformation. IEEE Trans Med Imag 24(10):1334–1346
Cootes TF, Taylor CJ, Cooper DH, Graham J (1995) Active shape models – their training and application. Comput Vis Imag Understand 61(1):38–59
Coyle SM, Ward TSE, Markham CM (2007) Brain–computer interface using a simplified functional near-infrared spectroscopy system. J Neur Eng 4(3):219–226. https://doi.org/10.1088/1741-2560/4/3/007
CT Scan (CAT Scan, Computerized Tomography) Imaging Procedure (2018) MedicineNet. Retrieved 29 Nov 2018
Declerck J, Feldmar J, Goris ML, Betting F (Dec. 1997) Automatic registration and alignment on a template of cardiac stress and rest reoriented SPECT images. IEEE Trans Med Imag 16(6):727–737
Deserno TM, Antani S, Long R (2009) Ontology of gaps in content-based image retrieval. J Digit Imaging 22(2):202–215
Despotovi I, Goossens B, Philips W (2015) MRI segmentation of the human brain: challenges, methods, and applications. Comput Math Methods Med 2015
Dhawan AP (2008) Image segmentation and feature extraction. In: Principles and advanced methods in medical imaging and image analysis. World Scientific Publishing Co. Pte. Ltd, Singapore, pp 197–228
Dhawan AP, Dai S (2008) Clustering and pattern classification. In: Principles and advanced methods in medical imaging and image analysis. World Scientific Publishing Co. Pte. Ltd, Singapore, pp 229–265
DistanceDoc and MedRecorder: new approach to remote ultrasound imaging, Solutions, Epiphan Systems. Archived 2011-02-14 at the Wayback Machine. Epiphan.com. Retrieved 2011-11-13
Dolovich M, Labiris R (2004) Imaging drug delivery and drug responses in the lung. Proc Am Thorac Soc 1:329–337
Dou Q, Member S, Chen H, Member S, Yu L, Zhao L, Qin J (2016) Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans Med Imaging 11(4):1–14
Duggal R, Gupta A, Gupta R, Wadhwa M, Ahuja C (2016) Overlapping cell nuclei segmentation in microscopic images using deep belief networks. In: Proceedings of the tenth Indian conference on computer vision, graphics and image processing. ACM, New York, p 82
Erickson BJ, Korfiatis P, Akkus Z, Kline TL (2017) Machine learning for medical imaging. Radiographics 37(2):505–515
Esteva A et al (2017) Dermatologist-level classification of skin cancer with deep neural networks. Nature 542(7639):115–118
Faggella D (2018) Machine learning healthcare applications – 2018 and beyond, article published in techemergence.com in Mar 2018
Fletcher R, Reeves CM (1964) Function minimization by conjugate gradients. Comput J 7(2):149–154
Ford LR, Fulkerson DR (1956) Maximal flow through a network (PDF). Can J Math 8:399–404
Frey BJ, MacKay DJC (1997) A revolution: belief propagation in graphs with cycles. Proc Conf Adv Neural Inf Process Syst:479–485
Ganser KA, Dickhaus H, Metzner R, Wirtz CR (2004) A deformable digital brain atlas system according to Talairach and Tournoux. Med Imag Anal 8:3–22
Ge C et al (2018) 3D multi-scale convolutional networks for Glioma grading using MR images. IEEE Int Conf Imag Process Proc:141–145
Gee JC, Bajcsy R (1999) Elastic matching: continuum mechanical and probabilistic analysis. Brain Warp:183–197
Gefen S, Tretiak O, Nissanov J (Nov. 2003) Elastic 3-D alignment of rat brain histological images. IEEE Trans Med Imag 22(11):1480–1489
Gholipur K, Briggs G (2007) Brain function localization: a survey of image registration techniques. IEEE Trans Med Imag 26(4):427–451
Ghose A, Ghose A, Dasgupta P (2018) New surgical robots on the horizon and the potential role of artificial intelligence. J Invest Clin Urol. https://doi.org/10.4111/icu.2018.59.4.221
Goel N, Yadav A, Singh BM (2016) Medical image processing: a review, IEEE CIPECH, Nov 2016
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial networks. arXiv:1406.2661
Gopalakrishnan V, Menon PG, Madan S (2015) cMRI-BED: a novel informatics framework for cardiac MRI biomarker extraction and discovery applied to pediatric cardiomyopathy classification. BioMed Eng 14(suppl 2):1–16
Greig DM, Porteous BT, Seheult AH (1989) Exact maximum a posteriori estimation for binary images. J R Stat Soc Ser B (Methodol) 51(2):271–279
Haber E, Modersitzki J (2007) Image registration with guaranteed displacement regularity. Int J Comput Vis 71(3):361–372
Hadamard J (1923) Lectures on the Cauchy’s Problem in Linear Partial Differential Equations. Yale Univ. Press, New Haven, CT
Hager WW, Zhang H (2006) A survey of nonlinear conjugate gradient methods. Pacific J Optimizat 2(1):35–58
Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evolution Comput 9(2):159–195
Haralick RM, Shapiro LG (1985) Image segmentation techniques. Comput Vis Grap Imag Process 29(1):100–132
Havaei M et al (2017) Brain tumor segmentation with deep neural networks. Med Imag Anal 35:18–31
He J, Christensen GE (2003) Large deformation inverse consistent elastic image registration. Int Conf Inf Process Med Imag:438–449
Hellier P, Barillot C, Mémin É, Pérez P (2001) Hierarchical estimation of a dense deformation field for 3-D robust registration. IEEE Trans Med Imag 20(5):388–402
Hensel JM, Ménard C, Chung PW, Milosevic MF, Kirilova A, Moseley JL, Haider MA, Brock KK (2007) Development of multiorgan finite element-based prostate deformation model enabling registration of endorectal coil magnetic resonance imaging for radiotherapy planning. Int J Radiat Oncol Biol Phys 68(5):1522–1528
Hestenes MR, Stiefel E (1952) Methods of conjugate gradients for solving linear systems. J Res Nat Bureau Stand 49(6):409–436
Hogea C, Biros G, Abraham F, Davatzikos C (2007) A robust framework for soft tissue simulations with application to modeling brain tumor mass effect in 3d MR images. Phys Med Biol (23):6893–6908
Hong J, Vicory J, Schulz J, Styner M, Marron JS, Pizer SM (2016) Non-Euclidean classification of medically imaged objects via s-reps R. Med Image Anal 31:37–45
Iwahori Y, Hattori A, Adachi Y, Bhuyan MK, Robert J (2015) Automatic detection of polyp using hessian filter and HOG features. In: Procedia computer science international conference in knowledge based and intelligent information and engineering systems - KES2015, pp 730–739
James AP, Dasarathy BV (2014) Medical image fusion: a survey of state of the art. Inf Fusion 19:4–19. arXiv:1401.0166
Jian B, Vemuri B (2011) Robust point set registration using Gaussian mixture models. IEEE Trans Pattern Anal Mach Intell 33(8):1633–1645
Jin KH, McCann MT, Froustey E, Unser M (2017) Deep convolutional neural network for inverse problems in imaging. IEEE Trans Image Process 26(9):4509–4522
Kadir T, Brady M (2001) Saliency, scale and image description. Int J Comput Vis 45(2):83–105
Kaur G, Singh B (2011) Intensity based image segmentation using wavelet analysis and clustering techniques. Indian J Comput Sci Eng 2(3)
Kaushik D, Singh U, Singhal P, Singh V (2013) Medical image segmentation using genetic algorithm. Int J Comput Appl 81(18)
Ker J, Wang L, Rao J, Lim T (2017) Deep learning applications in medical image analysis, Special section on soft computing techniques for image analysis in the medical industry current trends, challenges and solutions, IEEE Access, Dec 2017
Kevles BH (1996) Naked to the bone medical imaging in the twentieth century. Rutgers University Press, Camden, NJ, pp 19–22. ISBN 978-0-8135-2358-3
Khalvati F, Wong A, Haider MA (2015) Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models. BMC Med Imag:1–14
Kim J, Fessler JA (Nov. 2004) Intensity-based image registration using robust correlation coefficients. IEEE Trans Med Imag 23(11):1430–1444
Klein S, Staring M, Pluim JPW (Dec. 2007) Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines. IEEE Trans Image Process 16(12):2879–2890
Komodakis N, Tziritas G (2007) Approximate labeling via graph cuts based on linear programming. IEEE Trans Pattern Anal Mach Intell 29(8):1436–1453
Komodakis N, Tziritas G, Paragios N (2008) Performance vs computational efficiency for optimizing single and dynamic MRFs: setting the state of the art with primal-dual strategies. Comput Vis Imag Understand 112(1):14–29
Kruggel F, Yves von Cramon D (Jun. 1999) Alignment of magnetic-resonance brain datasets with the stereotactical coordinate system. Med Imag Anal 3:175–185
Kybic J, Unser M (Nov. 2003) Fast parametric elastic image registration. IEEE Trans Imag Process 12(11):1427–1442
Lai ZF, Deng HF (2018) Medical image classification based on deep features extracted by deep model and statistic feature fusion with multilayer perceptron. Comp Intel Neurosci 2018
Lashari SA, Ibrahim R (2013) A framework for medical images classification using soft set. Proc Technol 11:548–556
Last Image Hold Feature (2010) Fluoroscopic Radiation Management. Walter L. Robinson & Associates. Retrieved April 3, 2010
LeCun Y (2013) LeNet-5, convolutional neural networks. Retrieved 16 Nov 2013
Leow A, Huang S-C, Geng A, Becker J, Davis S, Toga A, Thompson P (2005) Inverse consistent mapping in 3D deformable image registration: its construction and statistical properties. Int Conf Inf Process Med Imag:493–503
Lilja AR, Strong CW, Bailey BJ, Thurecht KJ, Houston ZH, Fletcher NL, McGhee JB (2018) Design-led 3D visualization of nanomedicines in virtual reality, VRST, Proceeding of the 24th ACM symposium on Virtual Reality Software and Technology Article No. 48
Lin Q, Xu Z, Li B, Baucom R, Poulose B, Landman BA, Bodenheimera RE (2013) Immersive virtual reality for visualization of abdominal CT. Proc SPIE 28:8673. https://doi.org/10.1117/12.2008050
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Med Image Anal 42:60–88
Liu T, Shen D, Davatzikos C (2004) Deformable registration of cortical structures via hybrid volumetric and surface warping. NeuroImage 22(4):1790–1801
Liu J, Ma W, Liu F, Hu Y, Yang J, Xu X (2007) Study and application of medical image visualization technology, ICDHM 2007: Digital Human Modeling, 668–677
Long LR, Antani S, Deserno TM, Thoma GR (2009) Contentbased image retrieval in medicine retrospective assessment, state of the art, and future directions. Int J Health Inform Syst Informat 4(1):1–16
Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997) Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag 16(2):187–198
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, MaierHein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattanal K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang C-W, Weber M-A, Zheng G, Jannin P, Kopp-Schneider A (n.d.) Is the winner really the best? A critical analysis of common research practice in biomedical image analysis competitions, https://arxiv.org/pdf/1806.02051.pdf
Masood A, Al-jumaily A (2015) Semi advised SVM with adaptive differential evolution based feature selection for skin cancer diagnosis. J Comput Comm 3:184–190
Mazziotta J (2002) The international consortium for brain mapping: a probabilistic atlas and reference system for the human brain. In: Toga AW, Mazziotta JC (eds) Brain mapping: the methods. Academic, New York, pp 727–755
Mazziotta JC, Toga AW, Evans A, Fox P, Lancaster J (1995) A probabilistic atlas of the human brain: theory and rationale for its development. NeuroImage (2):89–101
Mikolajczyk K, Schmid C (2004) Scale & affine invariant interest point detectors. Int J Comput Vis 60(1):63–86
Mikolajczyk K, Schmid C (Oct. 2005) A performance evaluation of local descriptors. IEEE Trans Pattern Anal Mach Intell 27(10):1615–1630
Mikolajczyk K, Tuytelaars T, Schmid C, Zisserman A, Matas J, Schaffalitzky F, Kadir T, Gool LV (2005) A comparison of affine region detectors. Int J Comput Vis 65(1–2):43–72
Miranda E, Aryuni M, Irwansyah E (2016) A survey of medical image classification techniques. Int Conf Inf Manag Technol (ICIMTech):56–61, 2016
Mittal D, Rani A (2016) Detection and classification of focal liver lesions using support vector machine classifiers. J Biomed Eng Med Imaging 3(1):21–34
Modersitzki J (2008) Flirt with rigidity-image registration with a local nonrigidity penalty. Int J Comput Vis 76(2):153–163
Moré JJ, Thuente DJ (1994) Line search algorithms with guaranteed sufficient decrease. ACM Trans Math Software 20(3):286–307
Morel J-M, Yu G (2009) Asift: a new framework for fully affine invariant image comparison. SIAM J Imag Sci 2(2):438–469
Murphy KP, Weiss Y, Jordan MI (1999) Loopy belief propagation for approximate inference: an empirical study. Proc Conf Uncert Artif Intell:467–475
Nandi D, Ashour AS, Samanta S, Chakraborty S, Salem MAM, Dey N (2015) Principal component analysis in medical image processing: a study. Int J Image Mining 1(1):65–86
Nguyen LD, Lin D, Lin Z, Cao J (2018) Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. In: Circuits and Systems (ISCAS), 2018 IEEE international symposium, pp 1–5
Ni D, Qu Y, Yang X, Chui Y, Wong T-T, Ho S, Heng P (2008) Volumetric ultrasound panorama based on 3d sift. Proc Int Conf Med Image Comput Assist Intervent:52–60
Novelline R (1997) Squire’s fundamentals of radiology, 5th edn. Harvard University Press, Cambridge, MA, pp 34–35. ISBN 0-674-83339-2
Nowinski WL, Thirunavuukarasuu A (2001) Atlas-assisted localization analysis of functional images. Med Imag Anal 5:207–220
Passat N, Ronse C, Baruthio J, Armspach J-P, Maillot C, Jahn C (2005) Region-growing segmentation of brain vessels: an atlas-based automatic approach. J Magn Res Imag 21(6):715–725
Pedram SA, Ferguson P, Ma J, Dutson E, Rosen J (2017) Autonomous suturing via surgical robot: an algorithm for optimal selection of needle diameter, shape, and path. In: Proceedings of IEEE international conference on robotics and automation. IEEE, Singapore
Pennec X, Stefanescu R, Arsigny V, Fillard P, Ayache N (2005) Riemannian elasticity: a statistical regularization framework for non-linear registration. In: International conference on Medical Image Computing and Computer-Assisted Intervention, pp 943–950
Pereira S, Pinto A, Alves V, Silva CA (2016) Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imag 35(5):1240–1251
Petscharnig S, Schöffmann K (2017) Learning laparoscopic video shot classification for gynecological surgery. Multimed Tools Appl 77:8061–8079. https://doi.org/10.1007/s11042-017-4699-5
Petitjean C, Dacher J-N (2011) A review of segmentation methods in short axis cardiac MR images. Med Imag Anal 15(2):169–184
Pham DL, Xu C, Prince JL (2000) Current methods in medical image segmentation. Ann Rev Biomed Eng 2(1):315–337
Polyak BT (1969) The conjugate gradient method in extremal problems. USSR Computat Math Math Phys 9(4):94–112
Que Q, Tang Z, Wang R, Zeng Z, Wang J, Chua M, Gee TS, Yang X, Veeravalli B (2018) CardioXNet: automated detection for cardiomegaly based on deep learning. IEEE EMBC. https://doi.org/10.1109/EMBC.2018.8512374
Rabbitt RD, Weiss JA, Christensen GE, Miller MI (1995) Mapping of hyperelastic deformable templates using the finite element method. In: Proceedings of SPIE Visual Geometry, pp 252–265
Radiographic Standard Operating Protocols (PDF) (2015) HEFT Radiology Directorate. Heart of England NHS Foundation Trust. Retrieved 27 Jan 2016
Radiology – acute indications (2017) Royal Children’s Hospital, Melbourne. Retrieved 23 July 2017
Rao KMM, Rao VDP (n.d.) Medical image processing
Razzak MI, Naz S, Zaib A (n.d.) Deep learning for medical image processing: overview, challenges and future. arXiv preprint. https://arxiv.org/pdf/1704.06825.pdf
Richard P, Coiffet P (1995) Human perceptual issues in virtual environments: sensory substitution and information redundancy. In: Proceedings of IEEE international workshop on robot and human communication. IEEE, Tokyo
Roell J (2017) Understanding recurrent neural networks: the preferred neural network for time-series data, Article in towards data science, Jun 26, 2017
Roland PE, Zilles K (1994) Brain atlases – a new research tool. Trends Neurosci 17:458–467
Roland PE, Geyer S, Amunts K, Schormann T, Schleicher A, Malikovic A, Zilles K (1997) Cytoarchitectural maps of the human brain in standard anatomical space. Hum Brain Mapp 5:222–227
Roth HR et al (2015) Deeporgan: multi-level deep convolutional networks for automated pancreas segmentation. Proc Int Conf Med Imag Comput Assist Intervent 2015:556–564
Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ (1999) Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imag 18(8):712–721
Sahoo PK, Soltani S, Wong AKC (1988) A survey of thresholding techniques. Comput Vis Graph Image Proc 41:233–260
Sakamoto M, Nakano H (2016) Cascaded neural networks with selective classifiers and its evaluation using lung x-ray ct images. arXiv preprint arXiv:1611.07136
Sample S (2007) X-rays. The electromagnetic spectrum. NASA. Retrieved 3 Dec 2007. https://en.wikipedia.org/wiki/X-ray
Sarikaya D, Corso JJ, Guru KA (2017) Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans Med Imag. https://doi.org/10.1109/TMI.2017.2665671
Scholl I, Aach T, Deserno TM, Kuhlen T (2011) Challenges of medical image processing. Comput Sci Res Dev 26:5–13. https://doi.org/10.1007/s00450-010-0146-9
Seetharaman K, Sathiamoorthy S (2016) A unified learning framework for content based medical image retrieval using a statistical model. J King Saud Univ Comput Inf Sci 28(1):110–124
Setio AAA, Ciompi F, Litjens G, Gerke P, Jacobs C, Van Riel SJ, Wille MW, Naqibullah M, Clara IS, Van Ginneken B (2016) Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks. IEEE Trans Med Imag 35(5):1160–1169
Sharma A (2015) A refinement: better classification of images using LDA in contrast with SURF and SVM for CBIR system. Int J Comput App 117(16)
Sharma N, Aggarwal LM (2010) Automated medical image segmentation techniques. J Med Phys Assoc Med Phys India 35(1):3
Shrimpton PC, Miller HC, Lewis MA, Dunn M (2011) Doses from Computed Tomography (CT) examinations in the UK – 2003 Review Archived 2011-09-22 at the Wayback Machine.
Shvets A, Rakhlin A, Kalinin AA, Iglovikov V (2018) Automatic instrument segmentation in robot-assisted surgery using deep learning. bioRxiv. https://doi.org/10.1101/275867
Sirinukunwattana K, Raza SEA, Tsang Y, Snead DRJ, Cree IA, Rajpoot NM (2016) Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans Med Imaging 35(5):1–12
Solodova RF, Galatenko VV, Nakashidze ER, Andreytsev IL, Galatenko AV, Senchik DK et al (2016) Instrumental tactile diagnostics in robot-assisted surgery. Med Devices Evid Res 9:377–382. https://doi.org/10.2147/MDER.S116525
Song Y, Cai W, Huang H, Zhou Y, Wang Y, Feng DD (2015) Locality-constrained subcluster representation ensemble for lung image classification. Med Image Anal 22(1):102–113
Tagare HD, Jaffe CC, Duncan J (1997) Medical image databases: a content-based retrieval approach. J Am Med Inform Assoc 4(3):184–198
Talairach J, Tournoux P (1988) Co-planar stereotaxic atlas of the human brain. Thieme, New York
Thirumaran J, Shylaja S (2014) Medical image processing – an introduction. Int J Sci Res (IJSR). ISSN (Online): 2319-7064
Thompson PM, Woods RP, Mega MS, Toga AW (2000) Mathematical/computational challenges in creating deformable and probabilistic atlases of the brain. Hum Brain Mapp 9:81–92
Toennies KD (2012) Guide to medical image analysis. Springer Adv Patt Recogn. https://doi.org/10.1007/978-1-4471-2751-2
Torresani L, Kolmogorov V, Rother C (2008) Feature correspondence via graph matching: models and global optimization. Proc Eur Conf Comput Vis:596–609
Triggs B (2004) Detecting keypoints with stable position, orientation, and scale under illumination changes. Proc Eur Conf Comput Vis:100–113
Tsui P-H, Yeh CK, Huang C-C (2012) Noise-assisted correlation algorithm for suppressing noise-induced artifacts in ultrasonic Nakagami images. IEEE Trans Infor Technol Biomed 16(3)
Ultrasound Imaging of the Pelvis. radiologyinfo.org. Archived from the original on 2008-06-25. Retrieved 2008-06-21
Upadhyay A, Kashyap R (2016) Fast segmentation methods for medical images. Int J Comput Appl 156(3):18–23
Van Grinsven MJJP, Van Ginneken B, Hoyng CB, Theelen T, Clara IS (2016) Fast convolutional neural network training using selective data sampling: application to hemorrhage detection in color fundus images. IEEE Trans Med Imaging 35(5):1273–1284
Van Tulder G, De Bruijne M (2016) Combining generative and discriminative representation learning for lung CT analysis with convolutional restricted boltzmann machines. IEEE Trans Med Imaging 35(5):1262–1272
Varytimidis C, Rapantzikos K, Loukas C, Kolias S (2016) Surgical video retrieval using deep neural networks. In: Proceedings of workshop and challenges on modeling and monitoring of computer assisted interventions. MICCAI, Athens
Viola P, Wells WM III (1997) Alignment by maximization of mutual information. Int J Comput Vis 24(2):137–154
Wang J, Blackburn TJ (2000) The AAPM/RSNA physics tutorial for residents: X-ray image intensifiers for fluoroscopy. Radiographics 20(5):1471–1477. https://doi.org/10.1148/radiographics.20.5.g00se181471. ISSN 0271-5333. PMID 10992034
Wang L, Pedersen PC, Agu E, Strong DM, Tulu B (2017) Area determination of diabetic foot ulcer images using a cascaded two-stage SVM-based classification. IEEE Trans Biomed Eng 64(9):2098–2109
Weizenecker J, Gleich B, Rahmer J, Dahnke H, Borgert J (2009) Three-dimensional real-time in vivo magnetic particle imaging. Phys Med Biol 54(5):L1–L10. https://doi.org/10.1088/0031-9155/54/5/L01
Withey DJ, Koles ZJ (2007) Medical image segmentation: methods and software, pp 140–143
Woods RP (2003) Characterizing volume and surface deformations in an atlas framework: theory, applications, and implementation. Neuroimage 18:769–788
Wu Y-T, Kanade T, Li C-C, Cohn J (2000) Image registration using wavelet-based motion model. Int J Comput Vis 38(2):129–152
Xue Z, Shen D, Davatzikos C (2004) Determining correspondence in 3-D MR brain images using attribute vectors as morphological signatures of voxels. IEEE Trans Med Imag 23(10):1276–1291
Yamamoto T, Abolhassani N, Jung S, Okamura AM, Judkins T (2012) Augmented reality and haptic interfaces for robot-assisted surgery. Int J Med Robotics Comput Assist Surg 8:45–56. https://doi.org/10.1002/rcs.421
Yang X, Xue Z, Liu X, Xiong D (2011) Topology preservation evaluation of compact-support radial basis functions for image registration. Pattern Recognit Lett 32(8):1162–1177
Yu YE, Bishop M, Zheng B, Ferguson RM, Khandhar AP, Kemp SJ, Krishnan KM, Goodwill PW, Conolly SM (2017) Magnetic particle imaging: a novel in vivo imaging platform for cancer detection. Nano Lett 17(3):1648–1654. https://doi.org/10.1021/acs.nanolett.6b04865
Zagorchev L, Goshtasby A (2006) A comparative study of transformation functions for nonrigid image registration. IEEE Trans Imag Process 15(3):529–538
Zhang W et al (2015) Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage 108:214–224
Zhao Z, Voros S, Weng Y, Chang F, Li R (2017) Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method. Comput Assist Surg 22:26–35. https://doi.org/10.1080/24699322.2017.1378777
Zhu H (2003) Medical image processing overview
Zhu Q, Du B, Wu J, Yan P (2018) A deep learning health data analysis approach: automatic 3D prostate MR segmentation with densely-connected volumetric convnets. IJCNN. https://doi.org/10.1109/IJCNN.2018.8489136
© 2019 Springer Nature Singapore Pte Ltd.
Emami, T., Janney, S.S., Chakravarty, S. (2019). Elements of Medical Image Processing. In: Paul, S. (eds) Biomedical Engineering and its Applications in Healthcare. Springer, Singapore. https://doi.org/10.1007/978-981-13-3705-5_20
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-3704-8
Online ISBN: 978-981-13-3705-5