Abstract
The science of solving clinical problems by analyzing images generated in clinical practice is known as medical image analysis. The aim is to extract information in an effective and efficient manner for improved clinical diagnosis. Recent advances in the field of biomedical engineering have made medical image analysis one of the top research and development areas. One of the reasons for this advancement is the application of machine learning techniques to the analysis of medical images. Deep learning is successfully used as a tool for machine learning, where a neural network is capable of automatically learning features. This is in contrast to traditional methods, where hand-crafted features are used; the selection and calculation of these features is a challenging task. Among deep learning techniques, deep convolutional networks are actively used for the purpose of medical image analysis. Application areas include segmentation, abnormality detection, disease classification, computer aided diagnosis and retrieval. In this study, a comprehensive review of the current state-of-the-art in medical image analysis using deep convolutional networks is presented. The challenges and potential of these techniques are also highlighted.
Introduction
Deep learning (DL) is a widely used tool in research domains such as computer vision, speech analysis, and natural language processing (NLP). The method is particularly suited to areas where a large amount of data needs to be analyzed and human-like intelligence is required. The use of deep learning as a machine learning and pattern recognition tool is also becoming an important aspect of the field of medical image analysis. This is evident from the recent special issue on this topic [1], where the initial impact of deep learning in the medical imaging domain is investigated. According to the MIT Technology Review, deep learning was among the top ten breakthroughs of 2013 [2]. Medical imaging has long been a diagnostic method in clinical practice. Recent advancements in hardware design, safety procedures, computational resources and data storage capabilities have greatly benefited the field of medical imaging. Currently, major application areas of medical image analysis involve segmentation, classification, and abnormality detection using images generated from a wide spectrum of clinical imaging modalities.
Medical image analysis aims to aid radiologists and clinicians in making the diagnostic and treatment process more efficient. Computer aided detection (CADe) and computer aided diagnosis (CADx) rely on effective medical image analysis, making its performance crucial, since it directly affects the process of clinical diagnosis and treatment [3, 4]. Therefore, important parameters such as accuracy, F-measure, precision, recall, sensitivity, and specificity must be measured, and it is generally desirable that these measures attain high values in medical image analysis. As the availability of digital images containing clinical information grows, methods that are well suited to big data analysis are required. The state-of-the-art in data centric areas such as computer vision shows that deep learning methods could be the most suitable candidate for this purpose. Deep learning mimics the working of the human brain [5], with a deep architecture composed of multiple layers of transformations. This is similar to the way information is processed in the human brain [6].
A good knowledge of the underlying features in a data collection is required to extract the most relevant features. This can become tedious and difficult when a huge collection of data needs to be handled efficiently. A major advantage of deep learning methods is their inherent capability to learn complex features directly from the raw data. This allows us to define a system that does not rely on hand-crafted features, which are required by most other machine learning techniques. These properties have attracted attention for exploring the benefits of using deep learning in medical image analysis. The future of medical applications can benefit from the recent advances in deep learning techniques. There are multiple open source DL platforms available, such as Caffe, TensorFlow, Theano, Keras and Torch, to name a few [7]. Challenges arise due to the limited clinical knowledge of DL experts and the limited DL knowledge of clinical experts. A recent tutorial attempts to bridge this gap by providing step-by-step implementation details for applying DL to digital pathology images [8]. In [9], a high-level introduction to the medical image segmentation task using deep learning is presented along with code. In general, most work using DL techniques follows an open source model, where the code is made available on platforms such as GitHub. This allows researchers to come up with a running model relatively quickly when applying these techniques to various medical image analysis tasks. The challenge remains to select an appropriate DL architecture depending upon the number of available images and ground truth labels.
In this paper, a detailed review of the current state-of-the-art medical image analysis techniques based on deep convolutional neural networks is presented. A summary of the key performance parameters of clinical significance achieved using deep learning methods is also discussed. The rest of the paper is organized as follows. “Medical image analysis” presents a brief introduction to the field of medical image analysis. “Convolutional neural networks (CNNs)” and “Medical image analysis using CNN” present a summary and applications of deep convolutional neural network methods in medical image analysis. In “Discussion”, the recent advances in deep learning methods for medical image analysis are analyzed. This is followed by the conclusions presented in “Conclusion”.
Medical image analysis
Medical imaging includes those processes that provide visual information about the human body. The purpose of medical imaging is to aid radiologists and clinicians in making the diagnostic and treatment process more efficient. Medical imaging is a predominant part of the diagnosis and treatment of diseases and comprises different imaging modalities. These include X-ray, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound, to name a few, as well as hybrid modalities [10]. These modalities play a vital role in capturing anatomical and functional information about different body organs for diagnosis as well as for research [11]. A typology of common medical imaging modalities used for different body parts, generated in radiology and laboratory settings, is shown in Fig. 1. Medical imaging is an essential aid in modern healthcare systems. Machine learning plays a vital role in CADx, with applications in tumor segmentation, cancer detection, classification, image guided therapy, medical image annotation, and retrieval [12,13,14,15,16,17,18].
Segmentation
The process of segmentation divides an image into multiple non-overlapping regions using a set of rules or criteria, such as a set of similar pixels, or intrinsic features such as color, contrast and texture [19]. Segmentation reduces the search area in an image by dividing the original image into two classes, such as object and background. The key aspect of image segmentation is to represent the image in a meaningful form such that it can be conveniently utilized and analyzed. The meaningful information extracted using the segmentation process in medical images involves shape, volume, relative position of organs, and abnormalities [20, 21]. In [22], an iterative 3D multi-scale Otsu thresholding algorithm is presented for the segmentation of medical images. The effects of noise and weak edges are eliminated by representing images at multiple levels. In [23], a hybrid algorithm is proposed for the automatic segmentation of ultrasound images. The proposed method combines information from spatial constraint based kernel fuzzy clustering and distance regularized level set (DRLS) based edge features. Multiple experiments are conducted to evaluate the method on real as well as synthetically generated ultrasound images. A segmentation approach for 3D medical images is presented in [24], in which the system is capable of assessing and comparing the quality of segmentations. The approach is mainly based on statistical shape based features coupled with an extended hierarchical clustering algorithm, and three different datasets of 3D medical images are used for experimentation. An expectation maximization approach is used for tumor segmentation on the brain tumor image segmentation (BRATS) 2013 dataset. The method achieves considerable performance, but is only tested on a few images from the dataset and is not shown to generalize to all images in the dataset [25].
Detection and classification of abnormality
Abnormality detection in medical images is the process of identifying a certain type of disease, such as a tumor. Traditionally, clinical experts detect abnormalities, but this requires a lot of human effort and is time consuming. Therefore, the development of automated systems for the detection of abnormalities is gaining importance. Different methods are presented in the literature for abnormality detection in medical images. In [26], an approach is presented for the detection of brain tumors using MRI segmentation fusion, namely potential field segmentation. The performance of this system is tested on a publicly available MRI benchmark known as brain tumor image segmentation. A particle swarm optimization based algorithm for the detection and classification of abnormalities in mammography images is presented in [27], which uses texture features and a support vector machine (SVM) based classifier. In [28], a method is presented for the detection of myocardial abnormalities using cardiac magnetic resonance imaging.
Computer aided detection or diagnosis
A Computer Aided Diagnosis (CADx) system is used in radiology to assist radiologists and clinical practitioners in interpreting medical images. Such a system is based on algorithms that use machine learning, computer vision and medical image processing. In clinical practice, a typical CADx system serves as a second reader in making decisions, providing more detailed information about the abnormal region. A typical CADx system consists of the following stages: pre-processing, feature extraction, feature selection and classification [29]. In the literature, methods have been proposed for the diagnosis of diseases such as fatty liver [30], prostate cancer [29], dry eye [31], Alzheimer's disease [32], and breast cancer [33]. In [34], hybrid features are used for the detection of glaucoma in fundus images. The optic disc is localized by employing a support vector machine trained using local features extracted from the vessels [35]. A hybrid of clinical and image based features is used for multi-class classification of Alzheimer's disease using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with reasonable accuracy [36].
Medical image retrieval
Recent years have witnessed a broad use of computers and digital information systems in hospitals. Picture archiving and communication systems (PACSs) are producing large collections of medical images [37,38,39]. Hospitals and radiology departments are producing a large number of medical images, ultimately resulting in huge medical image repositories. An automatic medical image classification and retrieval system is required to deal efficiently with this big data. A specialized medical image retrieval system could assist clinical experts in making critical decisions in disease prognosis and diagnosis. A timely and accurate decision regarding the diagnosis of a patient's disease and its stage can be made by using similar cases retrieved by the retrieval system [40]. Text based and content based image retrieval (CBIR) methods have been commonly used for medical image retrieval. Text based retrieval methods were initially proposed in the 1970s [37], where images were manually annotated with a text based description. If the textual annotation is done efficiently, the performance of such systems is fast and reliable. The drawback of such systems is that they cannot perform well on un-annotated image databases. Image annotation is not only a subjective matter but also a time-consuming process [41]. In CBIR methods, texture, color and shape based features are used for searching and retrieving images from large collections of data [42].
A CBIR system based on the Line Edge Singular Value Pattern (LESVP) is proposed in [43]. In [44], a CBIR system for skin lesion images using a reduced feature vector and classification and regression trees is presented. In [40], a Bag of Visual Words (BoVW) approach is used along with the scale invariant feature transform (SIFT) for the diagnosis of Alzheimer's disease (AD). In [45], a supervised learning framework is presented for biomedical image retrieval, which uses the predicted class label from a classifier for retrieval. It also uses image filtering, similarity fusion and a multi-class support vector machine classifier. The use of class prediction eliminates irrelevant images and reduces the search area for similarity measurement in large databases [46].
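The ranking step common to these CBIR systems, retrieving the database images whose feature vectors are most similar to the query, can be sketched as follows; the random feature vectors and the choice of cosine similarity are illustrative assumptions, not the descriptors of any cited system:

```python
import numpy as np

def retrieve(query: np.ndarray, database: np.ndarray, top_k: int = 3):
    """Rank database images by cosine similarity of their feature vectors."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q                      # cosine similarity to the query
    order = np.argsort(-scores)          # most similar first
    return order[:top_k], scores[order[:top_k]]

# Toy feature vectors (standing in for texture/shape or CNN descriptors).
rng = np.random.default_rng(1)
features = rng.standard_normal((10, 32))
query = features[4] + 0.01 * rng.standard_normal(32)  # near-duplicate of image 4
idx, sims = retrieve(query, features)
```

In a real system the class-prediction step of [45] would first prune the database, so that `retrieve` only scores images from the predicted class.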
Evaluation metrics for medical image analysis system
A typical medical image analysis system is evaluated by using different key performance measures such as accuracy, F1-score, precision, recall, sensitivity, specificity and Dice coefficient. Mathematically, these measures are calculated as,

\( \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \)  (1)

\( \text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \)  (2)

where,

\( \text{Precision} = \frac{TP}{TP + FP} \)  (3)

\( \text{Recall} = \frac{TP}{TP + FN} \)  (4)

and

\( \text{Sensitivity} = \frac{TP}{TP + FN} \)  (5)

\( \text{Specificity} = \frac{TN}{TN + FP} \)  (6)

\( \text{Dice} = \frac{2\,|P \cap GT|}{|P| + |GT|} \)  (7)
where True Positive (TP) represents the number of cases correctly recognized as defected, False Positive (FP) the number of cases incorrectly recognized as defected, True Negative (TN) the number of cases correctly recognized as non-defected and False Negative (FN) the number of cases incorrectly recognized as non-defected. In Eq. 7, P denotes the prediction given by the system being evaluated for a given testing sample and GT represents the ground truth of the corresponding testing sample.
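For the binary case, these measures can be computed directly from the confusion-matrix counts; a minimal sketch (note that on binary masks the Dice coefficient reduces to 2TP/(2TP + FP + FN) and coincides with the F1-score):

```python
def evaluate(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the standard confusion-matrix measures defined above."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # recall equals sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),  # Dice on binary masks
    }

m = evaluate(tp=90, fp=10, tn=80, fn=20)
```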
Convolutional Neural Networks (CNNs)
Deep learning is a tool for machine learning, where multiple linear as well as non-linear processing units are arranged in a deep architecture to model high level abstractions present in the data [47]. There are numerous deep learning techniques currently used in a variety of applications. These include auto-encoders, stacked auto-encoders, restricted Boltzmann machines (RBMs), deep belief networks (DBNs) and deep convolutional neural networks (CNNs). In recent years, CNN based methods have gained popularity in vision systems as well as in the medical image analysis domain [48,49,50].
CNNs are biologically inspired variants of multi-layer perceptrons. They recognize visual patterns directly from raw image pixels; in some cases, minimal pre-processing is performed before feeding images to a CNN. These deep networks look at small patches of the input image, called receptive fields, using multiple layers of neurons, and use shared weights in each convolutional layer. CNNs combine three architectural ideas to ensure, to some extent, invariance to scale, shift and distortion. The first CNN model (LeNet-5), proposed for recognizing handwritten characters, is presented in [51]. The connections between the neurons of adjacent CNN layers are local, i.e., the inputs of hidden units in layer m are a subset of the units in layer m − 1, namely units having spatially adjacent receptive fields, so as to exploit the spatial local correlation. Additionally, in a CNN each filter hk is replicated across the whole visual field. The replicated units share the same bias and weight vector and thereby create a feature map. The gradient of a shared weight is equal to the sum of the gradients of the shared parameters. A feature map is obtained by performing the convolution operation over sub-regions of the whole image. The process involves convolution of the input image or feature map with a linear filter, the addition of a bias, followed by the application of a non-linear function. A bias value is added such that it is independent of the output of the previous layer. Bias values allow us to shift the activation function of a node to the left or right. For example, for a sigmoid function, the weights control the steepness of the output, whereas the bias offsets the curve and allows a better fit of the model. The bias values are learned during training and allow an independent variable to control the activation. At a given layer, the kth filter is denoted symbolically as hk, and the weights Wk and bias bk determine this filter.
The mathematical expression for obtaining feature maps is given as,

\( h^{k}_{ij} = \tanh\left( (W^{k} \ast x)_{ij} + b_{k} \right) \)
where tanh represents the hyperbolic tangent function and ∗ denotes the convolution operation. Figure 2 illustrates two hidden layers in a CNN, where layers m − 1 and m have four and two feature maps respectively, i.e., h0 and h1, computed with filters w1 and w2. These are calculated from pixels (neurons) of layer m − 1 by using a 2 × 2 window in the layer below, as shown in Fig. 2 by the colored squares. The weights of these filter maps are 3D tensors, where one dimension gives the indices of the input feature maps, while the other two dimensions provide the pixel coordinates. Combining it all together, \(W_{ij}^{kl}\) represents the weight connecting each pixel of the kth feature map at hidden layer m with the pixel at coordinates i,j of the lth feature map at hidden layer m − 1.
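The feature-map computation above (convolution of the input with a linear filter, addition of a bias, then a tanh non-linearity) can be sketched in NumPy as follows; the input and filter values are illustrative:

```python
import numpy as np

def feature_map(x: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Valid convolution of input x with filter w, plus bias, then tanh."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # correlation form; flip w for a strict convolution
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return np.tanh(out)

x = np.random.default_rng(0).standard_normal((8, 8))  # toy input patch
w = np.ones((2, 2)) / 4.0                             # a 2x2 averaging filter
h = feature_map(x, w, b=0.1)
```

Every output unit reuses the same `w` and `b`, which is exactly the weight sharing that makes the result a feature map.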
Each neuron or node in a deep network is governed by an activation function, which controls its output. Various activation functions are used in the deep learning literature, such as linear, sigmoid, tanh and the rectified linear unit (ReLU). A broader classification distinguishes linear and non-linear activation functions. A linear function passes the input at a neuron to the output without any change. Since deep network architectures are designed to perform complex mathematical tasks, non-linear activation functions have found widespread success. ReLU and its variations, such as leaky ReLU and parametric ReLU, are non-linear activations used in many deep learning models due to their fast convergence characteristics. Pooling is another important concept in convolutional neural networks, which basically performs non-linear down-sampling. Different types of pooling are used, such as stochastic, max and mean pooling. Max pooling divides the input image into non-overlapping rectangular blocks, and for every sub-block the local maximum is taken in generating the output. Max pooling provides benefits in two ways: eliminating non-maximal values reduces computation for the upper layers, and it provides translational invariance. Concisely, it provides robustness while smartly reducing the dimension of intermediate feature maps. Mean pooling, on the other hand, replaces the underlying block with its mean value, while in stochastic pooling the activation within the active pooling region is randomly selected. In addition to down-sampling the feature maps, pooling layers allow learning features for translation and rotation invariant classification [52]. The pooling operation can also be performed on overlapping regions. In circumstances where weak spatial information surrounding the dominant regions of an image is also useful, fractional or overlapping regions for pooling could be beneficial [53].
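A minimal sketch of the non-overlapping max pooling described above:

```python
import numpy as np

def max_pool(fm: np.ndarray, size: int = 2) -> np.ndarray:
    """Non-overlapping max pooling over size x size blocks."""
    H, W = fm.shape
    H, W = H - H % size, W - W % size          # drop any ragged border
    blocks = fm[:H, :W].reshape(H // size, size, W // size, size)
    return blocks.max(axis=(1, 3))             # max within each block

fm = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool(fm)   # each output is the max of a 2x2 block
```

Mean pooling is obtained by replacing `max` with `mean` on the same block decomposition.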
Various techniques are used in deep learning to make models learn and generalize better. These include the L1 and L2 regularizers, dropout and batch normalization, to name a few. A major issue in using deep convolutional networks (DCNNs) is over-fitting of the model during training. It has been shown that dropout can be used successfully to avoid over-fitting [54]. A dropout layer drops certain unit connections, which are selected randomly, and is widely used for regularization. In addition to dropout, batch normalization has also been successfully used for regularization. The input data is divided into mini-batches. It has been shown that batch normalization not only speeds up training but, in some cases, performs regularization, eliminating the need for dropout layers [55]. The performance of a deep learning method is highly dependent on the data. In cases where the availability of data is limited, various augmentation techniques are utilized [56]. These may include random cropping, color jittering, image flipping and random rotation [57].
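The augmentation techniques mentioned above can be sketched as follows; the particular transformations and the 3/4 crop size are illustrative assumptions, not a prescription from the cited works:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator):
    """Return simple label-preserving variants: flips, 90-degree rotations,
    and one random crop (to be resized back by the caller)."""
    out = [image, np.fliplr(image), np.flipud(image)]
    out += [np.rot90(image, k) for k in (1, 2, 3)]
    H, W = image.shape
    ch, cw = 3 * H // 4, 3 * W // 4            # 3/4-size crop window
    i = rng.integers(0, H - ch + 1)
    j = rng.integers(0, W - cw + 1)
    out.append(image[i:i+ch, j:j+cw])
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
variants = augment(img, np.random.default_rng(0))
```

Such transforms are only valid when the label is invariant to them, which is why rotation-based augmentation suits roughly isotropic medical images better than, say, chest X-rays with a fixed orientation.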
Medical image analysis using CNN
There is a wide variety of medical imaging modalities used for the purpose of clinical prognosis and diagnosis, and in most cases the images look similar. This problem is addressed by deep learning, where the network architecture allows learning of subtle, discriminative information. Hand crafted features work when expert knowledge about the field is available and generally rely on some strict assumptions. These assumptions may not hold for certain tasks, such as medical images, and with hand-crafted features it can therefore be difficult to differentiate between a healthy and a non-healthy image. A classifier such as SVM does not provide an end-to-end solution. Features extracted by techniques such as the scale invariant feature transform (SIFT) are independent of the task or objective function at hand. Afterwards, a sample representation is formed in terms of bag of words (BoW), Fisher vectors or some other mechanism. A classifier such as SVM is then applied to this representation, and there is no mechanism for the loss to improve the local features, as the processes of feature extraction and classification are decoupled from each other.
On the other hand, a DCNN learns features from the underlying data. These features are data driven and learned in an end-to-end learning mechanism. The strength of a DCNN is that the error signal obtained by the loss function is propagated back to improve the feature extraction part (the CNN filters learned in the initial layers), and hence a DCNN yields a better representation. The other advantage is that in the initial layers a DCNN captures edges, blobs and local structure, whereas the neurons in the higher layers focus more on different parts of human organs, and some of the neurons in the final layers can consider whole organs.
Figure 3 shows a CNN architecture like LeNet-5 for the classification of medical images having N classes, accepting a patch of 32 × 32 from an original 2D medical image. The network has convolutional, max pooling and fully connected layers. Each convolutional layer generates a feature map of a different size, and the pooling layers reduce the size of the feature maps to be transferred to the following layers. The fully connected layers at the output produce the required class prediction. The number of parameters required to define a network depends upon the number of layers, the number of neurons in each layer, and the connections between neurons. The training phase of the network ensures that the best possible weights are learned, which give high performance for the problem at hand. The advancement in deep learning methods and computational resources has inspired medical imaging researchers to incorporate deep learning in medical image analysis. Recent studies have shown that deep learning algorithms are successfully used for medical image segmentation [58], computer aided diagnosis [59,60,61], disease detection and classification [62,63,64,65] and medical image retrieval [66, 67].
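The dependence of the parameter count on the number of layers and neurons can be made concrete by counting the learnable weights of a LeNet-5-like network on 32 × 32 patches; the layer sizes below (6 and 16 filters of 5 × 5, a 120-unit fully connected layer, N = 10 classes) are a hypothetical illustration, not the exact configuration of Fig. 3:

```python
def conv_params(n_in: int, n_out: int, k: int) -> int:
    """Weights and biases of a conv layer: n_out filters of size k x k."""
    return n_out * (n_in * k * k + 1)

def fc_params(n_in: int, n_out: int) -> int:
    """Weights and biases of a fully connected layer."""
    return n_out * (n_in + 1)

# conv 5x5 (6 maps) -> pool 2x2 -> conv 5x5 (16 maps) -> pool 2x2
# -> FC 120 -> FC N, on a 32x32 single-channel patch.
N = 10  # number of classes, problem dependent
total = (conv_params(1, 6, 5)        # 32x32 -> 28x28x6
         + conv_params(6, 16, 5)     # 14x14 -> 10x10x16
         + fc_params(16 * 5 * 5, 120)
         + fc_params(120, N))
```

Note that the pooling layers contribute no learnable parameters, and almost all of the parameters sit in the first fully connected layer, which is one reason modern architectures shrink or remove it.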
A deep learning based approach has been presented in [68], in which the network uses a convolutional layer in place of a fully connected layer to speed up the segmentation process. A cascaded architecture has been utilized, which concatenates the output of the first network with the input of the succeeding network. The network presented in [69] uses small kernels to classify pixels in MR images. The use of small kernels decreases the number of network parameters, allowing deeper networks to be built without worrying about the dangers of over-fitting. Data augmentation and intensity normalization have been performed in the pre-processing step to facilitate the training process. Another CNN for brain tumor segmentation has been presented in [70]. The architecture uses a dropout regularizer to deal with over-fitting, while a max-out layer is used as the activation function. A two-path, eleven-layer deep convolutional neural network has been presented in [71] for brain lesion segmentation. The network is trained using a dense training method on 3D patches. A 3D fully connected conditional random field has been used to remove false positives as well as to perform multiple predictions. The CNN based method presented in [72] deals with the problem of contextual information by using a global-based method, where an entire MRI slice is taken into account, in contrast to a patch based approach. A re-weighting training procedure has been used to deal with the data imbalance problem. A 3D convolutional network for brain tumor segmentation for the BRATS challenge has been presented in [73]. The network uses a two-path approach to classify each pixel in an MR image. In [58], a deep convolutional neural network is presented for brain tumor segmentation, where a patch based approach with the inception method is used for training purposes. Drop-out, batch normalization and inception modules are utilized to build the proposed ILinear nexus architecture.
The problem of over-fitting, which arises due to scarcity of data, is mitigated by using a drop-out regularizer. Table 1 highlights the usage of CNN based architectures for the segmentation of medical images.
A method for the classification of lung disease using a convolutional neural network is presented in [62], which uses two databases of interstitial lung diseases (ILDs) and CT scans, each having a dimension of 512 × 512. A total of 14696 image patches are derived from the original CT scans and used to train the network. A method based on a convolutional classification restricted Boltzmann machine for lung CT image analysis is presented in [63]. Two different datasets containing lung CT scans are used for the classification of lung tissue and the detection of airway center lines. The network is trained on 32 × 32 image patches selected along a grid with a 16-voxel overlap. A patch is retained if 75% of its voxels belong to the same class. In [64], a framework for body organ recognition is presented based on two-stage multiple instance deep learning. In the first stage, discriminative and non-informative patches are extracted using a CNN. In the second stage, fine tuning of the network parameters is performed on the extracted discriminative patches. Experiments are conducted on the classification of a synthetic dataset as well as on body part classification of 2D CT slices. In [65], a locality sensitive deep learning algorithm called spatially constrained convolutional neural networks is presented for the detection and classification of nuclei in histological images of colon cancer. A novel neighboring ensemble predictor is proposed for accurate classification of nuclei and is coupled with the CNN. A large dataset having 20,000 annotated nuclei of four classes of colorectal adenocarcinoma images is used for evaluation purposes. In [66], a deep convolutional neural network has been proposed to retrieve multimodal images. An intermodal dataset having five modalities and twenty-four classes is used to train the network for the purpose of classification. Three fully connected layers are used at the last part of the network for extracting features, which are used for the retrieval.
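The grid-based patch extraction with a class-purity retention rule described for [63] can be sketched in 2D as follows; the 75% criterion follows the text, while the toy image, labels and the labelling of each patch by its majority class are illustrative assumptions:

```python
import numpy as np

def grid_patches(image, labels, size=32, stride=16, purity=0.75):
    """Extract size x size patches on a grid with overlap; keep a patch only
    if at least `purity` of its label voxels agree, labelled by majority."""
    kept = []
    H, W = image.shape
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            lab = labels[i:i+size, j:j+size]
            classes, counts = np.unique(lab, return_counts=True)
            k = counts.argmax()
            if counts[k] / lab.size >= purity:
                kept.append((image[i:i+size, j:j+size], classes[k]))
    return kept

img = np.zeros((64, 64))
lab = np.zeros((64, 64), dtype=int)
lab[:, 32:] = 1                      # left half class 0, right half class 1
patches = grid_patches(img, lab)     # boundary-straddling patches are rejected
```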
A content based medical image retrieval (CBMIR) system based on a CNN for radiographic images is proposed in [67]. The Image Retrieval in Medical Applications (IRMA) database is used for the evaluation of the proposed CBMIR system. In [60], a hybrid thyroid nodule diagnosis system has been proposed using two pre-trained CNNs. The models differ in the number of convolutional and fully connected layers. A soft-max classifier is used for diagnosis, and the results are validated on 15,000 ultrasound images. A semi-supervised deep CNN based learning scheme is proposed for the diagnosis of breast cancer [61], and is trained on a small set of labeled data. In [66], a CNN based approach is proposed for diabetic retinopathy using colored fundus images. The network classifies the images into three classes, i.e., aneurysms, exudates and haemorrhages, and also provides the diagnosis. The proposed architecture is tested on a dataset comprising 80,000 images. In [74, 75], deep neural networks including GoogLeNet and ResNet are successfully used for multi-class classification of Alzheimer's disease patients using the ADNI dataset. An accuracy of 98.88% is achieved, which is higher than that of the traditional machine learning approaches used for Alzheimer's disease detection.
Table 2 highlights CNN applications for detection and classification tasks, computer aided diagnosis and medical image retrieval. It is seen that CNN based networks are successful in application areas dealing with multiple modalities for various tasks in medical image analysis and provide promising results in almost every case. The results can vary with the number of images used, the number of classes, and the choice of the DCNN model. Given these successes of CNNs in the medical domain, it seems that convolutional networks will play a crucial role in the development of future medical image analysis systems. Deep convolutional neural networks have proven to give high performance in the medical image analysis domain when compared with other techniques applied in similar application areas. Table 3 summarises the results of different techniques used for lung pattern classification in ILD disease. The CNN based method outperforms other methods on the major performance indicators. Table 4 shows a comparison of the performance of a CNN based method and other state-of-the-art computer vision based methods for body organ recognition. It is evident that the CNN based method achieves significant improvement in key performance indicators.
Discussion
In this section, various considerations for adopting deep learning methods in medical image analysis are discussed. A roadmap for the future of artificial intelligence in medical image analysis is also drawn in the light of recent success of deep learning for these tasks.
Various deep learning architectures for medical image analysis
The success of convolutional neural networks in medical image analysis is evident from the wide spectrum of recently available literature [79]. Multiple CNN architectures have been reported in the literature to deal with the different imaging modalities and tasks involved in medical image analysis [58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74]. These architectures include conventional CNNs, multiple layer networks, cascaded networks, semi- and fully supervised training models and transfer learning. In most cases, the available data is limited and expert annotations are scarce. In general, shallow networks have been preferred in medical image analysis, when compared with the very deep CNNs employed in computer vision applications [80, 81]. In [82], a U-shaped network is used for the purpose of semi-automated segmentation of sparsely annotated volumetric data. This architecture introduces skip connections and uses convolution and deconvolution in a structured manner. A modification to U-Net is proposed in [83], which is applied to a variety of medical datasets for segmentation tasks. In [84], a W-shaped network is proposed for the 2D medical image segmentation task. In [85], a volumetric solution is proposed for end-to-end segmentation of the prostate. A convolutional-deconvolutional network based on a capsule architecture is proposed in [86] for lung image segmentation and is shown to substantially reduce the number of parameters required when compared to the U-Net architecture. This analysis shows that different DCNN architectures have been adopted or proposed for medical image analysis. These architectures focus on reducing the parameter space, improving computation time, and handling 3D data. It is generally found that DCNN based architectures have found wider success in dealing with medical image data, when compared to other deep learning frameworks.
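The skip connection that characterizes U-shaped architectures can be illustrated with a shapes-only sketch, using max pooling for the contracting path and nearest-neighbour upsampling for the expanding path; this is a simplification, since the real networks interleave learned convolutions at every stage:

```python
import numpy as np

def down(x: np.ndarray) -> np.ndarray:
    """Contracting step: 2x2 max pool halves the spatial size."""
    c, H, W = x.shape
    return x.reshape(c, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def up(x: np.ndarray) -> np.ndarray:
    """Expanding step: nearest-neighbour upsampling doubles the spatial size."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Skip connection: concatenate the encoder feature map with the upsampled
# decoder map along the channel axis, as in U-Net's expanding path.
enc = np.random.default_rng(0).standard_normal((8, 32, 32))  # encoder features
bottleneck = down(enc)                                       # (8, 16, 16)
dec = up(bottleneck)                                         # (8, 32, 32)
merged = np.concatenate([enc, dec], axis=0)                  # (16, 32, 32)
```

The concatenation is what restores the fine spatial detail lost by pooling, which is why such skips matter for dense segmentation outputs.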
3D imaging modalities
A large amount of data produced in the medical domain carries 3-dimensional information. This is particularly true for volumetric imaging modalities such as CT and MRI, and medical image analysis can benefit from this enriched information. Deep learning frameworks adopt different strategies to handle 3D data. These include converting 3D volumes into 2D slices and combining features from 2D and multi-view planes to benefit from the contextual information [87, 88]. More recent techniques use 3D CNNs to fully exploit the available information [89, 90]. In [91], a fully 3D DCNN is used for the classification of dysmaturation in neonatal MRI data. In [92], a two-stage network is used for the detection of lacunes of presumed vascular origin, where a fully 3D CNN is used in the second stage; the performance of the system is close to that of trained raters. In [93], a 3D CNN is used for the segmentation of the cerebral vasculature using 4D CT data. In [94], brain lesion segmentation is performed using a 3D CNN, with a 3D fully connected conditional random field (CRF) used for post-processing. A geometric CNN is proposed in [95] to deal with geometric shapes in medical imaging, particularly targeting brain data. The utilization of 3D CNNs has been limited in the literature due to the size of the networks and the number of parameters involved, which also leads to slow inference caused by 3D convolutions. Hybrid 2D/3D networks and the availability of more compute power are encouraging the use of fully automated 3D network architectures.
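The slice-based strategy described above can be sketched in a few lines of NumPy (the volume shape is hypothetical; real CT/MRI volumes vary widely): three orthogonal 2D planes are extracted through a chosen voxel, which can then be fed to 2D or multi-view networks.

```python
import numpy as np

def orthogonal_views(volume, z, y, x):
    """Extract the three orthogonal 2D planes through voxel (z, y, x)
    of a (D, H, W) volume: axial, coronal and sagittal slices."""
    axial = volume[z, :, :]      # shape (H, W)
    coronal = volume[:, y, :]    # shape (D, W)
    sagittal = volume[:, :, x]   # shape (D, H)
    return axial, coronal, sagittal

# Hypothetical volume: 64 slices of 128x128, e.g. a small CT scan
vol = np.random.rand(64, 128, 128)
ax, co, sa = orthogonal_views(vol, 32, 64, 64)
print(ax.shape, co.shape, sa.shape)  # (128, 128) (64, 128) (64, 128)
```

A 3D CNN, by contrast, would consume `vol` directly with 3D convolutions, at the cost of the larger parameter count and slower inference noted above.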
Limitation of deep learning and future prospects
Despite the ability of deep learning methods to deliver better performance, some limitations could restrict their application in the clinical domain. Deep learning architectures require large amounts of training data and computational power; limited computational power increases training time, which also grows with the size of the training data. Most deep learning techniques, such as convolutional neural networks, require labelled data for supervised learning, and manually labelling medical images is a difficult task. These limitations are being overcome with every passing day due to the availability of more computational power, improved data storage facilities, an increasing number of digitally stored medical images, and improved architectures of deep networks. The application of deep learning in medical image analysis also suffers from the black box problem in AI, where the inputs and outputs are known but the internal representations are not well understood. These methods are further affected by the noise and illumination problems inherent in medical images; noise can be removed in a pre-processing step to improve performance [58].
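As a toy example of the noise-removal pre-processing mentioned above, here is a plain 3x3 median filter implemented with NumPy (a real pipeline would typically use a library routine such as a SciPy filter with tuned parameters; this sketch only shows the idea on impulse noise):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter as a simple denoising pre-processing step.
    Edges are handled by reflect-padding the image by one pixel."""
    padded = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    # Stack the 9 shifted views of the 3x3 neighbourhood of every pixel
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# A flat image corrupted by a single salt-noise pixel
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
clean = median_filter3(img)
print(clean[2, 2])  # 10.0 -- the impulse noise is removed
```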
A possible solution to these limitations is transfer learning, where a network pre-trained on a large dataset (such as ImageNet) is used as a starting point for training on medical data. This typically involves reducing the learning rate by one or two orders of magnitude (i.e., if a typical learning rate is 1e−2, reduce it to 1e−3 or 1e−4) and increasing the local learning rate of the newly introduced layers by a factor of 10. Alternatively, the DCNN model can be pre-trained on ImageNet data converted to grayscale images, although training on the whole of ImageNet may require more computational resources (such as GPUs). The best option would be to train the DCNN model on large-scale annotated medical image data. The underlying task for pre-training can be as simple as organ classification [66] or binary classification of benign versus malignant images. Different modalities, e.g., X-ray, MRI, and CT, can be combined for this task. Such a pre-trained model can then be fine-tuned for the particular problem at hand.
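The learning-rate schedule described above can be written down as a simple per-layer mapping (the layer names below are hypothetical, and in practice this would be passed to a framework's optimizer, e.g. as parameter groups):

```python
# Transfer-learning schedule sketch: shrink the global learning rate by
# two orders of magnitude for the pre-trained layers, and give the newly
# introduced layers a 10x larger local rate.
BASE_LR = 1e-2        # typical rate when training from scratch
FINETUNE_LR = 1e-4    # BASE_LR reduced by two orders of magnitude
NEW_LAYER_MULT = 10.0 # boost for newly introduced layers

pretrained_layers = ["conv1", "conv2", "conv3", "fc6"]  # hypothetical names
new_layers = ["fc_medical"]  # e.g. a new classification head

lr_schedule = {name: FINETUNE_LR for name in pretrained_layers}
lr_schedule.update({name: FINETUNE_LR * NEW_LAYER_MULT
                    for name in new_layers})

print(lr_schedule["conv1"])       # 0.0001
print(lr_schedule["fc_medical"])  # 0.001
```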
In general, shallow networks are used in situations where data is scarce. Training data is one of the most important factors in deep learning; although transfer learning partially compensates for its shortage, more data on the target domain still yields better performance. The use of generative adversarial networks (GANs) [96] can also be explored in the medical imaging field when data is scarce. One of the main advantages of transfer learning is that it enables the use of deeper models on relatively small datasets, and a deeper DCNN architecture generally yields better performance.
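Alongside transfer learning and GANs, the data-scarcity problem is commonly mitigated with simple geometric data augmentation. A minimal NumPy sketch (flips and right-angle rotations only; real pipelines also use elastic deformations, intensity shifts, etc., and must respect anatomical plausibility):

```python
import numpy as np

def augment(img):
    """Generate simple augmented copies of a 2D image: flips plus
    90-degree rotations. A cheap way to enlarge a scarce training set."""
    return [
        img,
        np.fliplr(img),      # horizontal flip
        np.flipud(img),      # vertical flip
        np.rot90(img, k=1),  # 90-degree rotation
        np.rot90(img, k=2),  # 180-degree rotation
        np.rot90(img, k=3),  # 270-degree rotation
    ]

img = np.arange(16, dtype=float).reshape(4, 4)
copies = augment(img)
print(len(copies))  # 6 training samples from a single image
```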
Conclusion
A comprehensive review of deep learning techniques and their application in the field of medical image analysis has been presented. It is concluded that convolutional neural network based deep learning methods are finding greater acceptability in all sub-fields of medical image analysis, including classification, detection, and segmentation. The problems associated with deep learning techniques due to scarce data and limited labels are addressed by techniques such as data augmentation and transfer learning. For larger datasets, the availability of more compute power and better DL architectures is paving the way for higher performance. This success should ultimately translate into improved computer-aided diagnosis and detection systems. Further research is required to adopt these methods for the imaging modalities where they are not currently applied. The recent success indicates that deep learning techniques would greatly benefit the advancement of medical image analysis.
References
Greenspan, H., van Ginneken, B., and Summers, R. M., Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique. IEEE Trans. Med. Imaging 35(5):1153–1159, 2016.
Wang, G., A perspective on deep imaging. IEEE Access 4:8914–8924, 2016.
Liu, Y., Cheng, H., Huang, J., Zhang, Y., Tang, X., Tian, J.-W., and Wang, Y., Computer aided diagnosis system for breast cancer based on color doppler flow imaging. J. Med. Syst. 36(6):3975–3982, 2012.
Diao, X.-F., Zhang, X.-Y., Wang, T.-F., Chen, S.-P., Yang, Y., and Zhong, L., Highly sensitive computer aided diagnosis system for breast tumor based on color doppler flow images. J. Med. Syst. 35(5):801–809, 2011.
Wan, J., Wang, D., Hoi, S. C. H., Wu, P., Zhu, J., Zhang, Y., and Li, J.: Deep learning for content-based image retrieval: A comprehensive study. In: Proceedings of the 22nd ACM international conference on Multimedia. ACM, pp. 157–166, 2014
Deng, L., Yu, D., et al., Deep learning: Methods and applications. Foundations and Trends®, in Signal Processing 7(3–4):197–387, 2014.
Shi, S., Wang, Q., Xu, P., and Chu, X.: Benchmarking state-of-the-art deep learning software tools. In: 2016 7th International Conference on Cloud Computing and Big Data (CCBD). IEEE, pp. 99–104, 2016
Janowczyk, A., and Madabhushi, A.: Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases, Journal of pathology informatics 7
Lakhani, P., Gray, D. L., Pett, C. R., Nagy, P., and Shih, G., Hello world deep learning in medical imaging. J. Digit. Imaging 31(3):283–289, 2018.
Heidenreich, A., Desgrandschamps, F., and Terrier, F., Modern approach of diagnosis and management of acute flank pain: Review of all imaging modalities. Eur. Urol. 41(4):351–362, 2002.
Rahman, M. M., Desai, B.C., and Bhattacharya, P., Medical image retrieval with probabilistic multi-class support vector machine classifiers and adaptive similarity fusion. Comput. Med. Imaging Graph. 32(2):95–108, 2008.
Sáez, A., Sánchez-Monedero, J., Gutiérrez, P. A., and Hervás-Martínez, C., Machine learning methods for binary and multiclass classification of melanoma thickness from dermoscopic images. IEEE Trans. Med. Imaging 35(4):1036–1045, 2016.
Miri, M. S., Abràmoff, M. D., Lee, K., Niemeijer, M., Wang, J.-K., Kwon, Y. H., and Garvin, M. K., Multimodal segmentation of optic disc and cup from sd-oct and color fundus photographs using a machine-learning graph-based approach. IEEE Trans. Med. Imaging 34(9):1854–1866, 2015.
Gao, Y., Zhan, Y., and Shen, D., Incremental learning with selective memory (ilsm): Towards fast prostate localization for image guided radiotherapy. IEEE Trans. Med. Imaging 33(2):518–534, 2014.
Tao, Y., Peng, Z., Krishnan, A., and Zhou, X. S., Robust learning-based parsing and annotation of medical radiographs. IEEE Trans. Med. Imaging 30(2):338–350, 2011.
Ahmad, J., Muhammad, K., Lee, M. Y., and Baik, S. W., Endoscopic image classification and retrieval using clustered convolutional features. J. Med. Syst. 41(12):196, 2017.
Ahmad, J., Muhammad, K., and Baik, S. W., Medical image retrieval with compact binary codes generated in frequency domain using highly reactive convolutional features. J. Med. Syst. 42(2):24, 2018.
Jenitta, A., and Ravindran, R. S., Image retrieval based on local mesh vector co-occurrence pattern for medical diagnosis from mri brain images. J. Med. Syst. 41(10):157, 2017.
Zhang, L., and Ji, Q., A bayesian network model for automatic and interactive image segmentation. IEEE Trans. Image Process. 20(9):2582–2593, 2011.
Sharma, M. M.: Brain tumor segmentation techniques: A survey. Brain 4 (4): 220–223
Vishnuvarthanan, G., Rajasekaran, M. P., Subbaraj, P., and Vishnuvarthanan, A., An unsupervised learning method with a clustering approach for tumor identification and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 38:190–212, 2016.
Feng, Y., Zhao, H., Li, X., Zhang, X., and Li, H., A multi-scale 3d otsu thresholding algorithm for medical image segmentation. Digital Signal Process. 60:186–199, 2017.
Gupta, D., and Anand, R., A hybrid edge-based segmentation approach for ultrasound medical images. Biomed. Signal Process. Control 31:116–126, 2017.
von Landesberger, T., Basgier, D., and Becker, M., Comparative local quality assessment of 3d medical image segmentations with focus on statistical shape model-based algorithms. IEEE Trans. Vis. Comput. Graph. 22 (12):2537–2549, 2016.
Anwar, S., Yousaf, S., and Majid, M.: Brain tumor segmentation on multimodal mri scans using emap algorithm. In: Engineering in Medicine and Biology Society (EMBC), International Conference of the IEEE. IEEE, pp. 1–4, 2018
Cabria, I., and Gondra, I., Mri segmentation fusion for brain tumor detection. Information Fusion 36:1–9, 2017.
Soulami, K. B., Saidi, M. N., and Tamtaoui, A.: A cad system for the detection of abnormalities in the mammograms using the metaheuristic algorithm particle swarm optimization (pso). In: Advances in Ubiquitous Networking 2. Springer, pp. 505–517, 2017
Kobayashi, Y., Kobayashi, H., Giles, J. T., Yokoe, I., Hirano, M., Nakajima, Y., and Takei, M., Detection of left ventricular regional dysfunction and myocardial abnormalities using complementary cardiac magnetic resonance imaging in patients with systemic sclerosis without cardiac symptoms: A pilot study. Intern. Med. 55(3): 237–243, 2016.
Mosquera-Lopez, C., Agaian, S., Velez-Hoyos, A., and Thompson, I., Computer-aided prostate cancer diagnosis from digitized histopathology: A review on texture-based systems. IEEE Rev. Biomed. Eng. 8:98–113, 2015.
Ma, H.-Y., Zhou, Z., Wu, S., Wan, Y.-L., and Tsui, P.-H., A computer-aided diagnosis scheme for detection of fatty liver in vivo based on ultrasound kurtosis imaging. J. Med. Syst. 40(1):33, 2016.
Remeseiro, B., Mosquera, A., and Penedo, M. G., Casdes: A computer-aided system to support dry eye diagnosis based on tear film maps. IEEE journal of biomedical and health informatics 20(3):936–943, 2016.
Torrents-Barrena, J., Lazar, P., Jayapathy, R., Rathnam, M., Mohandhas, B., and Puig, D., Complex wavelet algorithm for computer-aided diagnosis of alzheimer’s disease. Electron. Lett. 51(20):1566–1568, 2015.
Saha, M., Mukherjee, R., and Chakraborty, C., Computer-aided diagnosis of breast cancer using cytological images: A systematic review. Tissue Cell 48(5):461–474, 2016.
Salam, A. A., Akram, M. U., Wazir, K., Anwar, S. M., and Majid, M.: Autonomous glaucoma detection from fundus image using cup to disc ratio and hybrid features. In: IEEE International Symposium on Signal processing and information technology (ISSPIT) 2015. IEEE, pp. 370-374, 2015
Salam, A. A., Akram, M. U., Abbas, S., and Anwar, S. M.: Optic disc localization using local vessel based features and support vector machine. In: IEEE 15th International Conference on Bioinformatics and Bioengineering (BIBE), 2015. IEEE, pp. 1–6, 2015
Altaf, T., Anwar, S. M., Gul, N., Majeed, M. N., and Majid, M., Multi-class alzheimer’s disease classification using image and clinical features. Biomed. Signal Process. Control 43:64–74, 2018.
Hwang, K. H., Lee, H., and Choi, D., Medical image retrieval: Past and present. Healthcare informatics research 18(1):3–9, 2012.
Müller, H., Rosset, A., Vallée, J.-P., Terrier, F., and Geissbuhler, A., A reference data set for the evaluation of medical image retrieval systems. Comput. Med. Imaging Graph. 28(6):295–305, 2004.
Müller, H., Michoux, N., Bandon, D., and Geissbuhler, A., A review of content-based image retrieval systems in medical applications—clinical benefits and future directions. Int. J. Med. Inform. 73(1):1–23, 2004.
Mizotin, M., Benois-Pineau, J., Allard, M., and Catheline, G.: Feature-based brain mri retrieval for alzheimer disease diagnosis. In: 2012 19th IEEE International Conference on Image Processing (ICIP). IEEE, pp. 1241–1244, 2012
Brahmi, D., and Ziou, D.: Improving cbir systems by integrating semantic features. In: 2004 Proceedings of the 1st Canadian Conference on Computer and robot vision. IEEE, pp. 233-240, 2004
Chang, N.-S., and Fu, K.-S., Query-by-pictorial-example. IEEE Trans. Softw. Eng. SE-6(6):519–524, 1980.
Thakur, M. S., and Singh, M., Content based image retrieval using line edge singular value pattern (lesvp): A review paper. International Journal of Advanced Research in Computer Science and Software Engineering 5(3): 648–652, 2015.
Jiji, G. W., and Raj, P. S. J. D., Content-based image retrieval in dermatology using intelligent technique. IET Image Process. 9(4):306–317, 2014.
Rahman, M. M., Antani, S. K., and Thoma, G. R., A learning-based similarity fusion and filtering approach for biomedical image retrieval using svm classification and relevance feedback. IEEE Trans. Inf. Technol. Biomed. 15(4):640–646, 2011.
Anwar, S. M., Arshad, F., and Majid, M.: Fast wavelet based image characterization for content based medical image retrieval. In: 2017 International Conference on communication, computing and digital systems (C-CODE). IEEE, pp.351-356, 2017
Deng, L., Yu, D., et al., Deep learning: Methods and applications. Foundations and Trends®, in Signal Processing 7(3–4):197–387, 2014.
Premaladha, J., and Ravichandran, K., Novel approaches for diagnosing melanoma skin lesions through supervised and deep learning algorithms. J. Med. Syst. 40(4):96, 2016.
Kharazmi, P., Zheng, J., Lui, H., Wang, Z. J., and Lee, T. K., A computer-aided decision support system for detection and localization of cutaneous vasculature in dermoscopy images via deep feature learning. J. Med. Syst. 42(2):33, 2018.
Wang, S.-H., Phillips, P., Sui, Y., Liu, B., Yang, M., and Cheng, H., Classification of alzheimer’s disease based on eight-layer convolutional neural network with leaky rectified linear unit and max pooling. J. Med. Syst. 42(5):85, 2018.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P., Gradient-based learning applied to document recognition. Proc. IEEE 86(11):2278–2324, 1998.
LeCun, Y., Bengio, Y., and Hinton, G., Deep learning. Nature 521(7553):436, 2015.
Ding, S., Lin, L., Wang, G., and Chao, H., Deep feature learning with relative distance comparison for person re-identification. Pattern Recog. 48(10):2993–3003, 2015.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R., Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1):1929–1958, 2014.
Ioffe, S., and Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167
Kooi, T., Litjens, G., van Ginneken, B., Gubern-Mérida, A., Sánchez, C. I., Mann, R., den Heeten, A., and Karssemeijer, N., Large scale deep learning for computer aided detection of mammographic lesions. Med. Image Anal. 35:303–312, 2017. https://doi.org/10.1016/j.media.2016.07.007. http://www.sciencedirect.com/science/article/pii/S1361841516301244.
Perez, L., and Wang, J.: The effectiveness of data augmentation in image classification using deep learning. arXiv:1712.04621
Hussain, S., Anwar, S. M., and Majid, M., Segmentation of glioma tumors in brain using deep convolutional neural network. Neurocomputing 282:248–261, 2018.
Ma, J., Wu, F., Zhu, J., Xu, D., and Kong, D., A pre-trained convolutional neural network based method for thyroid nodule diagnosis. Ultrasonics 73:221–230, 2017.
Sun, W., Tseng, T.-L. B., Zhang, J., and Qian, W., Enhancing deep convolutional neural network scheme for breast cancer diagnosis with unlabeled data. Comput. Med. Imaging Graph. 57:4–9 , 2017.
Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., and Zheng, Y., Convolutional neural networks for diabetic retinopathy. Procedia Comput. Sci. 90:200–205, 2016.
Anthimopoulos, M., Christodoulidis, S., Ebner, L., Christe, A., and Mougiakakou, S., Lung pattern classification for interstitial lung diseases using a deep convolutional neural network. IEEE Trans. Med. Imaging 35 (5):1207–1216, 2016.
van Tulder, G., and de Bruijne, M., Combining generative and discriminative representation learning for lung ct analysis with convolutional restricted boltzmann machines. IEEE Trans. Med. Imaging 35(5):1262–1272, 2016.
Yan, Z., Zhan, Y., Peng, Z., Liao, S., Shinagawa, Y., Zhang, S., Metaxas, D. N., and Zhou, X. S., Multi-instance deep learning: Discover discriminative local anatomies for bodypart recognition. IEEE Trans. Med. Imaging 35(5):1332–1343, 2016.
Sirinukunwattana, K., Raza, S. E. A., Tsang, Y.-W., Snead, D. R., Cree, I. A., and Rajpoot, N. M., Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 35(5):1196–1206, 2016.
Qayyum, A., Anwar, S. M., Awais, M., and Majid, M., Medical image retrieval using deep convolutional neural network. Neurocomputing 266:8–20, 2017.
Chowdhury, M., Bulo, S. R., Moreno, R., Kundu, M. K., and Smedby, Ö.: An efficient radiographic image retrieval system using convolutional neural network. In: 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, pp. 3134–3139, 2016
Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., Pal, C., Jodoin, P.-M., and Larochelle, H., Brain tumor segmentation with deep neural networks. Med. Image Anal. 35:18–31, 2017.
Pereira, S., Pinto, A., Alves, V., and Silva, C. A., Brain tumor segmentation using convolutional neural networks in mri images. IEEE Trans. Med. Imaging 35(5):1240–1251 , 2016.
Jodoin, A. C., Larochelle, H., Pal, C., and Bengio, Y.: Brain tumor segmentation with deep neural networks
Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., Rueckert, D., and Glocker, B., Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Med. Image Anal. 36:61–78 , 2017.
Tseng, K.-L., Lin, Y.-L., Hsu, W., and Huang, C.-Y.: Joint sequence learning and cross-modality convolution for 3d biomedical segmentation. arXiv:1704.07754
Casamitjana, A., Puch, S., Aduriz, A., Sayrol, E., and Vilaplana, V.: 3d convolutional networks for brain tumor segmentation. Proceedings of the MICCAI Challenge on Multimodal Brain Tumor Image Segmentation (BRATS), pp. 65–68 , 2016
Farooq, A., Anwar, S., Awais, M., and Rehman, S.: A deep cnn based multi-class classification of alzheimer’s disease using mri. In: 2017 IEEE International Conference on Imaging systems and techniques (IST). IEEE, pp. 1–6, 2017
Farooq, A., Anwar, S., Awais, M., and Alnowami, M.: Artificial intelligence based smart diagnosis of alzheimer’s disease and mild cognitive impairment. In: 2017 International Smart cities conference (ISC2). IEEE, pp. 1–4, 2017
Gangeh, M. J., Sørensen, L., Shaker, S. B., Kamel, M. S., De Bruijne, M., and Loog, M.: A texton-based approach for the classification of lung parenchyma in ct images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, pp. 595–602, 2010
Sorensen, L., Shaker, S. B., and De Bruijne, M., Quantitative analysis of pulmonary using local binary patterns. IEEE Trans. Med. Imaging 29(2):559–569, 2010.
Anthimopoulos, M., Christodoulidis, S., Christe, A., and Mougiakakou, S.: Classification of interstitial lung disease patterns using local dct features and random forest. In: 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 6040–6043, 2014
Chen, M., Shi, X., Zhang, Y., Wu, D., and Guizani, M.: Deep features learning for medical image analysis with convolutional autoencoder neural network. IEEE Transactions on Big Data (1) 1–1. https://doi.org/10.1109/TBDATA.2017.2717439, 2017
Hoo-Chang, S., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D., and Summers, R. M., Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 35(5):1285, 2016.
Simonyan, K., and Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., and Ronneberger, O.: 3D u-net: Learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M. R., Unal, G., and Wells, W. (Eds.) Medical image computing and computer-assisted intervention – MICCAI, Vol. 2016, pp. 424–432. Springer International Publishing, Cham, 2016.
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J.: Unet++: A nested u-net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, pp. 3–11, 2018
Chen, W., Zhang, Y., He, J., Qiao, Y., Chen, Y., Shi, H., and Tang, X.: W-net: Bridged u-net for 2d medical image segmentation. arXiv:1807.04459
Milletari, F., Navab, N., and Ahmadi, S.: V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571. https://doi.org/10.1109/3DV.2016.79, 2016
LaLonde, R., and Bagci, U.: Capsules for object segmentation. arXiv:1804.04241
Chen, H., Dou, Q., Yu, L., and Heng, P.-A.: Voxresnet: Deep voxelwise residual networks for volumetric brain segmentation. arXiv:1608.05895
Setio, A. A. A., Ciompi, F., Litjens, G., Gerke, P., Jacobs, C., Van Riel, S. J., Wille, M. M. W., Naqibullah, M., Sánchez, C. I., and van Ginneken, B., Pulmonary nodule detection in ct images: False positive reduction using multi-view convolutional networks. IEEE Trans. Med. Imaging 35(5):1160–1169, 2016.
Brosch, T., Tang, L. Y., Yoo, Y., Li, D. K., Traboulsee, A., and Tam, R., Deep 3d convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation. IEEE Trans. Med. Imaging 35(5):1229–1239, 2016.
Çiçek, Ö., Abdulkadir, A., Lienkamp, S. S., Brox, T., and Ronneberger, O.: 3d u-net: Learning dense volumetric segmentation from sparse annotation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, pp. 424–432, 2016
Ceschin, R., Zahner, A., Reynolds, W., Gaesser, J., Zuccoli, G., Lo, C. W., Gopalakrishnan, V., and Panigrahy, A., A computational framework for the detection of subcortical brain dysmaturation in neonatal mri using 3d convolutional neural networks. NeuroImage 178:183–197, 2018.
Ghafoorian, M., Karssemeijer, N., Heskes, T., Bergkamp, M., Wissink, J., Obels, J., Keizer, K., de Leeuw, F.-E., van Ginneken, B., Marchiori, E., et al., Deep multi-scale location-aware 3d convolutional neural networks for automated detection of lacunes of presumed vascular origin. NeuroImage: Clinical 14:391–399, 2017.
Meijs, M., and Manniesing, R.: Artery and vein segmentation of the cerebral vasculature in 4d ct using a 3d fully convolutional neural network. In: Medical Imaging 2018: Computer-Aided Diagnosis, Vol. 10575, International Society for Optics and Photonics, p. 105751Q, 2018
Kamnitsas, K., Ledig, C., Newcombe, V. F., Simpson, J. P., Kane, A. D., Menon, D. K., Rueckert, D., and Glocker, B., Efficient multi-scale 3d cnn with fully connected crf for accurate brain lesion segmentation. Med. Image Anal. 36:61–78, 2017.
Seong, S.-B., Pae, C., and Park, H.-J., Geometric convolutional neural network for analyzing surface-based neuroimaging data. Frontiers in Neuroinformatics 12:42, 2018.
Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T.: Adversarial discriminative domain adaptation. In: Computer Vision and Pattern Recognition (CVPR), Vol. 1, p. 4, 2017
This article is part of the Topical Collection on Image & Signal Processing
Anwar, S.M., Majid, M., Qayyum, A. et al. Medical Image Analysis using Convolutional Neural Networks: A Review. J Med Syst 42, 226 (2018). https://doi.org/10.1007/s10916-018-1088-1