Abstract
Computer-aided diagnosis has advanced rapidly in the last few years. One of the foremost steps in computer-aided diagnosis is organ classification and segmentation. Among the various organ segmentation techniques, the segmentation of abdominal organs such as the liver, stomach, kidney, pancreas, and bladder from different imaging modalities has attracted keen interest in recent years. Most interpretation of abdominal images is still performed by medical experts or radiologists. Image interpretation by human experts is limited by its subjectivity, the complexity of the images, the extensive variation that exists across interpreters, and fatigue. After its success in real-world applications, deep learning is also providing accurate and exciting solutions for medical imaging and is seen as a key method for future applications in the medical field. Deep Convolutional Neural Networks (CNNs) tend to provide better classification in abdominal imaging analysis than traditional models. This paper presents the state of the art in deep learning-based classification of abdominal organs and is useful for computer-aided diagnosis applications. First, the paper describes the background of the abdominal organs as well as the modalities of the imaging systems. Then, we review deep learning techniques for image segmentation, object detection, classification, and other related tasks on multi-organ and single-organ abdominal images. For single organs, the different abdominal organs such as the liver, kidney, pancreas, and stomach are discussed separately. In the last section, we discuss current challenges and future recommendations.
1 Introduction
The primary cognitive task of a diagnostic radiologist is medical image analysis and interpretation. With the rapid development of medical imaging, an enormous amount of data is produced by the different image modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). Computer-aided diagnosis systems improve healthcare by analyzing these image modalities (especially CT images). This computerized analysis has several advantages over human interpretation, such as accuracy, speed, insensitivity to fatigue, and a large knowledge base of diagnostic information. One of the foremost steps in computer-aided diagnosis is organ classification and segmentation. Among the various organ segmentation techniques, the segmentation of abdominal organs such as the liver, stomach, kidney, pancreas, and bladder from CT images has attracted keen interest in recent years.
In radiology, endoscopic pancreatobiliary procedures are deployed for imaging and intervention on organs. To this end, an endoscope is inserted orally and navigated through the gastrointestinal region to particular locations on the stomach or duodenal wall to permit imaging of the pancreatobiliary organs. This navigation procedure is often challenging, particularly for novice endoscopists, due to the endoscope's small field of view and the lack of visual location alignment. To overcome this challenge, multi-organ segmentation is introduced to support the navigation and targeting procedure. Multi-organ segmentation covers the gastrointestinal organs (stomach, duodenum, and esophagus), adjacent organs used as navigational landmarks (kidney, liver, gallbladder, and spleen), and the pancreas.
Organ segmentation from abdominal images can aid clinical workflows in multiple domains, such as computer-assisted diagnostic intervention, treatment planning and delivery, surgical planning, intra-operative navigation, radiation therapy planning, and so on [1, 2]. These potential benefits have recently encouraged interest in the development of more comprehensive computational anatomical models. Deep learning (DL) techniques have achieved revolutionary breakthroughs in computer vision and pattern recognition tasks. Deep learning encompasses several machine learning algorithms for modelling high-level abstractions in data by deploying deep architectures composed of multiple non-linear transformations [3]. Deep learning techniques learn discriminative visual features directly from raw images at multiple levels of abstraction. Recently, deep learning techniques have been employed in different domains, especially in medical image classification and automated disease diagnosis systems. This recent development serves as a motivation to exploit convolutional neural networks (CNNs) for abdominal organ segmentation and classification.
This paper presents a detailed review of deep learning-based techniques for the classification and segmentation of multi-organ as well as single-organ abdominal images. The background of the basic terminology of multi-organ abdominal images, as well as medical imaging modalities, is explained in Section 2. Section 3 surveys the state of the art for multi-organ and single-organ abdominal images according to the traditional standard steps: image acquisition, preprocessing, feature extraction, segmentation, and classification. Section 4 presents the existing and recently suggested performance evaluation metrics. The main findings of the survey are elaborated in Section 5. Section 6 presents the open challenges in the domain along with future perspectives. Finally, the conclusion is drawn in Section 7.
2 Background of basic terminologies
The background of abdominal diagnostics is quite old. The knowledge that a tense abdomen is a dangerous, life-threatening condition is extremely old. In 1948, the pediatric specialist Gross recognized the clinical significance of a tense abdomen as a complication of the repair of large omphaloceles [4]. Kron et al. [5] suggested the term "abdominal compartment syndrome" (ACS) in 1984. The World Society of the Abdominal Compartment Syndrome (WSACS) refined the definitions of ACS [6] and provided management guidelines in publications from 2006 and 2007 [7], updated in 2013 [8].
The conventional methods of diagnosis and intervention for abdominal images rely on human expertise, which makes them time-consuming and tiresome. This difficulty is overcome by automated computerized systems that operate on abdominal images and provide faster and more accurate diagnosis and intervention. The automated systems either classify or segment multiple abdominal organs, or work on a single abdominal organ. Here, multi-organ means that more than one abdominal organ is used for the classification or segmentation task, whereas single-organ means only one. On the basis of this classification, we provide the literature review of multi-organ and single-organ work in Section 3 and illustrate the taxonomy in Fig. 3.
Before presenting the state of the art, we provide comprehensive detail from a biological point of view, as well as of the different types of images from different imaging systems, to make it easier for IT researchers to work in this area. Abdominal images are, essentially, images of the abdomen. The abdomen is the part of the body between the thorax (chest) and the pelvis. The abdominal organs fall into three main categories: gastrointestinal organs (stomach, duodenum, and esophagus), adjacent organs used as navigational landmarks (kidney, liver, small and large intestines, gallbladder, and spleen), and the pancreas. These organs are connected together loosely by connective tissue known as the mesentery, which permits them to expand and slide against each other. The abdomen contains many important blood vessels, including the vena cava, the aorta, and their small branches. The abdomen is protected at the front by the fascia, a thin, tough tissue layer. At the rear of the abdomen are the back muscles and spine. The organs of the abdomen are graphically illustrated in Fig. 1.
A major organ of the abdomen is the stomach, located on the upper left side. It is the muscular organ that receives food from the esophagus through a valve known as the esophageal sphincter. To the left of the stomach, at the upper far left of the abdomen, lies the spleen. The spleen is a purple, fist-shaped organ almost 4 inches long. It is one of the supporting abdominal organs and plays a significant role in filtering blood as part of the immune system. Another important organ is the kidney, a bean-shaped organ located on either side of the spine, behind the abdominal cavity and below the ribs. The kidney's pelvis collects waste into the urine, which drains down a tube called the ureter to the bladder. The next foremost organ is the liver, a large reddish-brown, meaty organ weighing about 3 pounds that lies on the right side, protected by the rib cage. The gallbladder is a tiny pouch that sits just below the liver and stores the digestive fluid (bile) produced by the liver. The pancreas lies across the back of the abdomen, behind the stomach. The head of the pancreas is attached to the first section of the small intestine, known as the duodenum, through a small tube known as the pancreatic duct. The tail of the pancreas narrows and extends to the left side of the body. The intestines comprise the large intestine, the small intestine, and the rectum. The small intestine, also known as the small bowel, is about one inch in diameter and 20 feet long.
Automated image analysis tools based on machine learning and deep learning algorithms are key enablers for improving the quality of image diagnosis and interpretation by facilitating efficient identification of findings. Medical imaging is the process of creating visual representations of the interior of a body for medical analysis and clinical intervention, as well as visual representations of organs or tissues; it is also called radiology. There are different types of imaging, such as X-rays, ultrasound, CT (computed tomography) scans, and MRI (magnetic resonance imaging), each using a different technology to create an image. One of the oldest and most commonly used imaging techniques is X-ray imaging, discovered in 1895 and first used to capture an image of human tissue in 1896. X-rays use ionizing radiation to generate images of a person's internal structure by sending beams through the body; these are absorbed at different levels depending on the density of the tissue. X-ray radiation produces three kinds of medical images: conventional X-ray imaging, angiography, and fluoroscopy. X-rays are used to evaluate the digestive system.
Ultrasound, introduced in 1952, is used for therapy and muscle stimulation or as a diagnostic tool in medical imaging, operated by an ultrasonographer. It uses high-frequency sound waves to produce images of the interior of the body. It is broadly used for abdominal imaging, such as scanning organs in the pelvis and abdomen and diagnosing symptoms of pain, swelling, and infection. Another important medical imaging modality is the CT (computed tomography) scan, in use since 1972. CT combines multiple X-ray images taken from different angles, generating images with greater information content. CT scans are widely used in abdominal imaging, such as scanning organs in the pelvis, chest, and abdomen, as well as in detecting abdominal aortic aneurysms (CT angiography). Magnetic Resonance Imaging (MRI), developed in 1977, uses radio waves and a magnetic field to provide detailed images of organs and tissues. MRI is used to evaluate the major abdominal organs. The advantages and disadvantages of these medical imaging modalities are illustrated in Table 1.
3 State of the art
Work on abdominal images dates back to 1983, when Liu et al. [9] investigated digital processing to improve ultrasonic abdominal images. With the advent of machine learning and deep learning approaches, researchers have focused on the development of automated systems for the diagnosis, prediction, classification, detection, and segmentation of abdominal images. Many survey papers have been published on liver images, covering segmentation techniques [10,11,12], liver disease prediction [13,14,15], and liver disorder monitoring [16, 17]. However, there is no comprehensive review of multi-organ abdominal images, kidney images, pancreas images, and stomach images using machine learning or deep learning models. In contrast to the several existing reviews on this subject, our review provides additional information lacking in other review articles, namely a comprehensive state of the art on multi-organ images and on single organs such as the liver, kidney, pancreas, and stomach.
Early work on abdominal organ segmentation relied on atlas-based methods [18,19,20], statistical models [21, 22], patch-based methods [23], multi-atlas methods [18], multi-atlas label fusion methods [22, 24,25,26,27], probabilistic atlases [19, 28], and registration-free methods [29,30,31]. These techniques attained remarkable results for abdominal organs, but they face many challenges, such as large anatomical variability, reliance on handcrafted features, and organ-diagnostic image features [31, 32]. Recently, deep learning techniques have been employed for multi-organ and single-organ abdominal images. This motivates us to provide a comprehensive review of abdominal images, covering both multi-organ and single-organ images, using deep learning techniques.
Before presenting the related work, we first elaborate the standard workflow of such a system. The standard workflow of an automated abdominal image classification/segmentation system consists of five major phases: image acquisition, preprocessing, image segmentation, feature extraction, and classification. These phases are graphically illustrated in Fig. 2 and elaborated comprehensively in the coming subsections along with the literature review. We present the literature on abdominal images with respect to the standard workflow according to the taxonomy of multi-organ and single-organ work, as illustrated in Fig. 3.
3.1 Image acquisition
The process of capturing or acquiring images is called image acquisition. It is one of the foremost and most fundamental tasks in any computer vision or image processing domain. Due to the nature of medical imaging, abdominal images belong to different modalities, such as Computed Tomography (CT), ultrasound imaging, Magnetic Resonance (MR) imaging, Positron Emission Tomography (PET), Digital Subtraction Angiography, and Single-Photon Emission Computed Tomography (SPECT). These modalities play an important role in disease detection and diagnosis. Another important aspect is that the optimal acquisition device is selected on the basis of the investigation objective, in order to highlight particular areas of the human body. Datasets are the cornerstone of any research work; their availability is one of the essential prerequisites for development and evaluation in any research domain, and the same is the case for abdominal-image-based systems. Some of the publicly available datasets are detailed below and tabulated in Table 2.
3.1.1 Multi-organ abdominal CT reference standard segmentations
This dataset consists of 90 abdominal CT images covering the liver, left kidney, stomach, pancreas, spleen, gallbladder, esophagus, and duodenum, with reference segmentations. The dataset was released in conjunction with the paper [33] published in IEEE Transactions on Medical Imaging. The abdominal images and reference segmentations were taken from two publicly available datasets: The Cancer Imaging Archive (TCIA) Pancreas-CT dataset [34, 35] and the Beyond the Cranial Vault (BTCV) Abdomen dataset [36].
3.1.2 Cancer imaging archive pancreas-ct dataset
The Pancreas-CT dataset encompasses 82 3D abdominal CT scans acquired at the National Institutes of Health Clinical Center, from 53 male and 27 female subjects. 17 subjects are pre-nephrectomy healthy kidney donors, while 65 subjects have neither major abdominal pathologies nor pancreatic cancer. The images were acquired from Philips and Siemens MDCT scanners (120 kVp tube voltage) with resolutions of 512×512 pixels and slice thickness between 1.5 and 2.5 mm.
3.1.3 Beyond the Cranial Vault (BTCV) abdomen dataset
The BTCV dataset contains abdominal CT scans acquired at the Vanderbilt University Medical Center from patients with metastatic liver cancer or post-operative ventral hernia. The voxel sizes range from 0.6 to 0.9 mm in the left-right and anterior-posterior axes, and from 0.5 to 5.0 mm in the inferior-superior axis. The fields of view, obtained by manual cropping to the rib cage, range over 172–318 mm (anterior-posterior), 246–367 mm (left-right), and 138–283 mm (inferior-superior).
3.1.4 LiTS - Liver Tumor Segmentation dataset
The LiTS dataset comprises 200 3D liver CT scans from different clinics, with varying spatial resolution and fields of view. The axial slices have an identical size of 512×512, with slice spacing from 0.45 to 5.0 mm and in-plane resolution from 0.60 to 0.98 mm. The dataset is split into 130 CT scans for training and 30 CT scans for testing.
3.1.5 3D-IRCADb-01 database
The 3D-IRCADb dataset contains 20 venous-phase contrast-enhanced CT scans of 10 men and 10 women, where 15 volumes (75% of cases) have hepatic tumors in the liver. The analyzed CT volumes differ substantially in the size and number of tumor lesions and in contrast enhancement.
3.1.6 NIH pancreas segmentation dataset
The NIH pancreas segmentation dataset [34] contains 82 contrast-enhanced abdominal CT scans. The size of the CT volumes is 512 × 512 × D, where D ranges from 181 to 466. The in-plane spatial resolution ranges from 0.5 to 1.0 mm, with a slice thickness of 1.0 mm.
3.1.7 BOT gastric slice dataset
The gastric dataset encompasses 560 gastric cancer images and 140 normal images. The images were acquired by hematoxylin-eosin (H&E) staining at 20× magnification. The resolution of the images is 2048×2048. The tumor areas in the images are annotated by the data provider.
3.2 Image pre-processing
Preprocessing is an elementary step to enhance the input data or to put raw data samples into a standard form appropriate for the feature extraction phase. In medical imaging, the acquired images are often affected by low contrast, noise, blurriness, and poor sharpness, which can lead to false diagnoses. Therefore, several image processing and computer vision algorithms are deployed to enhance the images, such as contrast enhancement, histogram equalization, filtering, de-noising, and gray-level transformation.
The primary task of medical image analysis is correct image annotation. Image annotation is the task, usually performed by radiologists, of labelling an image. The next step is to clean the images and enhance the contrast. Medical images obtained from different modalities may contain artifacts and false intensity levels; thus, different machine learning and image processing algorithms are deployed to enhance the contrast. Noise removal algorithms are used to remove unnecessary information and artifacts from images. Thresholding is used to segment an image into regions by creating a binary image. Furthermore, data augmentation is an emerging technique used to artificially increase the number of training samples for deep learning models. Common types of data augmentation are scaling, translation, rotation, flipping, sharpening, zooming, brightness adjustment, and high-frequency image generation. We present the preprocessing techniques deployed for multi-organ and single-organ abdominal images in the following subsections.
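Several of the geometric augmentations listed above reduce to simple array transforms. The sketch below, with a hypothetical `augment` helper applied to a synthetic single-channel image, illustrates flipping, rotation, and translation in plain NumPy; production pipelines would typically use a dedicated library instead.

```python
import numpy as np

def augment(image, rng):
    """Apply a random flip, rotation, and translation to a 2D image."""
    ops = []
    if rng.random() < 0.5:
        image = np.fliplr(image)              # horizontal flip
        ops.append("flip")
    k = int(rng.integers(0, 4))
    image = np.rot90(image, k)                # rotation by k * 90 degrees
    ops.append(f"rot{90 * k}")
    shift = int(rng.integers(-2, 3))
    image = np.roll(image, shift, axis=1)     # translation along x (wraps around)
    ops.append(f"shift{shift}")
    return image, ops

rng = np.random.default_rng(42)
img = np.arange(64, dtype=np.float32).reshape(8, 8)   # synthetic "slice"
aug, applied = augment(img, rng)
print(aug.shape, applied)
```

Each transform preserves the image shape and pixel multiset, so labels carry over unchanged; for segmentation tasks, the same transform would also be applied to the mask.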
3.2.1 MultiOrgan preprocessing
Different types of preprocessing techniques are employed to enhance multi-organ abdominal images. The foremost preprocessing technique is image annotation, performed by radiologists who assign labels to the abdominal organs. Along with image annotation, different image processing and machine learning techniques are used for preprocessing in the domain of multi-organ abdominal image processing. Here we present some of the notable preprocessing techniques.
Zhou et al. [37] performed multi-organ segmentation using a dataset of 210 CT images. The images were annotated by four radiologists and preprocessed using contrast enhancement. They represented the CT images in the coronal, axial, and sagittal planes. In [38], 240 3D CT scans from the Computational Anatomy dataset were collected; 3D-to-2D image sampling and 2D-to-3D label voting were performed in preprocessing.
González et al. [39] gathered CT scan images from a retrospective COPD observational study [40]. They split the dataset on the basis of the annotation structure: in the complete dataset, 2000 cases were annotated with six abdominal structures by an expert, while in the partial dataset, 3000 cases were annotated with one structure per case.
Cheng and Malhi [41] conducted experiments on 185 consecutive clinical abdominal ultrasound studies with a total of 9298 grayscale images. They performed text annotation to categorize the ultrasound images into 11 categories. In preprocessing, the images were resampled to 256 × 256 resolution.
Roth et al. [42] used 150 cases, with 281 contrast-enhanced CT images in the training set and 50 images in the validation set. Gibson et al. [33] experimented on two publicly available datasets: 43 subjects of the Cancer Imaging Archive Pancreas-CT dataset and 47 subjects of the Beyond the Cranial Vault (BTCV) dataset for the segmentation challenge. They performed manual cropping in preprocessing. Larsson et al. [43] performed thresholding and removal of small positive samples in postprocessing.
3.2.2 Single organ preprocessing
After discussing the multi-organ preprocessing techniques, we present the single-organ preprocessing techniques, starting with the liver and proceeding to the kidney, pancreas, and stomach in the following paragraphs.
Gruber et al. [44] conducted experiments on different types of liver lesions using a subset of the LiTS (Liver Tumor Segmentation) challenge dataset. They used 756 axial slices for training, 50 slices for validation, and 50 for testing. Adar et al. [45] conducted experiments on 182 CT images of liver lesions (53 cysts, 64 metastases, and 65 hemangiomas). To increase the training data and reduce over-fitting, they performed data augmentation techniques such as rotation, scaling, flipping, and translation in preprocessing. Li et al. [46] performed liver lesion segmentation on 26 portal-phase enhanced CT images acquired from Zhujiang Hospital; they generated 1 million patches of size 17 × 17 for training. Cohen et al. [47] worked on 333 CT images of 40 patients, annotated by a radiologist, splitting the dataset into 255 images for training and 108 for testing; they applied two data augmentation techniques, scaling and translation, in preprocessing. Schmauch et al. [48] investigated focal liver lesions with the aid of deep learning techniques for the detection and classification of liver lesions into two classes (malignant and benign). They experimented on 367 ultrasound images, and ResNet50 was deployed to extract a feature vector of length 2048. Doğantekin et al. [49] used a perceptual hash function in preprocessing and conducted experiments on 200 augmented CT images. Ben-Cohen [50] experimented on 333 annotated CT images.
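A patch-extraction step like the one Li et al. describe (1 million 17 × 17 patches from CT slices) can be sketched as follows. The `sample_patches` helper, the patch count, and the synthetic slice are our own illustrative assumptions; the surveyed paper does not specify its pipeline at this level of detail.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sample_patches(image, patch_size=17, n_patches=1000, seed=0):
    """Randomly sample overlapping square patches from a 2D slice."""
    # All overlapping windows as a zero-copy view: (H-16, W-16, 17, 17).
    windows = sliding_window_view(image, (patch_size, patch_size))
    h, w = windows.shape[:2]
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, h, size=n_patches)
    cols = rng.integers(0, w, size=n_patches)
    return windows[rows, cols]                # (n_patches, 17, 17)

slice_2d = np.random.default_rng(1).normal(size=(512, 512))  # synthetic CT slice
patches = sample_patches(slice_2d)
print(patches.shape)  # (1000, 17, 17)
```

Because `sliding_window_view` returns a view, only the sampled patches are materialized, which keeps memory usage modest even when millions of patches are drawn over many slices.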
Kline et al. [51] diagnosed polycystic kidney disease (PKD) using MRI images, with 2000 cases for training and 400 cases for testing. Yin et al. [52] collected 185 ultrasound images of kidneys from the Children's Hospital of Philadelphia (CHOP). Kuo et al. [53] conducted experiments on 4,505 ultrasound images, labelling them using eGFRs derived from serum creatinine concentrations. Al Imran et al. [54] diagnosed chronic kidney disease using data retrieved from the UCI machine learning repository; they performed missing-value imputation and feature scaling in preprocessing, and then partitioned and balanced the dataset. Salehinejad [55] annotated images by sampling with a cylindrical transform in 3D space, conducting experiments on 20 contrast-enhanced CT images of the abdomen. Marsh et al. [56] used a deep learning model for the classification of non-sclerosed and sclerosed glomeruli from 48 whole-slide images of donor kidney biopsies. Pedraza et al. [57] conducted experiments on 10,600 region-of-interest (ROI) images from 40 whole-slide images.
Roth et al. [58] performed experiments on contrast-enhanced CT images of 82 patients for pancreas segmentation. Sekaran et al. [59] classified pancreatic cancer from 1900 images of a dataset obtained from The Cancer Imaging Archive (TCIA). Oktay et al. [60] conducted experiments on 150 abdominal 3D CT scans of the TCIA CT Pancreas dataset.
Shichijo et al. [61] conducted experiments on 32,208 esophagogastroduodenoscopy (EGD) images of 1750 patients. Garcia et al. [62] detected lymphocytes in gastric cancer using immunohistochemistry (IHC) images; they performed data augmentation techniques such as rotation and reflection on 3,257 images and generated 10,868 image patches. Horie et al. [63] gathered 8428 training images and 1118 test images from the Cancer Institute Hospital of Japan for the detection of esophageal cancer. Itoh et al. [64] conducted experiments on 179 upper gastrointestinal endoscopy images from 139 patients; the dataset was enlarged with rotation-based data augmentation. Li et al. [65] experimented on the publicly available BOT gastric slice dataset, performing rotation as a data augmentation technique in preprocessing. Zhu et al. [66] experimented on 790 training images and 203 test images, with rotation performed as a data augmentation technique to increase the number of data samples. A qualitative analysis of preprocessing techniques for abdominal images from 2017 to 2019 is presented in Table 3.
3.3 Feature extraction
Feature extraction is the method of extracting representative characteristics of abdominal organs that are used to discriminate between organs. The optimal selection of a dominant feature set is key to an effective system. Broadly speaking, feature extraction techniques fall into three broad categories: statistical features, structural features, and model-based or automatic features [67, 68]. Statistical features are mathematical or statistical measurements that capture information relevant for widening the gap between different classes. Statistical features can be further divided into global and local features: global features are extracted from the entire image, whereas for local features the image is divided into a number of units or sections and features are extracted from particular sections. Structural features capture the local structure of abdominal images, for example Local Binary Patterns (LBP), pixel density within abdominal organ grids, the Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF). In recent years there has been increased attention on methods that do not rely on hand-crafted features, leading researchers towards model-based or automatic features. These are learned from raw data (pixels, in the case of images) using specific models such as Convolutional Neural Networks (CNNs), Extreme Learning Machines (ELMs), and Recurrent Neural Networks (RNNs). A qualitative analysis of the feature extraction techniques used in abdominal imaging systems from 2017 to 2019 is reported in Table 4.
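As one concrete example of a structural feature, the basic 8-neighbour LBP code can be computed directly in NumPy. The `lbp` helper below is a minimal sketch of the radius-1 variant (each pixel encoded by thresholding its 8 neighbours against the centre value), not an implementation from any surveyed paper.

```python
import numpy as np

def lbp(image):
    """Basic 8-neighbour Local Binary Pattern over the image interior."""
    img = np.asarray(image, dtype=np.float64)
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    # Neighbours in clockwise order starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is >= the centre pixel.
        code |= ((neighbour >= center).astype(np.uint8) << bit)
    return code

img = np.array([[5, 5, 5],
                [5, 1, 5],
                [5, 5, 5]])
print(lbp(img))  # [[255]] - every neighbour exceeds the centre
```

A histogram of these codes over an organ region (often restricted to the 59 "uniform" patterns) then serves as the texture descriptor fed to a classifier.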
3.4 Image segmentation and classification
One of the most important and crucial tasks in medical imaging is image segmentation, in which an image is divided into regions that are significant for a particular task, such as the detection or segmentation of organs or the computation of metrics. Image segmentation can be categorized into three groups on the basis of the extracted features: region-based, edge-based, and classification techniques [95]. Region-based and edge-based techniques depend on inter-region differences and intra-region similarities between features. In contrast, classification techniques assign unique class labels to individual voxels or pixels depending on the feature values. Some of the notable segmentation techniques are thresholding (global or local, point-based or region-based) [96], atlas-based methods [20], and deep learning-based segmentation [97].
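To make the thresholding category concrete, the sketch below implements global thresholding via Otsu's method, which picks the threshold maximising the between-class variance of the intensity histogram, and then labels each pixel by a simple comparison. The helper and the synthetic bimodal image are our own illustrative assumptions.

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the global threshold maximising between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(bins))     # cumulative mean (in bin units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    k = np.nanargmax(sigma_b)               # NaNs at the histogram ends are skipped
    return edges[k + 1]

# Bimodal toy image: dark background plus a bright 20x20 "organ" region.
rng = np.random.default_rng(0)
img = rng.normal(50, 5, size=(64, 64))
img[20:40, 20:40] = rng.normal(200, 5, size=(20, 20))
t = otsu_threshold(img)
mask = img > t
print(mask.sum())  # 400 foreground pixels
```

On well-separated bimodal histograms such as this one, the recovered mask is exact; on real CT slices, thresholding is usually only a first step, refined by region-based or learned methods.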
Classification is the process of predicting the class labels or categories of images or data points. A number of classification techniques have been proposed in recent years, relying on supervised or unsupervised learning. In supervised learning, targets are provided with the input data; classification techniques based on supervised learning include Artificial Neural Networks (ANNs), Swarm Intelligence (SI), Support Vector Machines (SVMs), and Linear Discriminant Analysis (LDA). Unsupervised learning, on the other hand, does not require target classes with the training data; examples are k-means clustering, hierarchical clustering, mixture models, and the OPTICS algorithm. Among the different classification strategies, deep neural networks have achieved tremendous success, largely through the widespread adoption of Convolutional Neural Networks (CNNs). Deep CNNs work on model-based or automatic features learned from raw data (pixels, in the case of images), and have been deployed in many medical applications for image classification and segmentation, with more than 100 publications. Deep CNN architectures can be used in three scenarios: training from scratch, fine-tuning, and as a fixed (frozen) feature extractor. In the first scenario, the overall architecture of a specific CNN is designed and trained from scratch; this requires an enormous number of training samples and a large dataset with several classes. In the second scenario, a pretrained CNN architecture is deployed on the target dataset using transfer learning: as the pretrained CNN was trained on a large base dataset, it can be employed on a small target dataset to attain high accuracy.
Popular pretrained CNN architectures include AlexNet [98], VGG [99], ResNet [100, 101], GoogLeNet [102], DenseNet [103], Inception [104], Xception [105], and MobileNet [106]. In the third scenario, the classification layer is removed from the CNN architecture and specific layers are frozen; the frozen layers act as feature extractors whose outputs can be classified using a linear classifier [107, 108].
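The third scenario can be illustrated without any deep learning framework: if the frozen layers' outputs are taken as fixed feature vectors, only a linear head remains to be trained. In the sketch below the 2048-dimensional features are synthetic stand-ins for, say, penultimate-layer ResNet50 activations, and logistic regression trained by plain gradient descent stands in for the linear classifier; all names and numbers are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_features(n, offset):
    # Synthetic "frozen CNN features": one Gaussian cluster per class.
    return rng.normal(loc=offset, scale=1.0, size=(n, 2048))

X = np.vstack([make_features(50, -0.1), make_features(50, +0.1)])
y = np.array([0] * 50 + [1] * 50)

# Logistic-regression head; only w and b are trained, the "features" stay fixed.
w = np.zeros(2048)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)          # cross-entropy gradient w.r.t. w
    grad_b = np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

accuracy = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy of the linear head: {accuracy:.2f}")
```

This is why the frozen-feature scenario works well on small medical datasets: only a few thousand head parameters are fitted, while the representation itself comes from the large base dataset.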
Pre-invasive segmentation poses a more difficult classification problem than the binary classification task of invasive segmentation. Thus, semantic segmentation is used to assign each pixel of the image an appropriate target label, using a region-of-interest image as ground truth [109]. Different CNN models used for semantic segmentation include FCN [110], UNet [111], Fully Convolutional DenseNet [112], DeepLab [113], and Gated-SCNN [114].
3.4.1 Multiorgan segmentation and classification
Zhou et al. [37] performed multi-organ segmentation using a dataset of 210 CT images. They segmented 16 abdominal organs by deploying their deep learning model, named Deep Multi-Planar CoTraining (DMPCT), and achieved a mean Dice-Sørensen Coefficient (DSC) of 77.94%. In [38], an FCN was deployed with the aid of a voting technique on 240 3D CT scans, attaining 89% accuracy with the proposed voting scheme. González et al. [39] employed UNet and its modifications on axial slices to perform semantic segmentation, achieving average mean dice scores of 0.909 with UNet, 0.908 with 6xUNets, 0.910 with PUNet, and 0.916 with CUNet. Cheng and Malhi [41] deployed VGGNet and CaffeNet for feature extraction and classification, achieving a highest Top-2 accuracy of 90.4%. Roth et al. [42] employed a 3D FCN (UNet) for the segmentation of seven abdominal organs. Their experiment comprised two stages: in the first, the 3D FCN was trained on candidate regions, while in the second, organs were segmented in more detail. Overall mean dice scores from 68.5 to 82.2% were reported. Gibson et al. [33] segmented eight abdominal organs by deploying the DenseVNet architecture. The experiment was conducted on two publicly available datasets: 43 subjects of the Cancer Imaging Archive Pancreas-CT dataset and 47 subjects of the Beyond the Cranial Vault (BTCV) dataset for the segmentation challenge. They achieved dice scores of 0.95 on the spleen, 0.93 on the left kidney, 0.95 on the liver, 0.87 on the stomach, 0.75 on the pancreas, 0.73 on the gallbladder, and 0.63 on the duodenum. Larsson et al. [43] employed two CNN architectures, CNN-sw and FCN, on the MICCAI 2015 dataset, simplifying the training computation. They used a multi-atlas technique for the localization of the ROI and a CNN for voxel classification. Average dice scores of 0.767 using FCN and 0.757 using CNN-sw were attained with their proposed system.
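Most of the results above are reported as Dice-Sørensen Coefficients, which for binary masks A and B equal 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch with toy masks follows; the helper is our own, not taken from any surveyed paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice-Sørensen Coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D masks: a predicted and a ground-truth square overlapping partially.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True   # 16 pixels
gt = np.zeros((8, 8), dtype=bool);   gt[3:7, 3:7] = True     # 16 pixels
print(round(dice_coefficient(pred, gt), 4))  # 2*9/(16+16) = 0.5625
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap, which is why duodenum scores near 0.63 signal a much harder target than spleen or liver scores near 0.95.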
3.4.2 Single organ segmentation and classification
In this section we present work on abdominal images targeting a single organ. We elaborate the deep learning based segmentation and classification of four major abdominal organs: liver, kidney, pancreas, and stomach.
3.4.3 i. Liver
In this section we present the related work on the liver, explaining the feature extraction, segmentation, and classification techniques along with their performance evaluation. Gruber et al. [44] conducted an experiment on different types of liver lesions using a subset of the LiTS (Liver Tumor Segmentation) challenge dataset. They used 756 axial slices for training, 50 slices for validation, and 50 for testing. They deployed the UNet CNN architecture to segment the tumor region and achieved an IoU of 0.93848. Adar et al. [45] conducted an experiment on 182 CT images of liver lesions (53 cysts, 64 metastases, and 65 hemangiomas). They employed a CNN architecture combined with a GAN model to detect the lesions and attained 92.4% specificity and 85.7% sensitivity using the augmented dataset.
Li et al. [46] performed liver lesion segmentation on 26 portal-phase enhanced CT images acquired from Zhujiang Hospital. They generated 1 million patches of size 17 × 17 for training, deployed a CNN, and compared the proposed system with existing machine learning techniques. The system achieved an overall mean similarity coefficient of 80.06 ± 1.63%, precision of 82.67 ± 1.43%, and recall of 84.34 ± 1.61%. Cohen et al. [47] performed semantic segmentation using UNet to detect the lesions, and a Dice score of 83% was reported.
Schmauch et al. [48] investigated focal liver lesions with the aid of deep learning techniques, detecting them and classifying them into two classes (malignant and benign). They experimented on 367 ultrasound images. ResNet50 was deployed to extract a feature vector of size 2048. The liver lesions were detected using a local prediction technique in which annotations were provided to the model, while lesion characterization was accomplished with a densely connected layer of 7 neurons. The proposed system achieved mean ROC-AUC scores of 0.935 for lesion detection and 0.916 for lesion characterization.
Doğantekin et al. [49] extracted features with a 5-layer convolutional neural network and fed them to SVM, KNN, and ELM classifiers. The ELM outperformed the others with a reported accuracy of 97.3%. They concluded that the proposed system reduced the CNN execution time and hard disk space while improving classification performance. Vorontsov et al. [77] performed liver segmentation and detection using a fully convolutional network (FCN). They experimented on the Liver Tumor Segmentation (LiTS) challenge dataset, which contains CT images of patients suffering from colorectal liver metastases, and reported a highest Dice similarity coefficient of 0.68.
Christ et al. [78] segmented the liver and its lesions with the aid of cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs), based on the UNet architecture. The ROI of the liver was detected by the first FCN and fed into the second FCN for lesion segmentation; the results were then refined using CRFs. They worked on the 3DIRCADb abdominal CT dataset and reported a 94% Dice score. In 2017, they repeated the same experiment on MRI images for liver and tumor segmentation [79], experimenting on 38 MRI images and achieving a Dice score of over 94%. Meanwhile, Sun et al. [80] segmented liver tumors by employing a multi-channel FCN on CT images, where the feature vector was created by fusing features from different channels.
Han [81] proposed a deep convolutional neural network (DCNN) for the segmentation of liver lesions. The proposed system achieved a Dice score of 0.67 on the 70 test CT scans of the LiTS (Liver Tumor Segmentation Challenge) dataset. Li et al. [82] worked on the MICCAI 2017 datasets, i.e., 3DIRCADb and LiTS. They designed the H-DenseUNet architecture for the segmentation of liver and tumor. A global Dice score of 96.5% was achieved on the LiTS dataset, while on the 3DIRCADb dataset Dice scores of 0.937 for tumor segmentation and 0.982 for liver segmentation were achieved.
Ben-Cohen [50] deployed an FCN for the detection and segmentation of liver lesions. The experiment was conducted on 333 annotated CT images. They proposed a deep learning model based on the UNet framework and attained a Dice score of 83%.
3.4.4 ii. Kidney
In this section we present the feature extraction, segmentation, and classification techniques for abdominal images related to the kidney. Kline et al. [51] diagnosed polycystic kidney disease (PKD) using MRI images, with 2000 cases for training and 400 for testing. They performed semantic segmentation by deploying the UNet architecture and reported Dice scores of 0.96 using a single network and 0.97 in a multi-observer setting.
Zheng et al. [83] detected congenital abnormalities of the kidney and urinary tract in children. The graph-cuts method was used for kidney segmentation. They extracted conventional features such as geometrical features and the histogram of oriented gradients (HOG), as well as CNN-based features, and used a linear SVM for classification. Accuracies of 81%, 84%, and 87% were reported on the right, left, and bilateral kidneys, respectively. In [84], the imagenet-caffe-alex model was deployed to explore transfer learning techniques, extracting both handcrafted and automated features. The authors claimed that integrating handcrafted and automated features improves classification performance and reported a classification accuracy of 0.87 ± 2.1 on the combined features.
Yin et al. [52] extracted features from a pretrained VGG-16 and fine-tuned them through DeepLab. They extracted the kidney boundary using a distance regression network and performed semantic segmentation; their proposed system achieved 98% accuracy. Kuo et al. [53] applied transfer learning using a ResNet model and achieved an overall accuracy of 85.6%.
Kannan et al. [85] employed a CNN model for the classification of normal, non-glomerular, and globally sclerosed images. They collected 275 trichrome-stained images from 171 chronic kidney disease patients at Boston Medical Center and attained 92.67% accuracy on the test set.
Al Imran et al. [54] diagnosed chronic kidney disease using deep learning, logistic regression, and feed-forward neural networks. They reported F1-scores of 0.97 for the deep learning model, 0.95 for logistic regression, and 0.99 for the feed-forward neural network. Salehinejad [55] performed 3D semantic segmentation using deep convolutional neural networks (DCNNs). They compared FCNs with DCNNs and reported a highest Dice similarity coefficient of 98.40% on the cylindrical transform (CLT) using GoogLeNet.
Marsh et al. [56] used a deep learning model for the classification of non-sclerosed and sclerosed glomeruli from 48 whole-slide images of donor kidney biopsies. A pre-trained VGG16 CNN was used both as a patch-based and as a fully convolutional model. Using the fully convolutional CNN, they attained IoU of 0.9766 on tubulointerstitium, 0.5949 on non-sclerosed, and 0.3560 on sclerosed glomeruli; with the patch-based model, IoU of 0.9160, 0.2017, and 0.0713 was reported on the same three classes.
Bevilacqua et al. [86] performed semantic segmentation using a deep learning model. They experimented on 155 MRI images from four patients, deploying an encoder-decoder CNN on the full image and on the region of interest for classification. Their architecture achieved 86% accuracy on the full image and 84% accuracy on the region of interest. Pedraza et al. [57] employed a pretrained AlexNet for the classification of glomerulus versus non-glomerulus images. They conducted an experiment on 10,600 region-of-interest (ROI) images from 40 whole-slide images, and a highest F-score of 0.999 was reported.
Sharma et al. [87] experimented on 244 CT scans of Autosomal Dominant Polycystic Kidney Disease (ADPKD) patients. They trained a fully convolutional network for segmentation on slice-wise axial CT sections, and their system achieved an overall Dice similarity coefficient of 0.86 ± 0.07.
3.4.5 iii. Pancreas
The pancreas is one of the major organs of the abdomen, and computer scientists and machine learning researchers have produced a significant body of work using pancreatic images. Here we present some of the notable contributions.
Roth et al. [58] performed an experiment on contrast-enhanced CT images of 82 patients for pancreas segmentation. A ConvNet was deployed, and a Dice score of 68% was achieved on the test set. In [34], they deployed multi-level deep convolutional networks (ConvNets) on patches and regions and reported a Dice similarity coefficient of 71.8 ± 10.7% on the test set.
Sekaran et al. [59] classified pancreatic cancer from 1900 images of a dataset obtained from the Cancer Imaging Archive (TCIA). They combined a Gaussian Mixture Model (GMM) with the Expectation Maximization (EM) algorithm for feature extraction. The region of interest was detected by a CNN, and the combined system achieved a recognition rate of 99.9%.
Oktay et al. [60] deployed a UNet model for pancreas segmentation. They conducted an experiment on 150 abdominal 3D CT scans of the TCIA CT Pancreas dataset and attained a DSC of 81.48 ± 6.23 for pancreas labels.
Li et al. [88] diagnosed pancreatic cysts using densely-connected convolutional networks (DenseNet) without pre-segmenting the lesions. They worked on images of 206 patients and achieved an overall accuracy of 72.8%.
Zhu et al. [89] used ResNet for 3D coarse-to-fine segmentation of the pancreas. They conducted experiments on the NIH and JHMI Pathological Pancreas datasets and reported a DSC of 84.59 ± 4.86% on the NIH dataset. In [90], pancreatic cancer, i.e., pancreatic ductal adenocarcinoma (PDAC), was identified using 439 CT scans. A multi-scale CNN was developed for tumor segmentation, achieving 56.46% accuracy, 94.1% sensitivity, and 98.5% specificity.
Man et al. [91] proposed a Deep Q Network (DQN) for pancreas segmentation using a deformable UNet. They worked on the NIH dataset and attained a mean Dice coefficient of 86.93 ± 4.92.
3.4.6 iv. Stomach
The stomach, one of the foremost abdominal organs, plays a significant role in the human body. Computer vision and machine learning researchers have deployed several CNN architectures for feature extraction, segmentation, and classification using stomach images. Here we present the notable contributions.
Shichijo et al. [61] deployed a GoogLeNet CNN for the classification of Helicobacter pylori status as positive or negative, and then also classified 8 different anatomical locations of the stomach using a second GoogLeNet. They reported accuracy, sensitivity, and specificity of 83.1%, 81.9%, and 83.4% for the first CNN and 87.7%, 88.9%, and 87.4% for the second CNN, respectively.
Garcia et al. [62] detected lymphocytes in gastric cancer using immunohistochemistry (IHC) images. A deep CNN model was trained with the ADAM optimizer, and an accuracy of 94% was attained.
Horie et al. [63] gathered 8428 training images and 1118 test images from the Cancer Institute Hospital of Japan for the detection of esophageal cancer. They developed a time-efficient CNN model and achieved 98% accuracy.
Itoh et al. [64] employed GoogLeNet for the detection of H. pylori infection and reported sensitivity, specificity, and AUC of 86.7%, 86.7%, and 0.956, respectively.
Zhang [92] constructed an efficient CNN model named GDPNet for the classification of gastric precancerous diseases, collecting 1331 gastroscopy images from Sir Run Run Shaw Hospital. GDPNet classified images into three classes (polyp, ulcer, and erosion) and accomplished 88.90% accuracy.
Lee et al. [93] performed deep learning classification using transfer learning with VGGNet, ResNet, and Inception models. They collected images from Gil Hospital and built a dataset of 367 cancer, 200 normal, and 220 ulcer images. They reported a highest accuracy of 0.9649 with ResNet-50 for the classification of the normal and cancer classes.
Li et al. [65] experimented on the publicly available BOT gastric slice dataset. They developed a CNN-based architecture called GastricNet for the identification of gastric cancer, achieving average accuracies of 97.93% on patches and 100% on slices.
Takiyama et al. [94] deployed the GoogLeNet architecture for the classification of four anatomical locations and the sub-classification of stomach images into three regions. They conducted experiments on 27,335 EGD images for training and 17,081 images for validation, achieving an AUC of 0.99 for stomach and duodenum images and 1.00 for larynx and esophagus images.
Zhu et al. [66] used transfer learning with ResNet50 and developed a CNN-CAD system for the diagnosis of gastric cancer. They experimented on 790 training images and 203 test images; rotation was performed as a data augmentation technique to increase the number of samples. They reported an overall accuracy of 89.16%. The qualitative analysis of deep learning based classification techniques used in abdominal imaging systems from 2017 to 2019 is reported in Table 5.
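Rotation-based augmentation of the kind used by Zhu et al. [66] can be sketched in its simplest (90-degree) form; a real medical imaging pipeline would typically also use arbitrary-angle rotations with interpolation. The following is a hedged, stdlib-only sketch:

```python
# Hedged sketch of rotation augmentation: generating the three 90-degree
# rotations of a 2D image (given as a list of rows) to enlarge a training set.

def rotate90(image):
    """Rotate a 2D list of pixel values 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment_with_rotations(image):
    """Return the original image plus its 90/180/270-degree rotations."""
    out = [image]
    for _ in range(3):
        out.append(rotate90(out[-1]))
    return out

# Toy 2x2 "image": one sample becomes four training samples.
img = [[1, 2],
       [3, 4]]
augmented = augment_with_rotations(img)
```

Each original sample thus yields four training samples, which is one way a small annotated medical dataset can be stretched before training a CNN.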
4 Performance evaluation metrics
The effectiveness of any abdominal image segmentation or classification system is evaluated by computing measures based on four major outcomes: true positives (tp), false positives (fp), true negatives (tn), and false negatives (fn). The performance of a proposed system is computed using the following measures:
Accuracy
is used to determine how well the proposed system assigns the correct classes. For a test set, accuracy is the proportion of true positives and true negatives among all evaluated cases, computed as:

Accuracy = (tp + tn) / (tp + tn + fp + fn)
Sensitivity or Recall
measures the ability of the system to correctly identify positive cases, i.e., the proportion of actual positives that are detected. It is calculated as:

Sensitivity = tp / (tp + fn)
Specificity
is the ability of the model to correctly identify actual negative cases and is computed as:

Specificity = tn / (tn + fp)
Precision
is the proportion of positive predictions that are truly relevant and is calculated as:

Precision = tp / (tp + fp)
F-score or F1 score or F-measure
is used to measure the accuracy of a test. It is the harmonic mean of precision and recall:

F1 = (2 × Precision × Recall) / (Precision + Recall)
Dice coefficient
also called the overlap index or Sørensen–Dice coefficient, is a metric for validating medical image segmentation. The pair-wise overlap of two segmentations A and B (e.g. repeated segmentations, or a prediction against the ground truth) is calculated using the Dice coefficient, defined as:

Dice = 2|A ∩ B| / (|A| + |B|)

which, in terms of the outcome counts, equals 2tp / (2tp + fp + fn).
IOU
also known as the Jaccard index, is computed by dividing the number of pixels on which the predicted and annotated labels agree (intersection) by the total number of pixels assigned that label in either map (union):

IoU = |A ∩ B| / |A ∪ B| = tp / (tp + fp + fn)
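The measures above follow directly from the four outcome counts, and Dice/IoU from binary masks. The following is a minimal sketch (not tied to any particular paper in this survey); the example counts and masks are hypothetical:

```python
# Hedged sketch: the standard evaluation measures computed from the four
# outcome counts (tp, fp, tn, fn) and, for segmentation, from binary masks.

def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, precision, f1

def dice_and_iou(pred, truth):
    """pred, truth: flat binary lists (1 = organ pixel, 0 = background)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    dice = 2 * inter / (sum(pred) + sum(truth))
    iou = inter / (sum(pred) + sum(truth) - inter)
    return dice, iou

# Hypothetical confusion counts and toy 4-pixel masks.
acc, rec, spec, prec, f1 = classification_metrics(tp=80, fp=10, tn=90, fn=20)
dice, iou = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers report either one.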
Existing models were usually evaluated with the aforementioned performance measures, as illustrated in Table 7. However, recent research has introduced three further measures to evaluate the similarity of image regions: pixel-wise accuracy of the segmentation, region similarity, and structure similarity [115]. We elaborate these recent evaluation measures for researchers who will work in this domain.
Region Similarity
measures the similarity of two region maps. It is computed as:

Fβ = ((1 + β²) × Precision × Recall) / (β² × Precision + Recall)
where β² is the trade-off between precision and recall; its suggested value is 0.3 [116].
Pixel-wise Accuracy
computes the normalized mean absolute error between the predicted map (M) and the ground-truth map (G):

MAE = (1 / (W × H)) Σ(x,y) |M(x, y) − G(x, y)|

where W is the width and H is the height of the image.
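As a hedged sketch of the two measures above (region similarity with the suggested β² = 0.3, and pixel-wise mean absolute error between the predicted and ground-truth maps), using toy hypothetical values:

```python
# Hedged sketch of two recent measures: F-beta region similarity
# (beta^2 = 0.3 as suggested in [116]) and pixel-wise mean absolute error.

def region_similarity(precision, recall, beta_sq=0.3):
    """Weighted F-measure over a predicted region map."""
    return (1 + beta_sq) * precision * recall / (beta_sq * precision + recall)

def mean_absolute_error(M, G):
    """M, G: 2D lists of per-pixel values in [0, 1] (predicted / ground map)."""
    h, w = len(M), len(M[0])
    return sum(abs(M[y][x] - G[y][x]) for y in range(h) for x in range(w)) / (w * h)

# Toy hypothetical values: precision/recall of a region map, and a 1x2 map pair.
f_beta = region_similarity(precision=0.8, recall=0.9)
mae = mean_absolute_error([[0.0, 0.5]], [[0.0, 1.0]])
```

With β² < 1, the measure weights precision more heavily than recall, which is the stated rationale for the 0.3 setting in the saliency literature [116].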
Structure similarity
also known as the enhanced alignment measure, captures the structural similarity between the regions and objects of the predicted map and the ground truth [117]:

Eξ = (1 / (W × H)) Σ(x,y) ξ(M(x, y), G(x, y))

where ξ is the enhanced alignment function evaluated at each pixel.
The overall qualitative state of the art is presented in Table 7, which shows the publication year, abdominal type, dataset, modality of the images, features, models, and performance evaluation. The list of abbreviations used in the related work and in Table 7 is presented in Table 6.
5 Main findings
This survey presents the emerging landscape of deep learning techniques in the domain of abdominal imaging, covering both multi-organ and single-organ systems, and is useful for computer-aided diagnosis applications. We discuss various deep learning based feature extraction, segmentation, and classification techniques for multi-organ imaging and for four single abdominal organs: liver, kidney, pancreas, and stomach. This survey draws an integrated picture of how distinct healthcare activities are accomplished in a pipeline to benefit individual patients from multiple perspectives. Existing reviews did not provide such a detailed treatment of both multi-organ and single-organ deep learning models.
We have performed two types of analysis to show the trend of the research direction: quantitative and qualitative. In the quantitative analysis, we used the Google Scholar search engine to find the most cited papers between 2017 and 2019 on multi-organ and single-organ imaging. Google Scholar returned 983 publications for the query "multiorgan abdominal images segmentation/classification using deep learning models". The most cited paper was presented by Gibson et al. [33] with 108 citations. The same query was used for single organs by replacing "multiorgan" with liver, kidney, pancreas, and stomach, yielding 8000 publications on the liver, 5400 on the kidney, 2000 on the pancreas, and 4300 on the stomach. On deeper inspection, many publications are duplicated across the search queries. The most cited liver paper was presented by Li et al. [82] with 186 citations. The 2017 publication by Hu et al. [118], with 86 citations, is the most cited kidney paper. Oktay et al. [60] presented the attention UNet model for pancreas segmentation, the most popular pancreas publication with 169 citations. Sharma et al. [119] presented a CNN model for the classification of gastric carcinoma, the most cited stomach paper with 87 citations. Overall, we estimate that there are more than 500 distinct publications on abdominal images, whether multi-organ or single-organ. A great deal of work has thus been presented in this domain, so our survey focuses on the most notable contributions, especially deep learning based techniques.
In the qualitative analysis, we have comprehensively presented the publicly available online datasets and the preprocessing, feature extraction, segmentation, and classification techniques for multi-organ and single-organ imaging in Section 3. The publicly available datasets are listed in Table 2 along with their web URLs. The qualitative techniques of preprocessing, feature extraction, and classification from 2017 to 2019 are reported in Tables 3, 4, and 5. Furthermore, we present the notable contributions in Table 7, which lists the publication year, dataset type (multi-organ or single-organ), image modality, dataset, CNN model, and performance measures.
6 Current Trends, Limitations and Challenges
This review provides details of the deep learning techniques deployed for abdominal images, whether multi-organ or single-organ, and discussed the deep learning based literature on five abdominal organs (liver, kidney, stomach, pancreas, and intestines). Deep learning models provide tremendous results and improvements in this research domain; however, there is still room for improvement. In this section we highlight some of the notable challenges of the domain that pave the path for future research, graphically illustrated in Fig. 4.
Starting from image acquisition, the most pressing challenge is whole-body organ scanning. In multi-organ analysis, several organs and their structures need to be captured simultaneously, and the appearance, shape, and size of abdominal organs vary among patients, which makes scanning challenging. It is also notable that the small abdominal organs are investigated less often than the major organs, although in clinical applications small organs play a significant role in diagnosis; cancer screening, for example, relies on diagnosing the small organs. There is a need for an optimal acquisition device that overcomes these challenges. Another important challenge regarding CT scans of the abdomen is the artifacts that can arise in the images: beam-hardening, partial-volume, and streak artifacts. Beam-hardening artifacts appear as focal regions of low attenuation contiguous to bones. Partial-volume artifacts produce blurred edges resulting from the spatial averaging of distinct abdominal tissues in close proximity. Streak artifacts arise from patient motion caused by respiratory, cardiac, and peristaltic movements. Another important aspect is that several abdominal organs have similar gray levels, homogeneous image slices, and similar visual appearance, which consigns thresholding to limited utility.
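To illustrate why global intensity thresholding has limited utility here, consider Otsu's classical method, which picks the threshold maximizing between-class variance: it works when two tissues form well-separated intensity clusters, but fails when organs share overlapping gray-level ranges, as described above. A hedged, stdlib-only sketch (the toy intensities are hypothetical):

```python
# Hedged sketch of Otsu's global thresholding. When two organs share similar
# gray levels, their histograms overlap and no single threshold separates
# them cleanly -- the limitation noted in the text above.

def otsu_threshold(pixels, levels=256):
    """Return the gray level t maximizing between-class variance;
    pixels <= t are treated as one class, pixels > t as the other."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]                 # background weight (pixels <= t)
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground weight (pixels > t)
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two well-separated intensity clusters: Otsu finds a threshold between them.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

If the two clusters were instead, say, centered at 95 and 105 with wide spread (as for adjacent soft tissues), the resulting threshold would misclassify many pixels of both organs, motivating learned segmentation models over intensity thresholding.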
One of the notable challenges is the availability of public datasets. Researchers have mostly worked on private datasets collected from different hospitals. Patients often feel insecure about providing their data, so only small numbers of samples are available for experiments. Similarly, annotated data are limited, which is a recurrent challenge in medical imaging. This challenge mostly arises when modeling the anatomical structures of multiple organs, for which large datasets are required not only to locate each organ at its particular location but also to capture the complex inter-organ relations in abdominal images. Deep learning models likewise require enormous amounts of training data to provide the best performance. In the case of abdominal images, only a few data repositories exist, such as VISCERAL [120], the NIH Cancer Imaging Archive [35], and the UK Biobank Imaging Study [121]. These repositories do not contain data for all abdominal organs, although combining organ data has been attempted. Another limitation of these repositories is that annotations are not provided with most of the images, which is a critical challenge for anatomy and computer-aided systems. Therefore, there is a need to develop large datasets with proper annotations for multi-organ analysis.
Moving towards multi-organ segmentation of abdominal images, annotation is one of the hottest challenges. Large variation exists in the size, shape, location, and appearance of the organs, so locating the anatomical structures in the target image is challenging. Another important factor is defining the fuzzy boundaries of adjacent organs and soft tissues of the abdomen; this is difficult because of the similar visual appearance and gray-level intensities of the organs. One more challenge is ensuring global spatial consistency when labeling or annotating the patches.
Another common challenge facing researchers is the high inter-subject variability of abdominal organs due to differences in disease status, age, physique, gender, and intricate inter-organ relations. Further challenges arise from the respiratory cycle, body pose, the status of the digestive system, edema, etc. Another important challenge is the inherent inter-organ and intra-organ variability that develops with age and body growth. It is one of the hottest challenges for future computer-aided and computational anatomy systems because it requires the development of more comprehensive multi-organ systems that characterize the organs across the diversity of age [122] and fetal stage [123].
7 Conclusion and future recommendations
In this survey paper, we reviewed deep learning techniques for abdominal image classification. The effective implementation of organ segmentation from abdominal images can aid scientific workflows in multiple clinical domains such as computer-assisted diagnostic intervention, treatment planning and delivery, surgical planning and delivery, intraoperability, radiation therapy planning, and so on. These potential benefits have recently encouraged interest in the development of more comprehensive computational anatomical models. Therefore, in addition to multi-organ systems, we also surveyed five single abdominal organs, i.e., liver, kidney, stomach, pancreas, and intestines. We highlighted some of the notable challenges of the domain in the previous section that pave the path for future research. Here we present some directions that will be helpful for future researchers in the domain.
There is a need for an optimal acquisition device that can investigate the small organs for cancer screening and minimize artifacts in the images. Next, we suggest exploring improved image enhancement and preprocessing algorithms that extend the utility of thresholding, since several abdominal organs have similar gray levels, homogeneous image slices, and similar visual appearance, which remains a hurdle in this domain. With the advancement of image processing, many thresholding algorithms now tackle this challenge. There is a critical need to develop publicly available datasets for researchers, as this is one of the foremost requirements for any machine learning or computer vision task. It can also be noted that the available datasets contain few data samples, which is an obstacle to deploying deep learning models. Similarly, annotated data are limited, which is a recurrent challenge in medical imaging. Therefore, there is a need to develop large datasets with proper annotations for multi-organ analysis.
With the rise of technological innovation and personalised medicine, big data analytics has the potential to make a huge impact on our lives, i.e., on how we predict, prevent, manage, treat, and cure disease. Big data is the name given to datasets so large and complex that traditional information processing techniques cannot handle them. It helps government agencies, policy makers, and hospitals to manage resources, improve medical research, plan preventative measures, and manage epidemics [124]. We suggest exploring and investigating the application of big data in the domain of abdominal imaging systems.
Furthermore, there is a need to explore recent CNN architectures such as Local Estimation and Global Search (LEGS) [125], Multi-Context (MC) deep learning [126], Multiscale Deep Features (MDF) [127], Deep Contrast Learning (DCL) for salient object detection [128], Encoded Low-Level Distance map (ELD) [129], the Deep Hierarchical Saliency (DHS) network for salient object detection [130], Recurrent Fully Convolutional Networks (RFCN) [131], Deep Image Saliency Computing (DISC) [132], Integrating Multi-level Cues (IMC) [133], Non-Local Deep Features (NLDF) for salient object detection [134], Aggregating Multi-level Convolutional Features for Salient Object Detection (AmuletNet) [135], Deep Saliency (DS) [136], WSS [137], and MSR [138]. Another important technique in medical imaging is the attention mechanism, which needs to be explored in abdominal imaging systems.
At the end of the article, we graphically represent the future directions in Fig. 5. Glancing at Fig. 5, it can be concluded that although extensive research has been conducted, much more is necessary before the problem of abdominal image classification and segmentation can be considered largely solved.
References
VanGinneken B, Schaefer-Prokop CM, Prokop M (2011) Computer-aided diagnosis: how to move from the laboratory to the clinic. Radiology 261 (3):719–732
Sykes J (2014) Reflections on the current status of commercial automated segmentation systems in clinical practice. Journal of medical radiation sciences 61(3):131–134
Bengio Y, Courville AC, Vincent P (2012) Unsupervised feature learning and deep learning: a review and new perspectives. CoRR abs/1206.5538
Gross RE (1948) A new method for surgical treatment of large omphaloceles. Surgery 24(2):277–292
Kron IL, Harman PK, Nolan SP (1984) The measurement of intra-abdominal pressure as a criterion for abdominal re-exploration. Annals of Surgery 199(1):28
Malbrain MLNG, Cheatham ML, Kirkpatrick A, Sugrue M, Parr M, De Waele J, Balogh Z, Leppäniemi A, Olvera C, Ivatury R, et al. (2006) Results from the international conference of experts on intra-abdominal hypertension and abdominal compartment syndrome. I. Definitions. Intensive Care Medicine 32(11):1722–1732
Cheatham ML, Malbrain MLNG, Kirkpatrick A, Sugrue M, Parr M, De Waele J, Balogh Z, Leppäniemi A, Olvera C, Ivatury R, et al. (2007) Results from the international conference of experts on intra-abdominal hypertension and abdominal compartment syndrome. II. Recommendations. Intensive Care Medicine 33(6):951–962
Kirkpatrick AW, Roberts DJ, De Waele J, Jaeschke R, Malbrain MLNG, De Keulenaer B, Duchesne J, Bjorck M, Leppaniemi A, Ejike JC, et al. (2013) Intra-abdominal hypertension and the abdominal compartment syndrome: updated consensus definitions and clinical practice guidelines from the World Society of the Abdominal Compartment Syndrome. Intensive Care Medicine 39(7):1190–1206
Liu CN, Fatemi M, Waag RC (1983) Digital processing for improvement of ultrasonic abdominal images. IEEE transactions on medical imaging 2 (2):66–75
Mharib AM, Ramli AR, Mashohor S, Mahmood RB (2012) Survey on liver CT image segmentation methods. Artif Intell Rev 37(2):83–95
Priyadarsini S, Selvathi D (2012) Survey on segmentation of liver from CT images. In: 2012 IEEE international conference on advanced communication control and computing technologies (ICACCCT), IEEE, pp 234–238
Campadelli P, Casiraghi E, Esposito A (2009) Liver segmentation from computed tomography scans: a survey and a new algorithm. Artificial intelligence in medicine 45(2-3):185–196
Sindhuja D, Priyadarsini RJ (2016) A survey on classification techniques in data mining for analyzing liver disease disorder. International Journal of Computer Science and Mobile Computing 5(5):483–488
Kumar MK, Sreedevi M, Reddy YCAP (2018) Survey on machine learning algorithms for liver disease diagnosis and prediction. International Journal of Engineering and Technology (UAE) 7:99–102
Kefelegn S, Kamat P (2018) Prediction and analysis of liver disorder diseases by using data mining technique: survey. International Journal of Pure and Applied Mathematics 118(9):765–770
Singh A, Pandey B (2014) Intelligent techniques and applications in liver disorders: a survey. Int J Biomed Eng Technol 16(1):27–70
Huang Q, Zhang F, Li X (2018) Machine learning in ultrasound computer-aided diagnostic systems: a survey. BioMed research international, 2018
Wolz R, Chu C, Misawa K, Fujiwara M, Mori K, Rueckert D (2013) Automated abdominal multi-organ segmentation with subject-specific atlas generation. IEEE transactions on medical imaging 32(9):1723–1730
Chu C, Oda M, Kitasaka T, Misawa K, Fujiwara M, Hayashi Y, Nimura Y, Rueckert D, Mori K (2013) Multi-organ segmentation based on spatially-divided probabilistic atlas from 3D abdominal CT images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 165–172
Iglesias JE, Sabuncu MR (2015) Multi-atlas segmentation of biomedical images: a survey. Medical image analysis 24(1):205–219
Cerrolaza JJ, Reyes M, Summers RM, González-Ballester MA, Linguraru MG (2015) Automatic multi-resolution shape modeling of multi-organ structures. Medical image analysis 25(1):11–21
Okada T, Linguraru MG, Hori M, Summers RM, Tomiyama N, Sato Y (2015) Abdominal multi-organ segmentation from CT images using conditional shape–location and unsupervised intensity priors. Medical image analysis 26(1):1–18
Wang Z, Bhatia KK, Glocker B, Marvao A, Dawes T, Misawa K, Mori K, Rueckert D (2014) Geodesic patch-based segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 666–673
Xu Z, Burke RP, Lee CP, Baucom RB, Poulose BK, Abramson RG, Landman BA (2015) Efficient multi-atlas abdominal segmentation on clinically acquired ct with simple context learning. Medical image analysis 24 (1):18–27
Tong T, Wolz R, Wang Z, Gao Q, Misawa K, Fujiwara M, Mori K, Hajnal JV, Rueckert D (2015) Discriminative dictionary learning for abdominal multi-organ segmentation. Medical image analysis 23(1):92–104
Suzuki M, Linguraru MG, Okada K (2012) Multi-organ segmentation with missing organs in abdominal ct images. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 418–425
Shimizu A, Ohno R, Ikegami T, Kobatake H, Nawano S, Smutek D (2007) Segmentation of multiple organs in non-contrast 3d abdominal ct images. International journal of computer assisted radiology and surgery 2(3-4):135–142
Park H, Bland PH, Meyer CR (2003) Construction of an abdominal probabilistic atlas and its application in segmentation. IEEE Transactions on medical imaging 22(4):483–492
Campadelli P, Casiraghi E, Pratissoli S, Lombardi G (2009) Automatic abdominal organ segmentation from ct images. ELCVIA: electronic letters on computer vision and image analysis 8(1):1–14
Saxena S, Sharma N, Sharma S, Singh SK, Verma A (2016) An automated system for atlas based multiple organ segmentation of abdominal ct images. Journal of Advances in Mathematics and Computer Science 12(1):1–14
He B, Huang C, Jia F (2015) Fully automatic multi-organ segmentation based on multi-boost learning and statistical shape model search.. In: VISCERAL Challenge@ ISBI, pp 18–21
Lombaert H, Zikic D, Criminisi A, Ayache N (2014) Laplacian forests: Semantic image segmentation by guided bagging. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 496–504
Gibson E, Giganti F, Hu Y, Bonmati E, Bandula S, Gurusamy K, Davidson B, Pereira SP, Clarkson MJ, Barratt DC (2018) Automatic multi-organ segmentation on abdominal ct with dense v-networks. IEEE transactions on medical imaging 37(8):1822–1834
Roth HR, Lu L, Farag A, Shin H-C, Liu J, Turkbey EB, Summers RM (2015) Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In: International conference on medical image computing and computer-assisted intervention, Springer, pp 556–564
Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, et al. (2013) The cancer imaging archive (tcia): maintaining and operating a public information repository. Journal of digital imaging 26(6):1045–1057
Xu Z, Lee CP, Heinrich MP, Modat M, Rueckert D, Ourselin S, Abramson RG, Landman BA (2016) Evaluation of six registration methods for the human abdomen on clinically acquired ct. IEEE Trans Biomed Eng 63 (8):1563–1572
Zhou Y, Wang Y, Tang P, Bai S, Shen W, Fishman E, Yuille A (2019) Semi-supervised 3d abdominal multi-organ segmentation via deep multi-planar co-training. In: 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, pp 121–140
Zhou X, Ito T, Takayama R, Wang S, Hara T, Fujita H (2016) Three-dimensional ct image segmentation by combining 2d fully convolutional network with 3d majority voting. In: Deep Learning and Data Labeling for Medical Applications. Springer, pp 111–120
González G, Washko GR, Estépar R SJ (2018) Multi-structure segmentation from partially labeled datasets. application to body composition measurements on ct scans. In: Image Analysis for Moving Organ, Breast, and Thoracic Images. Springer, pp 215–224
Regan EA, Hokanson JE, Murphy JR, Make B, Lynch DA, Beaty TH, Curran-Everett D, Silverman EK, Crapo JD (2011) Genetic epidemiology of copd (copdgene) study design. COPD: Journal of Chronic Obstructive Pulmonary Disease 7(1):32–43
Cheng PM, Malhi HS (2017) Transfer learning with convolutional neural networks for classification of abdominal ultrasound images. Journal of digital imaging 30(2):234–243
Roth HR, Oda H, Hayashi Y, Oda M, Shimizu N, Fujiwara M, Misawa K, Mori K (2017) Hierarchical 3d fully convolutional networks for multi-organ segmentation. arXiv preprint arXiv:1704.06382
Larsson M, Zhang Y, Kahl F (2018) Robust abdominal organ segmentation using regional convolutional neural networks. Appl Soft Comput 70:465–471
Gruber N, Antholzer S, Jaschke W, Kremser C, Haltmeier M (2019) A joint deep learning approach for automated liver and tumor segmentation. arXiv preprint arXiv:1902.07971
Frid-Adar M, Diamant I, Klang E, Amitai M, Goldberger J, Greenspan H (2018) Gan-based synthetic medical image augmentation for increased cnn performance in liver lesion classification. Neurocomputing 321:321–331
Li W, Jia F, Hu Q (2015) Automatic segmentation of liver tumor in ct images with deep convolutional neural networks. Journal of Computer and Communications 3(11):146
Ben-Cohen A, Klang E, Amitai MM, Goldberger J, Greenspan H (2018) Anatomical data augmentation for cnn based pixel-wise classification. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp 1096–1099
Schmauch B, Herent P, Jehanno P, Dehaene O, Saillard C, Aubé C, Luciani A, Lassau N, Jégou S (2019) Diagnosis of focal liver lesions from ultrasound using deep learning. Diagnostic and Interventional Imaging
Doğantekin A, Özyurt F, Avcı E, Koç M (2019) A novel approach for liver image classification: Ph-c-elm. Measurement 137:332–338
Ben-Cohen A, Diamant I, Klang E, Amitai M, Greenspan H (2016) Fully convolutional network for liver segmentation and lesions detection. In: Deep Learning and Data Labeling for Medical Applications. Springer, pp 77–85
Kline TL, Korfiatis P, Edwards ME, Blais JD, Czerwiec FS, Harris PC, King BF, Torres VE, Erickson BJ (2017) Performance of an artificial multi-observer deep neural network for fully automated segmentation of polycystic kidneys. Journal of digital imaging 30(4):442–448
Yin S, Zhang Z, Li H, Peng Q, You X, Furth SL, Tasian GE, Fan Y (2019) Fully-automatic segmentation of kidneys in clinical ultrasound images using a boundary distance regression network. arXiv preprint arXiv:1901.01982
Kuo C-C, Chang C-M, Liu K-T, Lin W-K, Chiang H-Y, Chung C-W, Ho M-R, Sun P-R, Yang R-L, Chen K-T (2019) Automation of the kidney function prediction and classification through ultrasound-based kidney imaging using deep learning. npj Digital Medicine 2(1):29
Al Imran A, Amin MN, Johora FT (2018) Classification of chronic kidney disease using logistic regression, feedforward neural network and wide & deep learning. In: 2018 International Conference on Innovation in Engineering and Technology (ICIET), IEEE, pp 1–6
Salehinejad H, Naqvi S, Colak E, Barfett J, Valaee S (2018) Cylindrical transform: 3d semantic segmentation of kidneys with limited annotated images. In: 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE, pp 539–543
Marsh JN, Matlock MK, Kudose S, Liu T-C, Stappenbeck TS, Gaut JP, Swamidass SJ (2018) Deep learning global glomerulosclerosis in transplant kidney frozen sections. IEEE transactions on medical imaging 37(12):2718–2728
Pedraza A, Gallego J, Lopez S, Gonzalez L, Laurinavicius A, Bueno G (2017) Glomerulus classification with convolutional neural networks. In: Annual Conference on Medical Image Understanding and Analysis, Springer, pp 839–849
Roth HR, Farag A, Lu L, Turkbey EB, Summers RM (2015) Deep convolutional networks for pancreas segmentation in ct imaging. In: Medical Imaging 2015: Image Processing, vol 9413, International Society for Optics and Photonics, p 94131G
Sekaran K, Chandana P, Krishna NM, Kadry S (2019) Deep learning convolutional neural network (cnn) with gaussian mixture model for predicting pancreatic cancer. Multimed Tools Appl 79:1–15
Oktay O, Schlemper J, Folgoc LL, Lee M, Heinrich M, Misawa K, Mori K, McDonagh S, Hammerla NY, Kainz B, et al. (2018) Attention u-net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999
Shichijo S, Nomura S, Aoyama K, Nishikawa Y, Miura M, Shinagawa T, Takiyama H, Tanimoto T, Ishihara S, Matsuo K, et al. (2017) Application of convolutional neural networks in the diagnosis of helicobacter pylori infection based on endoscopic images. EBioMedicine 25:106–111
Garcia E, Hermoza R, Castanon CB, Cano L, Castillo M, Castanneda C (2017) Automatic lymphocyte detection on gastric cancer ihc images using deep learning. In: 2017 IEEE 30th International Symposium on Computer-Based Medical Systems (CBMS), IEEE, pp 200–204
Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Ozawa T, Ishihara S, et al. (2019) Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointestinal endoscopy 89(1):25–32
Itoh T, Kawahira H, Nakashima H, Yata N (2018) Deep learning analyzes helicobacter pylori infection by upper gastrointestinal endoscopy images. Endoscopy international open 6(02):E139–E144
Li Y, Li X, Xie X, Shen L (2018) Deep learning based gastric cancer identification. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp 182–185
Zhu Y, Wang Q-C, Xu M-D, Zhang Z, Cheng J, Zhong Y-S, Zhang Y-Q, Chen W-F, Yao L-Q, Zhou P-H, et al. (2019) Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointestinal endoscopy 89(4):806–815
Rehman A, Naz S, Razzak MI (2019) Writer identification using machine learning approaches: a comprehensive review. Multimedia Tools and Applications 78(8):10889–10931
Bibi K, Naz S, Rehman A (2019) Biometric signature authentication using machine learning techniques: Current trends, challenges and opportunities. Multimed Tools Appl 79:1–52
Yang W, Lu Z, Yu M, Huang M, Feng Q, Chen W (2012) Content-based retrieval of focal liver lesions using bag-of-visual-words representations of single-and multiphase contrast-enhanced ct images. Journal of digital imaging 25(6):708–719
Wang J, Han X-H, Xu Y, Lin L, Hu H, Jin C, Chen Y-W (2017) Sparse codebook model of local structures for retrieval of focal liver lesions using multiphase medical images. International journal of biomedical imaging, 2017. https://doi.org/10.1155/2017/1413297
AlSadeque Z, Khan TI, Hossain QD, Turaba MY (2019) Automated detection and classification of liver cancer from ct images using hog-svm model. In: 2019 5th International Conference on Advances in Electrical Engineering (ICAEE), IEEE, pp 21–26
Pole R, Rajeswari P (2017) Analysis of liver anomalies in ct image using feature extraction method glrlm and phog algorithm. IJERT NLPGPS-17, 5(21)
Pal R, Saraswat M (2019) Histopathological image classification using enhanced bag-of-feature with spiral biogeography-based optimization. Appl Intell 49(9):3406–3424
Bevilacqua V, Pietroleonardo N, Triggiani V, Brunetti A, DiPalma AM, Rossini M, Gesualdo L (2017) An innovative neural network framework to classify blood vessels and tubules based on haralick features evaluated in histological images of kidney biopsy. Neurocomputing 228:143–153
Korkmaz SA, Bínol H, Akçiçek A, Korkmaz MF (2017) A expert system for stomach cancer images with artificial neural network by using hog features and linear discriminant analysis: Hog_lda_ann. In: 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), IEEE, pp 000327–000332
Korkmaz SA, Binol H (2018) Classification of molecular structure images by using ann, rf, lbp, hog, and size reduction methods for early stomach cancer detection. J Mol Struct 1156:255–263
Vorontsov E, Cerny M, Régnier P, Di Jorio L, Pal CJ, Lapointe R, Vandenbroucke-Menu F, Turcotte S, Kadoury S, Tang A (2019) Deep learning for automated segmentation of liver lesions at ct in patients with colorectal cancer liver metastases. Radiology: Artificial Intelligence 1(2):180014
Christ PF, Elshaer MEA, Ettlinger F, Tatavarty S, Bickel M, Bilic P, Rempfler M, Armbruster M, Hofmann F, D'Anastasi M, et al. (2016) Automatic liver and lesion segmentation in ct using cascaded fully convolutional neural networks and 3d conditional random fields. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 415–423
Christ PF, Ettlinger F, Grün F, Elshaera MEA, Lipkova J, Schlecht S, Ahmaddy F, Tatavarty S, Bickel M, Bilic P, et al. (2017) Automatic liver and tumor segmentation of ct and mri volumes using cascaded fully convolutional neural networks. arXiv preprint arXiv:1702.05970
Sun C, Guo S, Zhang H, Li J, Chen M, Ma S, Jin L, Liu X, Li X, Qian X (2017) Automatic segmentation of liver tumors from multiphase contrast-enhanced ct images based on fcns. Artificial intelligence in medicine 83:58–66
Han X (2017) Automatic liver lesion segmentation using a deep convolutional neural network method. arXiv preprint arXiv:1704.07239
Li X, Chen H, Qi X, Dou Q, Fu C-W, Heng P-A (2018) H-denseunet: hybrid densely connected unet for liver and tumor segmentation from ct volumes. IEEE transactions on medical imaging 37(12):2663–2674
Zheng Q, Furth SL, Tasian GE, Fan Y (2019) Computer-aided diagnosis of congenital abnormalities of the kidney and urinary tract in children based on ultrasound imaging data by integrating texture image features and deep transfer learning image features. Journal of pediatric urology 15(1):75–e1
Zheng Q, Tastan G, Fan Y (2018) Transfer learning for diagnosis of congenital abnormalities of the kidney and urinary tract in children based on ultrasound imaging data. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), IEEE, pp 1487–1490
Kannan S, Morgan LA, Liang B, Cheung MG, Lin CQ, Mun D, Nader RG, Belghasem ME, Henderson JM, Francis JM, et al. (2019) Segmentation of glomeruli within trichrome images using deep learning. Kidney International Reports
Bevilacqua V, Brunetti A, Cascarano GD, Palmieri F, Guerriero A, Moschetta M (2018) A deep learning approach for the automatic detection and segmentation in autosomal dominant polycystic kidney disease based on magnetic resonance images. In: International Conference on Intelligent Computing, Springer, pp 643–649
Sharma K, Rupprecht C, Caroli A, Aparicio MC, Remuzzi A, Baust M, Navab N (2017) Automatic segmentation of kidneys using deep learning for total kidney volume quantification in autosomal dominant polycystic kidney disease. Scientific reports 7(1):2049
Li H, Lin K, Reichert M, Xu L, Braren R, Fu D, Schmid R, Li J, Menze B, Shi K (2018) Differential diagnosis for pancreatic cysts in ct scans using densely-connected convolutional networks. arXiv preprint arXiv:1806.01023
Zhu Z, Xia Y, Shen W, Fishman EK, Yuille AL (2017) A 3d coarse-to-fine framework for automatic pancreas segmentation. arXiv preprint arXiv:1712.00201
Zhu Z, Xia Y, Xie L, Fishman EK, Yuille AL (2019) Multi-scale coarse-to-fine segmentation for screening pancreatic ductal adenocarcinoma. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp 3–12
Man Y, Huang Y, Feng J, Li X, Wu F (2019) Deep q learning driven ct pancreas segmentation with geometry-aware u-net. IEEE transactions on medical imaging
Zhang X, Hu W, Chen F, Liu J, Yang Y, Wang L, Duan H, Si J (2017) Gastric precancerous diseases classification using cnn with a concise model. PloS one 12(9):e0185508
Lee JH, Kim YJ, Kim YW, Park S, Choi Y, Kim YJ, Park DK, Kim KG, Chung J-W (2019) Spotting malignancies from gastric endoscopic images using deep learning. Surgical endoscopy, pp 1–8
Takiyama H, Ozawa T, Ishihara S, Fujishiro M, Shichijo S, Nomura S, Miura M, Tada T (2018) Automatic anatomical classification of esophagogastroduodenoscopy images using deep convolutional neural networks. Scientific reports 8(1):7497
Fu K-S, Mui JK (1981) A survey on image segmentation. Pattern recognition 13(1):3–16
Kumar N (2018) Thresholding in salient object detection: a survey. Multimedia Tools and Applications 77(15):19139–19170
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JA, van Ginneken B, Sánchez CI (2017) A survey on deep learning in medical image analysis. Medical image analysis 42:60–88
Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
Targ S, Almeida D, Lyman K (2016) Resnet in resnet: Generalizing residual architectures. arXiv preprint arXiv:1603.08029
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1–9
Huang G, Liu Z, van der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4700–4708
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818–2826
Chollet F (2017) Xception: Deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1251–1258
Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C (2018) Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4510–4520
Rehman A, Naz S, Razzak MI, Hameed IA (2019) Automatic visual features for writer identification: a deep learning approach. IEEE Access 7:17149–17157
Rehman A, Naz S, Razzak MI, Akram F, Imran M (2019) A deep learning-based framework for automatic brain tumors classification using transfer learning. Circuits, Systems, and Signal Processing, pp 1–19
Rehman A, Naz S, Naseem U, Razzak I, Hameed IA Deep autoencoder-decoder framework for semantic segmentation of brain tumor. Australian Journal of Intelligent Information Processing Systems, p 53
Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3431–3440
Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, Springer, pp 234–241
Jégou S, Drozdzal M, Vazquez D, Romero A, Bengio Y (2017) The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 11–19
Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2017) Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4):834–848
Takikawa T, Acuna D, Jampani V, Fidler S (2019) Gated-scnn: Gated shape cnns for semantic segmentation. In: Proceedings of the IEEE International Conference on Computer Vision, pp 5229–5238
Fan D-P, Cheng M-M, Liu J-J, Gao S-H, Hou Q, Borji A (2018) Salient objects in clutter: Bringing salient object detection to the foreground. In: Proceedings of the European conference on computer vision (ECCV), pp 186–202
Achanta R, Hemami S, Estrada F, Susstrunk S (2009) Frequency-tuned salient region detection. In: 2009 IEEE conference on computer vision and pattern recognition, IEEE, pp 1597–1604
Fan D-P, Gong C, Cao Y, Ren B, Cheng M-M, Borji A (2018) Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421
Hu P, Wu F, Peng J, Bao Y, Chen F, Kong D (2017) Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets. International journal of computer assisted radiology and surgery 12 (3):399–411
Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P (2017) Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput Med Imaging Graph 61:2–13
Jimenez-del Toro O, Müller H, Krenn M, Gruenberg K, Taha AA, Winterstein M, Eggel I, Foncubierta-Rodríguez A, Goksel O, Jakab A, et al. (2016) Cloud-based evaluation of anatomical structure segmentation and landmark detection algorithms: Visceral anatomy benchmarks. IEEE transactions on medical imaging 35(11):2459–2475
Sudlow C, Gallacher J, Allen N, Beral V, Burton P, Danesh J, Downey P, Elliott P, Green J, Landray M, et al. (2015) Uk biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS medicine 12(3):e1001779
de Bakker BS, de Jong KH, Hagoort J, de Bree K, Besselink CT, de Kanter FEC, Veldhuis T, Bais B, Schildmeijer R, Ruijter JM, et al. (2016) An interactive three-dimensional digital atlas and quantitative database of human development. Science 354(6315):aag0053
Gholipour A, Rollins CK, Velasco-Annis C, Ouaalam A, Akhondi-Asl A, Afacan O, Ortinau CM, Clancy S, Limperopoulos C, Yang E, et al. (2017) A normative spatiotemporal mri atlas of the fetal brain for automatic segmentation and analysis of early brain growth. Scientific reports 7(1):476
Rehman A, Naz S, Razzak I (2020) Leveraging big data analytics in healthcare enhancement: Trends, challenges and opportunities. arXiv preprint arXiv:2004.09010
Wang L, Lu H, Ruan X, Yang M-H (2015) Deep networks for saliency detection via local estimation and global search. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 3183–3192
Zhao R, Ouyang W, Li H, Wang X (2015) Saliency detection by multi-context deep learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 1265–1274
Li G, Yu Y (2015) Visual saliency based on multiscale deep features. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5455–5463
Li G, Yu Y (2016) Deep contrast learning for salient object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 478–487
Lee G, Tai Y-W, Kim J (2016) Deep saliency with encoded low level distance map and high level features. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pp 660–668
Liu N, Han J (2016) Dhsnet: Deep hierarchical saliency network for salient object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 678–686
Wang L, Wang L, Lu H, Zhang P, Ruan X (2016) Saliency detection with recurrent fully convolutional networks. In: European conference on computer vision, Springer, pp 825–841
Chen T, Lin L, Liu L, Luo X, Li X (2016) Disc: Deep image saliency computing via progressive representation learning. IEEE transactions on neural networks and learning systems 27(6):1135–1149
Zhang J, Dai Y, Porikli F (2017) Deep salient object detection by integrating multi-level cues. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, pp 1–10
Luo Z, Mishra A, Achkar A, Eichel J, Li S, Jodoin P-M (2017) Non-local deep features for salient object detection. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 6609–6617
Zhang P, Wang D, Lu H, Wang H, Ruan X (2017) Amulet: Aggregating multi-level convolutional features for salient object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp 202–211
Li X, Zhao L, Wei L, Yang M-H, Wu F, Zhuang Y, Ling H, Wang J (2016) Deepsaliency: Multi-task deep neural network model for salient object detection. IEEE transactions on image processing 25(8):3919–3930
Wang L, Lu H, Wang Y, Feng M, Wang D, Yin B, Ruan X (2017) Learning to detect salient objects with image-level supervision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 136–145
Li G, Xie Y, Lin L, Yu Y (2017) Instance-level salient object segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2386–2395
Acknowledgments
The authors are grateful to Dr. Kashif Bilal and Hafiza Zuha Ather for their valuable suggestions, technical language editing, and proofreading. We are also thankful to Dr. Saeeda Naz for her administrative support and writing assistance.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Rehman, A., Khan, F.G. A deep learning based review on abdominal images. Multimed Tools Appl 80, 30321–30352 (2021). https://doi.org/10.1007/s11042-020-09592-0