Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting the nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it difficult to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting many researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017–2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
1 Introduction
Deep learning is a machine learning method that teaches computers to perform tasks that humans accomplish naturally, without thinking about them. Using deep learning, a computer model can learn to carry out classification tasks directly from images, text, or sound. Deep models can attain state-of-the-art accuracy, sometimes even outperforming human ability; they are trained on sizable collections of labelled data using multi-layered neural network architectures. Cancer has historically been a fatal disease, and even in today's technologically advanced world it can be devastating if it is not caught in its earliest stages. Millions of lives could be saved by swiftly identifying malignant cells. Nucleus segmentation is a method for identifying the nuclei in an image by partitioning it into different parts. Deep learning is quickly gaining traction in the field of nucleus segmentation and has attracted many researchers, with numerous published research articles demonstrating its usefulness.
Image segmentation is principally a process used to partition a digital image into numerous segments or objects (Szeliski 2010). It is widely employed in applications ranging from image compression (Rabbani 2002) to medical image analysis (Ker et al. 2017) to robotic perception (Porzi et al. 2016). Image segmentation is categorized into semantic segmentation (Ahmed et al. 2020) and instance segmentation (Birodkar et al. 2021). Semantic segmentation groups together parts of an image that belong to the same class, while instance segmentation, which combines object detection and semantic segmentation, finds individual objects in well-defined categories. Medical image segmentation, like natural image segmentation, refers to the procedure of extracting the desired object (organ) from a medical image; it can be initiated manually, semi-automatically, or automatically, with the aim of delineating anatomical or pathological structures independently of the underlying images. Typical medical image segmentation tasks take breast and breast-histopathology images (Liu et al. 2018a), liver and liver-tumour images (Li 2015; Vivanti et al. 2015), cell images (Song et al. 2017), etc. as input imagery and apply segmentation mechanisms to them. Medical image segmentation is a key part of Computer-Aided Diagnosis (CAD) and smart medicine, where features are extracted from segmented images. Due to the rapid growth of deep learning techniques (Krizhevsky et al. 2017), medical image segmentation is no longer limited to hand-crafted features; instead, Convolutional Neural Networks (CNNs) can efficiently create hierarchical image features, leading to the most accurate image segmentation models on popular benchmarks. This has inspired academics to develop deep learning segmentation models for histopathology images.
This article focuses on recent trends in deep learning for nucleus segmentation from histopathology images throughout 2017–2021 by discussing U-Net (Ronneberger et al. 2015), SCPP-Net (Chanchal et al. 2021b), Sharp U-Net (Zunair and Hamza 2021), and LiverNet (Aatresh et al. 2021a), among others.
In recent years, innovative deep learning-based algorithms have shown state-of-the-art performance in medical image segmentation, processing, detection, and classification. The literature review was used to choose the four segmentation models; only these four were selected because they have demonstrated excellent nucleus segmentation performance in recent years. This introduction's references were chosen because they accurately represent the current state of the field.
The remaining sections of the paper are organized as follows: Sect. 2 deliberates upon the importance of nucleus segmentation for cell counting, movement tracking, morphological study, etc., stressing certain challenges in dealing with it. A review and discussion of recent trends in deep learning for nucleus segmentation since 2017 is offered in Sect. 3. Sect. 4 presents an analysis of the research initiatives since 2017 based on year-wise published papers, backbones, and loss functions, with graphical representations of the most frequently used backbone, loss function, optimizer, dataset, etc. over the last five years in the surveyed literature. The architecture and a brief description of some segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet), along with their loss functions and segmentation quality parameters, are conveyed in Sect. 5. Experimental datasets, training and implementation, and a comparison of a few segmentation models along with the experimental outcomes are emphasised in Sect. 6, with graphical representations of segmentation results and training loss. Lastly, Sect. 7 discusses the conclusion and future research directions.
2 Nucleus segmentation: need and challenges
This section briefly presents the need for and challenges of nucleus segmentation from histopathology images.
2.1 Need of nucleus segmentation
Segmenting cell nuclei in histopathology images is the preliminary step in analyzing current imaging data for biological and biomedical purposes. The fundamental steps built on nucleus segmentation, namely cell counting (Grishagin 2015), movement tracking (Dewan et al. 2011), computational pathology (Louis et al. 2015), cytometric analysis (Yu et al. 1989), computer-aided diagnosis (Kowal and Filipczuk 2014), and morphological study (Abdolhoseini et al. 2019), play a vital role in analysing, diagnosing, and grading cancerous cells. These fundamental steps are described below:

a. Cell Counting: A subclass of cytometry, cell counting is one of the methods used for counting or quantifying similar cells and is widely employed in research and clinical practice. High-quality microscopy images can be combined with statistical classification algorithms for off-line cell counting and recognition as part of image analysis (Han et al. 2012), keeping the error rate constant (Han et al. 2008).

b. Movement Tracking: Automated tracking and analysis (Meijering et al. 2009) is an important part of biomedical research, both for studying biological processes and for diagnosing diseases.

c. Computational Pathology: This deals with analysing digitized pathology images and their associated metadata; nucleus segmentation in digital microscopic tissue images aids the extraction of high-quality features for nuclear morphometrics (Kumar et al. 2017).

d. Cytometric Analysis: Nucleus segmentation is a significant step in the pipeline of many cytometric analyses. It has been used, for example, to analyse nuclear DNA and observe the association between the DNA ploidy pattern and the 5-year survival rate of advanced gastric cancer patients using paraffin-embedded tissue specimens (Kimura and Yonemura 1991).

e. Computer-Aided Diagnosis (CAD): Computer-aided detection, also called CAD, is a useful tool for precise diagnosis and prognosis (Su et al. 2015) and helps doctors interpret medical images.

f. Morphological Study: Nuclear morphology is part of a complex biological mechanism that regulates cell proliferation, differentiation, development, and disease (Jevtic et al. 2014). Studying cell morphology requires nucleus segmentation as a fundamental step because it provides valuable information about nuclear shape, chromatin, DNA content, etc.
2.2 Challenges of nucleus segmentation
Depending on a variety of factors such as nucleus type, malignant tumours, and their life cycles, nuclei appear in different shapes and sizes. Several types of nuclei exist; the types of interest here are lymphocyte nuclei (LN), inflammatory nuclei with a regular shape that play a major role in the immune system, and epithelial nuclei (EN) (Irshad et al. 2013), which have a nearly uniform chromatin distribution with a smooth boundary. Although automated nuclei segmentation is a well-researched problem in digital pathology, segmenting the nucleus turns out to be difficult due to the presence of a variety of blood cells. Furthermore, due to variability induced by slide preparation (dye concentration, damage to the given tissue sample, etc.) and image acquisition (presence of digital noise, specific features of the slide scanner, etc.), existing methods are often unfitting and cannot be applied to all types of histopathology images (Hayakawa et al. 2021). Some of the significant challenges that arise while segmenting nuclei are presented below:
1. There is a high level of heterogeneity in appearance between different types of organs and cells, so methods built around prior knowledge of geometric features cannot be applied directly to different images.

2. Nuclei are often clustered, with many overlapping instances; separating the clustered nuclei frequently necessitates additional processing.

3. In out-of-focus images, the boundaries of nuclei appear blurry, which increases the difficulty of extracting dense representations from images. The appearance of the nucleus and the noticeable variation in its shape further complicate the segmentation task.
An effective image processing approach must be able to overcome the aforesaid obstacles and challenges while maintaining the quality and accuracy of the underlying images in various situations.
3 Survey on deep learning based nucleus segmentation
For the past few years, deep learning models have proven to be effective, robust, and accurate in biomedical image segmentation, specifically nucleus segmentation. This section includes a literature review of work done from 2017 to 2021 on Convolutional Neural Network (CNN) models for nucleus segmentation, as shown in Table 1. The cited papers have been collected from the following sources:
a. Google Scholar—https://scholar.google.com

b. IEEE Xplore—https://ieeexplore.ieee.org

c. ScienceDirect—https://www.sciencedirect.com

d. SpringerLink—https://www.springerlink.com

e. ACM Digital Library—https://dl.acm.org
Each of these above sources is queried with the following combinations of keywords:
- KW1: Deep Learning based histopathology Image Segmentation.
- KW2: Deep Learning based hematology Image Segmentation.
- KW3: Deep Learning based pathology Image Segmentation.
- KW4: Deep learning based white blood cell segmentation.
- KW5: Nucleus segmentation using deep learning.
- KW6: Nucleus segmentation using machine learning.
- KW7: Nucleus segmentation using Convolutional Neural Network.
- KW8: White blood cell segmentation using Convolutional Neural Network.
- KW9: Deep Neural Network based image segmentation.
4 Analysis and discussion
This section presents an analysis of the reports on nucleus segmentation using CNN models listed in Table 1. The analysis is based on the year-wise publications, datasets, CNN models, utilized segmentation metrics, etc.
4.1 Analysis based on publication year
This subsection presents an analysis based on the publication years of the various works on nucleus segmentation taken into consideration. The year-wise number of papers published on nucleus segmentation is presented in Fig. 1, which clearly demonstrates that nucleus segmentation is paving its way in the field of research.
4.2 Analysis based on dataset
This sub-section briefly describes some of the most extensively used nucleus segmentation datasets encountered while performing the literature survey depicted in Table 1, namely TCGA (Tomczak et al. 2015; The Cancer Genome Atlas (TCGA) 2016), TNBC (Naylor et al. 2018), the Herlev dataset (Jantzen et al. 2005), MS COCO (Lin et al. 2015), MoNuSeg (Kumar et al. 2017, 2020), DSB2018 (Caicedo et al. 2019; Data science bowl 2018), the KMC Liver dataset (Kasturba Medical College 2021), and the PanNuke dataset (Gamper et al. 2019, 2020), respectively.
(i) The cancer genome atlas (TCGA) dataset:
The TCGA dataset is a sponsored project that aims to analyse and produce an atlas of cancer genomic profiles (openly available datasets (The Cancer Genome Atlas (TCGA) 2016)), with over 20,000 cases of 33 types of cancer acknowledged to date. For the nuclear segmentation task, Kumar et al. (2017) generated ground truths by picking around 44 WSIs of multiple organs, with images collected from seven different organs: bladder, breast, colon, kidney, liver, prostate, and stomach.
(ii) Triple negative breast cancer (TNBC) dataset:
Naylor et al. presented this breast cancer histopathology dataset, which deals with the type of breast cancer in which the cancer cells do not have oestrogen or progesterone receptors and do not produce significant amounts of the protein HER2, and proposed a nuclear segmentation technique (Naylor et al. 2018) for it. TNBC encompasses 50 H&E-stained images of 512 × 512 resolution with 4022 annotated nuclei. All TNBC images are extracted from 11 triple-negative breast cancer patients and comprise several cell types, such as myoepithelial breast cells, endothelial cells, and inflammatory cells.
(iii) Herlev Pap smear dataset:
Herlev University Hospital and the Technical University of Denmark released the Herlev Pap smear dataset (Jantzen et al. 2005), comprising 917 Pap smear images, each of which encompasses one cervical cell segmented and classified with ground truth. The images are captured at a magnification of 0.201 µm/pixel with an average resolution of 156 × 140 pixels; the longest side length is 768 pixels and the shortest is 32 pixels. Seven classes of cell images are available in this dataset: the first three (superficial squamous, intermediate squamous, and columnar) are normal cells, and the remaining four (mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ) are abnormal cells.
(iv) Microsoft common objects in context (MS COCO) dataset:
The MS COCO dataset (Lin et al. 2015) investigates the drawbacks of non-iconic views of object representation; objects that are not the main emphasis of an image are generally said to be in a non-iconic view. The dataset was created with the help of Amazon Mechanical Turk for data annotation. MS COCO comprises 2,500,000 labelled instances in 328,000 images across 91 common object categories, 82 of which have over 5,000 labelled instances.
(v) Multi-organ nuclei segmentation (MoNuSeg) dataset:
The Multi-organ Nuclei Segmentation (MoNuSeg) dataset, prepared by the Indian Institute of Technology Guwahati and published in an official satellite event of MICCAI 2018, contains WSI images of 7 organs (breast, kidney, colon, stomach, prostate, liver, and bladder) from various medical centres (i.e., various stains): high-resolution WSIs of H&E-stained slides from nine tissue types, digitised at 40 × magnification in eighteen different hospitals and obtained from the National Cancer Institute's Cancer Genome Atlas (TCGA) (Tomczak et al. 2015). The training set comprises colour-normalized (Vahadane et al. 2016) H&E images from all tissue types, excluding breast.
(vi) Data science bowl 2018 (DSB2018) dataset:
The DSB2018 dataset (Caicedo et al. 2019) is freely available at the Broad Bioimage Benchmark Collection (Data science bowl 2018) and comprises 670 images of segmented nuclei acquired under diverse conditions, namely varying cell type, magnification, and imaging modality (bright-field vs. fluorescence), which are resized from their various resolutions to 256 × 256 (aspect ratio maintained).
(vii) Kasturba Medical College Liver (KMC Liver) dataset:
The KMC Liver dataset (Kasturba Medical College 2021) contains 257 original slides (70 of sub-type 0, 80 of sub-type 1, 83 of sub-type 2, and 24 of sub-type 3), each measuring 1920 × 1440 pixels, belonging to 4 sub-types of liver HCC tumour taken from various patients. It includes 80 H&E-stained histopathology images collected by pathologists at Kasturba Medical College (KMC), Manipal.
(viii) PanNuke dataset:
The PanNuke dataset (Gamper et al. 2019) comprises an H&E-stained image set containing 7,904 patches of 256 × 256 pixels from 19 different tissue types, wherein the nuclei are classified into 5 categories of cells: neoplastic, inflammatory, connective/soft tissue, dead, and epithelial. Gamper et al. (2020) outline an evaluation process that separates the patches into three folds (later used to create three different dataset splits), wherein one fold is used for training and the remaining two for validation and testing, containing 2657, 2524, and 2723 images, respectively.
The following Fig. 2 shows a graphical representation of the most frequently used dataset over the last five years by the researchers, according to Table 1.
4.3 Analysis based on optimizer
An optimizer is a procedure for improving neural network properties such as weights and learning rates; it helps minimize the loss and enhance performance. Fig. 3 shows a graphical representation of the optimizers most frequently used by the researchers according to Table 1. Adam is the most widely used optimizer.
4.4 Analysis based on loss function
A loss function examines how well a CNN model predicts the intended results. Fig. 4 shows the most frequently used loss functions encountered while performing the literature survey, as depicted in Table 1. BCE is the most used loss function.
4.5 Analysis based on evaluation metric
Evaluation metrics, or segmentation quality parameters, measure the performance of segmentation models. Fig. 5 shows some of the parameters most frequently used across the works in the literature survey, as depicted in Table 1.
4.6 Analysis based on backbone
The backbone is the feature-extracting network used in a CNN model architecture. Fig. 6 covers some of the backbones used by the models surveyed in Table 1. U-Net is the most popular backbone for nucleus segmentation.
5 Experimental CNN models
This section provides an overview of some of the most prominent CNN models proposed to date, including U-Net, SCPP-Net, Sharp U-Net, and LiverNet. These are the models utilized in our comparative analysis, and they are described as follows:
5.1 U-Net
FCNs and encoder-decoder models influenced numerous architectures originally designed for medical and biomedical image segmentation. Ronneberger et al. proposed the U-Net (Ronneberger et al. 2015) model, in which the network and training approach rely on data augmentation to learn effectively from a limited number of annotated images. The U-Net design, depicted in Fig. 7, comprises two parts: a contracting path for context capture and a symmetric expanding path for accurate localization. An FCN-like design extracts features with 3 × 3 convolutions in the down-sampling (contracting) section. Up-convolution, popularly known as deconvolution, is used in the expanding section to up-sample the feature maps and restore their dimensions, preventing the loss of pattern information. Feature maps are passed from the network's down-sampling section to the up-sampling section. Finally, a 1 × 1 convolution produces a segmentation map that classifies each pixel of the input image. Several U-Net extensions have been developed for different types of images. Each blue box in Fig. 7 represents a multi-channel feature map with the channel count on top, and the white boxes represent copies of the feature map. The sizes X and Y are indicated at the lower left border of each box, and the arrows represent the various operations being carried out.
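As a small, informal sketch (not taken from the paper), the effect of U-Net's unpadded 3 × 3 convolutions and 2 × 2 pooling on spatial size can be traced in a few lines of Python; the numbers reproduce the original architecture's 572 → 388 input/output tile sizes:

```python
def unet_output_size(input_size: int, depth: int = 4) -> int:
    """Trace the spatial size through the original (valid-padding) U-Net."""
    s = input_size
    for _ in range(depth):          # contracting path
        s -= 4                      # two 3x3 valid convolutions: -2 each
        assert s % 2 == 0, "size must be even before each 2x2 max-pool"
        s //= 2                     # 2x2 max-pool halves the size
    s -= 4                          # two bottleneck convolutions
    for _ in range(depth):          # expanding path
        s *= 2                      # 2x2 up-convolution doubles the size
        s -= 4                      # two 3x3 valid convolutions: -2 each
    return s

print(unet_output_size(572))  # -> 388
```

This also makes clear why skip connections require cropping: the encoder feature maps are larger than the decoder maps they are concatenated with.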
5.2 Separable convolutional pyramid pooling network (SCPP-Net)
The SCPP-Net model by Chanchal et al. (2021b) was built upon the idea of mining supplementary information at a higher level, as depicted in Fig. 8. The receptive field of the SCPP layer is expanded by keeping the kernel size constant while regulating four distinct dilation rates. The generated feature maps have an extra parameter called the “dilation rate” that can be changed to see bigger areas. The separation of clumped and overlapping nuclei is a critical issue in histopathology nuclei segmentation; by expanding the receptive field at a higher level, this CNN-based design helps to overcome the problem of proximate and overlapping nuclei.
The convolution and max-pooling operations are conducted on the input image during down-sampling, giving extreme importance to capturing the context of the image; along this contracting route the feature depth grows while the spatial size of the image drops. Progressively adding up-sampling along the decoder route then enables accurate localization. Figure 8 depicts the proposed SCPP-Net's comprehensive design, whereas Fig. 9 depicts the SCPP-Net's inclusive and precise SCPP block concept.
5.3 Sharp U-Net
In encoder-decoder networks, predominantly U-Net (Ronneberger et al. 2015), skip connections play a vital role in recovering fine-grained features for prediction. However, skip connections tend to combine semantically dissimilar low- and high-level convolution features of diverse nature, thereby generating obscure feature maps. To overcome this flaw, Zunair and Hamza suggested the Sharp U-Net (Zunair and Hamza 2021) architecture, shown in Fig. 10, which is applicable to both binary and multi-class segmentation.
The encoder section is divided into five blocks, each of which includes two 3 × 3 convolutional layers with ReLU activations, followed by a 2 × 2 max-pooling layer. The convolutional layers use 32, 64, 128, 256, and 512 filters, which are applied to the input to construct feature maps that summarize the occurrence and location of the features mined from that input. A new connection mechanism, termed a “sharp block” and depicted in Fig. 11, is formed to contain the up-sampled features with the intention of fusing the encoder's and decoder's low- and high-level features, avoiding the semantic gap issue. Before the simple skip connections between encoder and decoder are used, the encoder features are subjected to a spatial convolution operation, accomplished independently on each channel of the encoder features by means of a sharpening spatial kernel.
(a) Sharpening Spatial Kernel
Spatial filtering is a low-level, neighbourhood-based image processing method that enhances (sharpens) the image by performing certain operations on the neighbourhood of each pixel of the input image. Image convolution with kernels is used to perform high-pass filtering, i.e., image sharpening. The convolution kernel, normally referred to as a filter, acts here as a second-order derivative operator that responds to intensity transitions in any direction. A typical Laplacian high-pass filtering kernel is specified as a matrix K that contains negative values off-centre and a single positive value in the centre, so that sharpening takes into account all eight neighbours of the reference pixel of the input image.
Convolving an image with the Laplacian filter, the kernel adjusts the brightness of the centre pixel in relation to the adjacent pixels. The input image is then added to its convolution with the kernel to produce a refined image. Considering an input image I and the resultant sharpened image S, S is generated as S = I + K * I, wherein * signifies convolution, a kernel-weighted neighbourhood-based operator that processes an image by combining each pixel's value with those of its nearest neighbours.
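The relation S = I + K * I can be illustrated with a short NumPy sketch. The 3 × 3 kernel below is the typical eight-neighbour Laplacian described above, and the naive convolution loop is purely illustrative (not the authors' implementation):

```python
import numpy as np

# Typical 3x3 Laplacian high-pass kernel: negative weights off-centre,
# a single positive value in the centre, covering all eight neighbours.
K = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])

def convolve_same(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding.
    (The kernel flip that distinguishes convolution from correlation
    is omitted because K is symmetric.)"""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sharpen(img):
    return img + convolve_same(img, K)   # S = I + K * I
```

Because the kernel weights sum to zero, the Laplacian response vanishes on constant regions, so only intensity transitions (edges) are boosted.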
(b) Sharp block
This block performs a depth-wise convolution on each feature map using the sharpening spatial kernel given by the Laplacian filter kernel K. The kernel is replicated to match the encoder feature volume of size W × H × M, where W, H, and M are the width, height, and number of the encoder's feature maps, respectively.
In these convolutions, M filters act discretely on each of the input channels rather than a single filter of a specific size (e.g., 3 × 3 × 3). Each input channel is convolved with the kernel K individually with a stride of 1, producing a feature map of dimension W × H × 1. To retain the output dimension equal to that of the input and to match the size of the decoder features throughout the connection, padding is performed during the feature fusion of the encoder and decoder sub-networks. The depth-wise convolution layer's final output of size W × H × M is obtained by stacking these maps together. This planned feature connection is referred to as a “sharp block.” Fig. 11 displays a visual representation of the sharp block's operation flow.
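A minimal NumPy sketch of this depth-wise filtering (an illustration under the description above, not the authors' implementation) might look like:

```python
import numpy as np

# Eight-neighbour Laplacian sharpening kernel used for every channel.
K = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])

def sharp_block(features):
    """Depth-wise convolution of encoder features of shape (W, H, M):
    each channel is filtered independently with the sharpening kernel K,
    stride 1 and zero ('same') padding, and the resulting W x H x 1 maps
    are stacked back into a W x H x M output."""
    W, H, M = features.shape
    padded = np.pad(features, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(features, dtype=float)
    for m in range(M):                  # one filter per input channel
        for i in range(W):
            for j in range(H):
                out[i, j, m] = np.sum(padded[i:i + 3, j:j + 3, m] * K)
    return out
```

The output keeps the encoder's shape, so it can be concatenated with the decoder features exactly where a plain skip connection would be.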
5.4 LiverNet
The convolution procedure resides at the heart of every CNN; 2D discrete linear convolution is articulated as in Eq. (1), with f and h as two-dimensional signals. Aatresh et al. (2021a) suggested the LiverNet model for liver hepatocellular carcinoma histopathology images.
Using the above definition, they add a bias term to Eq. (1) to obtain the computation performed per node in a given layer. In addition, max-pooling is a critical operation in most CNN systems nowadays (Krizhevsky et al. 2017). To understand this procedure, consider a sliding window over the input feature map to the max-pool layer. Sliding the window with a stride S, the operation outputs the greatest pixel value inside the window, repeated across the entire image. By lowering the number of parameters, max-pool layers help minimise the network's computational complexity and provide an abstract representation of the input data.
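The sliding-window procedure can be sketched as follows (a toy NumPy version, assuming a 2 × 2 window and stride 2 by default):

```python
import numpy as np

def max_pool2d(x, k=2, s=2):
    """Slide a k x k window with stride s over a 2-D feature map and
    keep the largest value inside each window position."""
    H, W = x.shape
    oh, ow = (H - k) // s + 1, (W - k) // s + 1
    out = np.empty((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * s:i * s + k, j * s:j * s + k].max()
    return out

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 0],
              [7, 2, 9, 8],
              [1, 0, 3, 4]])
print(max_pool2d(x))  # -> [[6 5], [7 9]]
```

A 4 × 4 map shrinks to 2 × 2, which is exactly where the parameter and computation savings come from.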
Aatresh et al. (2021a) employ a base architecture similar to Toğaçar et al. (2020), and they extract features from the input image using two convolution layers before the initial max-pool operation. To extract relevant information more effectively, they used CBAM blocks (Woo et al. 2018) and residual blocks deeper in the architecture. After each max-pool operation, they feed the intermediate features in the encoder pipeline into ASPP blocks before up-sampling. To merge the pixel data of layers at varied depths, the hyper-column approach employed in Toğaçar et al. (2020) was applied. The hyper-column technique, along with the ASPP blocks, ensures multi-scale feature extraction and information retrieval for further processing in this architecture. They have applied these ideas to the problem of multi-class cancer classification in liver tissue, and a detailed depiction of the proposed model can be found in Fig. 12. The sub-modules of the LiverNet architecture are described in detail in the following subsections.
(a) CBAM block and residual block
The Convolutional Block Attention Module (CBAM), introduced by Woo et al. (2018), is a block that can be efficiently embedded into any CNN architecture without incurring unnecessary computation or memory overhead. Channel-wise and spatial attention modules are applied in succession to produce attention maps that are multiplied by the input feature map. In CBAM, the channel-wise attention block focuses on what the network needs to focus on, whereas the spatial attention block concentrates on where the network needs to place emphasis.
The CBAM block's behaviour at an intermediate step, taking into consideration a feature map A ∈ ℝH×W×C input in the encoder pipeline, can be mathematically projected as in (2).
where “.” represents element-wise multiplication, and fc: ℝH×W×C → ℝ1×1×C and fs: ℝH×W×C → ℝH×W×1 symbolize the functions of the channel-wise and spatial attention blocks, respectively. Ac is the intermediate output, obtained from the element-wise multiplication between the channel-wise attention map fc and the input feature map A; the final output of the CBAM attention block is the element-wise product of the spatial attention map and Ac. The channel-wise attention block is composed of concurrent average- and max-pooling procedures that share a fully connected network, as described in Eq. (3), before the results are added.
wherein \(\sigma\) is the popular sigmoid function with FC being the shared fully connected layers. Before feeding the result to a convolution layer, the spatial attention block concatenates the results of max-pool and average-pool operations. Whenever the input A is provided, the action is defined by Eq. (4).
wherein ⊗ represents the two-dimensional convolution operation with a kernel w. The residual block used in the LiverNet architecture was proposed in He et al. (2016) and is comparable to the residual block used in Toğaçar et al. (2020). The main difference is that the number of filters in the residual block's initial convolution layer is lowered by a factor of 4 compared to the filter count used in the residual block presented in Toğaçar et al. (2020). This not only reduced the number of parameters needed in the model but also increased the quality of the features derived from the input.
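A toy NumPy sketch of the channel-wise attention of Eq. (3) follows; the single-matrix `fc` stands in for CBAM's shared bottleneck MLP purely for brevity (an assumption for illustration, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(A, fc):
    """Channel-wise attention in the spirit of Eq. (3): average- and
    max-pool the H x W x C map A over its spatial dimensions, push both
    C-vectors through the shared fully connected network fc, add the
    results, and squash with a sigmoid to get one weight per channel."""
    avg = A.mean(axis=(0, 1))            # (C,) average-pooled descriptor
    mx = A.max(axis=(0, 1))              # (C,) max-pooled descriptor
    return sigmoid(fc(avg) + fc(mx))     # (C,) attention weights in (0, 1)

# Toy shared "FC network": a single C x C matrix (CBAM actually uses a
# two-layer bottleneck MLP shared between both pooled descriptors).
rng = np.random.default_rng(0)
C = 4
W_fc = rng.normal(size=(C, C))
fc = lambda v: W_fc @ v

A = rng.random((8, 8, C))                # an H x W x C feature map
weights = channel_attention(A, fc)
Ac = A * weights                         # element-wise multiplication with A
```

The broadcasted multiplication at the end is the "element-wise multiplication between the channel-wise attention map and the input feature map" producing the intermediate output Ac.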
(b) ASPP block
An Atrous Spatial Pyramid Pooling (ASPP) block can successfully extract multi-scale features from a feature map, as demonstrated in Chen et al. (2018). A comparable ASPP block is used in the LiverNet architecture because of its effectiveness. To increase the size of the receptive field without increasing the number of parameters involved, atrous (dilated) convolution can be utilised. Consider a two-dimensional signal X convolved with a two-dimensional filter w via atrous convolution. The convolution product is represented by Eq. (5).
where r corresponds to the dilation rate, i.e., the rate at which the input signal X is sampled. Atrous convolution increases the receptive field size of the kernel by adding r − 1 zeros between the kernel elements. As a result, if r = 2, a 3 × 3 kernel will have a receptive field equivalent to that of a 5 × 5 kernel but with just 9 parameters.
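Inserting r − 1 zeros between the taps means the effective kernel size follows k_eff = k + (k − 1)(r − 1), which a one-line helper makes concrete:

```python
def effective_kernel_size(k: int, r: int) -> int:
    """Atrous convolution inserts r - 1 zeros between the taps of a
    k x k kernel, so it covers the receptive field of a kernel of this
    size while keeping only k * k parameters."""
    return k + (k - 1) * (r - 1)

print(effective_kernel_size(3, 2))  # -> 5 (a 3x3 kernel sees a 5x5 field)
for r in (2, 3, 6, 8):              # the dilation rates used in the ASPP block
    print(r, effective_kernel_size(3, r))
```

This is why stacking branches with rates 2, 3, 6, and 8 yields multi-scale context from the same 3 × 3 filters.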
Figure 13 illustrates the ASPP block employed in the LiverNet architecture. A feature map is received as input, and several operations are conducted in parallel before concatenation: 1 × 1 convolution; 3 × 3 convolutions with dilation rates of 2, 3, 6, and 8; and global average pooling.
The concatenated convolution and pooling outputs are passed through a 1 × 1 convolution layer so that the output has the same number of filters as the input. The convolution output is then passed through batch normalization and a ReLU activation layer before being delivered to the bilinear up-sampling layer. The outputs of the max-pool layers deliver feature-rich information at many scales and extents; therefore, an ASPP block is placed after each max-pooling operation in the encoder pipeline of the LiverNet architecture.
For all the models used in our work, we used Binary Cross-Entropy (BCE) as the loss function, along with the Intersection over Union (IoU) and Dice Coefficient (DC) metrics for the quantitative analysis of the nucleus segmentation results. We define this loss function and these metrics as follows:
(i) Loss function
Reducing the loss is the goal of an error-driven learning algorithm, which is accomplished through the use of a good loss function. For a regression problem, where we predict a number and want to know how far off we are, the squared error loss is appropriate. For classification, where the prediction is a distribution, we need a loss that captures the difference between the true and predicted distributions. In our study, we use the Binary Cross-Entropy (BCE) loss (Ahamed et al. 2020). Two distinct symbols, C and C′, represent the two class settings used in the equations: for C classes, Eq. (6) gives the cross-entropy loss, and for C′ = 2 classes, Eq. (7) gives the BCE loss:
$$\mathrm{CE}= -\sum_{k}^{C}{y}_{k}\,\mathrm{log}\left(f\left({s}_{k}\right)\right)$$(6)
$$\mathrm{BCE}= -\sum_{k=1}^{{C}^{\prime}=2}{y}_{k}\,\mathrm{log}\left(f\left({s}_{k}\right)\right)=-{y}_{1}\,\mathrm{log}\left(f\left({s}_{1}\right)\right)-\left(1-{y}_{1}\right)\mathrm{log}\left(1-f\left({s}_{1}\right)\right)$$(7)
where \({y}_{k}\) and \({s}_{k}\) represent the ground truth and predicted scores for each class k in C, respectively. For loss computation, ReLU activation is used in the intermediate layers and sigmoid activation before the loss. Writing the sigmoid output of the activation unit as either f(\({s}_{k}\)) or \(\widehat{y}\), Eq. (8) expresses the BCE loss with respect to the activation unit:
$$\mathrm{BCE}= -y\,\mathrm{log}\left(\widehat{y}\right)-\left(1-y\right)\mathrm{log}\left(1-\widehat{y}\right)$$(8)
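The BCE loss referenced in Eq. (8) can be checked with a small NumPy sketch (our own illustration, not the training code; `bce` is a hypothetical helper, with an epsilon clamp added for numerical safety):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bce(y, s, eps=1e-7):
    """Binary cross-entropy between ground truth y and raw scores s.

    The sigmoid maps the score s to y_hat = f(s); the loss is then
    -y*log(y_hat) - (1-y)*log(1-y_hat), averaged over all elements.
    """
    y = np.asarray(y, dtype=float)
    y_hat = np.clip(sigmoid(np.asarray(s, dtype=float)), eps, 1 - eps)
    return float(np.mean(-y * np.log(y_hat) - (1 - y) * np.log(1 - y_hat)))

print(round(bce([1.0], [0.0]), 4))  # 0.6931, i.e. log(2), since sigmoid(0) = 0.5
```

A score of 0 gives a maximally uncertain prediction of 0.5, so the loss equals log 2; confident correct scores drive the loss toward zero.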
(ii) Segmentation quality parameters
For the purpose of our study, two segmentation quality parameters are used: the Intersection over Union (IoU) (Kanadath et al. 2021) and the Dice Coefficient (DC) (Gudhe et al. 2021).
(A) Intersection over Union (IoU): In semantic segmentation, IoU, popularly known as the Jaccard Index, is a frequently used metric: the area of overlap between the predicted segmentation and the ground truth divided by the area of their union, as indicated in Eq. (9), wherein A is the ground truth mask image and B is the predicted segmentation result obtained from the model.
$$\mathrm{IoU}= \frac{\mathrm{Area \,of\, Overlap}}{\mathrm{Area\, of \,Union}}= \frac{|\mathrm{A}\cap \mathrm{B}|}{|\mathrm{A}\cup \mathrm{B}|}$$(9)
(B) Dice coefficient (DC): This segmentation quality parameter measures the similarity between the predicted mask and the corresponding ground truth mask; it is defined as twice the area of overlap divided by the total number of pixels in both images, as depicted in Eq. (10), wherein A is the ground truth mask image and B is the predicted segmentation result obtained from the model.
$$\mathrm{DC}= \frac{2 \times \mathrm{ Area\, of \,Overlap}}{\mathrm{Total\, Number\, of\, pixels}}= \frac{2 \times |\mathrm{A}\cap \mathrm{B}|}{\left|\mathrm{A}\right|+|\mathrm{B}|}$$(10)
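Equations (9) and (10) translate directly into NumPy on binary masks (a minimal sketch, assuming A and B are boolean arrays of equal shape; `iou` and `dice` are our own helper names):

```python
import numpy as np

def iou(a, b):
    """Intersection over Union (Jaccard index) of two binary masks, Eq. (9)."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice(a, b):
    """Dice coefficient 2*|A n B| / (|A| + |B|) of two binary masks, Eq. (10)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.array([[1, 1], [0, 0]], dtype=bool)  # ground truth mask
b = np.array([[1, 0], [1, 0]], dtype=bool)  # predicted mask
print(iou(a, b))   # 1 overlapping pixel / 3 pixels in the union
print(dice(a, b))  # 2*1 / (2 + 2) = 0.5
```

Both metrics equal 1.0 for a perfect prediction; Dice weights the overlap more heavily than IoU, which is why the DC scores reported later are consistently higher than the IoU scores for the same model.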
6 Experimental result and discussion
This section presents the experimental results of the four well-known deep learning CNN models, namely U-Net, Separable Convolutional Pyramid Pooling Network (SCPP-Net), Sharp U-Net, and LiverNet, over a merged dataset. The specifics of the dataset we used are briefly described below.
6.1 Experimental dataset
For our purpose, we merged three publicly available datasets: the JPI 2016 dataset (Janowczyk and Madabhushi 2016), the IEEE TMI 2019 dataset (Naylor et al. 2018), and the PSB 2015 dataset (Irshad et al. 2015). These three datasets are described in detail as follows:
(A) JPI 2016 Dataset: Janowczyk and Madabhushi (2016) announced this dataset, which comprises 143 H&E-stained estrogen receptor-positive breast cancer (ER+BCa) images of 137 patients scanned at 40×. Each image is 2000 × 2000 pixels in size, with around 12,000 nuclei manually segmented across the images. The files use the following formats: 12750_500_f00003_original.tif for the original H&E images and 12750_500_f00003_mask.png for a mask of the same size, with white pixels representing nuclei. Each image is prefixed by a code, e.g., 12750, to the left of the first underscore (_), which is a unique patient number. A few patients (137 patients vs. 143 images) have several images associated with them.
(B) IEEE TMI 2019 Dataset: Naylor et al. (2018) offered this dataset, generated by the Curie Institute, which comprises annotated H&E-stained histology images at 40 × magnification; a total of 122 histopathology slides are annotated. Of these, 56 are annotated as pCR, 10 as RCB-I, 49 as RCB-II, and 7 as RCB-III.
(C) PSB 2015 Dataset: Irshad et al. (2015) presented this dataset, whose images come from WSIs of Kidney Renal Clear Cell Carcinoma (KIRC) in the TCGA data portal. The TCGA project is jointly supported by the National Cancer Institute and the National Human Genome Research Institute, and TCGA has undertaken detailed molecular profiling of tens of thousands of tumours, covering the 25 most frequent cancer types. Ten KIRC Whole Slide Images (WSIs) were selected from the TCGA data portal (https://tcgadata.nci.nih.gov/tcga/). Nucleus-rich ROIs were then identified in these WSIs, and a 256 × 256-pixel image was extracted for each ROI at 40 × magnification.
The combined dataset therefore contains a total of 653 images. We used random selection to choose 457 of these images for training, 98 for validation, and 98 for testing.
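The 457/98/98 split described above can be reproduced with a simple shuffled slice (a hedged sketch: the filenames and the seed are hypothetical, not the ones used in the study):

```python
import random

# 653 placeholder image identifiers standing in for the merged dataset
paths = [f"img_{i:04d}.png" for i in range(653)]

rng = random.Random(42)  # fixed seed so the random split is reproducible
rng.shuffle(paths)

train, val, test = paths[:457], paths[457:555], paths[555:]
print(len(train), len(val), len(test))  # 457 98 98
```

Slicing after a single shuffle guarantees the three subsets are disjoint, which matters when images from the same patient appear multiple times.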
6.2 Training and implementation
To speed up development and experimentation, training and implementation were done in a Jupyter notebook with the latest versions of Keras and TensorFlow under the Python 3 framework, on a machine with a Ryzen 5 3550 CPU, 16 GB RAM, and an Nvidia GTX 1650 GPU. The four deep learning models considered in this study were trained using sigmoid or softmax as the activation function and an adaptive learning-rate optimization algorithm, the Adam optimizer, to speed up the training. The loss function employed for the four models is binary cross-entropy (BCE) (Ahamed et al. 2020), as highlighted in Eq. (7). Further, batch sizes of 8, 4, 10, and 2 are used for U-Net, SCPP-Net, Sharp U-Net, and LiverNet, respectively, to train on all 256 × 256 histopathology images.
6.3 Discussion on segmentation results
In this study, a comparative examination of four well-known CNN architectures—U-Net (Ronneberger et al. 2015), SCPP-Net (Chanchal et al. 2021b), Sharp U-Net (Zunair and Hamza 2021), and LiverNet (Aatresh et al. 2021a)—is conducted. On the combined dataset, all four models are trained using 457 images for training, with 98 for validation and 98 for testing. During training, the network is fed the histopathological images from the training set together with the ground truth masks. Two assessment metrics are used in this study, namely intersection over union (IoU) (Kanadath et al. 2021) and dice coefficient (DC) (Gudhe et al. 2021), shown in Eqs. (9) and (10), respectively. All the models are then used to predict the masks of the test images. The input size for all models is 256 × 256 pixels. The U-Net and SCPP-Net models have 7,725,249 and 2,985,659 trainable parameters, respectively. The Sharp U-Net and LiverNet models have 7,760,097 and 989,117 trainable parameters, along with 4,320 and 12,288 non-trainable parameters, respectively. The training times for the U-Net, SCPP-Net, Sharp U-Net, and LiverNet models, trained over 1500, 500, 550, and 700 epochs, respectively, are 590 ms, 330 ms, 595 ms, and 490 ms per step. Because U-Net is less sophisticated than Sharp U-Net, it takes slightly less time per step. On the other hand, Sharp U-Net provides better image segmentation and accuracy. The performance of the four deep learning models for nucleus segmentation (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) is compared in Table 2.
The main limitation of the U-Net (Ronneberger et al. 2015) architecture is that the feature mismatch between the encoder and decoder paths negatively impacts the resulting segmentation map, causing the fusion of semantically incompatible data and hazy feature maps during the learning process. The segmentation of overlapping nuclei is the main difficulty for the SCPP-Net (Chanchal et al. 2021b) model. Sharp U-Net (Zunair and Hamza 2021) predicts outcomes that are slightly under-segmented, but it generates far less noise and far fewer broken segmented outputs. The key difficulty of the LiverNet (Aatresh et al. 2021a) model is that it struggles to segment the tiniest and most densely packed nuclei.
Figure 14 depicts a graphical representation of the segmentation experiment based on IoU score, Dice, and accuracy (%), which clearly demonstrates that Sharp U-Net produces the best segmentation results for the two quality parameters, IoU and Dice, yielding smoother predictions than the other three segmentation models used.
In terms of the Dice Coefficient (DC) and Intersection over Union (IoU) score, the segmentation results of the four nuclei segmentation models on the merged dataset are as follows. LiverNet obtains a DC of 0.5299 and an IoU of 0.3801, which are lower than the other three models. U-Net and SCPP-Net achieve improvements on the DC and IoU scores: U-Net obtains DC = 0.6599 and IoU = 0.4934, while SCPP-Net obtains DC = 0.6340 and IoU = 0.4711, as depicted in Table 2. Sharp U-Net obtains the best results, with DC = 0.6899 and IoU = 0.5276. This analysis further reveals that Sharp U-Net can be used to obtain suitable nuclear segmentation results. The four segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) produce accuracies of 83.13%, 81.67%, 82.04%, and 82.28%, respectively. Figure 15 depicts a graphical representation of the training and validation loss for the four CNN models.
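The ranking reported in Table 2 can be recomputed from the scores quoted above (values copied from the text; the dictionary layout is our own sketch):

```python
# (DC, IoU) per model on the merged dataset, as reported in the text
results = {
    "U-Net":       (0.6599, 0.4934),
    "SCPP-Net":    (0.6340, 0.4711),
    "Sharp U-Net": (0.6899, 0.5276),
    "LiverNet":    (0.5299, 0.3801),
}

best_dc = max(results, key=lambda m: results[m][0])   # highest Dice
best_iou = max(results, key=lambda m: results[m][1])  # highest IoU
print(best_dc, best_iou)  # Sharp U-Net leads on both metrics
```

Note that the accuracy column alone would rank U-Net first (83.13%), which is why the overlap-based DC and IoU metrics are the more informative basis for comparison here.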
Figure 16 contains examples of original images as well as the masks predicted by the various models, highlighting the outcomes of our segmentation results on the combined dataset. As shown in this figure, Sharp U-Net produces better segmented images than the other three models tested.
7 Conclusion and future directions
Recent advancements in computer vision and machine learning have produced a collection of algorithms with a remarkable ability to interpret the content of imagery. Several such deep learning algorithms are being employed on biological images, thereby massively transforming the analysis and interpretation of imaging data and generating satisfactory outcomes for the segmentation and even classification of images across numerous domains. Even though learning the parameters of deep architectures necessitates a large volume of labeled training data, transfer learning is promising in such scenarios because it focuses on reusing learned features and applying them appropriately based on the situation's requirements. As a survey paper, this study makes three major contributions, stated below:
a. An overview table of deep learning models used for nucleus segmentation from 2017 to 2021, covering the optimizers used across a range of datasets and image types, showing how different deep learning models are applied to nucleus segmentation.
b. A comparative study of four very recently developed deep learning models for segmenting nuclei.
c. The deep learning models mentioned in (b) were trained on the merged version of three datasets, namely JPI 2016, IEEE TMI 2019, and PSB 2015, which together contain 653 images; the training results are reported in Table 2, grouped according to accuracy. The experimental results are very encouraging, highlighting that Sharp U-Net delivers high accuracy in all cases with minimal loss. Sharp U-Net achieves the best results, with a DC of 0.6899 and an IoU of 0.5276. The DC and IoU values for U-Net, SCPP-Net, and LiverNet are 0.6599 and 0.4934, 0.6340 and 0.4711, and 0.5299 and 0.3801, respectively.
Therefore, it is easy to infer that deep learning-based nucleus segmentation for histopathology images is a fresh and exciting research topic to concentrate on. The major challenges to address in the future are to develop:
a. innovative and hybrid CNN architectures for a wide range of medical image segmentation techniques,
b. loss functions designed for more specific medical image segmentation tasks,
c. a strong emphasis on transfer learning as well as the interpretability of CNN models,
d. optimized CNN models based on Nature-Inspired Optimization Algorithms (NIOA),
e. different techniques and architectures to further improve speed and decrease model size; in addition, larger and more diversified salient object datasets are needed to train more accurate and robust models,
f. deep architectures that require fewer computations and can work on embedded devices while producing better test results,
g. effective mitigation of the information recession problem that occurs in traditional U-shaped architectures,
h. optimized CNN models built using nature-inspired optimization algorithms (Rai et al. 2022) such as the Aquila Optimizer (Abualigah et al. 2021a), the Reptile Search Algorithm (Abualigah et al. 2022), and the Arithmetic Optimization Algorithm (Abualigah et al. 2021b) in the field of medical image segmentation.
Data availability
The authors do not have the permission to share the data.
References
Aatresh AA, Alabhya K, Lal S, Kini J, Saxena PP (2021a) LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images. Int J Comput Assist Radiol Surg. https://doi.org/10.1007/s11548-021-02410-4
Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D et al (2021b) Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 93:101975
Abdolhoseini M, Kluge MG, Walker FR, Johnson SJ (2019) Segmentation of heavily clustered nuclei from histopathological images. Sci Rep 9(1):1–13
Abualigah L, Yousri D, Abd Elaziz M, Ewees AA, Al-Qaness MA, Gandomi AH (2021a) Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Ind Eng 157:107250
Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH (2021b) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609
Abualigah L, Abd Elaziz M, Sumari P, Geem ZW, Gandomi AH (2022) Reptile Search Algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst Appl 191:116158
Ahamed MA, Hossain MA, Al Mamun M (2020) Semantic segmentation of self-supervised dataset and medical images using combination of u-net and neural ordinary differential equations. In: 2020 IEEE Region 10 symposium (TENSYMP), pp 238–24
Ahmed L, Iqbal MM, Aldabbas H, Khalid S, Saleem Y, Saeed S (2020) Images data practices for semantic segmentation of breast cancer using deep neural network. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-020-01680-1
Akram SU et al (2018) Leveraging unlabeled whole-slide-images for mitosis detection. In: Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 11039 LNCS, pp 69–77
Ali MA, Misko O, Salumaa SO, Papkov M, Palo K, Fishman D, Parts L (2021) Evaluating very deep convolutional neural networks for nucleus segmentation from brightfield cell microscopy images. SLAS DISCOV: Adv Sci Drug Discov 26:1125–1137
Allehaibi KHS, Nugroho LE, Lazuardi L, Prabuwono AS, Mantoro T (2019) Segmentation and classification of cervical cells using deep learning. IEEE Access 7:116925–116941
Alom ZMd, Aspiras TH, Taha TM, Asari VK, Bowen TJ, Billiter D, Arkell S (2019) Advanced deep convolutional neural network approaches for digital pathology image analysis: a comprehensive evaluation with different use cases. CoRR. Preprint at http://arxiv.org/abs/1904.09075
Amgad M, Elfandy H, Hussein H, Atteya LA, Elsebaie MAT, Elnasr LSA, Sakr RA, Salem HSE, Ismail AF, Saad AM et al (2019) Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics 35(18):3461–3467
Arganda-Carreras I et al (2015) Crowdsourcing the creation of image segmentation algorithms for connectomics. Front Neuroanat 9:142
Armato SG, McLennan G, Bidaut L, McNitt-Gray MF, Meyer CR, Reeves AP, Zhao B, Aberle DR, Henschke CI, Hoffman EA et al (2011) The lung image database consortium (lidc) and image database resource initiative (idri): a completed reference database of lung nodules on ct scans. Med Phys 38(2):915–931 (PubMed: 21452728)
Basha SS, Ghosh S, Babu KK, Dubey SR, Pulabaigari V, Mukherjee S (2018) Rccnet: an efficient convolutional neural network for histological routine colon cancer nuclei classification. In: 2018 15th international conference on control, automation, robotics and vision (ICARCV). IEEE, pp 1222–1227
Bernal J, Sánchez FJ, Fernández-Esparrach G, Gil D, Rodríguez C, Vilariño F (2015) WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians. Comput Med Imaging Graph 43:99–111
Bernal J, Tajkbaksh N, Sánchez FJ, Matuszewski BJ, Chen H, Yu L, Angermann Q, Romain O, Rustad B, Balasingham I et al (2017) Comparative validation of polyp detection methods in video colonoscopy: results from the miccai 2015 endoscopic vision challenge. IEEE Trans Med Imaging 36(6):1231–1249
Birodkar V, Lu Z, Li S, Rathod V, Huang J (2021) The surprising impact of mask-head architecture on novel class segmentation. Preprint at arXiv:2104.00613
Buda M (2020) Brain mri segmentation. [Online]. https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation
Buda M, Saha A, Mazurowski MA (2019) Association of genomic subtypes of lower-grade gliomas with shape features automatically extracted by a deep learning algorithm. Comput Biol Med 109:218–225
Budginaitė E, Morkūnas M, Laurinavičius A, Treigys P (2021) Deep learning model for cell nuclei segmentation and lymphocyte identification in whole slide histology images. Informatica 32(1):23–40
Caicedo JC et al (2019) Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl. Nat Methods 16(12):1247–1253
Camalan S, Mahmood H, Binol H, Araújo ALD, Santos-Silva AR, Vargas PA et al (2021) Convolutional neural network-based clinical predictors of oral dysplasia: class activation map analysis of deep learning results. Cancers. https://doi.org/10.3390/cancers13061291
Candemir S, Jaeger S, Palaniappan K, Musco JP, Singh RK, Xue Z, Karargyris A, Antani S, Thoma G, McDonald CJ (2013) Lung segmentation in chest radiographs using anatomical atlases with non-rigid registration. IEEE Trans Med Imaging 33:577–590
Cardona A et al (2010) An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol 8:e1000502
Celik Y, Talo M, Yildirim O, Karabatak M, Acharya UR (2020) Automated invasive ductal carcinoma detection based using deep transfer learning with whole-slide images. Pattern Recogn Lett 133:232–239
Cervantes-Sanchez F, Maktabi M, Köhler H, Sucher R, Rayes N, Avina-Cervantes JG et al (2021) Automatic tissue segmentation of hyperspectral images in liver and head neck surgeries using machine learning. Artif Intell Surg 1:22–37
Chanchal AK, Lal S, Kini J (2021a) High resolution deep transferred ASPPU-net for nuclei segmentation of histopathology images. Int J Comput Assist Radiol Surg. https://doi.org/10.1007/s11548-021-02497-9. (PMID: 34622381)
Chanchal AK, Kumar A, Lal S, Kini J (2021b) Efficient and robust deep learning architecture for segmentation of kidney and breast histopathology images. Comput Electr Eng 92:107177
Chen L, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2018) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans Pattern Anal Mach Intell 40(4):834–848
Chen S, Ding C, Tao D (2020) Boundary-assisted region proposal networks for nucleus segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 279–288
Chen S, Ding C, Liu M, Tao D (2021) CPP-Net: Context-aware polygon proposal network for nucleus segmentation. Preprint at arXiv:2102.06867
Chidester B, Ton T-V, Tran M-T, Ma J, Do MN (2019) Enhanced rotation-equivariant U-net for nuclear segmentation. In: Proceedings of the 2019 IEEE/CVF conference on computer vision and pattern recognition workshops (CVPRW), Long Beach, 16–17 June 2019, pp 1097–1104
Cicconet M, Hochbaum DR, Richmond DL, Sabatin BL (2017) Bots for software-assisted analysis of image-based transcriptomics. In: Proc. IEEE Int. Conf. Comput. Vis. Workshops (ICCVW), pp 134–142
Codella NC, Gutman D, Celebi ME, Helba B, Marchetti MA, Dusza SW et al (2018) Skin lesion analysis toward melanoma detection: a challenge at the 2017 international symposium on biomedical imaging (isbi), hosted by the international skin imaging collaboration (isic). In: 2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018). IEEE, pp 168–172
Codella NCF, Rotemberg V, Tschandl P, Celebi ME, Dusza SW, Gutman D, Helba B, Kalloo A, Liopyris K, Marchetti MA, Kittler H, Halpern A (2019) Skin lesion analysis toward melanoma detection 2018a: A challenge hosted by the international skin imaging collaboration (ISIC), CoRRabs/1902.03368
Cohen JP, Morrison P, Dao L (2020) Covid-19 image data collection. Preprint at arXiv:2003.11597
Cruz-Roa A, Basavanhally A, González F, Gilmore H, Feldman M, Ganesan S, Shih N, Tomaszewski J, Madabhushi A (2014) Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks. In: Med. Imaging 2014 Digit. Pathol. SPIE, pp 904103. https://doi.org/10.1117/12.2043872.
Data science bowl (2018) https://www.kaggle.com/c/data-science-bowl-2018
Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) ImageNet: a large-scale hierarchical image database. In: CVPR09
Dewan MAA, Ahmad MO, Swamy MNS (2011) Tracking biological cells in time-lapse microscopy: an adaptive technique combining motion and topological features. IEEE Trans Biomed Eng 58(6):1637–1647
Dinh TL, Kwon SG, Lee SH, Kwon KR (2021) Breast tumor cell nuclei segmentation in histopathology images using EfficientUnet++ and multi-organ transfer learning. J Korea Multimed Soc 24(8):1000–1011
Dogan RO, Dogan H, Bayrak C, Kayikcioglu T (2021) A two-phase approach using mask R-CNN and 3D U-net for high-accuracy automatic segmentation of pancreas in CT imaging. Comput Methods Programs Biomed 207:106141
Elmore JG et al (2015) Diagnostic concordance among pathologists interpreting breast biopsy specimens. JAMA 313:1122–1132
Feng L, Song JH, Kim J, Jeong S, Park JS, Kim J (2019) Robust nucleus detection with partially labeled exemplars. IEEE Access 7:162169–162178
Feng Y, Hafiane A, Laurent H (2020) A deep learning based multiscale approach to segment cancer area in liver whole slide image. Preprint at arXiv:2007.12935
Fishman D, Salumaa S-O, Majoral D et al (2019) Segmenting Nuclei in Brightfield Images with Neural Networks. bioRxiv. https://doi.org/10.1101/764894
Gamper J, Koohbanani NA, Benet K, Khuram A, Rajpoot N (2019) PanNuke: an open pan-cancer histology dataset for nuclei instance segmentation and classification. In: Proc. Eur. Congr. Digit. Pathol. (ECDP), pp 11–19
Gamper J et al (2020) PanNuke dataset extension, insights and baselines. Preprint at arXiv:2003.10778
Gong X, Chen S, Zhang B, Doermann D (2021) Style consistent image generation for nuclei instance segmentation. In: Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp 3994–4003
Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, Rajpoot N (2019) Hover-net: simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal 58:101563
Grishagin IV (2015) Automatic cell counting with ImageJ. Anal Biochem 473:63–65
Gudhe NR, Behravan H, Sudah M, Okuma H, Vanninen R, Kosma VM, Mannermaa A (2021) Multi-level dilated residual network for biomedical image segmentation. Sci Rep 11(1):1–18
Han JW, Breckon TP, Randell DA, Landini G (2008) Radicular cysts and odontogenic keratocysts epithelia classification using cascaded Haar classifiers (PDF). In: Proc. 12th annual conference on medical image understanding and analysis, pp 54–58 (Retrieved 8 April 2013)
Han JW, Breckon TP, Randell DA, Landini G (2012) The application of support vector machine classification to detect cell nuclei for automated microscopy. Mach vis Appl 23(1):15–24. https://doi.org/10.1007/s00138-010-0275-y. (Retrieved 8 April 2013)
Hassan L, Saleh A, Abdel-Nasser M, Omer OA, Puig D (2021a) Promising deep semantic nuclei segmentation models for multi-institutional histopathology images of different organs. Int J Interact Multimed Artif Intell 6(6)
Hassan L, Saleh A, Abdel-Nasser M, Omer OA, Puig D (2021b) Efficient multi-organ multi-center cell nuclei segmentation method based on deep learnable aggregation network. Traitement Du Signal 38(3):653–661
Hayakawa T, Prasath VB, Kawanaka H, Aronow BJ, Tsuruoka S (2021) Computational nuclei segmentation methods in digital pathology: a survey. Arch Comput Methods Eng 28(1):1–13
He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. IEEE Conf Comput vis Pattern Recog (CVPR) 2016:770–778
Ioannidis GS, Trivizakis E, Metzakis I, Papagiannakis S, Lagoudaki E, Marias K (2021) Pathomics and deep learning classification of a heterogeneous fluorescence histology image dataset. Appl Sci 11(9):3796
Irshad H, Veillard A, Roux L, Racoceanu D (2013) Methods for nuclei detection, segmentation, and classification in digital histopathology: a review—current status and future potential. IEEE Rev Biomed Eng 7:97–114
Irshad H, Kouhsari LM, Waltz G, Bucur O, Nowak JA, Dong F, Knoblauch NW, Beck AH (2015) Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd. In: Pacific symposium on biocomputing (PSB). pp 294–305. https://doi.org/10.13140/2.1.4067.0721
Jaeger S, Karargyris A, Candemir S, Folio L, Siegelman J, Callaghan F, Xue Z, Palaniappan K, Singh RK, Antani S et al (2013) Automatic tuberculosis screening using chest radiographs. IEEE Trans Med Imaging 33:233–245
Jahanifar M, Tajeddin NZ, Koohbanani NA, Rajpoot N (2021) Robust interactive semantic segmentation of pathology images with minimal user input. Preprint at arXiv:2108.13368
Janowczyk A, Madabhushi A (2016) Deep learning for digital pathology image analysis: a comprehensive tutorial with selected use cases. J Pathol Inform. https://doi.org/10.4103/2153-3539.186902
Jantzen J, Norup J, Dounias G, Bjerregaard B (2005) Pap-smear benchmark data for pattern classification. Nature inspired Smart Information Systems (NiSIS 2005), pp 1–9
Jevtic P, Edens LJ, Vukovic LD, Levy DL (2014) Sizing and shaping the nucleus: mechanisms and significance. Curr Opin Cell Biol 28:16–27. https://doi.org/10.1016/j.ceb.2014.01.003
Jha D, Riegler MA, Johansen D, Halvorsen P, Johansen HD (2020) Doubleu-net: a deep convolutional neural network for medical image segmentation. In: 2020 IEEE 33rd International symposium on computer-based medical systems (CBMS). IEEE, pp 558–564
Jung H, Lodhi B, Kang J (2019) An automatic nuclei segmentation method based on deep convolutional neural networks for histopathology images. BMC Biomed Eng 1(1):1–12
Kadia DD, Alom MZ, Burada R, Nguyen TV, Asari VK (2021) R2U3D: recurrent residual 3D U-net for lung segmentation. Preprint at arXiv:2105.02290
Kanadath A, Jothi JAA, Urolagin S (2021) Histopathology image segmentation using MobileNetV2 based U-net model. In: 2021 international conference on intelligent technologies (CONIT). IEEE, pp 1–8
Kang Q, Lao Q, Fevens T (2019) Nuclei segmentation in histopathological images using two-stage learning. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 703–711
Kather JN, Krisam J, Charoentong P, Luedde T, Herpel E, Weis CA et al (2019) Predicting survival from colorectal cancer histology slides using deep learning: a retrospective multicenter study. PLoS Med 16:e1002730
Ke J, Shen Y, Lu Y, Deng J, Wright JD, Zhang Y et al (2021) Quantitative analysis of abnormalities in gynecologic cytopathology with deep learning. Lab Invest 101(4):513–524
Ker J, Wang L, Rao J, Lim T (2017) Deep learning applications in medical image analysis. IEEE Access 6:9375–9389
Khan AR, Khan S, Harouni M, Abbasi R, Iqbal S, Mehmood Z (2021) Brain tumor segmentation using K‐means clustering and deep learning with synthetic data augmentation for classification. Microscopy Research and Technique
Kimura H, Yonemura Y (1991) Flow cytometric analysis of nuclear DNA content in advanced gastric cancer and its relationship with prognosis. Cancer 67(10):2588–2593
Kong Y, Genchev GZ, Wang X, Zhao H, Lu H (2020) Nuclear segmentation in histopathological images using two-stage stacked U-nets with attention mechanism. Front Bioeng Biotechnol 8:1246
Koohbanani NA, Jahanifar M, Gooya A, Rajpoot N (2019) Nuclear instance segmentation using a proposal-free spatially aware deep learning framework. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 622–630
Kowal M, Filipczuk P (2014) Nuclei segmentation for computer-aided diagnosis of breast cancer. Int J Appl Math Comput Sci 24(1):19–31
Krizhevsky A, Sutskever I, Hinton GE (2017) Imagenet classification with deep convolutional neural networks. Commun ACM 60(6):84–90
Kromp F, Bozsaky E, Rifatbegovic F, Fischer L, Ambros M, Berneder M, Weiss T, Lazic D, Dörr W, Hanbury A et al (2020) An annotated fluorescence image dataset for training nuclear segmentation methods. Sci Data 7:1–8
Kumar N, Verma R, Sharma S, Bhargava S, Vahadane A, Sethi A (2017) A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans Med Imaging 36(7):1550–1560
Kumar N et al (2020) A multi-organ nucleus segmentation challenge. IEEE Trans Med Imaging. https://doi.org/10.1109/TMI.2019.2947628
Lagree A, Mohebpour M, Meti N, Saednia K, Lu FI, Slodkowska E et al (2021) A review and comparison of breast tumor cell nuclei segmentation performances using deep convolutional neural networks. Sci Rep 11(1):1–11
Lal S, Das D, Alabhya K, Kanfade A, Kumar A, Kini J (2021) NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images. Comput Biol Med 128:104075
Li W (2015) Automatic segmentation of liver tumor in ct images with deep convolutional neural networks. J Comput Commun 3(11):146–151
Li J, Hu Z, Yang S (2019a) Accurate nuclear segmentation with center vector encoding. In: International conference on information processing in medical imaging. Springer, Cham, pp 394–404
Li C et al (2019b) Weakly supervised mitosis detection in breast histopathology images using concentric loss. Med Image Anal 53:165–178
Li L, Wei M, Liu B, Atchaneeyasakul K, Zhou F, Pan Z, Kumar SA, Zhang JY, Pu Y, Liebeskind DS, Scalzo F (2020) Deep learning for hemorrhagic lesion detection and segmentation on brain CT images. IEEE J Biomed Health Inform 25(5):1646–1659
Li Y, Wu X, Li C, Sun C, Li X, Rahaman M, Zhang Y (2021a) Intelligent gastric histopathology image classification using hierarchical conditional random field based attention mechanism. In: 2021 13th international conference on machine learning and computing, pp 330–335
Li X, Yang H, He J, Jha A, Fogo AB, Wheless LE et al (2021b) BEDS: bagging ensemble deep segmentation for nucleus segmentation with testing stage stain augmentation. In: 2021 IEEE 18th international symposium on biomedical imaging (ISBI). IEEE, pp 659–662
Lin TY, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Zitnick CL, Dollár P (2015) Microsoft COCO: common objects in context. Preprint at arXiv:1405.0312
Liu J, Xu B, Zheng C, Gong Y, Garibaldi J, Soria D et al (2018a) An end-to-end deep learning histochemical scoring system for breast cancer TMA. IEEE Trans Med Imaging 38(2):617–628
Liu Y, Zhang P, Song Q, Li A, Zhang P, Gui Z (2018b) Automatic segmentation of cervical nuclei based on deep learning and a conditional random field. IEEE Access 6:53709–53721
Liu D, Zhang D, Song Y, Zhang C, Zhang F, O'Donnell L, Cai W (2019) Nuclei segmentation via a deep panoptic model with semantic feature fusion. In: IJCAI, pp 861–868
Liu X, Guo Z, Cao J, Tang J (2021a) MDC-Net: a new convolutional neural network for nucleus segmentation in histopathology images with distance maps and contour information. Comput Biol Med 135:104543
Liu K, Mokhtari M, Li B, Nofallah S, May C, Chang O et al (2021b) Learning melanocytic proliferation segmentation in histopathology images from imperfect annotations. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp 3766–3775
Ljosa V, Sokolnicki KL, Carpenter AE (2012) Annotated high-throughput microscopy image sets for validation. Nat Methods 9(7):637
Louis DN et al (2015) Computational pathology: a path ahead. Arch Pathol Lab Med 140(1):41–50
Lu C, Romo-Bucheli D, Wang X, Janowczyk A, Ganesan S, Gilmore H, Rimm D, Madabhushi A (2018) Nuclear shape and orientation features from H&E images predict survival in early-stage estrogen receptor-positive breast cancers. Lab Invest 98(11):1438
LUNA16—Home (2020) [Online]. https://luna16.grand-challenge.org/. Accessed 4 Nov 2020
Mahbod A, Schaefer G, Ellinger I, Ecker R, Smedby Ö, Wang C (2019) A two-stage U-Net algorithm for segmentation of nuclei in H&E-stained tissues. In: European congress on digital pathology. Springer, Cham, pp 75–82
Mahmood T, Arsalan M, Owais M, Lee MB, Park KR (2020) Artificial intelligence-based mitosis detection in breast cancer histopathology images using faster R-CNN and Deep CNNs. J Clin Med 9:749
Mahmood T, Owais M, Noh KJ, Yoon HS, Koo JH, Haider A et al (2021) Accurate segmentation of nuclear regions with multi-organ histopathology images using artificial intelligence for cancer diagnosis in personalized medicine. J Personal Med 11(6):515
Maktabi M, Köhler H, Ivanova M et al (2020) Classification of hyperspectral endocrine tissue images using support vector machines. Int J Med Robot 16:1–10
Mehta S, Lu X, Weaver D, Elmore JG, Hajishirzi H, Shapiro L (2020) HATNet: an end-to-end holistic attention network for diagnosis of breast biopsy images. Preprint at arXiv:2007.13007
Meijering E, Dzyubachyk O, Smal I, van Cappellen WA (2009) Tracking in cell and developmental biology. Semin Cell Dev Biol 20(8):894–902
Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J et al (2014) The multimodal brain tumor image segmentation benchmark (BraTS). IEEE Trans Med Imaging 34(10):1993–2024
Natarajan VA, Kumar MS, Patan R, Kallam S, Mohamed MYN (2020) Segmentation of nuclei in histopathology images using fully convolutional deep neural architecture. In: 2020 International conference on computing and information technology (ICCIT-1441). IEEE, pp 1–7
Naylor P, Laé M, Reyal F, Walter T (2017) Nuclei segmentation in histopathology images using deep neural networks. In: 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017). IEEE, pp 933–936. https://doi.org/10.1109/isbi.2017.7950669
Naylor P, Laé M, Reyal F, Walter T (2018) Segmentation of nuclei in histopathology images by deep regression of the distance map. IEEE Trans Med Imaging 38(2):448–459. https://doi.org/10.1109/TMI.2018.2865709
Özyurt F, Sert E, Avci E, Dogantekin E (2019) Brain tumor detection based on convolutional neural network with neutrosophic expert maximum fuzzy sure entropy. Measurement 147:106830
Paeng K, Hwang S, Park S, Kim M (2017) A unified framework for tumor proliferation score prediction in breast histopathology. In: Lecture notes in computer science, vol 10553 LNCS, pp 231–239
PAIP2019 (2019) https://paip2019.grand-challenge.org/
Plissiti ME, Dimitrakopoulos P, Sfikas G, Nikou C, Krikoni O, Charchanti A (2018) SIPAKMED: a new dataset for feature and image based classification of normal and pathological cervical cells in Pap smear images. In: 2018 25th IEEE international conference on image processing (ICIP). IEEE, pp 3144–3148
Piracicaba Dental Ethical Committee (2019) Registration number 42235421.9.0000.5418
Podder S, Bhattacharjee S, Roy A (2021) An efficient method of detection of COVID-19 using Mask R-CNN on chest X-Ray images. AIMS Biophysics 8(3):281–290
Porzi L, Bulo SR, Penate-Sanchez A, Ricci E, Moreno-Noguer F (2016) Learning depth-aware deep representations for robotic perception. IEEE Robot Autom Lett 2(2):468–475
Qu H, Wu P, Huang Q, Yi J, Yan Z, Li K et al (2020) Weakly supervised deep nuclei segmentation using partial points annotation in histopathology images. IEEE Trans Med Imaging 39(11):3655–3666
Rabbani M (2002) JPEG2000: Image compression fundamentals, standards and practice. J Electron Imaging 11(2):286
Rai R, Das A, Dhal KG (2022) Nature-inspired optimization algorithms and their significance in multi-thresholding image segmentation: an inclusive review. Evol Syst. https://doi.org/10.1007/s12530-022-09425-5
Reza MS, Ma J (2018) Imbalanced histopathological breast cancer image classification with convolutional neural network. In: 14th IEEE international conference on signal processing (ICSP), pp 619–624
Romero FP, Tang A, Kadoury S (2019) Multi-level batch normalization in deep networks for invasive ductal carcinoma cell discrimination in histopathology images. Preprint at arXiv:1901.03684
Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 234–241
Roth HR, Lu L, Lay N, Harrison AP, Farag A, Sohn A, Summers RM (2018) Spatial aggregation of holistically-nested convolutional neural networks for automated pancreas localization and segmentation. Med Image Anal 45:94–107
Roy M, Kong J, Kashyap S, Pastore VP, Wang F, Wong KC, Mukherjee V (2021) Convolutional autoencoder based model histocae for segmentation of viable tumor regions in liver whole-slide images. Sci Rep 11(1):1–10
Schols RM, ter Laan M, Stassen LP et al (2014) Differentiation between nerve and adipose tissue using wide-band (350–1,830 nm) in vivo diffuse reflectance spectroscopy. Lasers Surg Med 46:538–545
Schols RM, Alic L, Wieringa FP, Bouvy ND, Stassen LP (2017) Towards automated spectroscopic tissue classification in thyroid and parathyroid surgery. Int J Med Robot 13:e1748
Seetha J, Raja SS (2018) Brain tumor classification using convolutional neural networks. Biomed Pharmacol J 11:1457–1461
Shuvo MB, Ahommed R, Reza S, Hashem MMA (2021) CNL-UNet: a novel lightweight deep learning architecture for multimodal biomedical image segmentation with false output suppression. Biomed Signal Process Control 70:102959
Silva AB, Martins AS, Neves LA, Faria PR, Tosta TA, do Nascimento MZ (2019) Automated nuclei segmentation in dysplastic histopathological oral tissues using deep neural networks. In: Iberoamerican congress on pattern recognition. Springer, Cham, pp 365–374
Sirinukunwattana K, Raza SEA, Tsang Y-W, Snead DR, Cree IA, Rajpoot NM (2016) Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans Med Imaging 35(5):1196–1206
Sohail A, Khan A, Wahab N, Zameer A, Khan S (2021) A multi-phase deep CNN based mitosis detection framework for breast cancer histopathological images. Sci Rep 11(1):1–18
Song T-H, Sanchez V, El-Daly H, Rajpoot NM (2017) Dual-channel active contour model for megakaryocytic cell segmentation in bone marrow trephine histology images. IEEE Trans Bio-Med Eng 64(12):2913–2923
Spanhol FA, Oliveira LS, Petitjean C, Heutte L (2015) A dataset for breast cancer histopathological image classification. IEEE Trans Biomed Eng 63:1455–1462
Su H, Xing F, Kong X, Xie Y, Zhang S, Yang L (2015) Robust cell detection and segmentation in histopathological images using sparse reconstruction and stacked denoising autoencoders. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 383–390
Szeliski R (2010) Computer vision: algorithms and applications. Springer Science & Business Media, Berlin
Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J (2016) Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging 35(5):1299–1312 (PubMed: 26978662)
Tarighat AP (2021) Breast tumor segmentation using deep learning by U-net network. J Telecommun Electron Comput Eng (JTEC) 13(2):49–54
The Cancer Genome Atlas (TCGA) (2016) [Online]. http://cancergenome.nih.gov/. Accessed 14 May 2016
Toğaçar M, Özkurt KB, Ergen B, Cömert Z (2020) BreastNet: a novel convolutional neural network model through histopathological images for the diagnosis of breast cancer. Phys A Stat Mech Appl 545:123592. http://www.sciencedirect.com/science/article/pii/S0378437119319995
Tomczak K, Czerwińska P, Wiznerowicz M (2015) The cancer genome atlas (TCGA): an immeasurable source of knowledge. Wspolczesna Onkol 2015:68–77. https://doi.org/10.5114/wo.2014.47136
Tschandl P, Rosendahl C, Kittler H (2018) The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci Data 5:180161
Ultrasound nerve segmentation (2016) https://www.kaggle.com/c/ultrasound-nerve-segmentation
Vahadane A et al (2016) Structure-preserving color normalization and sparse stain separation for histological images. IEEE Trans Med Imaging 35:1962–1971
VESSEL12—Home (2020) [Online]. https://vessel12.grand-challenge.org/. Accessed 4 Nov 2020
Vivanti R, Ephrat A, Joskowicz L, Karaaslan O, Lev-Cohain N, Sosna J (2015) Automatic liver tumor segmentation in follow-up CT studies using convolutional neural networks. In: Proceedings of the methods in medical image processing workshop, vol 2
Vu QD, Graham S, To MNN, Shaban M, Qaiser T, Koohbanani NA, Khurram SA, Kurc T, Farahani K, Zhao T et al (2018) Methods for segmentation and classification of digital microscopy tissue images. Preprint at arXiv:1810.13230
Wahab N, Khan A, Lee YS (2019) Transfer learning based deep CNN for segmentation and detection of mitoses in breast cancer histopathological images. Microscopy 68:216–233
Wang EK, Zhang X, Pan L, Cheng C, Dimitrakopoulou-Strauss A, Li Y, Zhe N (2019a) Multi-path dilated residual network for nuclei segmentation and detection. Cells 8(5):499
Wang S, Zhu Y, Yu L, Chen H, Lin H, Wan X et al (2019b) RMDL: recalibrated multi-instance deep learning for whole slide gastric image classification. Med Image Anal 58:101549
Wang H, Xian M, Vakanski A (2020) Bending loss regularized network for nuclei segmentation in histopathology images. In: 2020 IEEE 17th international symposium on biomedical imaging (ISBI). IEEE, pp 1–5
Wang H, Vakanski A, Shi C, Xian M (2021) Bend-Net: bending loss regularized multitask learning network for nuclei segmentation in histopathology images. Preprint at arXiv:2109.15283
Wenzhong L, Huanlan L, Caijian H, Liangjun Z (2020) Classifications of breast cancer images by deep learning. medRxiv
Woo S, Park J, Lee J-Y, Kweon IS (2018) CBAM: convolutional block attention module. Lecture Notes Comput Sci. https://doi.org/10.1007/978-3-030-01234-2_1
Xiao W, Jiang Y, Yao Z, Zhou X, Lian J, Zheng Y (2021) Polar representation-based cell nucleus segmentation in non-small cell lung cancer histopathological images. Biomed Signal Process Control 70:103028
Yoo I, Yoo D, Paeng K (2019) Pseudoedgenet: nuclei segmentation only with point annotations. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 731–739
Yu J-M, Yang L-H et al (1989) Flow cytometric analysis of DNA content in esophageal carcinoma: correlation with histologic and clinical features. Cancer 64:80–82
Zeng Z, Xie W, Zhang Y, Lu Y (2019) RIC-Unet: an improved neural network based on Unet for nuclei segmentation in histology images. IEEE Access 7:21420–21428
Zhang Z, Lin C (2018) Pathological image classification of gastric cancer based on depth learning. ACM Trans Intell Syst Technol 45(11A):263–268
Zhao J, Li Q, Li X, Li H, Zhang L (2019a) Automated segmentation of cervical nuclei in pap smear images using deformable multi-path ensemble model. In: 2019 IEEE 16th international symposium on biomedical imaging (ISBI 2019). IEEE, pp 1514–1518
Zhao J, Dai L, Zhang M, Yu F, Li M, Li H et al (2019b) PGU-net+: progressive growing of U-net+ for automated cervical nuclei segmentation. In: International workshop on multiscale multimodal medical imaging, pp 51–58
Zhou Y, Xie L, Shen W, Fishman E, Yuille A (2016) Pancreas segmentation in abdominal CT scan: a coarse-to-fine approach. Preprint at arXiv:1612.08230
Zhou Z, Shin J, Zhang L, Gurudu S, Gotway M, Liang J (2017) Fine-tuning convolutional neural networks for biomedical image analysis: actively and incrementally. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 7340–7351
Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J (2018) Unet++: a nested u-net architecture for medical image segmentation. In: Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, Cham, pp 3–11
Zhou Y, Onder OF, Dou Q, Tsougenis E, Chen H, Heng PA (2019a) CIA-Net: robust nuclei instance segmentation with contour-aware information aggregation. In: International conference on information processing in medical imaging. Springer, Cham, pp 682–693
Zhou Y, Chen H, Xu J, Dou Q, Heng PA (2019b) IRNet: instance relation network for overlapping cervical cell segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 640–648
Zhou Y, Chen H, Lin H, Heng PA (2020) Deep semi-supervised knowledge distillation for overlapping cervical cell instance segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer, Cham, pp 521–531
Zhou C, Jin Y, Chen Y, Huang S, Huang R, Wang Y et al (2021) Histopathology classification and localization of colorectal cancer using global labels by weakly supervised deep learning. Comput Med Imaging Graph 88:101861
Zunair H, Hamza AB (2021) Sharp U-Net: depthwise convolutional network for biomedical image segmentation. Comput Biol Med 136:104699
Funding
No funding was received for this research.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Cite this article
Basu, A., Senapati, P., Deb, M. et al. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 15, 203–248 (2024). https://doi.org/10.1007/s12530-023-09491-3