Abstract
Digital pathology and the examination of microscopy images are broadly used to investigate cell morphology and tissue structure. Manual assessment of these images is labour-intensive and prone to inter-observer and intra-observer variation. Detection and segmentation of cell nuclei is the first step in the quantitative analysis of biomedical microscopy images, which supports cancer diagnosis and prognosis. Many methods are available for segmenting nuclei, but no single method works across different imaging experiments; a method must typically be chosen and designed for each experiment. Here we describe a deep learning approach to segmentation that can be applied to different types of images and experimental conditions without manual parameter adjustment.
1 Introduction
Digital pathology is the process of creating high-resolution images from digitized histology slides. It is gaining significance due to the increasing availability of whole-slide imaging (WSI) scanners [1]. These digitized images make it possible to apply various image analysis techniques to digital pathology for tasks such as detection, segmentation and classification. Existing methodologies have demonstrated their ability not only to reduce the labour and difficulty of accurate quantification, but also to serve as a second opinion that helps pathologists reduce inter-observer variability [2, 3].
Deep learning is a machine learning paradigm for feature learning, in which an appropriate feature space is extracted from the data itself. This is a key property of deep learning methods: it allows the learned model to generalize to other, independent test sets. After training on a rich training set, a deep learning network can generalize well to unseen conditions, removing the need for manually engineered features. Deep learning is therefore well suited to analyzing huge data archives (e.g., TCGA, which contains petabytes of digital tissue slide images).
1.1 Related Work
Various deep learning models have been proposed for cell nuclei segmentation. Song et al. (2014) [4] proposed a CNN-based method for the segmentation of cervical nuclei and cytoplasm: a CNN was applied for nuclei detection, followed by coarse segmentation based on the Sobel edge operator, morphological operations and thresholding. Xing et al. (2016) [5] generated probability maps for nuclei by applying a two-class CNN to digitized histopathology images; to handle overlapping nuclei, they constructed a robust shape model (a dictionary of nuclei shapes) and applied a repulsive deformable model at the local level. Kumar et al. (2017) [6] proposed a three-class CNN that predicts not only the nuclei and background, but also the boundary of each nucleus. This gave significantly better results than the two-class formulation, but the post-processing step was time-consuming. The first FCN for semantic segmentation was presented by Long et al. (2015) [7]; their results showed that the FCN can achieve state-of-the-art segmentation performance, with a significantly faster inference step for obtaining the segmentation mask. For nuclei segmentation in histopathology images, Naylor et al. (2017) [8] used an FCN to obtain a nuclei probability map and then applied the watershed method to split touching nuclei, but the nuclei boundaries predicted by this method were not accurate compared with the ground truth.
Research in deep learning is advancing rapidly, and new architectures are being developed at a fast pace. Given the importance of cell nuclei segmentation, a number of approaches have been proposed to solve this problem, most of them based on U-Net [9], the most common architecture for medical image segmentation. Designed specifically for biomedical image segmentation, this architecture won the Cell Tracking Challenge in 2015 [9]. Several U-Net-based approaches have been presented for nuclei segmentation. Cui et al. (2018) [10] proposed a method, inspired by U-Net, to predict nuclei and their contours simultaneously in H&E-stained images; by predicting the contour of each nucleus and applying a sophisticated weight map in the loss function, they were able to split touching and overlapping nuclei accurately with a simple, parameter-free post-processing step. Caicedo et al. (2019) [11] trained a U-Net model to predict nuclei and their boundaries, weighting the boundary class 10 times more heavily in the loss function. The winning solutions of the 2018 Kaggle Data Science Bowl [12] were built on U-Net and Mask R-CNN. The first-place solution by [ods.ai] topcoders [13] used a U-Net-based encoder-decoder architecture with encoders initialized from pretrained weights; for post-processing, a combination of watershed and morphological operations was applied. The third-place solution by the Deep Retina team [14] is based on a single Mask R-CNN model using Matterport's Mask R-CNN [15] as its code base. Kong et al. (2020) [16] used two-stage stacked U-Nets, where the first stage performs nuclei segmentation and the second stage tackles overlapping nuclei. Zhou et al. (2020) [17] proposed U-Net++, a modification of the U-Net [9] architecture that combines U-Nets of different depths. Pan et al. [18] proposed AS-UNet, an extension of U-Net consisting of three parts: an encoder module, a decoder module and an atrous convolutional module; their results showed that nuclei could be segmented effectively.
1.2 Nuclei Segmentation
Nuclei segmentation is an important problem because the arrangement of nuclei is correlated with outcome [19], and nuclear morphology plays a vital role in several cancer grading schemes [20, 21]. However, the task poses many challenges and difficulties. Image acquisition introduces noise, background clutter [5] and blurriness [22]. The biological data present nucleus occlusion [5], touching or overlapping nuclei [5], variations in shape [22] and texture (differences in chromatin distribution) [10], and differences in nuclear appearance across pathologies [23]. Experimental variation arises from non-uniform sample preparation [24], different illumination conditions and the use of different staining methods [24]. A review on segmentation [25] shows that detecting nuclei is not the difficult part; the present challenge is accurately finding the borders of nuclei and/or separating touching nuclei.
1.3 Dataset
The dataset provided by the Kaggle 2018 Data Science Bowl challenge is used. It includes 871 images with 37,333 manually annotated nuclei. The images represent 31 experiments with 22 cell types, 15 different resolutions and 5 groups of visually indistinguishable images. The dataset comprises 2D light microscopy images with different staining methods, including DAPI, Hoechst and H&E, and cells of different sizes, displaying structures from a variety of organs and animal models. Of the 31 experiments, 16 are used for training (670 samples) and first-stage evaluation (65 samples), and 15 for second-stage evaluation (106 samples).
2 Proposed Methods
The methodology employed in the experiment is shown in Fig. 1. It consists of three steps: image pre-processing, nuclei segmentation and post-processing.
2.1 Image Pre-processing
During data collection, various factors introduce large imaging differences among the images in the dataset, which affect the segmentation results. A pre-processing step is therefore necessary before segmentation. Firstly, most images in the dataset are grayscale and a few are colour; the colour images are converted to grayscale. Secondly, in some images the contrast between background and nuclei is low, so histogram equalization is applied to better distinguish the nuclei from the background. Then, to improve the signal-to-noise ratio, the images are filtered; in this experiment a Gaussian smoothing filter is used. Before training the network, the images are resized and normalized. To counter overfitting in the CNN, data augmentation is performed using translations, rotations, horizontal/vertical flipping and zoom.
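The pre-processing steps above can be sketched as a single pipeline. This is an illustrative reconstruction, not the authors' code: the function name, the 128 × 128 target size and the Gaussian sigma are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess(image, out_size=(128, 128)):
    """Sketch of the pre-processing pipeline: grayscale conversion,
    histogram equalisation, Gaussian smoothing, resizing, normalisation."""
    img = np.asarray(image, dtype=np.float64)

    # 1. Convert colour images to grayscale (luminance weighting).
    if img.ndim == 3:
        img = img[..., :3] @ np.array([0.299, 0.587, 0.114])

    # 2. Histogram equalisation to raise nucleus/background contrast.
    hist, bins = np.histogram(img.ravel(), bins=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    img = np.interp(img.ravel(), bins[:-1], cdf).reshape(img.shape)

    # 3. Gaussian smoothing to improve the signal-to-noise ratio.
    img = ndi.gaussian_filter(img, sigma=1.0)

    # 4. Resize to the network input size and normalise to [0, 1].
    zoom = (out_size[0] / img.shape[0], out_size[1] / img.shape[1])
    img = ndi.zoom(img, zoom, order=1)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)
```

Augmentation (translations, rotations, flips, zoom) would typically be applied on the fly during training rather than in this pipeline.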
2.2 Nuclei Segmentation
The U-Net architecture is used to segment nuclei from the images in the dataset because of its simplicity for image segmentation. U-Net is inspired by the FCN, but it has more up-sampling layers than the FCN, making it symmetric, as represented in Fig. 1.
The U-Net architecture includes two paths [9]. The first is the down-sampling (contracting) path, known as the encoder. The encoder is composed of convolution and pooling layers, which extract high-level features from the image. In the encoder the spatial size of the image decreases while the depth increases; as the max pooling operations reduce spatial information, the receptive field grows and less important pixels are discarded. The encoder thus generates feature maps that are low-resolution representations of the input image. The second is the up-sampling (expanding) path, known as the decoder. It converts the low-resolution representation back into a high-resolution, pixel-wise segmentation of the original image. At each layer of the expanding path the feature map's height and width are doubled and its depth is halved. Spatial information from the contracting path is re-introduced into the expanding path at this step, an operation represented by the horizontal gray arrows in Fig. 1.
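The encoder/decoder structure described above can be sketched in Keras. This is a minimal two-level U-Net for illustration only; the paper does not specify its exact depth or filter counts, so these values are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(128, 128, 1), base_filters=16):
    """Minimal two-level U-Net sketch with skip connections."""
    inputs = tf.keras.Input(shape=input_shape)

    # Contracting path (encoder): convolutions extract features,
    # max pooling halves height/width while the depth grows.
    c1 = layers.Conv2D(base_filters, 3, activation="relu", padding="same")(inputs)
    c1 = layers.Conv2D(base_filters, 3, activation="relu", padding="same")(c1)
    p1 = layers.MaxPooling2D(2)(c1)

    c2 = layers.Conv2D(base_filters * 2, 3, activation="relu", padding="same")(p1)
    c2 = layers.Conv2D(base_filters * 2, 3, activation="relu", padding="same")(c2)
    p2 = layers.MaxPooling2D(2)(c2)

    # Bottleneck: the lowest-resolution feature maps.
    b = layers.Conv2D(base_filters * 4, 3, activation="relu", padding="same")(p2)

    # Expanding path (decoder): transposed convolutions double
    # height/width; concatenation re-introduces encoder features
    # (the gray skip-connection arrows in Fig. 1).
    u2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = layers.Conv2D(base_filters * 2, 3, activation="relu", padding="same")(u2)

    u1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = layers.Conv2D(base_filters, 3, activation="relu", padding="same")(u1)

    # Sigmoid output yields a per-pixel nucleus probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)
```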
2.3 Post-processing
After nuclei segmentation, touching/overlapping nuclei are handled with the watershed transform, which, combined with morphological operations, separates large merged objects.
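A common way to realise this step is a distance-transform watershed: seeds are placed at local maxima of the distance map, then the inverted map is flooded. This is an illustrative sketch, not the authors' implementation; the function name and `min_distance` value are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_nuclei(mask, min_distance=5):
    """Split a binary segmentation mask into labelled nuclei using a
    distance-transform watershed (illustrative post-processing sketch)."""
    mask = np.asarray(mask, dtype=bool)
    # Distance to the background peaks near the centre of each nucleus.
    distance = ndi.distance_transform_edt(mask)
    # Local maxima of the distance map serve as watershed seeds.
    coords = peak_local_max(distance, min_distance=min_distance,
                            labels=ndi.label(mask)[0])
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flooding the inverted distance map separates touching objects.
    return watershed(-distance, markers, mask=mask)
```

Small spurious maxima can create over-segmentation, which is why morphological cleanup is typically combined with the watershed step.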
3 Experimental Results and Discussion
The model is implemented with the Keras Functional API on the TensorFlow framework. Training took 90 min, with each step taking 2 s on an NVIDIA RTX 2080 Ti.
3.1 Hyperparameter
The model was trained for 50 epochs with batch size 16, the Adam optimizer, binary cross-entropy as the loss function, ReLU activation at the convolution layers, sigmoid activation at the output layer and a learning rate of 1e−5. Table 1 lists the hyperparameters.
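In Keras these hyperparameters map onto the `compile`/`fit` calls roughly as follows. The tiny model here is only a stand-in for the full U-Net, and the commented `fit` call assumes hypothetical `x_train`/`y_train` arrays.

```python
import tensorflow as tf

# Stand-in model; in the paper this would be the full U-Net.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", padding="same",
                           input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])

# Adam optimizer with learning rate 1e-5 and binary cross-entropy loss,
# matching the hyperparameters listed above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="binary_crossentropy")

# model.fit(x_train, y_train, epochs=50, batch_size=16)  # as in the paper
```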
3.2 Evaluation Metrics
The evaluation metrics used were precision, recall, F1 score and IoU, calculated as shown in Eqs. (1–4), where TP, FP, TN and FN denote true positives, false positives, true negatives and false negatives [18].
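These four metrics follow the standard definitions (precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = 2·P·R/(P+R), IoU = TP/(TP+FP+FN)) and can be computed directly from binary masks; the helper below is illustrative and assumes both masks are non-trivial so no denominator is zero.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise precision, recall, F1 and IoU for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # predicted nucleus, truly nucleus
    fp = np.sum(pred & ~truth)   # predicted nucleus, truly background
    fn = np.sum(~pred & truth)   # missed nucleus pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou
```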
Table 2 shows the evaluation metric values compared with the state-of-the-art model [18]. The model in [18] was applied to the multi-organ dataset (MOD) of 30 H&E-stained images from seven organs (kidney, breast, colon, stomach, prostate, liver and bladder) at 1000 × 1000 resolution, with 21,000 manually annotated nuclei. The comparison shows that the proposed method performs better than the models in [18].
Table 3 shows the results of the proposed method on the Kaggle dataset. Comparing Tables 2 and 3 shows that the proposed method behaves differently on different datasets, and that it performs better on the multi-organ dataset.
3.3 Segmentation Result
The segmentation results of the model are shown in Fig. 2 for several images from the dataset. The nucleus positions in the predicted images closely match those in the original images, indicating that the model behaves accurately.
4 Conclusion
The dataset used in the experiment is diverse, with varying size, shape and colour, and includes data from multiple organs. The method behaves differently when applied to different datasets; the results show that it is more promising on the multi-organ dataset. Since the proposed method is a semantic segmentation network, touching nuclei would otherwise be recognized as a single object. The model was applied to the first-stage evaluation of the dataset. The experiment shows that the model cleanly segments images with non-touching nuclei, while the problem of touching and/or overlapping nuclei is handled by the post-processing step.
References
Gurcan, M.N., Boucheron, L.E., Can, A., Madabhushi, A., Rajpoot, N.M., Yener, B.: Histopathological image analysis: a review. IEEE Rev. Biomed. Eng. 2, 147–171 (2009). https://doi.org/10.1109/RBME.2009.2034865
Veta, M., Pluim, J.P.W., van Diest, P.J., Viergever, M.A.: Breast cancer histopathology image analysis: a review. IEEE Trans. Biomed. Eng. 61, 1400–1411 (2014). https://doi.org/10.1109/TBME.2014.2303852
Bhargava, R., Madabhushi, A.: A review of emerging themes in image informatics and molecular analysis for digital pathology. Annu. Rev. Biomed. Eng. 18 (2016). https://doi.org/10.1146/annurev-bioeng-112415-114722
Song, Y., et al.: A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei. In: 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 2903–2906. IEEE (2014). https://doi.org/10.1109/EMBC.2014.6944230
Xing, F., Xie, Y., Yang, L.: An automatic learning-based framework for robust nucleus segmentation. IEEE Trans. Med. Imaging 35(2), 550–566 (2016). https://doi.org/10.1109/TMI.2015.2481436
Kumar, N., Verma, R., Sharma, S., Bhargava, S., Vahadane, A., Sethi, A.: A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Trans. Med. Imaging 36(7), 1550–1560 (2017). https://doi.org/10.1109/TMI.2017.2677499
Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015). https://doi.org/10.1109/TPAMI.2016.2572683
Naylor, P., Lae, M., Reyal, F., Walter, T.: Nuclei segmentation in histopathology images using deep neural networks. In: 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 933–936. IEEE (2017). https://doi.org/10.1109/ISBI.2017.7950669
Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234–241. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24574-4_28
Cui, Y., Zhang, G., Liu, Z., Xiong, Z., Hu, J.: A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Med. Biol. Eng. Comput. 57(9), 2027–2043 (2019). https://doi.org/10.1007/s11517-019-02008-8
Caicedo, J.C., et al.: Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. BioRxiv, p. 335216 (2019). https://doi.org/10.1002/cyto.a.2386
Find the nuclei in divergent images to advance medical discovery. https://www.kaggle.com/c/data-science-bowl-2018
[ods.ai] topcoders, 1st place solution. https://www.kaggle.com/c/data-science-bowl-2018/discussion/54741
Deep Retina, 3rd place solution. https://www.kaggle.com/c/data-science-bowl-2018/discussion/56393
He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN. In: Proceedings of 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2980–2988 (2017)
Kong, Y., Genchev, G.Z., Wang, X., Zhao, H., Lu, H.: Nuclear segmentation in histopathological images using two-stage stacked U-Nets with attention mechanism. Front. Bioeng. Biotechnol. (2020). https://doi.org/10.3389/fbioe.2020.573866
Zhou, Z., Siddiquee, M.M.R., Tajbakhsh, N., Liang, J.: UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 39(6), 1856–1867 (2020). https://doi.org/10.1109/TMI.2019.2959609
Pan, X., Li, L., Yang, D., He, Y., Liu, Z., Yang, H.: An accurate nuclei segmentation algorithm in pathological image based on deep semantic network. IEEE Access 7, 110674–110686 (2019). https://doi.org/10.1109/ACCESS.2019.2934486
Feldman, M., Shih, N., Mies, C., Tomaszewski, J., Ganesan, S., et al.: Multi-field-of-view strategy for image-based outcome prediction of multi-parametric estrogen receptor-positive breast cancer histopathology: comparison to oncotype DX. J. Pathol. Inform. 2, S1 (2011). https://doi.org/10.4103/2153-3539.92027
Genestie, C., et al.: Comparison of the prognostic value of Scarff-Bloom-Richardson and Nottingham histological grades in a series of 825 cases of breast cancer: major importance of the mitotic count as a component of both grading systems. Anticancer Res. 18(1B), 571–576 (1998)
Humphrey, P.A.: Gleason grading and prognostic factors in carcinoma of the prostate. Mod. Pathol. 17, 292–306 (2004). https://doi.org/10.1038/modpathol.3800054
Liu, Y., Zhang, P., Song, Q., Li, A., Zhang, P., Gui, Z.: Automatic segmentation of cervical nuclei based on deep learning and a conditional random field. IEEE Access 6, 53709–53721 (2018). https://doi.org/10.1109/ACCESS.2018.2871153
Höfener, H., Homeyer, A., Weiss, N., Molin, J., Lundström, C.F., Hahn, H.K.: Deep learning nuclei detection: a simple approach can deliver state-of-the-art results. Comput. Med. Imaging Graph. 70, 43–52 (2018). https://doi.org/10.1016/j.compmedimag.2018.08.010
Khoshdeli, M., Parvin, B.: Deep learning models delineate multiple nuclear phenotypes in H&E stained histology sections. arXiv preprint arXiv:1802.04427 (2018)
Irshad, H., Veillard, A., Roux, L., Racoceanu, D.: Methods for nuclei detection, segmentation, and classification in digital histopathology: a review-current status and future potential. IEEE Rev. Biomed. Eng. 7, 97–114 (2014). https://doi.org/10.1109/RBME.2013.2295804
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Ramya Shree, H.P., Minavathi, Dinesh, M.S. (2022). Nuclei Segmentation of Microscopic Images from Multiple Organs Using Deep Learning. In: Guru, D.S., Y. H., S.K., K., B., Agrawal, R.K., Ichino, M. (eds) Cognition and Recognition. ICCR 2021. Communications in Computer and Information Science, vol 1697. Springer, Cham. https://doi.org/10.1007/978-3-031-22405-8_23
Print ISBN: 978-3-031-22404-1
Online ISBN: 978-3-031-22405-8