Abstract
Breast cancer screening is an efficient method to detect breast lesions early. The most common screening techniques are tomosynthesis and mammography. However, traditional manual diagnosis imposes an intense workload on pathologists and is therefore prone to diagnostic errors. Thus, the aim of this study was to build a deep convolutional neural network method for automatic detection, segmentation, and classification of breast lesions in mammography images. A deep-learning-based Mask R-CNN (with RoIAlign) method was developed to automate RoI segmentation. Feature extraction, selection, and classification were then carried out by the DenseNet architecture. Finally, the model was evaluated with the AUC, accuracy, and precision metrics. To summarize, the findings of this study show that the methodology may improve diagnostic efficiency and automatic tumor localization through medical image classification.
1 Introduction
Screening of indeterminate breast lesions makes early detection of breast cancer possible [1,2,3,4,5,6]. Clinically, the most common techniques are ultrasound [7] and mammography [8, 9] imaging. If suspicious lesions are found, further analyses using biopsies [10], histopathological images [11,12,13] and magnetic resonance imaging (MRI) are performed [14].
Ultrasound provides high-quality images without ionizing radiation and enables detection of very small lesions, including masses and microcalcifications. However, mammography (X-ray) is currently the most widely used imaging method for early detection of breast cancer in both symptomatic and asymptomatic patients [2], reducing unnecessary biopsies. The World Health Organization also recommends it as the standard imaging procedure for early diagnosis.
Specialists interpret breast images using the latest Breast Imaging Reporting and Data System (BI-RADS) version [15,16,17]. Nevertheless, traditional manual diagnosis is time-consuming and prone to diagnostic errors [18, 19]. Digital images of physiological structures can be processed to visualize hidden diagnostic features [20].
Automated techniques based on Deep Learning (DL) and Machine Learning (ML) [20,21,22,23,24,25] can be utilized to improve classification, diagnostic accuracy, tumor localization, and treatment monitoring. Convolutional neural networks (CNNs) have been extensively used to analyze medical images [27,28,29,30,31,32,33]. A recent paper by Jiménez et al. [27] reviews DL applications in breast cancer using ultrasound and mammography images.
There are many semi-automated breast tumor classification methodologies [34, 35]. For instance, Ragab et al. [2] used a deep CNN and replaced the last fully connected layer with an SVM as the breast tumor classifier. However, such semi-automated methods cannot totally relieve the pathologist's diagnostic burden. Thus, fully automatic DL techniques have recently gained attention due to their superior performance in automatic feature extraction, selection, and discrimination for breast lesion classification [16, 36,37,38]. A number of CNN architectures, e.g. AlexNet [39], VGGNet [40], ResNet [41], Inception (GoogLeNet) [42], and DenseNet [43], typically pretrained on the ImageNet dataset [43], are of great value in screening; they reduce the need for manual processing by experts, saving time and resources. In this work we selected DenseNet because it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and reduces the number of parameters, as indicated by Huang et al. [18].
Therefore, the principal contribution of this paper is to present a novel Deep CNN method for automatic segmentation and a DenseNet for feature selection and classification of breast lesions in both Cranio-Caudal (CC) and Medio Lateral Oblique (MLO) mammography views, and discuss the results obtained from this network.
2 Materials and Methodology
The workflow for this methodology is illustrated in Fig. 1 and consists of the following steps: (1) breast dataset acquisition and preprocessing; (2) RoI (Region of Interest) image segmentation using a Mask R-CNN with the RoIAlign technique; (3) feature extraction, selection, and classification using the DenseNet architecture; (4) evaluation of performance metrics. Mask R-CNN and RoIAlign are discussed below.
2.1 Dataset
Images from the public Breast Cancer Digital Repository (BCDR) were used for training and evaluation of the CNN. The BCDR-DM [44] mammography dataset contains 724 patients (723 female and 1 male). In addition to individual clinical data, each patient's mammograms include both CC and MLO views as well as the coordinates of the lesion contours. The images are grey-level mammograms with a resolution of 3328 (width) by 4084 (height) or 2560 (width) by 3328 (height) pixels, depending on the compression plate used in the acquisition (chosen according to the breast size of the patient).
2.2 Segmentation
Preprocessing consists of breast border extraction, pectoral muscle removal, and tumor delineation from the background [24]. This is followed by Region of Interest (RoI) segmentation, which is necessary to target and crop the bounding box of each lesion automatically. For that, a hold-out split was used to divide the dataset into 80% training (579 images) and 20% testing (145 images), where the 579 training segmentations were made manually by specialized radiologists based on BI-RADS criteria (Fig. 2).
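As a sketch, the hold-out split above can be reproduced with a seeded shuffle; the integer IDs here are placeholders for the actual BCDR image files, and the function name is ours:

```python
import random

def holdout_split(items, train_frac=0.8, seed=42):
    """Shuffle a list of image IDs and split it into train/test subsets."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 724 BCDR images -> 579 training / 145 testing, matching the paper's split
train, test = holdout_split(range(724))
print(len(train), len(test))  # 579 145
```

Seeding the shuffle makes the split reproducible across runs, which matters when segmentation and classification are trained in separate stages on the same partition.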
Once the RoI is detected and cropped, we extract the feature maps of the tumor contour with a Mask R-CNN [45] network trained using the RoI alignment (RoIAlign) technique. This technique uses bilinear interpolation to smoothly crop a patch from the full-image feature maps based on a region proposal network (RPN), and then resizes the cropped patch to the desired spatial size. It has been shown to outperform RoI pooling [28].
In the RoIAlign method, four sampling points are placed in each bin of the dashed grid (Fig. 3), and the value of each sampling point is computed by bilinear interpolation from the nearby grid points on the feature map. By contrast, RoI pooling uses max pooling to convert the features in a projected image region of arbitrary size h × w into a small fixed window H × W: the input region is divided into an H × W grid of sub-windows, each of approximate size h/H × w/W, and max pooling is then applied within every sub-window.
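To make the sampling concrete, here is a minimal NumPy sketch of one RoIAlign bin: four regularly spaced points are placed inside the bin and each is evaluated by bilinear interpolation on the feature map. Function names and the averaging of the four samples are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Bilinearly interpolate the feature map at a continuous point (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, fmap.shape[0] - 1), min(x0 + 1, fmap.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return (fmap[y0, x0] * (1 - dy) * (1 - dx) + fmap[y0, x1] * (1 - dy) * dx
            + fmap[y1, x0] * dy * (1 - dx) + fmap[y1, x1] * dy * dx)

def roi_align_bin(fmap, y_start, x_start, bin_h, bin_w):
    """Average four regularly spaced sample points inside one RoIAlign bin."""
    pts = [(y_start + bin_h * fy, x_start + bin_w * fx)
           for fy in (0.25, 0.75) for fx in (0.25, 0.75)]
    return float(np.mean([bilinear_sample(fmap, y, x) for y, x in pts]))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(fmap, 0.5, 0.5))   # 2.5: average of 0, 1, 4, 5
print(roi_align_bin(fmap, 0, 0, 2, 2))   # 5.0
```

Because the sample coordinates stay continuous, no quantization of the RoI boundary occurs, which is the key difference from RoI pooling.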
During Mask R-CNN training, the loss function $L$ (Eq. 1) is minimized,

$$L = L_{class} + L_{box} + L_{mask} \quad (1)$$

where $L_{class}$ is the classification loss, $L_{box}$ is the bounding-box regression loss, and $L_{mask}$ is the average binary cross-entropy mask-prediction loss. The terms $L_{class} + L_{box}$ and $L_{class}$ are defined by Eqs. (2) and (3):

$$L_{class} + L_{box} = \frac{1}{N_{cls}}\sum\nolimits_{i} L_{cls}\left(p_{i}, p_{i}^{*}\right) + \frac{\lambda}{N_{box}}\sum\nolimits_{i} p_{i}^{*}\,{\rm smooth}_{L_1}\left(t_{i} - t_{i}^{*}\right) \quad (2)$$

$$L_{cls}\left(p_{i}, p_{i}^{*}\right) = -p_{i}^{*}\log p_{i} - \left(1 - p_{i}^{*}\right)\log\left(1 - p_{i}\right) \quad (3)$$

where ${\rm smooth}_{L_1}$ in Eq. (2) is given by:

$${\rm smooth}_{L_1}(x) = \begin{cases} 0.5\,x^{2} & {\rm if}\ |x| < 1 \\ |x| - 0.5 & {\rm otherwise} \end{cases} \quad (4)$$

and $L_{mask}$ is:

$$L_{mask} = -\frac{1}{m^{2}}\sum_{1 \le i,j \le m}\left[ y_{ij}\log \hat{y}_{ij} + \left(1 - y_{ij}\right)\log\left(1 - \hat{y}_{ij}\right) \right] \quad (5)$$
The different variables are interpreted in Table 1.
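The individual loss terms can be sketched in plain Python. This is an illustrative implementation of the standard smooth-L1 and binary cross-entropy forms used by Mask R-CNN, not the paper's training code:

```python
import math

def smooth_l1(x):
    """Smooth L1: quadratic near zero, linear elsewhere."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Per-pixel binary cross-entropy, as used by the mask loss."""
    return -(y_true * math.log(y_pred + eps)
             + (1 - y_true) * math.log(1 - y_pred + eps))

def mask_loss(gt_mask, pred_mask):
    """L_mask: binary cross-entropy averaged over an m x m mask."""
    m2 = len(gt_mask) * len(gt_mask[0])
    return sum(binary_cross_entropy(t, p)
               for row_t, row_p in zip(gt_mask, pred_mask)
               for t, p in zip(row_t, row_p)) / m2

print(smooth_l1(0.5))                   # 0.125
print(binary_cross_entropy(1, 0.5))     # ~0.6931 (= ln 2)
```

The quadratic-to-linear transition in smooth L1 keeps box-regression gradients bounded for outlier proposals, which stabilizes training.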
2.3 Feature Extraction and Classification: DenseNet Architecture
After Mask R-CNN segments each RoI, DenseNet carries out the feature extraction and classification process. DenseNet presents several advantages over other pretrained CNN methods: it is more accurate, less prone to overfitting, and efficient to train thanks to its cross-layer connection structure, which contains shorter connections between layers [18].
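The dense connectivity pattern behind these advantages can be illustrated with a toy NumPy sketch, in which each "layer" is a stand-in for DenseNet's BN-ReLU-Conv composite; the shapes and the growth rate of 12 are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, layer_fn):
    """Each layer receives the concatenation of ALL preceding feature maps
    (channels last); every layer contributes `growth_rate` new channels."""
    features = [x]
    for _ in range(num_layers):
        out = layer_fn(np.concatenate(features, axis=-1), growth_rate)
        features.append(out)
    return np.concatenate(features, axis=-1)

def toy_layer(inp, growth_rate):
    """Fixed linear map producing `growth_rate` channels (stands in for
    the real BN-ReLU-Conv layer); seeded for determinism."""
    rng = np.random.default_rng(inp.shape[-1])
    w = rng.standard_normal((inp.shape[-1], growth_rate))
    return inp @ w

x = np.ones((8, 8, 3))   # toy input with 3 channels
y = dense_block(x, num_layers=4, growth_rate=12, layer_fn=toy_layer)
print(y.shape)           # (8, 8, 51): 3 + 4 * 12 channels
```

Because every layer sees all earlier feature maps directly, gradients reach early layers through short paths, which is what mitigates the vanishing-gradient problem cited above.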
In addition, a CNN consists of a number of feedforward layers implementing convolutional filters and pooling layers. After the last pooling layer, the CNN has several fully connected layers that convert the 2D feature maps of the previous layers into a 1D vector for classification [22]. This is represented as:

$$g\left(X\right) = g_{N}\left(g_{N-1}\left(\cdots g_{1}\left(X\right)\right)\right)$$

Here, $N$ is the number of hidden layers, $X$ is the input signal, and $g_{N}$ is the function corresponding to layer $N$. A typical convolutional layer of a CNN consists of a function $g$ with multiple convolutional kernels ($h_{1},\ldots,h_{k-1},h_{k}$). Every $h_{k}$ denotes a linear function of the $k$th kernel, given by:

$$h_{k}\left(x, y, z\right) = \sum_{m}\sum_{n}\sum_{w} V_{k}\left(m, n, w\right)\, X\left(x + m,\, y + n,\, z + w\right)$$
where $(x, y, z)$ represents the pixel position of the input $X$; $m$, $n$, and $w$ are the height, width, and depth of the filter; and $V_{k}$ represents the weights of the $k$th kernel. The CNN schematic is shown in Fig. 3.
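As an illustration of this linear kernel function, the weighted sum can be written out directly; this is a naive sketch with made-up toy shapes (real frameworks use optimized convolution routines):

```python
import numpy as np

def h_k(X, V, x, y, z):
    """Response of the k-th kernel at input position (x, y, z): the weighted
    sum of the local window of X against the kernel weights V."""
    M, N, W = V.shape
    return sum(V[m, n, w] * X[x + m, y + n, z + w]
               for m in range(M) for n in range(N) for w in range(W))

X = np.ones((4, 4, 3))        # toy input volume
V = np.full((2, 2, 2), 0.5)   # toy 2 x 2 x 2 kernel weights
print(h_k(X, V, 0, 0, 0))     # 4.0: eight window entries, each weighted 0.5
```

Sliding this weighted sum across all valid positions of the input produces one feature map per kernel; stacking the $k$ feature maps gives the layer's output.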
2.4 Evaluation Metrics
Various metrics are used to quantitatively evaluate the classifier performance of a DL system [33]. These include Accuracy (Acc), Sensitivity (Sen), Specificity (Spe), Area Under the Curve (AUC), Precision, and F1 score.
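These metrics follow directly from the confusion-matrix counts; the sketch below uses made-up counts for illustration:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Accuracy
    sen = tp / (tp + fn)                    # Sensitivity (recall)
    spe = tn / (tn + fp)                    # Specificity
    prec = tp / (tp + fp)                   # Precision
    f1 = 2 * prec * sen / (prec + sen)      # F1 score
    return {"Acc": acc, "Sen": sen, "Spe": spe, "Precision": prec, "F1": f1}

# hypothetical counts: 40 true positives, 45 true negatives, 5 FP, 10 FN
print(classification_metrics(tp=40, tn=45, fp=5, fn=10))
```

AUC, by contrast, is computed from the ROC curve over all decision thresholds rather than from a single confusion matrix.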
The trained Mask R-CNN model's performance was quantitatively assessed by the mean average precision (MAP), namely the accuracy of lesion detection/segmentation on the validation set:

$$MAP = \frac{1}{N_{T}}\sum_{i=1}^{N_{T}} \frac{N_{i}^{DR}}{N_{i}^{D}}$$

where $A$ is the model segmentation result and $B$ is the tumor contour delineated by the radiologist (the ground truth). In the above equation, $N_{T}$ is the number of images, $N_{i}^{DR}$ represents the area of overlap between the model-detected lesion and the true clinical lesion region ($A \cap B$), and $N_{i}^{D}$ is the size of the true clinical lesion ($B$).
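A per-image overlap ratio of this kind can be sketched with boolean masks; this is illustrative only, and the paper's exact MAP computation may differ:

```python
import numpy as np

def mean_detection_ratio(pred_masks, gt_masks):
    """Average over images of |A intersect B| / |B|, where A is the predicted
    lesion mask and B is the radiologist's ground-truth mask."""
    ratios = [np.logical_and(a, b).sum() / b.sum()
              for a, b in zip(pred_masks, gt_masks)]
    return float(np.mean(ratios))

gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True                          # ground-truth lesion: 2 x 2 block
pred_full = gt.copy()                      # perfect detection  -> ratio 1.0
pred_half = np.zeros((4, 4), dtype=bool)
pred_half[0, :2] = True                    # half the lesion    -> ratio 0.5
print(mean_detection_ratio([pred_full, pred_half], [gt, gt]))  # 0.75
```

Note that this ratio only penalizes missed lesion area; a stricter measure such as Dice or IoU would also penalize over-segmentation.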
3 Results
To test the model, we used the testing dataset. The left side of Fig. 3 shows the original cropped image, and the right side shows the mask produced by a radiologist. The trained Mask R-CNN model achieved a MAP value of 0.75 for automatic lesion delineation on the testing dataset.
3.1 Breast DenseNet
Table 2 summarizes the results of the Breast DenseNet model and compares its performance with different pre-trained models in terms of Acc, Sen, Spe, and AUC.
4 Discussion
In this work we used the BCDR dataset, one of the most utilized mammography databases for image processing; the others are MIAS, DDSM, and INbreast [33]. The BCDR database contains 1734 patient cases with mammography and ultrasound images, clinical history, and lesion segmentations, and has been used to train convolutional networks.
With respect to the segmentation process, several traditional methodologies have been used to extract the RoI area: (i) threshold-based, (ii) region-based, (iii) pixel-based, (iv) model-based, (v) edge-based, (vi) fuzzy-theory, (vii) artificial neural network (ANN), and (viii) active contour-based segmentation [27]. But those studies used manual segmentation, and errors in the accuracy of the tumor delineation can affect the classification results. This is one of several reasons why researchers are turning to DL architectures. For example, Chiao et al. [25] built an automatic segmentation and classification model based on Mask R-CNN for ultrasound images. It reached a mean average precision (MAP) of 0.75 for detection and segmentation, similar to our results, and a benign/malignant classification accuracy of 85%.
For the detection and classification process, some traditional studies used the support vector machine (SVM) methodology [2, 49]. Those methods extracted features manually from the RoI in breast ultrasound images and then fed these features to an SVM classifier, which labeled lesions as benign or malignant using texture, morphological, and fractal features. However, such manual feature extraction was not necessary in the present work.
DL methods have been widely adopted for their excellent performance in medical image classification. Al-Masni et al. [46] trained the YOLO method on clinical mammography images and successfully identified breast masses (Acc = 97%). Alkhaleefah et al. [50] used transfer learning to classify benign and malignant breast cancer with various CNN architectures: AlexNet, VGGNet, GoogLeNet, and ResNet. However, these networks had been trained on large datasets such as ImageNet, which do not contain labeled breast cancer images, leading to poor performance. Huang et al. [18] applied a densely connected CNN to an object recognition task and obtained significant improvements over other state-of-the-art networks [50, 51] with less computation. He et al. [50] and Huang et al. [51] showed that not all layers may be needed and highlighted the fact that there is a great amount of redundancy in deep residual (ResNet) networks.
Based on these observations, our work used the DenseNet architecture. The Breast-DenseNet DL system presented here can detect the locations of masses on mammograms and classify them as benign or malignant from the automatically segmented region, achieving an accuracy of 97.7%. The proposed methodology also successfully identified breast masses in dense tissues. We did not require filtering and noise elimination before segmentation and feature extraction to improve accuracy [46]. The RoI regions were automatically delineated and tumor feature extraction was performed using Mask R-CNN.
5 Conclusions
We conclude that DL promises an improvement over other approaches. The Breast-DenseNet strategy improves the state-of-the-art classification accuracy on the BCDR dataset. The Mask R-CNN + DenseNet model trained on this dataset achieved the best overall accuracy rate and was used to develop a tumor lesion classification tool.
Breast-DenseNet provided highly accurate diagnoses when classifying benign versus malignant tumors; its predictions could therefore be used as a preliminary tool to assist the radiologist's diagnosis. Our future research includes deeper architectures as well as ultrasound, histopathology, and PET images to deal with problems encountered in mammography images of highly dense breasts. Including other imaging techniques in combination with mammography during the learning process should help the model work as a robust breast mass predictor. In conclusion, Table 2 demonstrates that Breast DenseNet achieved better results than other state-of-the-art methods that used the same public dataset.
References
Ferlay, J., et al.: Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int. J. Cancer. 136(5), E359–E386 (2015)
Ragab, D.A., Sharkas, M., Marshall, S., Ren, J.: Breast cancer detection using deep convolutional neural networks and support vector machines. Peer J. 7, e6201 (2019)
Shieh, S.H., Hsieh, V.C.R., Liu, S.H., Chien, C.R., Lin, C.C., Wu, T.N.: Delayed time from first medical visit to diagnosis for breast cancer patients in Taiwan. J. Formos. Med. Assoc. 113(10), 696–703 (2014)
Nahid, A.A., Kong, Y.: Involvement of machine learning for breast cancer image classification: a survey. Comput. Math. Methods Med. 2017, 29 (2017). https://doi.org/10.1155/2017/3781951
Bardou, D., Zhang, K., Ahmad, S.M.: Classification of breast cancer based on histology images using convolutional neural networks. IEEE Access 6, 24680–24693 (2018)
Skandalakis, J.E.: Embryology and anatomy of the breast. In: Shiffman, M. (eds) Breast Augmentation, pp. 3–24. Springer, Berlin, Heidelberg (2009). https://doi.org/10.1007/978-3-540-78948-2_1
Huang, Y.L., Chen, D.R., Lin, Y.C.: 3D Contouring for Breast Tumor in Sonography. arXiv preprint arXiv:1901.09407 (2019)
Al Rahhal, M.M.: Breast cancer classification in histopathological images using convolutional neural network. Int. J. Adv. Comput. Sci. Appl. 9(3), 64–68 (2018)
Lim, C.N., Suliong, C., Rao, C.V., et al.: Recent advances in breast cancer diagnosis entering an era of precision medicine. Borneo J. Med. Sci. (BJMS) 13(1), 3–9 (2019)
Karthiga, R., Narasimhan, K.: Automated diagnosis of breast cancer using wavelet based entropy features. In: Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), pp. 274–279. IEEE, Coimbatore, India (2018). https://doi.org/10.1109/ICECA.2018.8474739.
Han, Z., Wei, B., Zheng, Y., Yin, Y., Li, K., Li, S.: Breast cancer multi-classification from histopathological images with structured deep learning model. Sci. Rep. 7(1), 1–10 (2017)
Xie, J., Liu, R., Luttrell IV, J., Zhang, C.: Deep learning based analysis of histopathological images of breast cancer. Front. Gene. 10(80), 19 (2019). https://doi.org/10.3389/fgene.2019.00080
Toğaçar, M., Özkurt, K.B., Ergen, B., Cömert, Z.: BreastNet: a novel convolutional neural network model through histopathological images for the diagnosis of breast cancer. Physica A: Stat. Mech. App. 545, 123592 (2020)
Pan, Y., et al.: Brain tumor grading based on neural networks and convolutional neural networks. In: 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 699–702. IEEE, Milan, Italy (2015)
Camacho-Piedra, C., Espíndola-Zarazúa, V.: Actualización de la nomenclatura BI-RADS® por mastografía y ultrasonido. Anales de Radiología, (México). 17(2), 100–108 (2018)
Huang, Y., Han, L., Dou, H., et al.: Two-stage CNNs for computerized BI-RADS categorization in breast ultrasound images. BioMed. Eng. OnLine 18, 8 (2019). https://doi.org/10.1186/s12938-019-0626-5
Liberman, L., Menell, J.H.: Breast imaging reporting and data system (BI-RADS). Radiol. Clin. 40(3), 409–430 (2002)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. IEEE, Honolulu, Hawaii (2017)
Kerlikowske, K., et al.: Performance of screening mammography among women with and without a first-degree relative with breast cancer. Ann. Internal Med. 133(11), 855–863 (2000)
Cao, Z., Duan, L., Yang, G., Yue, T., Chen, Q.: An experimental study on breast lesion detection and classification from ultrasound images using deep learning architec-tures. BMC Med. Imaging, 19(51), 9 (2019). https://doi.org/10.1186/s12880-019-0349-x
Duggento, A., et al.: An Ad Hoc random initialization deep neural network architecture for discriminating malignant breast cancer lesions in mammographic images. Contrast Media Mol. Imaging, 2019, 5982834 (2019). https://doi.org/10.1155/2019/5982834
Munir, K., Elahi, H., Ayub, A., Frezza, F., Rizzi, A.: Cancer diagnosis using deep learning: a bibliographic review. Cancers, 11(9), 1235, (2019). https://doi.org/10.3390/cancers11091235
Chougrad, H., Zouaki, H., Alheyane, O.: Deep convolutional neural networks for breast cancer screening. Comput. Methods Programs Biomed. 157, 19–30 (2018)
Das, K., Conjeti, S., Roy, A.G., Chatterjee, J., Sheet, D.: Multiple instances learning of deep convolutional neural networks for breast histopathology whole slide classification. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 578–581. IEEE, Washington, USA (2018)
Chiao, J.Y., et al.: Detection and classification the breast tumors using mask R-CNN on sonograms. Medicine. 98(19), e15200 (2019)
Jiang, Y., Chen, L., Zhang, H., Xiao, X.: Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module. PloS ONE. 14(3), e0214587 (2019)
Jiménez-Gaona, Y., Rodríguez-Álvarez, M.J., Lakshminarayanan, V.: Deep-learning-based computer-aided systems for breast cancer imaging: a critical review. Appl. Sci. 10(22), 8298 (2020). https://doi.org/10.3390/app10228298
Duraisamy, S., Emperumal, S.: Computer-aided mammogram diagnosis system using deep learning convolutional fully complex-valued relaxation neural network classifier. IET Comput. Vision 11(8), 656–662 (2017)
Litjens, G., et al.: A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88 (2017)
Castillo, D., Lakshminarayanan, V., Rodríguez-Álvarez, M.J.: MRI images, brain lesions and deep learning appl. Science 11, 1675 (2021). https://doi.org/10.3390/app11041675
Ravì, D., et al.: Deep learning for health informatics. IEEE J. Biomed. Health Inform. 21(1), 4–21 (2016)
Mohsen, H., El-Dahshan, E.S.A., El-Horbaty, E.S.M., Salem, A.B.M.: Classification using deep learning neural networks for brain tumors. Future Comput. Inf. J. 3(1), 68–71 (2018)
Matta, S.: Various image segmentation techniques. Int. J. Comput. Sci. Inf. Technol. (IJCSIT) 5(6), 7536–7539 (2014)
Zhou, Z., Wu, W., Wu, S., Tsui, P.-H., Lin, C.-C., Zhang, L., et al.: Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts. Ultrasound Imaging 36(4), 256–276 (2014)
Levman, J., Warner, E., Causer, P., Martel, A.: Semi-automatic region-of-interest segmentation based computer-aided diagnosis of mass lesions from dynamic contrast-enhanced magnetic resonance imaging based breast cancer screening. J. Digit. Imaging 27(5), 670–678 (2014)
Yap, M.H., et al.: Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 22(4), 1218–1226 (2017)
Cheng, B., Ran, L., Chou, Y.H., Cheng, J.Z.: Boundary regularized convolutional neural network for layer parsing of breast anatomy in automated whole breast ultrasound. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 259–266. Springer International Publishing, Cham (2017). ISBN 978-3-319-66179-7
Huynh, B., Drukker, K., Giger, M.: MO-DE-207B-06: computer-aided diagnosis of breast ultrasound images using transfer learning from deep convolutional neural networks. Med. Phys. 243(6), 3705 (2016)
Nahid, A.A., Mehrabi, M.A., Kong, Y.: Histopathological breast cancer image classification by deep neural network techniques guided by local clustering. Biomed. Res. Int. 2018, 2362108 (2018). https://doi.org/10.1155/2018/2362108
Ragab, D.A., Sharkas, M., Marshall, S., Ren, J.: Breast cancer detection using deep convolutional neural networks and support vector machines. Peer J. 7, e6201 (2019). https://doi.org/10.7717/peerj.6201
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR, (2015). arXiv preprint arXiv:1409.1556 (2014)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90.
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261–2269 (2017). https://doi.org/10.1109/CVPR.2017.243
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y
Lopez, M.G., et al.: BCDR: a breast cancer digital repository. In: 15th International Conference on Experimental Mechanics, Porto, Portugal, vol. 1215, pp.1–5 (2012). https://bcdr.eu/
Marcomini, K.D., Carneiro, A.A., Schiabel, H.: Application of artificial neural network models in segmentation and classification of nodules in breast ultrasound digital images. Int. J. Biomed. Imaging. 2016, 13 (2016). https://doi.org/10.1155/2016/7987212
Al-Masni, M.A., et al.: Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput. Methods Programs Biomed. 157, 85–94 (2018)
Debelee, T.G., Schwenker, F., Ibenthal, A., Yohannes, D.: Survey of deep learning in breast cancer image analysis. Evol. Syst. 11(1), 143–163 (2019). https://doi.org/10.1007/s12530-019-09297-2
Ahmed, A.H., Salem, M.A.M.: Mammogram-Based cancer detection using deep convolutional neural networks. In: 2018 13th International Conference on Computer Engineering and Systems (ICCES), pp. 694–699. IEEE, Egypt (2018). https://doi.org/10.1109/ICCES.2018.8639224
Prabhakar, T., Poonguzhali, S.: Automatic detection and classification of benign and malignant lesions in breast ultrasound images using texture morphological and fractal features. In: 2017 10th Biomedical Engineering International Conference (BMEiCON), pp. 1–5. IEEE, Japan (2017)
Alkhaleefah, M., Ma, S.C., Chang, Y.L., Huang, B., Chittem, P.K., Achhannagari, V.P.: Double-shot transfer learning for breast cancer classification from X-ray images. Appl. Sci. 10(11), 3999 (2020). https://doi.org/10.3390/app10113999
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
Huang, G., Sun, Y., Liu, Z., Sedra, D., Weinberger, K.Q.: Deep networks with stochastic depth. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) Computer Vision – ECCV 2016. Lecture Notes in Computer Science, vol. 9908. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_39
Acknowledgement
VL would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for a Discovery Grant. Y.J.G. and D.C.M. acknowledge the research support of Universidad Técnica Particular de Loja through the project PROY_INV_QUI_2020_2784 and the CSIC grant PTA2019-017113-1/AEI/10.13039/501100011033.
© 2021 Springer Nature Switzerland AG
Jiménez Gaona, Y., Rodriguez-Alvarez, M.J., Espino-Morato, H., Castillo Malla, D., Lakshminarayanan, V. (2021). DenseNet for Breast Tumor Classification in Mammographic Images. In: Rojas, I., Castillo-Secilla, D., Herrera, L.J., Pomares, H. (eds) Bioengineering and Biomedical Signal and Image Processing. BIOMESIP 2021. Lecture Notes in Computer Science(), vol 12940. Springer, Cham. https://doi.org/10.1007/978-3-030-88163-4_16
Print ISBN: 978-3-030-88162-7
Online ISBN: 978-3-030-88163-4