Abstract
By merging anatomical and functional imaging, the proposed multimodal image fusion technique aims to capture complementary tissue and structural information throughout the body. Anatomical imaging techniques produce high-resolution images of internal organs, while functional imaging reflects physiological activity; if the two kinds of medical images are examined independently, essential information may be lost. Unlike many current medical fusion methods, the proposed approach does not suffer from intensity attenuation or loss of critical detail, because the anatomical and functional images contribute their relevant information jointly, so that both dark and bright regions remain visible in the combined result. Colour mapping is applied to the functional and anatomical images, and the resulting images are decomposed into coupled and independent components, computed using sparse representations with identical supports and a Pearson correlation constraint, respectively. The resulting optimization problem is solved with a minimization algorithm, and the final fusion step applies the max-absolute-value rule. The fused image is then normalized, and colour mapping is performed once more to recover the colour layers, yielding the final fusion image. The experiments use a number of multimodal inputs, including MR–CT pairs, and the method is compared against several existing medical image fusion approaches. MATLAB R2017b is used for the simulations in this work.
1 Introduction
The process of joining two or more separate entities to form a new one is known as fusion [1, 2]. Medical diagnosis, treatment, and other healthcare applications rely heavily on image fusion. Multimodal medical image fusion is a prominent subfield of image fusion that has made great progress in recent years [3]. Anatomical and functional images, in particular, have recently been studied together. These images come in a variety of forms, each with its own set of features. The goal of fusion is to express the information obtained from multimodal source images in a single image, emphasizing their respective benefits and carrying complementary information, as well as comprehensive morphological and functional data that reflects physiological and pathological changes [4].
2 Multimodal Fusion
In order to overcome the above-mentioned limitations, a novel method, “Multimodal medical image fusion using a minimization algorithm,” is proposed, as shown in Fig. 1.
The proposed method comprises two fusion schemes:

1.
Functional to anatomical

2.
Anatomical to anatomical.

Using this method, we can obtain fused images that retain a large amount of information from both sources.
In the functional-to-anatomical scheme, a functional image and an anatomical image serve as the input sources; the two images are combined and used to create the colour map [5]. Both RGB images are converted to greyscale, and the input images are decomposed using the discrete cosine transform [6–8]. The max-absolute fusion rule is applied, the image is reconstructed, and brightness is adjusted to boost overall contrast. A normalization step then stretches the greyscale of the image to a standard greyscale range [9–12], after which the grey image is converted back to RGB. The final fused multimodal medical image is then displayed.
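The DCT decomposition, max-absolute fusion rule, and reconstruction steps described above can be sketched as follows. This is a minimal NumPy/SciPy illustration, not the authors' MATLAB implementation; the function name `fuse_max_abs` is chosen here for illustration only.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_max_abs(img_a, img_b):
    """Fuse two greyscale images of equal size: DCT decomposition,
    max-absolute coefficient selection, inverse-DCT reconstruction."""
    # Decompose both input images with the 2-D discrete cosine transform
    A = dctn(np.asarray(img_a, dtype=np.float64), norm="ortho")
    B = dctn(np.asarray(img_b, dtype=np.float64), norm="ortho")
    # Max-absolute fusion rule: keep the coefficient of larger magnitude
    F = np.where(np.abs(A) >= np.abs(B), A, B)
    # Reconstruct the fused image from the fused coefficients
    return idctn(F, norm="ortho")
```

Because the orthonormal DCT is invertible, fusing an image with itself returns the image unchanged, which is a quick sanity check for the rule.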
In the anatomical-to-anatomical scheme, the two input sources are both anatomical images [13]; the two images are combined and used to create the colour map. Both RGB images are converted to greyscale, and the input medical images are decomposed using the discrete cosine transform [14, 15]. The max-absolute fusion rule is applied and the image is reconstructed. The normalization step stretches the greyscale of the image to a standard greyscale range. The final fused multimodal medical images are shown in Figs. 2 and 3.
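The normalization step that stretches the reconstructed image to a standard greyscale range can be illustrated with a simple linear contrast stretch to 0–255. This is a sketch under the assumption of a linear mapping; the exact normalization used by the authors may differ.

```python
import numpy as np

def normalize_grey(img, lo=0.0, hi=255.0):
    """Linearly stretch image intensities to a standard greyscale range."""
    mn, mx = float(np.min(img)), float(np.max(img))
    if mx == mn:
        # Flat image: map every pixel to the lower bound
        return np.full(np.shape(img), lo)
    return lo + (np.asarray(img, dtype=np.float64) - mn) * (hi - lo) / (mx - mn)
```

After this step, the grey image can be mapped back to RGB with the stored colour map to produce the fused colour output.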
3 Results and Analysis
Compared with previous methods, the proposed model generates a better fused image without loss of information. Image fusion offers several contrast advantages; fundamentally, it should enhance the image from every perspective, as shown in Table 1.
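The quantitative evaluation in this work reports average MSE, average absolute value, and elapsed time. A minimal Python sketch of the first two metrics (the formulas are standard; the function names here are illustrative) is:

```python
import numpy as np

def mse(reference, fused):
    """Mean squared error between a source image and the fused result."""
    diff = np.asarray(reference, dtype=np.float64) - np.asarray(fused, dtype=np.float64)
    return float(np.mean(diff ** 2))

def mean_abs(img):
    """Average absolute intensity of an image."""
    return float(np.mean(np.abs(np.asarray(img, dtype=np.float64))))
```

Lower MSE against each source image indicates that the fused result preserves more of that source's information; elapsed time can be measured around the fusion call with a standard timer.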
4 Conclusion
This paper examines several approaches to image fusion. Depending on the application, each method offers advantages and disadvantages. Although these methods improve image clarity to some extent, the majority of them suffer from colour artefacts and rough image edges. In medical imaging, greater information content and better visualization are necessary. In general, the study finds that wavelet-based schemes outperform classical schemes, particularly in preventing colour distortion, and that SWT outperforms PCA and DWT as a fusion method. To overcome these limitations, we proposed a multimodal medical image fusion method based on a minimization algorithm. The proposed approach yields higher-quality images with no information loss, eliminates distortion in the fused images, and requires substantially less execution time than earlier methods. It delivers more information and enhances clarity, helping professionals to examine patients’ diagnoses quickly. The approach is evaluated with parameters such as average MSE, average absolute value, and elapsed time. Within the proposed method, the functional-to-anatomical scheme requires substantially less time than the anatomical-to-anatomical scheme.
References
James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:4–19
Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inf Fusion 33:100–112
Du J, Li W, Lu K, Xiao B (2016) An overview of multi-modal medical image fusion. Neurocomputing 215:3–20
Yin M, Liu X, Chen X (2019) Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans Instrum Meas 68(1):49–64
Du J, Li W, Xiao B, Nawaz Q (2016) Union Laplacian pyramid with multiple features for medical image fusion. Neurocomputing 194:326–339
Bhatnagar G, Wu QMJ, Liu Z (2013) Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans Multimedia 15(5):1014–1024
Jiang Y, Wang M (2014) Image fusion with morphological component analysis. Inf Fusion 18:107–118
Du J, Li W, Xiao B (2017) Anatomical-functional image fusion by information of interest in local Laplacian filtering domain. IEEE Trans Image Process 26(12):5855–5865
Yang B, Li S (2012) Pixel-level image fusion with simultaneous orthogonal matching pursuits. Inf Fusion 13:10–19
Yu N, Qiu T, Bi F, Wang A (2011) Image features extraction and fusion based on joint sparse representation. IEEE J Sel Topics Signal Process 5(5):1074–1082
Nabil A, Nossair Z, El-Hennawy A (2013) Filterbank-enhanced IHS transform method for satellite image fusion. In: IEEE, 16–18 Apr 2013, National Telecommunication Institute, Egypt
Shaik F, Sharma AK, Ahmed SM (2016) Hybrid model for analysis of abnormalities in diabetic cardiomyopathy and diabetic retinopathy related images. SpringerPlus 5:507. https://doi.org/10.1186/s40064-016-2152-2
Mirajkar PP, Ruikar SD (2013) Image fusion based on stationary wavelet transform. Int J Adv Eng Res Stud. E-ISSN 2249–8974
Kaur A, Sharma R (2016) Medical image fusion with stationary wavelet transform and genetic algorithm. In: International conference on advances in emerging technology (ICAET 2016), Int J Comput Appl (0975–8887)
Allam VNM, Nagaraju CH (2014) Blind extraction scheme for digital image using spread spectrum sequence. Int J Emerg Res Manage Technol 3(28). ISSN: 5681–5688
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Shaik, F., Deepa, M., Pavan, K., Harsha Chaitanya, Y., Sai Yogananda Reddy, M. (2023). Multimodal Medical Image Fusion Using Minimization Algorithm. In: Kumar, A., Mozar, S., Haase, J. (eds) Advances in Cognitive Science and Communications. ICCCE 2023. Cognitive Science and Technology. Springer, Singapore. https://doi.org/10.1007/978-981-19-8086-2_34
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-8085-5
Online ISBN: 978-981-19-8086-2