Abstract
Purpose
Multimodality medical image fusion supports better visualization of the complementary information provided by different medical imaging modalities, helping the radiologist with precise disease diagnosis and treatment planning. The main purpose of this research is to design a unified framework for the fusion of different anatomical imaging modalities and for the fusion of a functional image with an anatomical image.
Methods
A novel image fusion framework utilizing new features in the Nonsubsampled Shearlet Transform (NSST) domain is proposed for the fusion of anatomical images. The source images are represented as low-frequency (LF) and high-frequency (HF) sub-bands using NSST. The LF sub-bands are combined by a fusion rule based on the sum of variation in squares, while the HF fusion rule is formulated from two different features. The inverse NSST of the fused sub-bands gives the fused image. This framework is further applied to the fusion of functional and anatomical images in the lαβ color space.
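The decompose–fuse–reconstruct pipeline above can be sketched at a high level. In the minimal NumPy sketch below, a mean-filter low-pass plus residual stands in for the NSST decomposition, and a local-variance activity measure stands in for the paper's "sum of variation in squares" LF feature; the function names, kernel size, and both fusion rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_blur(img, k=5):
    """Mean filter with reflect padding (stand-in for the NSST low-pass stage)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img, k=5):
    """One-level additive decomposition into an LF approximation and an HF residual."""
    lf = box_blur(img, k)
    return lf, img - lf

def local_variance(band, k=5):
    """Local activity measure (a generic stand-in for the paper's LF feature)."""
    mu = box_blur(band, k)
    return box_blur((band - mu) ** 2, k)

def fuse(img_a, img_b, k=5):
    lf_a, hf_a = decompose(img_a, k)
    lf_b, hf_b = decompose(img_b, k)
    # LF rule: keep the coefficient whose neighborhood is more "active".
    lf_f = np.where(local_variance(lf_a, k) >= local_variance(lf_b, k), lf_a, lf_b)
    # HF rule: keep the stronger detail coefficient (max-absolute selection).
    hf_f = np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)
    # Reconstruction is the inverse of the additive decomposition.
    return lf_f + hf_f
```

Because the decomposition is additive, fusing an image with itself reproduces that image, which is a useful sanity check for any such pipeline.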
Results
The proposed image fusion framework is validated on nine sets of CT-MRI and SPECT-MRI images of different diseases and compared with state-of-the-art image fusion methods both quantitatively and qualitatively.
Conclusions
Visual analysis of the CT-MRI fusion results reveals that the images fused by the proposed method retain the salient information of both the CT and MRI images with more contrast than other methods. SPECT-MRI images fused by the proposed method present the anatomical details of the MRI images without altering the functional content of the SPECT images, whereas spectral distortion is present in the other methods. Quantitative comparison confirms the superiority of the proposed method over the other methods.
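The preservation of functional (color) content in SPECT-MRI fusion relies on the lαβ color space, in which the luminance channel l can be fused with the anatomical image while the chromatic α and β channels are left untouched. A minimal sketch of the forward and inverse RGB ↔ lαβ conversion follows, using the matrices published by Reinhard et al. (after Ruderman et al.); the inverse is computed numerically here for brevity, which is an implementation shortcut rather than the authors' method.

```python
import numpy as np

# RGB -> LMS cone-response matrix (Reinhard et al. 2001, after Ruderman et al. 1998).
M_RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                      [0.1967, 0.7244, 0.0782],
                      [0.0241, 0.1288, 0.8444]])

# log-LMS -> lαβ decorrelating transform.
M_LMS2LAB = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
            np.array([[1.0, 1.0, 1.0],
                      [1.0, 1.0, -2.0],
                      [1.0, -1.0, 0.0]])

def rgb_to_lab(rgb):
    """rgb: (..., 3) array of positive values. Returns stacked lαβ channels."""
    lms = rgb @ M_RGB2LMS.T
    return np.log10(lms) @ M_LMS2LAB.T

def lab_to_rgb(lab):
    """Inverse of rgb_to_lab, via numerical matrix inverses."""
    log_lms = lab @ np.linalg.inv(M_LMS2LAB).T
    return (10.0 ** log_lms) @ np.linalg.inv(M_RGB2LMS).T
```

In a SPECT-MRI setting, `rgb_to_lab` would be applied to the SPECT image, the l channel fused with the MRI image, and `lab_to_rgb` applied to the result, so that the α and β channels carry the functional color content through unchanged.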
Cite this article
Ganasala, P., Kumar, V. Multimodality medical image fusion based on new features in NSST domain. Biomed. Eng. Lett. 4, 414–424 (2014). https://doi.org/10.1007/s13534-014-0161-z