Abstract
Image fusion is an emerging area of image processing. It integrates complementary information from different source images into a single fused image. In the proposed work, we have used the nonsubsampled contourlet transform (NSCT), a shift-invariant version of the contourlet transform, for image fusion. Along with shift invariance, it has many advantages, such as removal of the pseudo-Gibbs phenomenon, better frequency selectivity, and improved temporal stability and consistency. These properties make it suitable for fusion applications. For fusing images, we have used a local energy-based fusion rule. This rule depends on the current as well as the neighboring coefficients; hence, it performs better than single coefficient-based fusion rules. The performance of the proposed method is compared visually and quantitatively with contourlet transform, curvelet transform, dual-tree complex wavelet transform, and Daubechies complex wavelet transform-based fusion methods. To evaluate the methods quantitatively, we have used the mutual information, edge strength, and fusion factor quality measurements. The experimental results show that the proposed method performs better and is more effective than the other methods.
1 Introduction
Image fusion [1, 2] is a technique that integrates complementary information scattered across different images into a single composite image. The fused image contains more information than any of its source images, with reduced noise and artifacts. Hence, the fusion results are more useful for human perception and further image processing tasks. The motivation behind this technique is that sensors have limited capacity and cannot capture the complete information of a scene. For example, in medical imaging [1], the computed tomography (CT) image is suitable for examining bone injuries and lung and chest problems, whereas magnetic resonance imaging (MRI) is used for examining soft tissues such as spinal cord injuries and brain tumors. Researchers have developed various fusion methods, which are classified into pixel-level [3], feature-level [4], and decision-level fusion [5]. Pixel-level fusion is further divided into spatial-domain and transform-domain fusion. Spatial-domain fusion methods (averaging, weighted averaging, principal component analysis) are simple but sometimes give poor fusion results. Transform-domain fusion methods avoid the problems of spatial-domain methods and give better results. Different transforms have been developed for this purpose, such as the Laplacian pyramid, gradient pyramid, wavelet transform [6], and contourlet transform [7]. The wavelet transform is very popular among researchers, but it offers only a limited number of directions, so it is not efficient in handling two-dimensional singularities. The contourlet transform, proposed by Do and Vetterli [7], provides a flexible number of directions but is shift variant. Shift variance causes problems such as the pseudo-Gibbs phenomenon near singularities. To tackle this problem, da Cunha et al. [8] proposed a multiscale, multidirectional, and shift-invariant transform known as the nonsubsampled contourlet transform (NSCT).
It has better frequency selectivity and improved temporal stability, and it eliminates the pseudo-Gibbs phenomenon, which causes artifacts near singularities. All these properties make it a suitable transform for image fusion.
In this work, we propose a local energy-based image fusion method in the NSCT domain. The source images are decomposed into the transform domain by applying the NSCT. Then, the local energy of each coefficient is calculated, and the local energies of corresponding coefficients of the source images are compared. The coefficient with the higher local energy is selected. After obtaining the coefficient set for the fused image, the inverse NSCT is applied to get the final fused image.
The rest of the paper is organized as follows: Sect. 2 gives the basic concept of the NSCT. Section 3 describes the proposed method in detail. Experimental results are discussed in Sect. 4. Finally, the paper is concluded in Sect. 5.
2 Nonsubsampled Contourlet Transform (NSCT)
The NSCT [9, 10] is a multiscale, multidirectional, and geometrical transform. Unlike the contourlet transform, it is shift invariant. In the contourlet transform, shift variance is caused by the downsampling in the Laplacian pyramid filter banks (LPFB) as well as in the directional filter banks (DFB) [11]. The NSCT achieves shift invariance by removing the downsampling and upsampling from the LPFB and DFB. The NSCT is constructed by combining nonsubsampled pyramidal filter banks (NSPFB) and nonsubsampled directional filter banks (NSDFB). The NSPFB provides the multiscale property, whereas the NSDFB gives the multidirectional property to the NSCT. Its implementation is similar to that of the nonsubsampled wavelet transform obtained by the à trous algorithm. Here, the filter of the next decomposition level is obtained by upsampling the filter of the previous stage. Hence, the multiscale property is achieved easily without an extra filter design algorithm.
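The à trous filter construction mentioned above can be illustrated with a short sketch. The function name and the simple 1-D lowpass kernel below are illustrative choices, not taken from the paper; the idea is only that each level's filter is the previous level's filter with zeros inserted between its taps, so no new filter design is needed per level:

```python
import numpy as np

def atrous_upsample(h, step=2):
    """Insert (step - 1) zeros between the taps of filter h.

    This is the 'a trous' (with holes) trick: the analysis filter for
    decomposition level k+1 is the zero-upsampled level-k filter.
    """
    up = np.zeros(step * len(h) - (step - 1))
    up[::step] = h
    return up

h0 = np.array([1.0, 2.0, 1.0]) / 4.0   # illustrative lowpass kernel
h1 = atrous_upsample(h0)                # filter for the next level
# h1 = [0.25, 0.0, 0.5, 0.0, 0.25]: same taps, holes in between
```

Because the filter taps are unchanged, the DC gain of the filter is preserved at every level, which is what makes the scheme cheap: only the effective support of the filter grows.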
The DFB is proposed by Bamberger and Smith [11]. It is obtained by incorporating the critically sampled two-channel fan filter banks and resampling operator. The DFB has tree-like structure which splits the frequency plane in directional wedges. But it is not shift invariant. To make it shift invariant, the downsampler and upsampler are removed. This results in tree-shaped two-channel nonsubsampled filter banks.
The NSCT is constructed by combining the NSPFB and NSDFB. It is invertible and satisfies the anisotropic scaling law. Also, the removal of downsampling makes the filter designing simpler.
3 The Proposed Method
In this paper, we have proposed local energy-based image fusion in the NSCT domain. In local energy-based fusion, the decision to select a coefficient is based not only on the coefficient itself but also on its neighboring coefficients. Hence, local energy-based fusion gives more efficient results than single coefficient-based fusion rules such as the maximum selection or average fusion rules. The local energy is calculated by taking the absolute value of the sum of the coefficients in a window around the current coefficient. Then, we compare the local energies of the corresponding coefficients of the source images and select the coefficient that has the higher local energy.
In the proposed method, we have performed fusion in the NSCT domain. It has better directional selectivity and is shift invariant. Also, it avoids the pseudo-Gibbs phenomenon, which is a serious problem in wavelet transform- and contourlet transform-based fusion methods. The proposed algorithm can be summarized as follows:
Step 1: The source images are decomposed into coefficient sets using the NSCT.

$$ Im_{1}(i,j) \xrightarrow{\text{NSCT}} Cf_{1}(i,j), \quad Im_{2}(i,j) \xrightarrow{\text{NSCT}} Cf_{2}(i,j) $$

Step 2: The local energy of each coefficient is calculated in a 3 × 3 window using the following formula:

$$ e_{1}(i,j) = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} Cf_{1}(m,n), \quad e_{2}(i,j) = \sum_{m=i-1}^{i+1} \sum_{n=j-1}^{j+1} Cf_{2}(m,n) $$

Step 3: The coefficient with the higher local energy is selected from the two.

$$ Cf(i,j) = \begin{cases} Cf_{1}(i,j) & \text{if } |e_{1}(i,j)| \ge |e_{2}(i,j)| \\ Cf_{2}(i,j) & \text{otherwise} \end{cases} $$

Step 4: The final fused image is reconstructed by applying the inverse NSCT to the above coefficients.

$$ Cf(i,j) \xrightarrow{\text{Inverse NSCT}} F(i,j) $$
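The selection rule of Steps 2 and 3 can be sketched for a single subband of coefficients as follows. This is an illustrative NumPy implementation of the local energy comparison only; the NSCT decomposition and reconstruction of Steps 1 and 4 would come from an NSCT toolbox and are omitted, and the zero-padded border handling is an assumption the paper does not specify:

```python
import numpy as np

def local_energy(cf):
    """Sum of coefficients in the 3x3 window around each position.

    Borders are handled by zero padding (an assumption).
    """
    p = np.pad(cf, 1)
    e = np.zeros(cf.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            e += p[di:di + cf.shape[0], dj:dj + cf.shape[1]]
    return e

def fuse_subband(cf1, cf2):
    """Per position, keep the coefficient whose local energy is larger
    in absolute value (Step 3 of the algorithm)."""
    e1, e2 = local_energy(cf1), local_energy(cf2)
    return np.where(np.abs(e1) >= np.abs(e2), cf1, cf2)
```

In a full pipeline this rule would be applied to every lowpass and directional subband pair produced by the NSCT before the inverse transform.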
4 Experimental Results and Discussion
In this section, we present the results of the proposed method and its comparison with other methods. For the experiments, we have taken two different pairs of multimodal medical images (Figs. 1a, b and 2a, b) of size 256 × 256. Each pair contains one CT image and one MRI. To show the effectiveness of the proposed method, it is compared with curvelet transform [12], contourlet transform, dual-tree complex wavelet transform [13], and Daubechies complex wavelet transform-based [14] methods with three different fusion rules (absolute maximum, local energy, and edge preserving) [10, 15, 16]. The visual results of the proposed and other methods are shown in Figs. 1 and 2, from which we see that the proposed method gives better results than the other methods. Since the human eye cannot always perceive minute differences between images, we have also evaluated the results quantitatively. The mutual information, edge strength (\( Q_{AB}^{F} \)), and fusion factor quality metrics [17] are used for comparing the results quantitatively. Results are presented in Table 1 for both sets of images. From the combined observation of the qualitative and quantitative results, we can say that the proposed method performs better than the other methods.
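Of the quality metrics used here, mutual information is the most straightforward to sketch. The following is a minimal histogram-based estimate (in bits); the bin count, and the definition of fusion factor as MI(F, A) + MI(F, B), are common conventions and not necessarily the exact settings of the paper:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information between two images, in bits."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_factor(fused, src1, src2, bins=64):
    """One common definition: MI of the fused image with each source."""
    return (mutual_information(fused, src1, bins)
            + mutual_information(fused, src2, bins))
```

A higher fusion factor indicates that the fused image retains more information from both sources; the edge strength metric \( Q_{AB}^{F} \) of Xydeas and Petrovic [17] is more involved and is not sketched here.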
5 Conclusions
This paper presents a local energy-based image fusion method using the NSCT. The NSCT provides better fusion results because of its shift-invariant property; it also has better directional selectivity and avoids the pseudo-Gibbs phenomenon. These features of the NSCT improve the results of the proposed method. The proposed fusion method is based on local energy, which combines information from the current coefficient and its neighboring coefficients. It therefore carries more information than a single coefficient does and is more reliable and efficient than single coefficient-based fusion rules. To prove the effectiveness of the proposed method, it is compared with curvelet transform, contourlet transform, dual-tree complex wavelet transform, and Daubechies complex wavelet transform-based fusion methods. Three standard quantitative measurements, mutual information, edge strength, and fusion factor, are used for evaluating the results quantitatively. Both the visual and quantitative results show that the proposed method performs better than the curvelet, contourlet, dual-tree complex, and Daubechies complex wavelet transform-based fusion methods.
References
James, A.P., Dasarathy, B.V.: Medical image fusion: a survey of state of the art. Inf. Fusion 19, 4–19 (2014)
Khaleghi, B., Khamis, A., Karray, F.O., Razavi, S.N.: Multisensor data fusion: a review of the state-of-the-art. Inf. Fusion 14(1), 28–44 (2013)
Yang, B., Li, S.: Pixel level image fusion with simultaneous orthogonal matching pursuit. Inf. Fusion 13(1), 10–19 (2012)
Kannan, K., Perumal, S.A., Arulmozhi, K.: The review of feature level fusion of multi-focused images using wavelets. Recent Pat. Signal Process 2, 28–38 (2010)
Prabhakar, S., Jain, A.K.: Decision-level fusion in fingerprint verification. Pattern Recogn. 35(4), 861–874 (2002)
Pajares, G., Cruz, J.: A wavelet based image fusion tutorial. Pattern Recogn. 37(9), 1855–1872 (2004)
Do, M.N., Vetterli, M.: The contourlet transform: an efficient directional multiresolution image representation. IEEE Trans. Image Process. 14(12), 2091–2106 (2005)
da Cunha, A.L., Zhou, J., Do, M.N.: The nonsubsampled contourlet transform: theory, design and applications. IEEE Trans. Image Process. 15(10), 3089–3101 (2006)
Fu, L., Yifan, L., Xin, L.: Image fusion based on nonsubsampled contourlet transform and pulse coupled neural networks. In: 4th International Conference on Intelligent Computation Technology and Automation, pp. 572–575. IEEE press, Guangdong (2011)
Srivastava, R., Singh, R., Khare, A.: Image fusion based on nonsubsampled contourlet transform. In: International Conference on Informatics, Electronics and Vision, pp. 263–266. IEEE press, Dhaka
Bamberger, R.H., Smith, M.J.T.: A filter bank for the directional decomposition of images: theory and design. IEEE Trans. Signal Process. 40(4), 882–893 (1992)
Nencini, F., Garzelli, A., Baronti, S., Alparone, L.: Remote sensing image fusion using the curvelet transform. Inf. Fusion 8(2), 143–156 (2007)
Hui, X., Yihui, Y., Benkang, C., Yiyong, H.: Image fusion based on complex wavelets and region segmentation. In: International conference on Computer Application and System Modeling (ICCASM), pp. 135–138. IEEE press, Taiyuan (2010)
Singh, R., Khare, A.: Fusion of multimodal medical images using Daubechies complex wavelet transform—a multiresolution approach. Inf. Fusion 19, 49–60 (2014)
Khare, A., Srivastava, R., Singh, R.: Edge preserving image fusion based on contourlet transform. In: 5th International Conference on Image and Signal Processing (ICISP), pp. 93–102. Springer, Morocco (2012)
Lu, H., Li, Y., Kitazono, Y., Zang, L.: Local energy based multi-focus image fusion on curvelet transform. In: International Symposium on Communication and Information Technologies (ISCIT), pp. 1154–1157. IEEE press, Tokyo (2010)
Xydeas, C.S., Petrovic, V.: Objective image fusion performance measure. Electron. Lett. 36(4), 308–309 (2000)
© 2015 Springer India
Srivastava, R., Khare, A. (2015). Medical Image Fusion Using Local Energy in Nonsubsampled Contourlet Transform Domain. In: Sethi, I. (eds) Computational Vision and Robotics. Advances in Intelligent Systems and Computing, vol 332. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2196-8_4
Print ISBN: 978-81-322-2195-1
Online ISBN: 978-81-322-2196-8