Abstract
Multi-modal medical image fusion brings many benefits to clinical diagnosis and analysis because it creates favorable conditions for diagnostic imaging practitioners to make more accurate diagnoses. To the best of our knowledge, current image fusion approaches still have some disadvantages. The first is that the fused images often have low contrast, because several approaches use a weighted-average rule to fuse the low-frequency components. The second is the loss of detailed information in the fused image, which can be explained by the fact that the rules used to synthesize the high-frequency components are not truly effective. In this paper, two novel algorithms are proposed to tackle these two disadvantages. The first algorithm is based on the Equilibrium Optimizer Algorithm (EOA) and finds optimal parameters for fusing the low-frequency components, which gives the fused image good contrast. The second algorithm is based on the sum of local energy functions computed with the Prewitt compass operator and creates an efficient rule for fusing the high-frequency components, which allows the fused image to preserve much of the detail transferred from the input images. Experimental results show that the proposed approach is effective not only in significantly enhancing the quality of the fused image but also in preserving the edge information carried from the input images.
1 Introduction
The fusion of multi-modal medical images combines useful information from multiple individual images into a single image. This allows the fused image not only to contain various types of information but also to be significantly enhanced in quality. Currently, there are various medical imaging modalities, such as single photon emission computed tomography (SPECT), computed tomography (CT), positron emission tomography (PET), and magnetic resonance imaging (MRI). Each kind of medical image contains only one type of information. For instance, SPECT images provide functional and metabolic information with low spatial resolution, while MRI images contain anatomical information with high spatial resolution.
Currently, approaches proposed to fuse multi-modal medical images are grouped into two primary domains, namely the spatial domain and the transform domain [30]. The spatial-domain approaches process the pixels, blocks, or regions of the input images directly, without transforming them from the spatial domain to a transform domain. In this category, various methods have been proposed, such as pixel-based [24], region-based [34, 65], and block-based methods. The spatial approaches utilize various techniques, such as the maximum selection rule (MSR), the minimum selection rule, and the weighted-average rule [44]. The advantage of these methods is that they are often simple to implement and have low computational complexity. Their drawback is the loss of useful information in the composite image, as they produce spectral and color distortion. This disadvantage is overcome by transform-domain-based approaches.
The transform domain-based methods are more commonly used in recently proposed methods, and they consist of three main stages, including image transformation, the fusion of components in the transform domain, and inverse transformation [30]. Firstly, the input medical images are transformed into a transform domain by adopting a certain method of image transformation. Secondly, the components in the transform domain are fused by a certain fusion rule. Finally, the fusion image is reconstructed by utilizing the corresponding inverse transformation on the fusion components. In general, transform domain-based approaches can be further divided into Pyramid methods, Wavelet transform methods, and Multi-scale geometric analysis (MGA) methods.
There are some disadvantages to the above transform-domain-based methods. Pyramid-based approaches such as the Laplacian pyramid (LP) [10, 14, 51] can lose considerable source information and suffer from blocking effects, since they supply only spectral information without directional information. Wavelet-based methods, such as the discrete wavelet transform (DWT) [58] and the dual-tree complex wavelet transform (DTCWT) [62], do not use phase information, so their main disadvantage is the inability to preserve edges and texture regions. MGA-based methods, such as the contourlet transform [57], curvelet transform [37], non-subsampled contourlet transform (NSCT) [23, 50, 64], and non-subsampled shearlet transform (NSST) [6, 27], provide both spectral and directional information, so they can transcend the limitations of pyramid- and wavelet-based approaches. Nevertheless, the disadvantage of MGA-based methods is their high computational complexity.
In recent years, many meta-heuristic optimization algorithms have been introduced and applied to medical image fusion. Typical examples include particle swarm optimization (PSO) [42], non-subsampled shearlet transform with particle swarm optimization (NSST-PSO) [47], quantum-behaved particle swarm optimization (QPSO) [55], gray wolf optimization [5], hybrid genetic-grey wolf optimization (HG-GWO) [4], modified central force optimization (MCFO) [12], chaotic grey wolf optimization (CGWO) [3], binary crow search optimization (BCSO) [39], total variation (TV-L1) based cartoon-texture decomposition with particle swarm optimization (TV-L1-PSO) [38], and modified shark smell optimization (MSSO) [54].
The Equilibrium Optimizer Algorithm (EOA) is a recently proposed meta-heuristic algorithm inspired by dynamic mass balance in physics. It was originally introduced by Faramarzi et al. [13] in 2020. This algorithm has been applied successfully to various problems, such as feature selection [16], biological data classification [48], and the vehicle routing problem [15]. However, to the best of our knowledge, no studies have utilized the EOA for medical image fusion. This motivated us to propose a novel approach based on the EOA for medical image fusion.
In addition, to the best of our knowledge, current image fusion approaches have some limitations. The first is the use of a weighted-average rule for the fusion of the low-frequency components. Some recent approaches with this limitation are convolutional sparse representation (CSR) [28], convolutional sparsity with morphological component analysis (CSMCA) [29], two-scale image decomposition with sparse representation (TSD-SR) [33], two-scale image decomposition with structure tensor (TSD-ST) [11], and two-scale image decomposition with guided filtering and sparse representation (TSD-GF-SR) [40]. This limitation leads to a decrease in the brightness intensity of the fused image. The second limitation is that the fusion rules used for the high-frequency coefficients are not optimal, because their detail-preservation indexes are not really high. Such fusion rules include max selection [44], local variance [2], and the parameter-adaptive pulse coupled neural network (PA-PCNN) [61]. This is likely to result in the loss of detailed information in the fused image. In this paper, we propose two novel algorithms to address the above-mentioned limitations. Some advantages of the proposed approach are highlighted as follows:
-
The first algorithm makes use of the Equilibrium Optimizer Algorithm (EOA) to find optimal parameters with the aim of fusing the base layers. This allows the fusion image to have good contrast.
-
The second algorithm is based on the sum of local energy functions using the Prewitt compass operator to create an efficient rule for the fusion of the detail layers. This allows the fusion image to significantly preserve the details transferred from the input images.
The remainder of this paper is organized as follows. Some background knowledge, such as the TSD method, the YCbCr color space, and the EOA algorithm, is briefly introduced in Section 2. Section 3 introduces the two novel algorithms of the proposed approach: the first, based on the sum of local energy functions using the Prewitt compass operator, fuses the detail layers, and the second, based on the EOA, combines the base layers. In Section 4, the quality of the fused images is evaluated with different indexes. Finally, the conclusion and future work are given in Section 5.
2 Background
Some background knowledge, such as the TSD method, the YCbCr color space, and the EOA algorithm, is introduced in this section.
2.1 TSD method
There are a number of different methods for decomposing an image into two scales. These methods have been utilized in many medical image fusion approaches such as Convolutional sparse representation (CSR) [28], Two-scale image decomposition with statistical comparisons [9], Two-scale image decomposition with structure tensor (TSD-TS) [11], Two-scale image decomposition and sparse representation (TSD-SR) [33], and Two-scale image decomposition with guided filtering and sparse representation (TSD-GF-SR) [40]. In this section, we present the TSD method used in TSD-SR and CSR approaches. According to this method, an input image (Iin) is decomposed into two components. The first one is the base layer (Ib) having large scale variations, and the second one is the detail layer (Id) having small scale variations.
Figure 1 illustrates the use of the TSD method for an input medical image.
Ib and Id components can be calculated as follows:
Firstly, the component Ib is found by solving the following optimization problem (see Eq. (1)):

$$ I_{b} = \underset{I_{b}}{\arg\min}\; \lVert I_{in}-I_{b}\rVert_{F}^{2} + \gamma \left( \lVert d_{x} \ast I_{b}\rVert_{F}^{2} + \lVert d_{y} \ast I_{b}\rVert_{F}^{2} \right) \tag{1} $$
Where,
-
γ is the regularization parameter.
-
\(d_{x} = \begin {bmatrix} -1 & 1 \end {bmatrix}\) is a gradient operator in the horizontal direction.
-
\(d_{y} = \begin {bmatrix} -1 & 1 \end {bmatrix}^{T}\) is a gradient operator in the vertical direction.
Equation (1) is a Tikhonov regularization problem, and it can be solved efficiently using the fast Fourier transform (FFT).
Secondly, the detail layer (Id) is calculated from the input image Iin and the base layer (Ib) as shown in Eq. (2):

$$ I_{d} = I_{in} - I_{b} \tag{2} $$
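As a concrete illustration, the two-scale decomposition above can be sketched in a few lines of NumPy. The closed-form FFT solution of the Tikhonov problem and the choice `gamma=5.0` below are a minimal sketch under the stated formulation, not the paper's exact implementation:

```python
import numpy as np

def tsd_decompose(I_in, gamma=5.0):
    """Two-scale decomposition: base layer via Tikhonov smoothing (FFT),
    detail layer as the residual (Eq. (2))."""
    h, w = I_in.shape
    # Frequency responses of d_x = [-1 1] and d_y = [-1 1]^T, zero-padded
    # to the image size so the solve can be done element-wise in Fourier space.
    dx = np.zeros((h, w)); dx[0, 0], dx[0, 1] = -1.0, 1.0
    dy = np.zeros((h, w)); dy[0, 0], dy[1, 0] = -1.0, 1.0
    Fx, Fy = np.fft.fft2(dx), np.fft.fft2(dy)
    # Closed-form minimizer of the Tikhonov problem (Eq. (1)) in the
    # Fourier domain: divide by 1 + gamma * (|Fx|^2 + |Fy|^2).
    denom = 1.0 + gamma * (np.abs(Fx) ** 2 + np.abs(Fy) ** 2)
    I_b = np.real(np.fft.ifft2(np.fft.fft2(I_in) / denom))
    I_d = I_in - I_b                 # detail layer is the residual
    return I_b, I_d
```

The base layer keeps the DC component intact while attenuating all other frequencies, so it is always smoother than the input.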
2.2 YCbCr color space
There are different color models proposed for the fusion of color medical images, such as IHS [25, 38, 46], HSV [20], YIQ [49], YUV [26, 31, 59], and YCbCr. Among these, the YCbCr color space has been applied effectively in various image processing problems, such as watermarking [22], multi-focus image fusion [1], and color image encryption [60]. Therefore, this color space is chosen for the fusion of medical images in the proposed model.
The RGB-YCbCr transformation is represented as shown in Eq. (3):
The conversion of YCbCr colour model to RGB colour model is represented as shown in Eq. (4):
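Since Eqs. (3) and (4) are standard color-space conversions, a sketch can be given. The full-range BT.601 ("JPEG-style") coefficients below are an assumed variant; the exact matrices in the paper's Eqs. (3) and (4) may differ slightly:

```python
import numpy as np

# Full-range BT.601 coefficients -- an assumed variant of Eq. (3).
_M = np.array([[ 0.299,     0.587,     0.114   ],
               [-0.168736, -0.331264,  0.5     ],
               [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    """rgb: H x W x 3 array with values in [0, 1]; returns Y/Cb/Cr planes."""
    ycc = rgb @ _M.T
    ycc[..., 1:] += 0.5          # shift the chroma channels into [0, 1]
    return ycc

def ycbcr_to_rgb(ycc):
    """Inverse transform of rgb_to_ycbcr (Eq. (4) analogue)."""
    tmp = ycc.copy()
    tmp[..., 1:] -= 0.5
    return tmp @ np.linalg.inv(_M).T
```

A neutral gray image maps to neutral chroma (Cb = Cr = 0.5), which is why only the Y channel needs to be fused in the proposed pipeline.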
2.3 EOA Algorithm
The EOA algorithm was inspired by the dynamic mass balance in physics, and it was originally introduced by Faramarzi [13] in 2020. More details about the EOA algorithm can be found in Algorithm 1. Generally, the EOA algorithm is described by the following six steps:
-
Step 1: The population of particles is initialized.
-
Step 2: The fitness of particles in the population is estimated.
-
Step 3: The equilibrium pool is constructed.
-
Step 4: Each exponential term (F) and the generation rate (G) are updated using Eqs. (5) and (6), respectively.
$$ F = \beta_{1}\, \mathrm{sign}(k-0.5)\left[e^{-\gamma t}-1\right] \tag{5} $$

$$ G = \left\{\begin{array}{ll} 0.5\, k_{1}\,(K_{eq}-\gamma K)\, F & k_{2}\geq GP \\ 0 & k_{2} < GP \end{array} \right. \tag{6} $$

where

$$ t =\left(1-\frac{Iter}{Iter_{max}}\right)^{\beta_{2} \frac{Iter}{Iter_{max}}}, \qquad \beta_{1} = 2, \; \beta_{2} = 1, \; GP = 0.5 $$

-
Step 5: Each particle concentration is updated using Eq. (7):
$$ K = K_{eq} + (K-K_{eq})\, F + \frac{G}{\gamma V} (1-F) \tag{7} $$

-
Step 6: Go to step 3 until the stopping criterion is met.
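The six steps above can be sketched as follows. The four-member equilibrium pool plus its average, the bound handling by clipping, and the default population settings are illustrative choices consistent with Eqs. (5)-(7), not the paper's exact implementation:

```python
import numpy as np

def eoa_minimize(f, dim, lb, ub, n=30, iters=100, seed=0):
    """Sketch of the Equilibrium Optimizer: minimizes f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    beta1, beta2, GP, V = 2.0, 1.0, 0.5, 1.0
    K = rng.uniform(lb, ub, (n, dim))                 # Step 1: initialize particles
    pool = [K[0].copy() for _ in range(4)]            # 4 best-so-far candidates
    pool_fit = [float('inf')] * 4
    for it in range(iters):
        for i in range(n):                            # Step 2: evaluate fitness
            fi = f(K[i])
            for j in range(4):                        # Step 3: update equilibrium pool
                if fi < pool_fit[j]:
                    pool.insert(j, K[i].copy()); pool_fit.insert(j, fi)
                    pool.pop(); pool_fit.pop()
                    break
        candidates = pool + [np.mean(pool, axis=0)]   # pool members plus their average
        t = (1.0 - it / iters) ** (beta2 * it / iters)
        for i in range(n):                            # Steps 4-5: update concentrations
            K_eq = candidates[rng.integers(5)]
            gamma = rng.random(dim)
            k = rng.random(dim)
            F = beta1 * np.sign(k - 0.5) * (np.exp(-gamma * t) - 1.0)      # Eq. (5)
            k1, k2 = rng.random(), rng.random()
            G = 0.5 * k1 * (K_eq - gamma * K[i]) * F if k2 >= GP else 0.0  # Eq. (6)
            K[i] = K_eq + (K[i] - K_eq) * F + (G / (gamma * V)) * (1.0 - F)  # Eq. (7)
            K[i] = np.clip(K[i], lb, ub)              # keep particles inside the box
    return pool[0], pool_fit[0]                       # Step 6: loop until iters reached
```

As `t` decays to zero, `F` and `G` vanish and the particles collapse onto the equilibrium pool, which is what drives convergence.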
3 The proposed approach
In this section, two novel algorithms are proposed. The first, based on the sum of local energy functions using the Prewitt compass operator, is proposed to fuse the detail layers. The second is designed to fuse the base layers based on the Equilibrium Optimizer Algorithm.
3.1 Fusion rule based on the sum of local energy function with Prewitt Compass Operator (SLE_PCO)
The detailed information of an image is contained in the high-frequency components, and the energy of detail layers of a sharp image is much larger than that of a blurred one. Therefore, some approaches, such as NSCT transform and local energy [2], Empirical wavelet decomposition and maximum local energy (EWT-MLE) [41], and Maximum local energy with compass operator [7, 8], have applied the fusion rules based on local energy.
The local energy LE(x,y) is calculated using Eq. (8):

$$ LE(x,y) = \sum_{i=1}^{u} \sum_{j=1}^{u} W(i,j)\, Im(x+i, y+j)^{2} \tag{8} $$
Where
-
W is the unit matrix of size u × u.
-
Im is the image matrix.
The Prewitt operator is one of the compass operators used to detect edges, and it was first introduced by Judith Prewitt. This method uses eight kernel masks in eight directions (see Fig. 2 for details). Currently, this operator is widely and effectively applied in image processing algorithms, such as auto-focusing on multiple micro objects [32] and quantum image edge extraction [63].
We propose a novel rule based on the sum of local energy functions and the Prewitt operator for fusing the detail layers, as shown in Algorithm 2. Figure 3 illustrates the functions (Ei, \(i = \overline {1,8}\), and Esum) computed using the Prewitt compass method.
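A plausible reading of the SLE_PCO rule can be sketched as follows. The kernel generation by rotating the border ring, the 3 × 3 local window, and the coefficient-wise maximum selection are assumptions for illustration, since Algorithm 2 itself is not reproduced in this extract:

```python
import numpy as np

def prewitt_compass_kernels():
    """Eight 3x3 Prewitt compass masks, one per direction, generated by
    rotating the border ring of the 'north' mask one step at a time."""
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [1, 1, 1, 1, -1, -1, -1, 1]       # north-mask border, clockwise
    kernels = []
    for r in range(8):
        k = np.full((3, 3), -2.0)            # center stays -2, border overwritten
        for (i, j), v in zip(border, ring[r:] + ring[:r]):
            k[i, j] = float(v)
        kernels.append(k)
    return kernels

def _conv3_same(im, k):
    """'same'-size correlation with a 3x3 kernel and zero padding."""
    p = np.pad(im, 1)
    out = np.zeros_like(im, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + im.shape[0], j:j + im.shape[1]]
    return out

def sum_local_energy(detail):
    """E_sum: the eight directional local energies (3x3 window) added up."""
    W = np.ones((3, 3))                      # window weights, as in Eq. (8)
    E = np.zeros_like(detail, dtype=float)
    for kern in prewitt_compass_kernels():
        resp2 = _conv3_same(detail, kern) ** 2
        E += _conv3_same(resp2, W)           # local sum of squared responses
    return E

def fuse_details(d1, d2):
    """Keep each coefficient from the detail layer with the larger E_sum."""
    return np.where(sum_local_energy(d1) >= sum_local_energy(d2), d1, d2)
```

Each mask sums to zero, so flat regions produce zero response and only directional structure contributes to E_sum.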
3.2 The proposed algorithm based on SLE_PCO and EOA
There are four main steps in the proposed method. In the first step, the RGB-YCbCr transformation converts the color medical images to the YCbCr color space, and only the Y channel is used in the next step. In the second step, the TSD method decomposes the gray medical images to obtain the base and detail layers. Then, the EOA algorithm is applied to find optimal parameters for the fusion of the base layers, and the SLE_PCO method is applied to fuse the detail layers. Finally, the two fused components (detail layer and base layer) are summed to obtain the reconstructed image, and this fused image, together with the Cb and Cr channels, is transformed back to the RGB color space. Figure 4 illustrates the diagram of the proposed approach utilizing TSD and EOA.
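The overall flow of these steps can be sketched end to end. Every stage below is a deliberately simplified stand-in: a mean filter instead of the TSD method, an abs-max rule instead of SLE_PCO, fixed weights instead of the EOA search, and the weighted form a1·B1 + a2·B2 + a3 is a hypothetical parameterization of the base fusion:

```python
import numpy as np

def fuse_y_channels(y1, y2, alphas=(0.5, 0.5, 0.0)):
    """End-to-end sketch on the Y channels of two co-registered images,
    using simplified stand-ins for each stage of the proposed pipeline."""
    def base_layer(im):
        # Stand-in two-scale decomposition: 5x5 mean filter with edge padding.
        p = np.pad(im, 2, mode='edge')
        acc = np.zeros_like(im, dtype=float)
        for i in range(5):
            for j in range(5):
                acc += p[i:i + im.shape[0], j:j + im.shape[1]]
        return acc / 25.0
    b1, b2 = base_layer(y1), base_layer(y2)
    d1, d2 = y1 - b1, y2 - b2                     # detail layers (residuals)
    a1, a2, a3 = alphas                           # would come from the EOA search
    fused_base = a1 * b1 + a2 * b2 + a3           # hypothetical weighted base fusion
    fused_detail = np.where(np.abs(d1) >= np.abs(d2), d1, d2)  # stand-in detail rule
    return fused_base + fused_detail              # reconstruction by summation
```

With identical inputs and the default weights, the pipeline reduces to the identity, which is a useful sanity check on the decomposition/reconstruction pair.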
4 Experimental results
The quality indexes and the data set used for the experiments are presented in this section. In addition, the experimental results are reported and discussed.
4.1 Objective evaluation metrics
Currently, there are many image fusion quality indexes for evaluating fused images. In this section, eight metrics are selected: Mean (an intensity index), Standard deviation (SD, a contrast index), Entropy (the average information of an image), Sharpness (S), the edge-based similarity measure (QAB/F) [56], Visual Information Fidelity for Fusion (VIFF) [19], Feature Mutual Information (FMI) [18], and Mutual Information (MI) [17].
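Four of the simpler indexes can be sketched directly. The histogram range [0, 1] and the gradient-based sharpness proxy below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def mean_index(im):
    """Mean intensity of the fused image."""
    return float(im.mean())

def sd_index(im):
    """Standard deviation: a simple contrast measure."""
    return float(im.std())

def entropy_index(im, bins=256):
    """Shannon entropy (bits) of the gray-level histogram, values in [0, 1]."""
    hist, _ = np.histogram(im, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

def sharpness_index(im):
    """Mean absolute gradient (a simple sharpness proxy)."""
    gx = np.diff(im, axis=1)
    gy = np.diff(im, axis=0)
    return float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))
```

A perfectly flat image scores zero on SD, entropy, and sharpness, which makes these indexes easy to sanity-check.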
4.2 Experimental setup
The experimental data consist of three pairs of medical images (MRI-PET), namely Data-1 (a, b), Data-2 (c, d), and Data-3 (e, f), each with a size of 256 × 256 pixels (see Fig. 5 for details). These images were obtained from the following online source: http://www.med.harvard.edu/AANLIB/.
Several experiments were conducted to evaluate the effectiveness of the proposed approach. The first experiment: to evaluate the effectiveness of the EOA algorithm in the proposed method, some well-known meta-heuristic optimization algorithms, such as Particle Swarm Optimization (PSO) [21], Artificial Bee Colony (ABC) optimization, Biogeography-Based Optimization (BBO) [43], Multi-Verse Optimizer (MVO) [36], and the Whale Optimization Algorithm (WOA) [35], were selected for comparison. Thirty runs were performed with each optimization algorithm for each pair of images. Two indexes (average and standard deviation) are utilized to compare the overall performance of the algorithms. In addition, the Wilcoxon rank-sum test [52], a non-parametric statistical test, was used to determine the significance of the results.
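The rank-sum comparison of two sets of 30 runs can be sketched without external dependencies. The normal approximation below uses average ranks for ties but omits the tie variance correction, so it is a sketch rather than a full implementation:

```python
import numpy as np
from math import erf, sqrt

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n1, n2 = len(x), len(y)
    allv = np.concatenate([x, y])
    sv = np.sort(allv)
    # Average rank of each value: midpoint of its tie group (1-based ranks).
    lo = np.searchsorted(sv, allv, side='left')
    hi = np.searchsorted(sv, allv, side='right')
    ranks = (lo + 1 + hi) / 2.0
    W = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
```

A p-value below 0.05 would indicate, as in Table 3, that the difference between two algorithms' runs is statistically significant.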
The second experiment: various compass operators, such as Isotropic, Robinson, Kirsch, and Prewitt, were used in our approach to find the most suitable compass operator.
The third experiment: with regard to qualitative analysis, the results of our approach are compared with those of other recent approaches, namely: convolutional sparsity based morphological component analysis (CSMCA) [29], non-subsampled contourlet transform (NSCT) [64], convolutional sparse representation (CSR) [28], non-subsampled shearlet transform with parameter-adaptive pulse coupled neural network (NSST-PA-PCNN) [61], and non-subsampled shearlet transform and multi-scale morphological gradient with pulse-coupled neural network (NSST-MSMG-PCNN) [45].
For the performance evaluation of the proposed approach, eight indexes were utilized, including Mean, SD, Entropy, Sharpness, QAB/F [56], VIFF [19], FMI [18], and MI [17]. Tools for experimenting with all the aforementioned methods are available online. The proposed algorithm is implemented with the following EOA parameters:
-
The size of population: n = 50.
-
β1 = 2, β2 = 1.
-
The generation probability: GP = 0.5, V = 1.
-
The maximum number of iterations: Itermax = 50.
4.3 Image fusion evaluation
Firstly, the three optimal parameters (α1,α2, and α3) of the proposed model are illustrated in Table 1.
Secondly, from Table 2, the average and standard deviation indexes obtained with the EOA algorithm are the lowest, which indicates that the EOA algorithm is effective in the proposed approach. Furthermore, from Table 3, the p-values obtained from the Wilcoxon rank-sum test are less than 0.05, which shows that the results are statistically significant. These results explain why we opted for the EOA algorithm in the proposed approach.
Thirdly, from Table 4, the experimental results show that the use of the Prewitt compass operator brings the best results in the proposed method. This explains why the Prewitt compass operator is selected in Algorithm 2.
Fourthly, from Tables 5, 6, and 7 and Figs. 6, 7, 8, 9, 11, and 13, it is clear that the CSMCA [29] and CSR [28] methods create composite images with low contrast, while the contrast of the NSCT [23], NSST-PA-PCNN [61], and NSST-MSMG-PCNN [45] approaches is greater than that of the above-mentioned approaches. In particular, the proposed method gives the best contrast. For instance, from Table 5, the contrast index (SD) of the proposed method was the highest, at 0.3780, while the figures for the other methods (CSMCA [29], NSCT [23], CSR [28], NSST-PA-PCNN [61], and NSST-MSMG-PCNN [45]) were lower, at 0.2093, 0.3194, 0.1847, 0.3411, and 0.1621, respectively. Similarly, the proposed method also gives the best intensity, entropy, and sharpness. This shows that the proposed approach not only significantly enhances the contrast but also improves the amount of information and the sharpness of the composite image (Figs. 10, 11, 12, 13, and 14).
Finally, to assess the preservation of detailed information carried into the composite image from the input images, frames extracted from the fused images (Figs. 9, 11, and 13) are shown in Figs. 10, 12, and 14. It is easy to see that our method preserves detailed information from the input images, while the remaining methods produce much redundant information in the composite images. For instance, the QAB/F index of the proposed method is the highest, at 0.8561, as shown in Table 7, whereas the figures for the other methods (CSMCA [29], NSCT [64], CSR [28], NSST-PA-PCNN [61], and NSST-MSMG-PCNN [45]) were lower, at 0.7735, 0.7709, 0.7156, 0.7681, and 0.7112, respectively. Similarly, the results show that the values of the VIFF [19], FMI [18], and MI [17] indexes are higher than those of the other approaches, which indicates that the proposed approach is highly effective in the fusion of medical images.
5 Conclusion
In this paper, two novel algorithms are proposed to address some limitations of current medical image fusion approaches. The proposed approach takes advantage of the TSD method to decompose the input medical images into two components, namely base and detail layers. The first algorithm allows the composite image to have good quality because the EOA algorithm is applied to find the optimal parameters for the fusion of the base layers. This overcomes the limitation of using a weighted average rule. The second algorithm (called SLE_PCO), based on the sum of local energy using the Prewitt compass operator, is implemented to fuse detail layers. This allows composite images to significantly preserve details, which overcomes the drawback of using fusion rules for detail layers.
The experiments were conducted on the dataset (Data-1, Data-2, and Data-3) to compare the results of the proposed method with those of other recent methods, such as CSMCA [29], NSCT [64], CSR [28], NSST-PA-PCNN [61], and NSST-MSMG-PCNN [45]. Eight evaluation indexes, including Mean, SD, Entropy, Sharpness, QAB/F [56], VIFF [19], FMI [18], and MI [17], are utilized to assess the quality of the composite images. The experimental results show that our approach can significantly enhance the quality of the fused image, and it is likely to be well suited to the human visual system. In addition, the proposed approach can also effectively preserve the edge information in the composite image.
In the future, there are some issues that we plan to address to improve the performance of current approaches. The first is to improve the quality of the input images, because input medical images may suffer from low contrast, blur, and noise, which significantly affects the performance of image fusion methods. Therefore, image quality improvement plays a vital role in improving the quality of the fused images. For instance, the Non-Parametric Modified Histogram Equalization (NMHE) algorithm was used in the TSD-SR method [33] to enhance low-contrast images. The second is to preserve the detailed information transferred to the fused image, because some existing decomposition methods can cause a loss of information. Therefore, novel image decomposition methods need to be applied to limit the information loss in the fused images. For example, the Taylor expansion algorithm was introduced in the Taylor expansion and convolutional sparse representation (TE-CSR) method [53] to decompose input images into many intrinsic components.
References
Abdullatif AA, Abdullatif FA, Safar AA (2019) Multi-focus image fusion based on stationary wavelet transform and PCA on YCbCr color space. Journal of Southwest Jiaotong University 54(5). https://doi.org/10.35741/issn.0258-2724.54.5.37
Amini N, Fatemizadeh E, Behnam H (2014) MRI-PET Image fusion based on NSCT transform using local energy and local variance fusion rules. J Med Eng Technol 38(4):211–219. https://doi.org/10.3109/03091902.2014.904014
Asha CS, Lal S, Gurupur VP, Saxena PUP (2019) Multi-modal medical image fusion with adaptive weighted combination of NSST bands using chaotic grey wolf optimization. IEEE Access 7:40782–40796. https://doi.org/10.1109/access.2019.2908076
Daniel E (2018) Optimum wavelet-based homomorphic medical image fusion using hybrid genetic-grey wolf optimization algorithm. IEEE Sensors J 18(16):6804–6811. https://doi.org/10.1109/jsen.2018.2822712
Daniel E, Anitha J, Kamaleshwaran K, Rani I (2017) Optimum spectrum mask based medical image fusion using gray wolf optimization. Biomedical Signal Processing and Control 34:36–43. https://doi.org/10.1016/j.bspc.2017.01.003
Ding Z, Zhou D, Nie R, Hou R, Liu Y (2020) Brain medical image fusion based on dual-branch CNNs in NSST domain. BioMed Res Int 2020:1–15. https://doi.org/10.1155/2020/6265708
Dinh PH (2021) A novel approach based on grasshopper optimization algorithm for medical image fusion. Expert Syst Appl 171:114576. https://doi.org/10.1016/j.eswa.2021.114576
Dinh P-H (2021) A novel approach based on three-scale image decomposition and Marine Predators Algorithm for multi-modal medical image fusion. Biomedical Signal Processing and Control. https://doi.org/10.1016/j.bspc.2021.102536
Du J, Li W (2019) Two-scale image decomposition based image fusion using structure tensor. Int J Imaging Syst Technol 30(2):271–284. https://doi.org/10.1002/ima.22367
Du J, Li W, Xiao B, Nawaz Q (2016) Union laplacian pyramid with multiple features for medical image fusion. Neurocomputing 194:326–339. https://doi.org/10.1016/j.neucom.2016.02.047
Du J, Fang M, Yu Y, Lu G (2020) An adaptive two-scale biomedical image fusion method with statistical comparisons. Comput Methods Prog Biomed 196:105603. https://doi.org/10.1016/j.cmpb.2020.105603
El-Hoseny HM, El-Rahman WA, El-Rabaie ESM, El-Samie FEA, Faragallah OS (2018) An efficient DT-CWT medical image fusion system based on modified central force optimization and histogram matching. Infrared Phys Technol 94:223–231. https://doi.org/10.1016/j.infrared.2018.09.003
Faramarzi A, Heidarinejad M, Stephens B, Mirjalili S (2020) Equilibrium optimizer: a novel optimization algorithm. Knowl-Based Syst 191:105190. https://doi.org/10.1016/j.knosys.2019.105190
Fu J, Li W, Du J, Xiao B (2020) Multimodal medical image fusion via laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy. Comput Biol Med 126:104048. https://doi.org/10.1016/j.compbiomed.2020.104048
Fu Z, Hu P, Li W, Pan JS, Chu SC (2021) Parallel equilibrium optimizer algorithm and its application in capacitated vehicle routing problem. Intell Autom Soft Comput 27(1):233–247. https://doi.org/10.32604/iasc.2021.014192
Gao Y, Zhou Y, Luo Q (2020) An efficient binary equilibrium optimizer algorithm for feature selection. IEEE Access 8:140936–140963. https://doi.org/10.1109/access.2020.3013617
Qu G, Zhang D, Yan P (2002) Information measure for performance of image fusion. Electron Lett 38:313–315
Haghighat MBA, Aghagolzadeh A, Seyedarabi H (2011) A non-reference image fusion metric based on mutual information of image features. Comput Electr Eng 37:744–756. https://doi.org/10.1016/j.compeleceng.2011.07.012
Han Y, Cai Y, Cao Y, Xu X (2013) A new image fusion performance metric based on visual information fidelity. Inform Fusion 14:127–135. https://doi.org/10.1016/j.inffus.2011.08.002
Jin X, Chen G, Hou J, Jiang Q, Zhou D, Yao S (2018) Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and s-PCNNs in HSV space. Signal Process 153:379–395. https://doi.org/10.1016/j.sigpro.2018.08.002
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95 - International Conference on Neural Networks, IEEE, https://doi.org/10.1109/icnn.1995.488968
Khalili M (2015) DCT-Arnold chaotic based watermarking using JPEG-YCbcr. Optik 126 (23):4367–4371. https://doi.org/10.1016/j.ijleo.2015.08.042
Li H, Qiu H, Yu Z, Zhang Y (2016) Infrared and visible image fusion scheme based on NSCT and low-level visual features. Infrared Phys Technol 76:174–184. https://doi.org/10.1016/j.infrared.2016.02.005
Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inform Fusion 33:100–112. https://doi.org/10.1016/j.inffus.2016.05.004
Li W, Jia L, Du J (2019) Multi-modal sensor medical image fusion based on multiple salient features with guided image filter. IEEE Access 7:173019–173033. https://doi.org/10.1109/access.2019.2953786
Liang X, Hu P, Zhang L, Sun J, Yin G (2019) MCFNet: Multi-layer concatenation fusion network for medical images fusion. IEEE Sensors J 19(16):7107–7119. https://doi.org/10.1109/jsen.2019.2913281
Liu X, Mei W, Du H (2018) Multi-modality medical image fusion based on image decomposition framework and nonsubsampled shearlet transform. Biomedical Signal Processing and Control 40:343–350. https://doi.org/10.1016/j.bspc.2017.10.001
Liu Y, Chen X, Ward RK, Wang ZJ (2016) Image fusion with convolutional sparse representation. IEEE Signal Process Lett 23(12):1882–1886. https://doi.org/10.1109/lsp.2016.2618776
Liu Y, Chen X, Ward RK, Wang ZJ (2019) Medical image fusion via convolutional sparsity based morphological component analysis. IEEE Signal Process Lett 26:485–489. https://doi.org/10.1109/lsp.2019.2895749
Liu Y, Wang L, Cheng J, Li C, Chen X (2020) Multi-focus image fusion: A survey of the state of the art. Inform Fusion 64:71–91. https://doi.org/10.1016/j.inffus.2020.06.013
Liu Y, Zhou D, Nie R, Hou R, Ding Z, Guo Y, Zhou J (2020) Robust spiking cortical model and total-variational decomposition for multimodal medical image fusion. Biomedical Signal Processing and Control 61:101996. https://doi.org/10.1016/j.bspc.2020.101996
Lofroth M, Avci E (2018) Auto-focusing approach on multiple micro objects using the prewitt operator. Int J Intell Robot Appl 2(4):413–424. https://doi.org/10.1007/s41315-018-0070-x
Maqsood S, Javed U (2020) Multi-modal medical image fusion based on two-scale image decomposition and sparse representation. Biomedical Signal Processing and Control 57:101810. https://doi.org/10.1016/j.bspc.2019.101810
Meher B, Agrawal S, Panda R, Abraham A (2019) A survey on region based image fusion methods. Inform Fusion 48:119–132. https://doi.org/10.1016/j.inffus.2018.07.010
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008
Mirjalili S, Mirjalili SM, Hatamlou A (2015) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Applic 27(2):495–513. https://doi.org/10.1007/s00521-015-1870-7
Nencini F, Garzelli A, Baronti S, Alparone L (2007) Remote sensing image fusion using the curvelet transform. Inform Fusion 8(2):143–156. https://doi.org/10.1016/j.inffus.2006.02.001
Padmavathi K, Asha C, Maya VK (2020) A novel medical image fusion by combining TV-l1 decomposed textures based on adaptive weighting scheme. Engineering Science and Technology, an International J 23(1):225–239. https://doi.org/10.1016/j.jestch.2019.03.008
Parvathy VS, Pothiraj S (2019) Multi-modality medical image fusion using hybridization of binary crow search optimization. Health Care Management Science. https://doi.org/10.1007/s10729-019-09492-2
Pei C, Fan K, Wang W (2020) Two-scale multimodal medical image fusion based on guided filtering and sparse representation. IEEE Access 8:140216–140233. https://doi.org/10.1109/access.2020.3013027
Polinati S, Dhuli R (2020) Multimodal medical image fusion using empirical wavelet decomposition and local energy maxima. Optik 205:163947. https://doi.org/10.1016/j.ijleo.2019.163947
Shehanaz S, Daniel E, Guntur SR, Satrasupalli S (2021) Optimum weighted multimodal medical image fusion using particle swarm optimization. Optik 231:166413. https://doi.org/10.1016/j.ijleo.2021.166413
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713. https://doi.org/10.1109/tevc.2008.919004
Sumathi M, Barani R (2012) Qualitative evaluation of pixel level image fusion algorithms. In: International conference on pattern recognition, informatics and medical engineering (PRIME-2012), IEEE, https://doi.org/10.1109/icprime.2012.6208364
Tan W, Zhang J, Xiang P, Zhou H, Thitøn W (2020) Infrared and visible image fusion via NSST and PCNN in multiscale morphological gradient domain. In: Schelkens P, Kozacki T (eds) Optics Photonics and Digital Technologies for Imaging Applications VI SPIE. https://doi.org/10.1117/12.2551830
Tan W, Thitøn W, Xiang P, Zhou H (2021) Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomedical Signal Processing and Control 64:102280. https://doi.org/10.1016/j.bspc.2020.102280
Tannaz A, Mousa S, Sabalan D, Masoud P (2019) Fusion of multimodal medical images using nonsubsampled shearlet transform and particle swarm optimization. Multidim Syst Sign Process 31 (1):269–287. https://doi.org/10.1007/s11045-019-00662-7
Too J, Mirjalili S (2020) General learning equilibrium optimizer: a new feature selection method for biological data classification. Appl Artif Intell, pp 1–17. https://doi.org/10.1080/08839514.2020.1861407
Ullah H, Ullah B, Wu L, Abdalla FY, Ren G, Zhao Y (2020) Multi-modality medical images fusion based on local-features fuzzy sets and novel sum-modified-laplacian in non-subsampled shearlet transform domain. Biomedical Signal Processing and Control 57:101724. https://doi.org/10.1016/j.bspc.2019.101724
Wang S, Shen Y (2020) Multi-modal image fusion based on saliency guided in NSCT domain. IET Image Processing. https://doi.org/10.1049/iet-ipr.2019.1319
Wang Z, Cui Z, Zhu Y (2020) Multi-modal medical image fusion by laplacian pyramid and adaptive sparse representation. Comput Biol Med 123:103823. https://doi.org/10.1016/j.compbiomed.2020.103823
Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics Bulletin 1(6):80. https://doi.org/10.2307/3001968
Xing C, Wang M, Dong C, Duan C, Wang Z (2020) Using taylor expansion and convolutional sparse representation for image fusion. Neurocomputing 402:437–455. https://doi.org/10.1016/j.neucom.2020.04.002
Xu L, Si Y, Jiang S, Sun Y, Ebrahimian H (2020) Medical image fusion using a modified shark smell optimization algorithm and hybrid wavelet-homomorphic filter. Biomedical Signal Processing and Control 59:101885. https://doi.org/10.1016/j.bspc.2020.101885
Xu X, Shan D, Wang G, Jiang X (2016) Multimodal medical image fusion using PCNN optimized by the QPSO algorithm. Appl Soft Comput 46:588–595. https://doi.org/10.1016/j.asoc.2016.03.028
Xydeas C, Petrovic V (2000) Objective image fusion performance measure. Electronics Lett 36:308. https://doi.org/10.1049/el:20000267
Yang S, Wang M, Jiao L, Wu R, Wang Z (2010) Image fusion based on a new contourlet packet. Inform Fusion 11(2):78–84. https://doi.org/10.1016/j.inffus.2009.05.001
Yang Y (2011) A novel DWT based multi-focus image fusion method. Procedia Eng 24:177–181. https://doi.org/10.1016/j.proeng.2011.11.2622
Yang Y, Wu J, Huang S, Fang Y, Lin P, Que Y (2019) Multimodal medical image fusion based on fuzzy discrimination with structural patch decomposition. IEEE J Biomed Health Inform 23(4):1647–1660. https://doi.org/10.1109/jbhi.2018.2869096
Yang YG, Zou L, Zhou YH, Shi WM (2020) Visually meaningful encryption for color images by using Qi hyper-chaotic system and singular value decomposition in YCbCr color space. Optik 213:164422. https://doi.org/10.1016/j.ijleo.2020.164422
Yin M, Liu X, Liu Y, Chen X (2019) Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans Instrum Meas 68:49–64. https://doi.org/10.1109/tim.2018.2838778
Yu B, Jia B, Ding L, Cai Z, Wu Q, Law R, Huang J, Song L, Fu S (2016) Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion. Neurocomputing 182:1–9. https://doi.org/10.1016/j.neucom.2015.10.084
Zhou RG, Yu H, Cheng Y, Li FX (2019) Quantum image edge extraction based on improved Prewitt operator. Quantum Inf Process 18(9). https://doi.org/10.1007/s11128-019-2376-5
Zhu Z, Zheng M, Qi G, Wang D, Xiang Y (2019) A phase congruency and local laplacian energy based multi-modality medical image fusion method in NSCT domain. IEEE Access 7:20811–20824. https://doi.org/10.1109/access.2019.2898111
Zribi M (2010) Non-parametric and region-based image fusion with bootstrap sampling. Inform Fusion 11(2):85–94. https://doi.org/10.1016/j.inffus.2008.08.004
Ethics declarations
Conflict of Interests
The authors declare that they have no competing interests.
Cite this article
Dinh, PH. Multi-modal medical image fusion based on equilibrium optimizer algorithm and local energy functions. Appl Intell 51, 8416–8431 (2021). https://doi.org/10.1007/s10489-021-02282-w