1 Introduction

Laparoscopic imaging modalities play a significant role in intraoperative navigation and treatment planning. Surgeons rely on image quality to make the best medical decisions in the operating environment (Stoyanov 2012). In laparoscopic surgery, a small camera is inserted into the human body through a small incision, and the internal structural and functional information is viewed and monitored on an LCD screen placed in the operating room (Sdiri et al. 2016). CO2 gas is insufflated into the abdominal cavity to expand the internal space so that surgical instruments can be operated easily. The CO2 gas and the dissection of tissues produce smoke that obscures the organs (Kotwal 2016). Degradation and artifacts in laparoscopic images also arise from many other factors, such as the dynamic homogeneous internal structure, blood flow, varying illumination, and reflections from optical instruments (Hahn et al. 2017). Smoke during laparoscopy can severely degrade image quality and corrupt the radiance information of image patches. Degraded and blurred images reduce the surgeon's visibility for diagnosis and increase the probability of error during surgery. Smoke removal can therefore not only reduce surgery time but is also important for surgical planning and treatment. An accurate smoke removal algorithm is thus required for better visualization of laparoscopic images (Sdiri et al. 2016; Hahn et al. 2017; Baid et al. 2017). Laparoscopic images have many clinical applications and can help diagnose multiple diseases at a very early stage (Azam et al. 2021).

Smoke removal is commonly treated as an image de-hazing problem in the literature (Salazar-Colores et al. 2020; Tan 2008a). Image de-hazing algorithms are classified into three groups (Bansal et al. 2017): image restoration, image enhancement, and fusion-based methods (He et al. 2011; Galdran 2018; Nair and Sankaran 2022). In the image restoration category, the haze-free image is obtained by inverting an atmospheric degradation model using prior knowledge of the image depth: the prior information of the hazy image is derived first, and the physical degradation model is then applied to obtain the haze-free image. He et al. (2011) proposed the Dark Channel Prior (DCP) technique, which belongs to the restoration domain. In the image enhancement domain, neither an atmospheric physical model nor a prior estimate of the depth information is required; instead, algorithms enhance the local contrast of the images for better visualization (Li et al. 2018a). Techniques in this category include the Retinex algorithm (Jobson 2004), histogram equalization (Thomas et al. 2011; Yu and Bajaj 2004), and wavelet-based algorithms (Rong and Jun 2014). In fusion-based methods (Ancuti and Ancuti 2013), the enhanced image is obtained by fusing the blurred input images (Azam et al. 2021). However, recovering detailed information with a high level of accuracy in smoke-free images is still a challenging task. Gamma correction can be used to split a single blurry, smoky input image into several multi-exposure images, and a multi-exposure fusion (MEF) technique is then applied to fuse them; image contrast and saturation are used as fusion weights during the fusion process (Ma et al. 2017). MEF techniques are widely used to enhance the visual quality of degraded images. The advantages and drawbacks of these three domains are summarized in Table 1.

Table 1 The overview of various smoke removal techniques with their strengths and limitations

In this article, we propose a laparoscopic smoke removal method that removes the smoke effect and also enhances the quality of the degraded images. The proposed method is based on the patch adaptive structure decomposition multi-exposure fusion (PASD-MEF) technique. The MEF step enhances the local detail information of the input laparoscopic images. A series of gamma corrections is used to remove the blurry patches in the images and effectively increase their local contrast, while Spatial Linear Saturation (SLS) is used to increase the color saturation of the laparoscopic images. A set of under-exposed images is thus formed, with high color saturation and enhanced contrast but low exposure levels. The proposed algorithm implements a patch adaptive structure (PAS) technique within MEF; the advantage of combining PAS and MEF is that the structure of the laparoscopic images is preserved. The significant contributions of the proposed methodology are highlighted as follows:

  • Development of a self-fusion smoke removal algorithm for smoky and blurry input images in the spatial domain. The smoke effect is removed with the help of contrast and saturation correction, and SLS is implemented to increase the color saturation of the images.

  • A PASD algorithm is proposed for spatial-domain MEF to enhance the visual quality of the degraded, blurred laparoscopic images. Different patch sizes are selected adaptively using the image block size and texture entropy. This adaptive selection avoids loss of both local structure and texture detail during the smoke removal procedure.

  • The proposed PASD-MEF algorithm is verified both qualitatively and quantitatively. The article demonstrates that the proposed algorithm not only removes the smoke but also enhances the visual quality of the laparoscopic images for better visualization and diagnosis.

  • The proposed algorithm is compared with other state-of-the-art smoke removal methods and shows significantly improved performance in terms of visual and statistical evaluation metrics.

The article is organized as follows: Sect. 2 presents related work on haze and smoke removal, while Sect. 3 describes the proposed methodology. Sect. 4 presents the quantitative and qualitative results, and the conclusion is drawn in Sect. 5.

2 Related works

Many techniques have been presented in the literature for de-smoking laparoscopic images (Sdiri et al. 2016; Hahn et al. 2017; Baid et al. 2017). A Bayesian inference framework based on a probabilistic graphical model was applied to laparoscopic images (Baid et al. 2017); it includes a prior model and operates on the transmission map, which captures the color attenuation caused by smoke. This work was extended in Salazar-Colores et al. (2020) to produce smoke-free, noiseless images and to remove specular effects. Many other laparoscopic smoke removal methods use the atmospheric scattering model and work in much the same way as the dehazing techniques in the literature; the atmospheric model depends on the image depth or the transmission map (He et al. 2011; Tarel and Hautière 2009; Zhu et al. 2015). He et al. (2011) proposed the DCP technique, which relies on the statistical observation that, in outdoor hazy images, most pixels have very low intensity in at least one color channel. The DCP method uses prior knowledge of the image depth and the transmission map: the density of the hazy scene is estimated and high-quality haze-free images are recovered. However, this algorithm does not work effectively on outdoor images with a very strong white radiance. Other methods do not require the estimation of a transmission map or depth information; Tan (2008b) directly enhances the local detail of images without using a transmission map. In Ancuti and Ancuti (2013), a fusion-based method is proposed that relies on white balancing to enhance the input images. A Laplacian pyramid representation is used for the fusion, and the method operates per pixel; multi-scale fusion is applied to the hazy inputs to derive a single resultant image. Most smoke removal methods thus operate as image restoration. Koschmieder's atmospheric scattering model (He et al. 2011; Tarel and Hautière 2009) describes the degradation of images caused by smoke, as given in Eq. (1).

$$I\left(y\right)= t\left(y\right)\cdot J\left(y\right)+ A\cdot \left(1-t\left(y\right)\right)$$
(1)

where I(y) represents the degraded image and J(y) is the haze-free image. The term t(y) denotes the transmission of the medium, i.e., the portion of light that reaches the camera, and A denotes the global atmospheric light. The product t(y)·J(y) represents the attenuated scene radiance, while the term \(A\cdot\left(1-t \left(y\right)\right)\) in Eq. (1) denotes the airlight. Airlight produced by smoke dispersion increases the apparent intensity of the object and is assumed to be the primary cause of the color shift of the scene; for thick smoke in particular, this airlight term dominates. By rearranging Eq. (1), the haze-free image J(y) can be recovered, but only when A and t(y) have already been estimated using a priori information. Equation (2) gives the rearranged form of Eq. (1). Alternatively, J(y) can be constrained by maximizing the local contrast and saturation or by the distribution of color pixels in RGB space.

$$J\left(y\right)= \frac{I\left(y\right)-A}{t \left(y\right)}+ A$$
(2)
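To make Eqs. (1) and (2) concrete, the following minimal Python sketch synthesizes a smoky observation from a clean frame and then inverts the model when A and t(y) are assumed known. The function names, the constant transmission, and the toy data are illustrative assumptions, not part of any cited method.

```python
import numpy as np

def add_smoke(J, A, t):
    """Koschmieder degradation, Eq. (1): I = t*J + A*(1 - t)."""
    return t * J + A * (1.0 - t)

def remove_smoke(I, A, t, eps=1e-3):
    """Inversion, Eq. (2): J = (I - A)/t + A, valid only when A and t are known."""
    return (I - A) / np.maximum(t, eps) + A

# Toy example on a normalized RGB frame (values in [0, 1]).
J = np.random.rand(288, 512, 3)          # stand-in for a clean laparoscopic frame
A = 0.9                                  # assumed global atmospheric (smoke) light
t = 0.6                                  # assumed constant transmission
I = add_smoke(J, A, t)
J_rec = np.clip(remove_smoke(I, A, t), 0.0, 1.0)
print(np.allclose(J, J_rec, atol=1e-6))  # exact recovery when A and t are known
```

In practice A and t(y) are unknown, which is precisely the estimation the proposed spatial-domain method avoids.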

Multi-exposure fusion techniques are also used in many image processing tasks in which a sequence of images from different sensors or exposures is fused to obtain a single resultant image (Ma et al. 2017). The gamma correction method is widely used in the literature for image enhancement (Li et al. 2016). Image fusion methods discussed in the literature are based on sparse representation (Li et al. 2018b, 2020), guided filtering (He et al. 2010), multi-scale decomposition (Qi et al. 2020), patch structure decomposition (Yin et al. 2019), and multi-exposure image fusion. Galdran introduced multi-exposure fusion based on Laplacian pyramid fusion (LPF) for haze removal (Nan et al. 2016); in the spatial domain, haze removal is thereby converted into increasing image contrast and saturation.

In this paper, we propose a multi-exposure image fusion method for smoke removal. Gamma correction is used to adjust image saturation and contrast and to split the input image into multiple exposure images, and MEF is then applied for smoke removal and image enhancement. The fusion strategy manipulates image contrast and saturation to enhance the visual quality of the images. In our work, the gamma correction and image enhancement operate in the spatial domain, and histogram equalization is combined with gamma correction to increase image contrast, whereas traditional enhancement methods only apply global contrast and saturation transformations. In the proposed methodology, the Adaptive Gamma Correction (AGC) technique is used to increase the transmission map t(y) of the Koschmieder model in Eq. (1). To further improve AGC, we use Laplacian-based solutions, and a contrast adjustment step is integrated with AGC to remove the blur in the images. The proposed method is described in detail in Sect. 3.

3 Proposed methodology

To avoid estimating the atmospheric light and transmission in Eq. (1), a contrast enhancement and saturation adjustment technique in the spatial domain is proposed to obtain smoke-free laparoscopic images. According to the Koschmieder model, the intensity of the input blurred image I(y) lies in the range 0 to 1, and the condition J(y) ≤ I(y) ∀ y must be satisfied by the smoke-free image J(y). We first generate a set of under-exposed images U = {I1(y), I2(y), I3(y), ..., Ik(y)} from the original smoky input image I(y). Under-exposure reduces the intensity variation in the images: each under-exposed image in the set has high contrast and saturation but misses small structural details. A MEF technique is then applied to fuse the under-exposed set U into a single image and recover the local detail information; MEF combines the regions of the different images that have good contrast and saturation to obtain a single smoke-free image J(y). The flowchart of the proposed methodology is shown in Fig. 1. First, the set of multi-exposure images is obtained with the help of gamma correction, and a spatial linear saturation adjustment is applied to the images to improve their visual quality. Gamma correction adjusts the contrast level of the images; however, increasing the contrast of blurred areas decreases their sharpness. To overcome this problem, we use a MEF technique that extracts the corresponding regions from the multiple images and fuses them into a single image with better contrast and saturation. For good fusion, it is important to preserve the texture and color detail of the original image, which is achieved by applying MEF with adaptive structure decomposition (ASD) of the image patches. In the proposed methodology, the texture component of the image is obtained using cartoon-texture decomposition (Li et al. 2018c), and the image texture entropy is calculated with the gray-difference technique (Li et al. 2018c). The texture entropy value and the image block size determine the decomposition of the image into blocks; each image block is then sub-divided into three independent components, and each component is processed individually to give the resultant fused, smoke-free image. The proposed methodology is explained in the following sections.

Fig. 1

Proposed methodology PASD-MEF framework

3.1 Gamma Correction and Contrast Adjustment

The overall intensity of the degraded image I(y) is adjusted using gamma correction, which modifies the image intensity through a power function, as shown in Eq. (3).

$$I\left(y\right) \to \beta .{I(y)}^{\mu }$$
(3)

where β and µ are positive constants. Visual differences are more prominent in dark areas than in bright areas. A value of µ < 1 compresses the bright intensities and expands the dark intensities, revealing more visual detail in dark regions, whereas with µ > 1 the bright intensities are allotted a more extensive range after the transformation and the dark intensities are compressed. The contrast of an image region ω can be expressed as in Eq. (4).

$$C\left(\omega \right)= {I}_{\mathrm{max}}^{\omega }- {I}_{\mathrm{min}}^{\omega }$$
(4)

where \({I}_{\mathrm{max}}^{\omega }\) = max {I(y) | y ∈ ω} and \({I}_{\mathrm{min}}^{\omega }\) = min {I(y) | y ∈ ω}. The zoomed-in views in Figs. 2e and 4e are over-exposed, and contrast detail is missing in both. After applying the µ > 1 operation, the contrast detail in Figs. 2g and 4g increases. In the proposed algorithm, gamma correction is therefore used to adjust the local contrast detail of the input images; it also removes the blurred effect, as shown in Figs. 2h and 4h. Figures 2 and 4 show laparoscopic images at different exposure levels: the images on the left are over-exposed, and the exposure level decreases toward the right. The resultant MEF-fused images are shown on the rightmost side of Figs. 2 and 4.
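As a concrete illustration of Eqs. (3) and (4), the sketch below applies the power-law mapping with several µ > 1 values to produce an artificially under-exposed sequence and measures the block-wise local contrast of each result. The particular µ values and the 11 × 11 window are illustrative choices and are not prescribed by the proposed method.

```python
import numpy as np

def gamma_correct(I, mu, beta=1.0):
    """Eq. (3): pixel-wise power-law mapping I -> beta * I**mu (I normalized to [0, 1])."""
    return np.clip(beta * np.power(I, mu), 0.0, 1.0)

def local_contrast(I, win=11):
    """Eq. (4): C(w) = max(I) - min(I), evaluated here per non-overlapping block and averaged."""
    gray = I.mean(axis=2) if I.ndim == 3 else I
    h, w = gray.shape
    blocks = []
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            patch = gray[y:y + win, x:x + win]
            blocks.append(patch.max() - patch.min())
    return float(np.mean(blocks))

# Build an artificially under-exposed sequence from one smoky frame.
I = np.random.rand(288, 512, 3)        # stand-in for a normalized smoky frame
mus = [1.0, 1.5, 2.0, 3.0]             # mu > 1 darkens, revealing detail in bright, smoky regions
for mu in mus:
    U = gamma_correct(I, mu)
    print(f"mu={mu:.1f}  mean intensity={U.mean():.3f}  avg local contrast={local_contrast(U):.3f}")
```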

Fig. 2

Multi-exposure laparoscopic images of video 10 with smoke level 4. a Over-exposed image. b Normally exposed image. c Under-exposed image. d Resultant fused image obtained from images (a–c). e Zoom-in of the over-exposed image. f Zoom-in of the normally exposed image. g Zoom-in of the under-exposed image. h Zoom-in of the fused image

Fig. 3

Sample dataset video frames. (a1–a4) Frames of video 1, where a1 shows level-1 smoke; the smoke density increases from left to right, and a4 shows dense level-4 smoke. (b1–b4) Frames of video 5. (c1–c4) Frames of video 10. (d1–d4) Frames extracted from video 15. (e1–e4) Frames of video 20

Fig. 4

Multi-exposure laparoscopic images of video 1 with smoke level 3. a Over-exposed image. b Normally exposed image. c Under-exposed image. d Resultant fused image obtained from images (a–c). e Zoom-in of the over-exposed image. f Zoom-in of the normally exposed image. g Zoom-in of the under-exposed image. h Zoom-in of the fused image

3.2 Artificial multi-exposure fusion

After the contrast enhancement, Spatial Linear Saturation (SLS) is applied to the multi-exposure laparoscopic images. The visual quality of the images is improved by adjusting their local contrast and brightness. The sequence of multi-exposure images U = {I1(y), I2(y), I3(y), ..., Ik(y)} is obtained from the input image I(y) with the help of gamma correction. For every image \(\{{U}_{k}^{R}\left(y\right),{U}_{k}^{G}\left(y\right),{U}_{k}^{B}\left(y\right)\}\) in the multi-exposure set, the maximum and minimum values of the three channels R, G, and B are computed using Eqs. (5) and (6). When ∆ = (RGBmax − RGBmin)/255 > 0, the saturation of every pixel is computed using Eq. (7).

$$\mathrm{RGBmax }=\mathrm{ max }(\mathrm{max }\left(\mathrm{R},\mathrm{G}\right),\mathrm{B})$$
(5)
$$\mathrm{RGBmin }=\mathrm{ min }(\mathrm{min }\left(\mathrm{R},\mathrm{G}\right),\mathrm{B})$$
(6)
$$S=\left\{\begin{array}{ll}\frac{\Delta }{\mathrm{value}}, & L<0.5\\ \frac{\Delta }{2-\mathrm{value}}, & L\ge 0.5\end{array}\right.$$
(7)

The terms value and L are defined in Eq. (8). Once the saturation of every pixel is computed, the adjustment in Eq. (9) is applied to each of the R, G, and B channels of the image. The saturation adjustment range for an image is taken as [0, 100].

$$\mathrm{Value} =\frac{\mathrm{RGBmax} + \mathrm{RGBmin}}{255}, \mathrm{where} \, L=\mathrm{value}/2$$
(8)
$${U}_{K}^{^{\prime}}\left(y\right)={U}_{k}\left(y\right)+\left({U}_{k}\left(y\right)-L\times 255\right)\times \beta $$
(9)
$$\beta =\left\{\begin{array}{ll}\frac{1}{(S-1)}, & \mathrm{percent}+S\ge 1\\ \frac{1}{(-\mathrm{percent})}, & \mathrm{else}\end{array}\right.$$
(10)

The final image obtained after the saturation operation has been applied to each channel of the image is given in Eq. (11).

$${U}_{K}^{^{\prime}}\left(y\right)=({U}_{k}^{{R}^{^{\prime}}}\left(y\right),{U}_{k}^{{G}^{^{\prime}}}\left(y\right),{U}_{k}^{{B}^{^{\prime}}}\left(y\right))$$
(11)
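A minimal sketch of the SLS step of Eqs. (5)–(11) on one normalized RGB image is given below. The percent parameter and the β branches follow a common HSL-style linear saturation adjustment: Eq. (10) is read here as β = 1/S − 1 when percent + S ≥ 1 and β = 1/(1 − percent) − 1 otherwise, which is an assumed interpretation of the printed formula rather than a verbatim transcription.

```python
import numpy as np

def sls_adjust(U, percent=0.5):
    """Spatial linear saturation adjustment of a normalized RGB image U in [0, 1].

    Eqs. (5)-(9) are followed as printed; the beta branches of Eq. (10) are taken
    in the common HSL-style form beta = 1/S - 1 (if percent + S >= 1) and
    beta = 1/(1 - percent) - 1 otherwise, which is this sketch's assumption.
    """
    rgb = U * 255.0
    rgb_max = rgb.max(axis=2)                                   # Eq. (5)
    rgb_min = rgb.min(axis=2)                                   # Eq. (6)
    delta = (rgb_max - rgb_min) / 255.0
    value = (rgb_max + rgb_min) / 255.0                         # Eq. (8)
    L = value / 2.0

    S = np.where(L < 0.5, delta / np.maximum(value, 1e-6),
                 delta / np.maximum(2.0 - value, 1e-6))         # Eq. (7)

    beta = np.where(percent + S >= 1.0,
                    1.0 / np.maximum(S, 1e-6) - 1.0,
                    1.0 / (1.0 - percent) - 1.0)                # Eq. (10), assumed reading
    beta = np.where(delta > 0, beta, 0.0)                       # leave gray pixels (delta = 0) untouched

    out = rgb + (rgb - (L * 255.0)[..., None]) * beta[..., None]  # Eq. (9) per channel -> Eq. (11)
    return np.clip(out, 0.0, 255.0) / 255.0

U = np.random.rand(288, 512, 3)        # stand-in for one under-exposed frame
U_sat = sls_adjust(U, percent=0.5)
print(U.std(), U_sat.std())            # the saturation boost increases channel spread
```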

Once the image saturation process is completed, MEF is applied to recover the local detail information of the laparoscopic images. The proposed MEF scheme relies on adaptive decomposition based on the patch structure, where the adaptive patch size of an image is determined from the image texture entropy. The resultant fused image is obtained by combining the decomposed patch images. The cartoon component of an image is used to analyze its structural information (Li et al. 2018c), while the texture component provides the detail information (Zhu et al. 2016). In the proposed work, the cartoon-texture decomposition of the source images is obtained with the Vese-Osher (VO) variational image decomposition model (Vese and Osher 2003).

3.3 Adaptive Patch Structure and Image Intensity

Once the texture component is determined, the gray-difference technique is applied to compute the image entropy from the texture features, and the adaptive patch size of the image is then selected. If a pixel is located at point (x, y), then the point displaced from it by \(p=(\Delta x, \Delta y)\) is represented as \((x+\Delta x, y+ \Delta y)\), and the gray-level difference is calculated as in Eq. (12).

$${m}_{\Delta }\left(x,y\right)=m\left(x,y\right)-m(x+\Delta x, y+ \Delta y)$$
(12)

where m(x, y) denotes the grayscale value and \({m}_{\Delta }\left(x,y\right)\) the difference in grayscale value. The entropy of a laparoscopic image is determined using Eq. (13).

$$E= -\sum_{i=0}^{n}p\left(i\right){\mathrm{log}}_{2}[p\left(i\right)]$$
(13)

For the complete image texture, the entropies form the set E = {E1, E2, E3, ..., EK}, where Ei is the entropy value of each image. The final entropy value is then the mean of all entropy values, as given in Eq. (14).

$$E=\frac{1}{K} \sum_{i=1}^{K}{E}_{i}$$
(14)
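The gray-difference entropy of Eqs. (12)–(14) can be sketched as follows: the difference image is histogrammed, the histogram is normalized into a probability distribution p(i), and the Shannon entropy is computed for each texture component before averaging. The displacement (1, 1) and the 256-bin histogram are illustrative choices.

```python
import numpy as np

def gray_difference(m, dx=1, dy=1):
    """Eq. (12): m_delta(x, y) = m(x, y) - m(x + dx, y + dy) on a grayscale image."""
    shifted = np.roll(np.roll(m, -dy, axis=0), -dx, axis=1)
    return m - shifted

def texture_entropy(m, bins=256):
    """Eq. (13): Shannon entropy of the gray-difference distribution p(i)."""
    d = gray_difference(m)
    hist, _ = np.histogram(d, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Eq. (14): mean entropy over the texture components of the K exposure images.
textures = [np.random.rand(288, 512) for _ in range(4)]   # stand-ins for VO texture components
E = float(np.mean([texture_entropy(t) for t in textures]))
print(f"mean texture entropy E = {E:.3f}")
```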

The adaptive patch size scheme preserves more detailed information during the fusion process. The optimal block size for each image is calculated using Eq. (15).

$${W}_{s}= {P}_{s}\left(0.1\right) \times \left(\frac{{\left(\frac{E}{10}\right)}^{E}-{\left(-\frac{E}{10}\right)}^{-E}}{{\left(\frac{E}{10}\right)}^{E}+{\left(-\frac{E}{10}\right)}^{-E}}\right)+{P}_{s}\left({e}^{-E} \times 0.1\right)$$
(15)

Here, \({W}_{s}\) is the image patch size, which is obtained from the entropy value E of the given image, and \({P}_{s}\) is a patch-size parameter used in the calculation. Once the optimal value of \({W}_{s}\) is obtained, the set of multi-exposure images is decomposed into sub-image blocks of size \({W}_{s}\times {W}_{s}\). The structure decomposition algorithm (Ma et al. 2017) is applied to each image patch, which is divided into the following components: (i) the signal contrast strength Ck, (ii) the signal structure Sk, and (iii) the mean intensity Ik. These three components are processed further to obtain the desired fused image patch \(\widehat{X}\). To obtain an appropriate fused patch, three desired quantities \(\widehat{{C}_{k}}, \widehat{{S}_{k}}, \widehat{{I}_{k}}\) are required; they are explained below:

\(\widehat{{C}_{k}}\) = the desired contrast strength of the fused patch, obtained by taking the highest contrast of all source image patches at the same spatial position.

\(\widehat{{S}_{k}}\) = the desired signal structure of the fused block, calculated as a weighted average of the input structure vectors, where the weights are determined by the contrast of each image block.

\(\widehat{{I}_{k}}\) = the desired mean intensity component, obtained from the global and local mean intensity of the current source image.

Once the \(\widehat{{C}_{k}}, \widehat{{S}_{k}}, \widehat{{I}_{k}}\) components are calculated, the fused image patch \(\widehat{X}\) is obtained as the new vector shown in Eq. (16). The proposed MEF produces smoke-free, well-exposed, high-contrast images from the artificially under-exposed images. Since the smoke term in Eq. (1) always increases the intensity level of the image, the proposed algorithm works only on under-exposed images. Furthermore, if the exposure value is increased, gamma correction can adjust the contrast of the images and improve the visual quality of the blurred laparoscopic images.

$$\widehat{X}= \widehat{{C}_{k}}\cdot \widehat{{S}_{k}}+\widehat{{I}_{k}}$$
(16)
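The per-patch fusion of Eq. (16) follows the structural patch decomposition of Ma et al. (2017): each co-located patch is split into its contrast strength, unit-norm structure, and mean intensity, and the fused patch takes the maximum contrast, a weighted average of the structures, and a weighted mean intensity. The sketch below is a simplified reading of that scheme in which plain contrast-based weights are used for both the structure and intensity terms, whereas the original uses power-weighted and well-exposedness-based weights, so it should be read as illustrative rather than as the exact published fusion rule.

```python
import numpy as np

def fuse_patches(patches, eps=1e-8):
    """Fuse K co-located patches via Eq. (16): X_hat = C_hat * S_hat + I_hat."""
    X = np.stack([p.ravel() for p in patches])        # K x N matrix of flattened patches
    mu = X.mean(axis=1, keepdims=True)                # mean intensity of each patch
    c = np.linalg.norm(X - mu, axis=1)                # contrast strength c_k
    s = (X - mu) / (c[:, None] + eps)                 # unit-norm structure s_k

    w = c / (c.sum() + eps)                           # simple contrast-based weights (assumption)
    c_hat = c.max()                                   # desired contrast: highest among sources
    s_hat = (w[:, None] * s).sum(axis=0)
    s_hat = s_hat / (np.linalg.norm(s_hat) + eps)     # desired structure: weighted, renormalized
    i_hat = float((w * mu.ravel()).sum())             # desired mean intensity: weighted mean

    return c_hat * s_hat + i_hat                      # Eq. (16)

# Fuse one 8x8 patch taken from K = 4 exposure images (grayscale stand-ins).
patches = [np.random.rand(8, 8) for _ in range(4)]
fused = fuse_patches(patches).reshape(8, 8)
print(fused.shape, float(fused.min()), float(fused.max()))
```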

Multiple patches of the fused image are obtained by sliding a window over the image, and the pixels in overlapping patches are averaged to give the output. The complete fused image is then formed using Eq. (17).

$$J(x)= \sum_{i=1}^{n}{\widehat{x}}_{i}$$
(17)
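The reconstruction of Eq. (17) can be sketched as a sliding-window aggregation: co-located patches from all exposures are fused window by window, and the contributions of overlapping windows are averaged at each pixel. The window size, the stride of half a window, and the mean-based placeholder fusion are illustrative assumptions; in the proposed method the patch fusion of Eq. (16) would be used in place of the placeholder.

```python
import numpy as np

def fuse_image(exposures, ws=11, stride=None, fuse=None):
    """Eq. (17): slide a ws x ws window, fuse co-located patches from all exposures,
    and average the contributions of overlapping windows at each pixel."""
    stride = stride or max(ws // 2, 1)
    fuse = fuse or (lambda ps: np.mean(ps, axis=0))   # placeholder; Eq. (16) fusion would go here
    h, w = exposures[0].shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    for y in range(0, h - ws + 1, stride):
        for x in range(0, w - ws + 1, stride):
            patches = [U[y:y + ws, x:x + ws] for U in exposures]
            out[y:y + ws, x:x + ws] += fuse(patches)
            weight[y:y + ws, x:x + ws] += 1.0
    # Pixels never covered by a full window remain zero in this simplified sketch.
    return out / np.maximum(weight, 1.0)

exposures = [np.random.rand(288, 512) for _ in range(4)]   # grayscale stand-ins for the exposure set
J = fuse_image(exposures, ws=11)
print(J.shape)
```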

The gray-difference technique is applied to the gray-level image to obtain the grayscale differential output (Li et al. 2018c), as represented in Eq. (18).

$$I_{\Delta }\left(x,y\right) = I\left(x,y\right) - I\left(x+\Delta x, y+ \Delta y\right)$$
(18)

where I is the image and (x, y) is the location of an image point. The pixel displaced from (x, y) is represented by (x + Δx, y + Δy), and \(I_{\Delta }\) denotes the gray-level differential value of image I.

4 Experimental results

This section presents the dataset details and the subjective/qualitative and objective/quantitative results of the proposed methodology, compared with state-of-the-art techniques such as Dark Channel Prior (DCP) (He et al. 2011), the Multilayer Perceptron Method (MPM) (Salazar-Colores and Cruz-Aceves 2018), and Color Attenuation Prior (CAP) (Zhu et al. 2015). The proposed method is implemented in MATLAB 2018a on an Intel® Core i3-4010U CPU with a clock speed of 1.7 GHz and 4 GB of RAM.

4.1 Dataset

The dataset used is part of the ICIP LVQ Challenge dataset, a collection of 800 distorted videos created from a set of 20 reference videos, each 10 s long (Khan et al. 2020; Twinanda et al. 2017). The reference videos are taken from the Cholec80 dataset (http://camma.u-strasbg.fr/datasets). The whole dataset is divided into ten distortion categories, such as smoke, blur, and additive white Gaussian noise. All videos have a 16:9 aspect ratio, a resolution of 512 × 288, and a frame rate of 25 fps. The smoke videos were generated with screen-blending video editing software: a smoke video with a black background is blended with the reference video so that the black areas leave the original video untouched while the smoke regions overlay it. Videos with four different degrees of smoke intensity are created by adjusting the strength of the smoke video, giving 80 videos in total in the smoke group. For this experiment, we collected 25 videos from the smoke group of the ICIP LVQ Challenge dataset and extracted frames at a resolution of 512 × 288 to test the proposed algorithm.
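For reference, frames can be extracted from the smoke-group videos with a short OpenCV script such as the one below; the file name and the one-frame-per-second sampling are illustrative assumptions, since the exact extraction procedure is not specified.

```python
import cv2  # opencv-python

def extract_frames(video_path, out_pattern="frame_{:04d}.png", every_n=25):
    """Save every n-th frame (one per second at 25 fps) from a 512x288 LVQ challenge video."""
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(out_pattern.format(saved), frame)  # frames are already 512x288
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage on one level-4 smoke video from the challenge set:
# n = extract_frames("smoke_video_10_level4.avi")
```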

4.1.1 Qualitative visual results

The visual results for images with level-3 smoke distortion are shown in Fig. 5, and those with level-4 distortion in Fig. 6. The DCP method removes the smoke effect, but the contrast and saturation balance of the images is reduced. With the CAP method, the smoke is not well removed and the natural colors of the images become unbalanced. The MPM method removes the smoke, but the local detail of the laparoscopic images is no longer visible. The proposed method not only removes the smoke but also enhances the local contrast of the images and yields well-saturated colors.

Fig. 5

Qualitative visual results for smoke level-3 laparoscopic images. a Input smoky and blurred laparoscopic images; b–e resultant smoke-free and enhanced images: b DCP (He et al. 2011), c CAP (Zhu et al. 2015), d MPM (Salazar-Colores and Cruz-Aceves 2018), e proposed method

Fig. 6

Qualitative visual results for smoke level-4 laparoscopic images. a Input smoky and blurred laparoscopic images; b–e resultant smoke-free and enhanced images: b DCP (He et al. 2011), c CAP (Zhu et al. 2015), d MPM (Salazar-Colores and Cruz-Aceves 2018), e proposed method

4.1.2 Quantitative evaluation

For the objective evaluation, we choose no-reference image quality metrics because reference (ground truth) images are not available. The proposed method is evaluated with four metrics: FADE, JNBM, Blur, and Edge Intensity. The Fog Aware Density Evaluator (FADE) analyzes the smoke in the images (Choi et al. 2015) by measuring the perceptual fog density; a lower FADE value indicates lower fog density, so better smoke removal corresponds to a lower value. The JNBM no-reference metric is based on sharpness and is well suited to blurry images (Ferzli and Karam 2009, 2006); it evaluates the degree of visual sharpness, with higher values indicating sharper, perceptually better images. Furthermore, an Edge Intensity metric is computed, which measures edge intensities that are not visible in the source images; a higher value indicates stronger edges (Hautière et al. 2008). The no-reference perceptual Blur metric is used to analyze the blurriness of the image (Crete et al. 2007). Table 2 shows the statistical results for these four no-reference metrics. The proposed method shows significantly improved results compared with the other state-of-the-art techniques, with the bold values indicating the best performance. The graphical objective evaluation results for the smoke level-3 and level-4 images, given as bar plots of the FADE, JNBM, Blur, and Edge Intensity metrics, are shown in Figs. 7 and 8.

Fig. 7

Graphical objective evaluation results of the FADE and Blur metrics

Fig. 8

Graphical objective evaluation results of the JNBM and Edge Intensity metrics

5 Conclusions

The proposed PASD-MEF method is based on multi-exposure image fusion, with MEF operating on an adaptive structure decomposition. A sequence of under-exposed images is extracted from the single smoky and blurry input image: gamma correction generates the set of under-exposed images, while the SLS scheme adjusts the saturation. Adaptive structure decomposition (ASD) is used during the MEF procedure; the adaptive patch decomposition gathers the regions with better contrast and saturation from the series of images, and MEF fuses this set into a single de-smoked image. The qualitative and quantitative results show that the proposed method significantly improves the visual quality of the images and reduces the smoke in them. The main goal of this paper is to remove smoke and enhance laparoscopic images; the improved image quality is useful in image-guided surgery and helps surgeons achieve better visibility during surgery.

There are a few limitations. The fused image sometimes produces very strong edges, and because of these the global brightness becomes slightly darker than in the original. The algorithm uses PASD as the fusion optimization method, and a real-time implementation is currently not possible. In the future, the efficiency of the fusion algorithm will be improved by implementing a more effective fusion optimization algorithm, and geometric information will be evaluated and scrutinized in greater detail to increase fusion performance. Denoising and other image processing techniques will be added to the present solution, and we will attempt to build the fusion process on a high-performance computing infrastructure capable of handling massive datasets.