1 Introduction

The ratio of the highest to the lowest light intensity value in a scene is called the dynamic range, measured in exposure values or stops. The dynamic range of natural scenes is very high, but image acquisition devices such as digital cameras and camcorders, and conventional display devices such as CRT or LCD monitors and printers, are low dynamic range (LDR) media. Display devices have a dynamic range of about 100:1, whereas a natural scene lit by bright sunlight and containing deep shadows can have a high dynamic range of 100,000:1. Several LDR images are captured with different exposures using auto-exposure bracketing in a digital camera. If the intensity values from the different exposures are combined directly, halos and contrast reversals appear. Hence [1], the radiometric response function is estimated by properly aligning the captured LDR images, and the radiance map is estimated by combining the intensity values from the different exposures. Finally, tone mapping is performed to present HDR images on LDR media. Tone mapping decreases the image contrast while preserving details.

The simplest method to compress an HDR radiance image into the LDR gamut is a global transfer curve: a gamma curve maps the HDR image into the displayable gamut. Gamma should not be applied individually to each color channel, otherwise the colors get muted. To overcome this, the image is decomposed into luminance and chrominance components; the global mapping is applied to the luminance layer, and the color image is then reconstructed. This global approach works well for images with a low range of exposures, but with widely varying exposures it fails to preserve details. A method similar to photographic dodging and burning can be used instead. As before, the image is decomposed into luminance and chrominance channels. The log luminance of the image is low-pass filtered, which yields the base layer; subtracting the base layer from the log luminance yields the detail layer. The base layer is scaled down to the required range of log luminance to reduce the contrast. A new log luminance image is produced by adding the scaled base layer to the detail layer, and the tone-mapped luminance image is obtained by taking the exponent of the new log luminance image. However, this method produces halo artifacts around edges that have high contrast in the detail layer. Therefore, linear filters are not suitable for tone mapping, and local edge-preserving filters are used for the decomposition instead.

Filters that preserve edges while smoothing the image can be classified into two groups: (i) local optimization-based filters, which include anisotropic and robust anisotropic diffusion, the bilateral filter and its variants, the guided image filter, weighted guided image filters, and gradient domain guided image filters, and (ii) global optimization-based filters, which include weighted least squares filters, total variation (TV) [2], fast weighted least squares (FWLS) [3], and L0-norm gradient minimization [4]. Although global optimization-based filters yield excellent quality, their computational cost is high and they are time-consuming. Local filters are very efficient but suffer from halo and gradient reversal effects. This paper is organized as follows. Section 2 discusses various local filtering-based edge-preserving smoothing filters, Sect. 3 studies global optimization-based filters, and Sect. 4 concludes.
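The base/detail pipeline just described can be sketched in a few lines of NumPy. A plain Gaussian low-pass stands in for the filter (which, as noted, is exactly what causes halos at strong edges); the function name and parameter values are illustrative, not from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map_luminance(luminance, sigma=4.0, compression=0.5, eps=1e-6):
    # Low-pass the log luminance to get the base layer, keep the
    # residual as the detail layer, compress only the base, and return
    # to linear luminance.
    log_lum = np.log10(luminance + eps)
    base = gaussian_filter(log_lum, sigma)   # base layer
    detail = log_lum - base                  # detail layer
    return 10.0 ** (compression * base + detail)
```

Because only the base layer is scaled, local detail survives while the overall dynamic range shrinks by roughly the compression factor in log units.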

2 Local Optimization-Based Filters

2.1 Anisotropic Diffusion

This is one of the earliest methods that reduces image noise while preserving edges. It is similar to the technique of forming a scale space, in which a diffusion process creates a parameterized family of successively blurred images; in that family, each resulting image is obtained by convolving the input image with a 2D isotropic Gaussian filter. Anisotropic diffusion instead treats each pixel of the image as an energy sink interacting with its neighboring pixels, based on the pixel intensity differences and on conductance values obtained from a local edge estimate [5]. Anisotropic diffusion decomposes the image into multiple layers, where the base layer is the smooth layer and the detail layer is the difference layer. Both layers are enhanced using an enhancement algorithm such as alpha rooting [6] or logarithm transfer shifting [7] and are then fused together. Anisotropic diffusion uses an energy diffusion equation: the gradients of the image being filtered are themselves used as a guide for the diffusion process, avoiding edge smoothing [8]. Anisotropic diffusion is adiabatic, i.e., it preserves energy [8]. Its limitations are: (i) a "stair-casing" effect around smooth edges; (ii) it is a slow process due to its discrete diffusion nature; (iii) it is ill posed, since due to the shock-forming process even infinitesimal changes in the input can cause large changes in the output [8]; (iv) it over-sharpens image edges [9]; and (v) it is an iterative process that converges slowly, and since it finally converges to a constant image, the diffusion must be stopped in between to obtain a piecewise smooth image [10]. Qiaosong Chen et al. introduced an algorithm based on affine Gaussian scale space for high dynamic range images [11]. Anisotropic features are extracted from the HDR image and reformed to be isotropic using fitting and an affine transformation. Then, the base layer of the HDR image is formed by dodging-and-burning processing, and the detail layer is formed by a two-scale edge-preserving decomposition. The two layers are combined to obtain the final output image. This method works well for HDR images with large distortions and pose changes.
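The diffusion described above can be sketched with the classic Perona-Malik scheme. This is a minimal illustration, not the HDR algorithm of [11]: `kappa` and `step` are illustrative values, and `np.roll` gives periodic boundaries for brevity.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    # Perona-Malik scheme: diffuse strongly in flat regions, weakly
    # across large gradients (edges). kappa is the edge-stopping
    # threshold; np.roll wraps at the borders (fine for a sketch).
    u = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbours
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # conductance: small where the gradient is large (an edge)
        g = lambda d: np.exp(-((d / kappa) ** 2))
        u = u + step * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

Differences much larger than `kappa` get near-zero conductance, so edges act as barriers while noise within a region diffuses away; stopping after a finite number of iterations yields the piecewise smooth result mentioned above.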

2.2 Bilateral Filters

The bilateral filter is an efficient filter that smooths an image while preserving its discontinuities. It is also capable of decomposing an image into two layers at different scales. Its rationale, and its difference from the Gaussian filter, is that pixels that are similar in the photometric range are considered close, just like pixels occupying nearby spatial locations. The bilateral filter [12, 13] is defined as follows:

$$B\left[ I \right]_x = \frac{1}{W_x}\sum_{y \in S} G_{\sigma_s} \left( {\|x - y\|} \right)G_{\sigma_r} \left( {I_x - I_y } \right)I_y$$
(1)

B[I]_x is the normalized weighted average. If a pixel y has an intensity value different from I_x, the influence of y is decreased by G_{σ_r}, the range Gaussian. G_{σ_s}, the spatial Gaussian, decreases the influence of distant pixels. Without the range term G_{σ_r}, the bilateral filter reduces to a Gaussian filter,

where Wx is a normalization factor:

$$W_x = \sum_{y \in S} G_{\sigma_s} \left( {\|x - y\|} \right)G_{\sigma_r} \left( {I_x - I_y } \right)$$
(2)

The amount of filtering is controlled by the parameters σ_s and σ_r. Aggressive smoothing cannot be achieved by increasing σ_s alone; σ_r must be increased as well. As σ_r increases, the range Gaussian becomes flatter and the bilateral filter approaches a Gaussian blur, which means its ability to preserve edges is reduced. As σ_s increases, larger features get smoothed. For each pixel, a weighted average is computed and the pixel value is replaced by it. The range component penalizes adjacent pixels with different intensities, and the spatial component penalizes distant pixels; their combination ensures that the final result is contributed only by similar nearby pixels. The bilateral filter decomposes the image into two layers: (i) the base layer, containing the larger-scale components, and (ii) the detail layer, containing the smaller-scale components (the residual of the bilateral filter). Besides tone mapping, the bilateral filter is also used in tone management, texture and illumination separation, denoising, flash/no-flash imaging, and so on. Structures are also preserved in the case of videos. For tone mapping, the filter is applied to the log intensities of the HDR image: the HDR image is decomposed into the base and detail layers, and the scaled-down base layer is added back to the detail layer. Using the bilateral filter, multiscale decomposition can be performed in several ways. In one method, the bilateral filter is applied iteratively to increasingly smoothed versions of the input image; to avoid blurring the edges during the recursive coarsening, the range-Gaussian width is decreased. Alternatively, the spatial Gaussian can be progressively widened during the coarsening process.

Smoothing filters such as the bilateral filter, when used in image decomposition algorithms, treat detail as low-contrast variation and extract the local variation at different contrast levels as successive detail layers. Fine-scale spatial variation is not necessarily represented by such layers, but these filters remain the best tool for tone mapping, since they extract detail based on contrast. The drawbacks of the bilateral filter are as follows: (i) Because a weighted sum must be estimated over a large neighborhood at each position, it is time-consuming. (ii) While preserving edges it sharpens some of them, incurring the undesirable "stair-case" effect. (iii) Predefined pixel neighborhood regions are required, and determining them beforehand is difficult: filtering performance is limited if a small region is selected, while cross-region mixing may result if a large neighborhood is used, especially in regions with high texture [14]. (iv) Gradient reversal artifacts are produced along some edges during detail enhancement and HDR compression [13, 15]. (v) It involves a trade-off between data smoothing and edge preservation: during the separation of the smooth surface from the detail, halo artifacts may result at image edges if too much smoothing is done. (vi) It is computationally expensive, as it is a nonlinear filter. A recursive version of the bilateral filter was proposed by Yang [16], which defines the photometric distance of the bilateral filter as

$$\mathrm{pd}_{\mathrm{BF}}\left({{\varvec{I}}}_{{\varvec{x}}},{{\varvec{I}}}_{{\varvec{y}}}\right)=\sqrt{\sum_{a,a+1\in \Phi }{\left\|{I}_{a+1}-{I}_{a}\right\|}^{2}}$$
(3)

Here, Φ is a predefined path connecting the pixels x and y. The photometric distance is calculated by adding all the distances between adjacent pixels along the path Φ. Although the computational cost is reduced, the ability to preserve edges suffers, i.e., cross-region mixing is likely to occur. The propagation filter [14] has the same goal as the bilateral filter and works on the principle of the photometric relationship between image pixels; it relieves the cross-region mixing problem without using any spatial filtering functions and better preserves image characteristics. In fast bilateral filtering [15], two acceleration techniques are used: (i) the bilateral filter is linearized so that FFT-based fast convolution can be used, and (ii) the key operations are decimated, using sub-sampling in the spatial domain and a piecewise linear approximation in the intensity domain. The bilateral filter can be further extended to the joint bilateral filter [17], where a separate guidance image, rather than the input, is used to compute the filter weights. This filter performs better when it is difficult to extract edge information from the input image.
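A direct implementation of Eqs. (1) and (2) makes the drawbacks above concrete: the weighted sum over a full neighborhood at every pixel is exactly why the naive filter is slow. Parameter values and the function name are illustrative:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    # Weighted average whose weights combine spatial closeness
    # (sigma_s) and photometric similarity (sigma_r).
    # O(radius^2) work per pixel, hence slow.
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    acc = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            gs = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))   # spatial
            gr = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))  # range
            acc += gs * gr * shifted
            norm += gs * gr
    return acc / norm
```

Pixels across an edge differ by much more than `sigma_r`, so their range weight is near zero and they do not mix, while same-region pixels are averaged freely.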

2.3 Guided Image Filtering

This is an edge-preserving smoothing filter with better behavior near edges [18]; it does not suffer from gradient reversal artifacts, so it is well suited to detail enhancement and HDR compression. The filtered output is a linear transformation of a guidance image, and it is less smoothed and more structured than the input image. Regardless of the intensity range and kernel size, this filter is relatively fast, running in non-approximate linear time. The filter is designed with the help of a guidance image I, which is used to model the output image q. Let q be a linear transform of I in a window ω_k centered at the pixel k:

$$q_{i}={a}_{k}{I}_{i}+{b}_{k},\quad \forall i\in {\omega }_{k}$$
(4)

Here, the coefficients (a_k, b_k) are linear and constant in ω_k. This local linear model ensures that q has an edge wherever I has an edge. Constraints from the filtering input p are needed to determine (a_k, b_k): the output q is obtained by removing unwanted components n_i (e.g., noise or textures) from the input p_i,

$$q_{i}={p}_{i}-{n}_{i}$$
(5)

A drawback of the guided filter is the unwanted smoothing near some edges, which appears as "halos"; this occurs because the linear model cannot represent the image well near those edges.
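The local linear model of Eq. (4) leads to a closed-form filter: in He et al.'s formulation, (a_k, b_k) are found by ridge regression of the input p on the guide I in each window, and the coefficients are then averaged over all windows covering a pixel. A minimal sketch (window radius and eps are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    # Within each window w_k the output is q_i = a_k * I_i + b_k
    # (Eq. 4); (a_k, b_k) come from a ridge regression of p on I,
    # and the coefficients are averaged over overlapping windows.
    box = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p   # covariance of guide and input
    var_I = box(I * I) - mean_I ** 2        # variance of the guide
    a = cov_Ip / (var_I + eps)              # eps regularises flat regions
    b = mean_p - a * mean_I
    return box(a) * I + box(b)              # averaged coefficients
```

Where the guide's local variance is large (an edge), a ≈ 1 and the structure passes through; where it is small relative to eps, a ≈ 0 and the window is averaged flat.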

2.4 Weighted Guided Image Filter

In the WGIF [19], an edge-aware weighting is computed and incorporated into the guided image filter. It inherits advantages from both local and global smoothing filters: (i) it avoids halo artifacts, and (ii) its computational complexity is the same as that of a Gaussian filter. To design the WGIF, the variances of the intensity values are computed locally in the guidance image, and the weighting is obtained by normalizing with the local variance at each pixel. Because of this, the WGIF can preserve the sharp edges of the HDR image as global filters do, halo artifacts are reduced, and gradient reversal is avoided. The edge-aware weighting is calculated first and then combined with the GIF to form the new filter, the WGIF.

2.5 Gradient Domain Guided Image Filtering [GDGIF]

The GDGIF is designed to preserve edges better than the other variants of the guided image filter by including an explicit first-order edge-aware constraint [20]. Although the weighted guided image filter works on the principle of edge awareness, neither the WGIF nor the GIF has explicit constraints for treating edges; because they handle the edge-preserving process and the image filtering process together, edges are not preserved well in some cases. The output image of the GDGIF has gradients similar to those of the input image. Its regularization term differs from that of the GIF and WGIF in that it includes an explicit edge-aware constraint.

GDGIF is based on local optimization. Its cost function is composed of a first-order regularization term and a zeroth-order data fidelity term. Because of this, the factors in the new local linear model represent the image more accurately near edges, i.e., it preserves edges better than the WGIF and GIF. The edge-aware factor in the WGIF is single scale, whereas in the GDGIF it is multiscale, which helps it separate edges from the other details of the image efficiently. The complexity of the GDGIF is the same as that of the GIF, and it does not suffer from gradient reversal artifacts. In Fig. 1, the halo effects introduced by the GIF are visible; the WGIF reduces them but they remain visible, whereas the GDGIF avoids the halo effect.

Fig. 1

The HDR image "office.hdr" is used to compare tone mapping results [20]. a GIF output, b WGIF output, c GDGIF output, d detail layer of a, e detail layer of b, f detail layer of c, g difference of e and f, showing that GDGIF is better than WGIF

2.6 Robust-Guided Image Filtering [RGIF]

The RGIF [21] consists of a data term and a regularization term that address the challenges faced by the guided image filter [18]. HDR tone mapping is achieved through layer decomposition using the RGIF [13, 15, 22]: after the base layer is nonlinearly mapped, it is combined with the detail layer. HDR tone mapping results based on different frameworks are shown in Fig. 2. From the result of the RGIF framework on HDR images, it can be observed that edges are preserved.

Fig. 2

High dynamic range tone mappers compared: a high dynamic range image, b L0 gradient minimization [4], c tone reproduction [23], d weighted least squares [10], e RGIF [21]. In most uses of guided image filtering, texture copy should be avoided; the limitation of the RGIF is its limited ability to suppress texture copy

2.7 Anisotropic-Guided Filter

Ochotorena and Yamashita [24] designed a new filter called the anisotropic guided filter (AnisGF) to preserve edges better, using weighted averaging to achieve the highest possible diffusion. It is designed mainly to deal with detail halos in the output image. The guided filter and its variants perform poorly when the input and guide images have inconsistent structural features, mainly because those filters do not use a weighted average in their final steps; AnisGF adds this weighted averaging to obtain the strongest possible diffusion.

2.8 Deep Neural Networks-Based Filters

Zhu et al. addressed challenges such as (i) algorithm performance under a vast span of image parameters with a single setting, (ii) edge-preservation evaluation, which is mostly a subjective process, and (iii) the lack of datasets. Here, deep neural networks are used to set up a benchmark. Deep neural networks contain a large number of weights (parameters) [25]; in other algorithms, even though the weights are set according to edge awareness, they are not trained. Here, the weights are trained, and hence performance is better even when different image components exist in the datasets used. An existing network architecture, the deep residual network (ResNet), is used. The ResNet baseline model is trained on datasets constructed so that it can avoid halo artifacts by preserving strong edges, while the rest of the image resembles the output of low-pass filtering whatever the image content. The tone mapping framework of [15] is reused, with the ResNet model in place of the bilateral filtering model. Compared with other filtering results, such as bilateral filtering [15], the local edge-preserving filter (LEP) [1], and the visual adaptation (VAD) [26] tone mappers, the ResNet gives better results, as shown in Fig. 3 and Table 1.

Fig. 3

Results of different tone mappers, from top to bottom: the bilateral filter, the VAD method, the LEP method, and the ResNet model. The bilateral filter introduces halo artifacts at the edges around the tree in the top image. Information is lost with the VAD method, and the local edge-preserving filter results do not look quite natural, whereas the ResNet model conserves the edge information

Table 1 Tone-mapped image quality index (TMQI) [29] comparisons: TMQI measures fidelity of structures and naturalness based on image statistics

2.9 Domain Transformation

Eduardo S. L. Gastal and Manuel M. Oliveira proposed a new method for edge-preserving filtering [27], building on the edge-preserving filter of Barash et al. [28]. It is based on an isometry between curves on the 2D image manifold in 5D space and the real line. Geodesic distances among points are maintained while the input signal is adaptively warped, so that one-dimensional edge-preserving filtering can be performed in linear time. The method is comparatively fast because it uses one-dimensional techniques, its memory utilization is low, its computational cost is low, and it can operate on color images at variable scales.
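The core idea can be shown in one dimension. This is a single-pass sketch with illustrative parameters; the published method iterates with shrinking spatial sigma and handles 2D images by alternating row and column passes:

```python
import numpy as np

def domain_transform_1d(signal, sigma_s=20.0, sigma_r=0.2):
    # Warp the sample spacing by the gradient magnitude, then run a
    # cheap recursive (exponential) smoother in the warped domain:
    # neighbours separated by a strong edge look far apart, so the
    # smoother does not mix them.
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(signal))
    a = np.exp(-np.sqrt(2.0) / sigma_s)
    out = np.asarray(signal, dtype=float).copy()
    for i in range(1, len(out)):                # left-to-right pass
        w = a ** d[i - 1]
        out[i] = (1 - w) * out[i] + w * out[i - 1]
    for i in range(len(out) - 2, -1, -1):       # right-to-left pass
        w = a ** d[i]
        out[i] = (1 - w) * out[i] + w * out[i + 1]
    return out
```

Each output sample needs only a constant amount of work, which is why this family of filters runs in linear time with low memory use.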

Edge-aware tone mappers can preserve edges and avoid halo artifacts while converting an HDR image to its low dynamic range version. Figure 4 compares the outputs of two such tone mappers: (a) the recursive filter and (b) the weighted least squares filter. Although the image quality looks the same, the recursive filter is comparatively faster.

Fig. 4

Results and comparison of different tone mappers: a Recursive filter and b Weighted least square filter

2.9.1 Multi-scale Decomposition-Based Filter

A new edge-preserving method was proposed in which the image is subjected to multiscale decomposition [1]. The filter works on an adaptability principle applied locally to the image: the output image preserves edges locally, and the local mean is maintained everywhere. The image is decomposed into three detail layers and one base layer, where the base layer contains the local mean and edges are conserved in the detail layers. This method yields good results in preserving and enhancing the local details of the HDR image, as shown in Fig. 5.

Fig. 5

Reconstruction of HDR images using various algorithms (images in the red rectangle are enlarged): a Output of bilateral filtering [15]. b Output of weighted least squares filtering. c Output of local extrema [22]. d Output of local edge-preserving filter [1, 10]. e Close-up of a. f Close-up of b. g Close-up of c. h Close-up of d

3 Global Optimization-Based Filters

A global filter generates the filtered image by utilizing all the pixel information in the image. Such filters are formulated as an optimization problem that places relationships among neighboring pixels. Since edge-preserving smoothing filters that operate locally result in halo artifacts, global optimization-based filters were introduced. In globally optimized filters [2, 4, 6, 24], the optimization criterion contains two terms, namely a regularization term and a data term: the regularization term provides the smoothness level of the reconstructed image, while the data term measures the fidelity of the reconstructed image. Global optimization-based filters have high computational cost, although they yield very good quality. In the bilateral filter, the spatial and range parameters are fixed; in the guided image filter, the Lagrangian factor is fixed; whereas in globally optimized filters the Lagrangian factor varies. For example, in the weighted least squares filter [10], the Lagrangian factor is adaptive. This fixedness could be a major reason for the halo artifacts of the bilateral and guided image filters. In the adaptive bilateral filter (ABF) [30], which is a training-based approach, the range similarity parameter is adaptive; in the adaptive filter of [31], both the range and spatial similarity parameters are adaptive. However, adaptation of the parameters destroys the 3D convolution form.

3.1 Weighted Least Square Filters [WLS]

Bilateral filter-based detail decomposition techniques are less appropriate for the multiscale decomposition of HDR images because they cannot extract detail at variable scales, which multiscale decomposition requires. The WLS filter is based on the optimization of weighted least squares [10] and is specifically well suited for (i) multiscale detail extraction and (ii) progressive coarsening of images. Edge-preserving smoothing is a compromise between smoothing the image everywhere and retaining its significant gradients [10]. The disadvantage of WLS is its computational cost.
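The WLS objective balances fidelity to the input against a gradient penalty whose weights are small at strong edges. The 2D filter solves a large sparse linear system; a 1D sketch shows the structure (λ, α, and eps are illustrative, echoing the parameters of [10]):

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def wls_smooth_1d(g, lam=1.0, alpha=1.2, eps=1e-4):
    # Minimise |u - g|^2 + lam * sum_i w_i (u_{i+1} - u_i)^2, where
    # the weights w_i are small across strong gradients so edges
    # survive while flat regions are smoothed hard.
    g = np.asarray(g, dtype=float)
    n = len(g)
    w = 1.0 / (np.abs(np.diff(g)) ** alpha + eps)   # edge-aware weights
    # D is the forward-difference operator; A the normal-equations matrix
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    A = identity(n) + lam * (D.T @ diags(w) @ D)
    return spsolve(A.tocsc(), g)
```

The sparse solve is what makes WLS expensive: the system couples every pixel to the whole image, which is also why it avoids the halos of purely local filters.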

3.2 L0 Gradient Minimization

This is a global optimization method helpful for preserving HDR image edges [4]. It controls the number of non-zero gradients in a sparsity-controlling manner to approximate prominent structures; by limiting the count of non-zero gradients, edges are strengthened. Smoothing is not performed locally, as in local edge-preserving filters, but globally. This helps achieve better smoothing during the layer decomposition of the tone mapping process and also preserves and strengthens the structures of the image. The limitation of this framework is that over-sharpening occurs in some unavoidable, challenging circumstances when removing details.

3.3 Fast Global Image Smoothing

The fast global smoother [3] addresses the limitations of locally optimized edge-preserving filters. The method achieves spatially inhomogeneous smoothing while preserving edges. The solution of the linear system is obtained with the help of an inhomogeneous Laplacian matrix specified over the multidimensional spatial domain (Table 2).

Table 2 Comparison table

It is fast compared to the other global optimization filters, and no halo effects are found. These advantages help in faster and finer tone mapping of HDR images. The mean square error (MSE) metric calculates the cumulative squared error between the modified and input images.

3.4 Scale-Aware Edge-Preserving Image Filtering

Filtering can be obtained by iterative global optimization (IGO) [33], which provides scale awareness and edge preservation for HDR images. With the IGO method and a scale-aware measurement, it is shown that gradients of smaller-scale features can be suppressed while intensity variations are preserved. However, this method still struggles to preserve edges in complex-structured images.

3.5 Embedding Bilateral Filter in Least Squares

As discussed, global optimization yields better results than local filters but has the disadvantage of computational cost and usually runs slower. In this filter [34], the advantages of local and global filters are combined by embedding the bilateral filter in a least squares framework. This results in better edge preservation of the HDR image, avoiding halo artifacts and gradient reversals, and the filter is found to be almost ten times faster than weighted least squares filters. Tone mapping is done via multiscale decomposition. The gradient reversal and halo artifact problems of [12, 35] are overcome by this method.

4 Methodology

The block diagram of the multiscale decomposition of HDR images using edge-preserving filters is shown in Fig. 6. The high dynamic range image is the input to an edge-preserving filter such as the bilateral filter, the guided filter, or the weighted least squares filter, which smooths the HDR image while preserving the edges. The output of the edge-preserving filter is taken as the base layer, and the difference between the input image and the base layer gives the detail layer, which preserves the edge details. The base layer is then subjected to further multiscale smoothing using the edge-preserving filters; in our experimentation, the edge-preserving filters are applied three times. Finally, the smoothed image (the final base layer) is added to the enhanced detail layer to obtain the output HDR image. The image is thereby smoothed, so the memory size is reduced while the edge information is retained. The following metrics are employed to compare the outputs of the edge-preserving filters.
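The three-level decomposition just described can be sketched as follows. A Gaussian smoother stands in for whichever edge-preserving filter is chosen, and `boost` is a hypothetical detail-enhancement factor; function names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(img, smooth=lambda x: gaussian_filter(x, 2.0),
                         levels=3):
    # Repeatedly smooth the current base layer; each residual becomes
    # one detail layer. Returns (final_base, [detail_1, ..., detail_n]).
    base, details = np.asarray(img, dtype=float), []
    for _ in range(levels):
        smoothed = smooth(base)
        details.append(base - smoothed)
        base = smoothed
    return base, details

def reconstruct(base, details, boost=1.0):
    # Add the (optionally enhanced) detail layers back onto the base.
    return base + boost * np.sum(details, axis=0)
```

With `boost=1.0` the residual structure makes reconstruction exact; values above 1 enhance detail, while compressing only the base layer reduces the dynamic range.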

4.1 Peak Signal-to-Noise Ratio (PSNR)

PSNR is a signal processing metric that measures the quality of a signal by calculating the ratio between the original signal and the noise. It estimates image quality by comparing the modified image with the input image, and it is higher for evenly distributed errors than for sparsely distributed ones. MSE and PSNR are defined as follows:

$$MSE=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}({x}_{ij}-{y}_{ij}{)}^{2}$$
where m and n are the numbers of rows and columns of the image, x_ij is a pixel value from the input image, and y_ij is the corresponding pixel value from the modified image.

$$\mathrm{PSNR}(x,y)=10{\log}_{10}\frac{\left[{\max}\left({\max}(x),{\max}(y)\right)\right]^{2}}{\mathrm{MSE}(x,y)}$$
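The two formulas translate directly, taking the peak as the larger of the two image maxima as defined above:

```python
import numpy as np

def mse(x, y):
    # mean squared error between two same-sized images
    return np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)

def psnr(x, y):
    # Peak signal-to-noise ratio in decibels; the peak is the larger
    # of the two image maxima, per the formula above.
    peak = max(np.max(x), np.max(y))
    return 10.0 * np.log10(peak ** 2 / mse(x, y))
```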

4.2 mPSNR

mPSNR is a metric that computes PSNR specifically for HDR images. It takes the following input arguments: the reference image, the distorted image, the minimum exposure for computing mPSNR, and the maximum exposure for computing mPSNR. It provides the following output arguments: mpsnr, the multiple-exposure PSNR value (higher values mean better quality); eMax, the maximum exposure used; and eMin, the minimum exposure used.
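One common way to compute a multiple-exposure PSNR is to tone-map both HDR images to 8-bit at each exposure and pool the errors. The exact constants used by the toolbox above are not given here, so this is an illustrative sketch assuming non-negative HDR values and a gamma of 2.2:

```python
import numpy as np

def mpsnr(ref, dist, e_min=-2, e_max=2, gamma=2.2):
    # Tone-map both HDR images at each exposure in [e_min, e_max] to
    # 8-bit (scale by 2^e, apply gamma, clamp), accumulate the squared
    # error over all exposures, and report a single PSNR.
    def expose(img, e):
        return np.clip(255.0 * (2.0 ** e * img) ** (1.0 / gamma), 0, 255)
    errs = [(expose(ref, e) - expose(dist, e)) ** 2
            for e in range(e_min, e_max + 1)]
    return 10.0 * np.log10(255.0 ** 2 / np.mean(errs))
```

Averaging over exposures means both shadow and highlight errors contribute, which a single-exposure PSNR would miss.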

4.3 HDR-VDP-3

HDR-VDP-3 computes visually significant differences between an image pair. Its input arguments are: task, the task the metric should predict; test, the image to be tested (e.g., with distortions); reference, the reference image (e.g., without distortions); color encoding, the color representation for both input images; and pixels per degree, the visual resolution of the image. The function returns a structure with the following fields: Pmap, the probability of detection per pixel (matrix, 0 to 1); Pdet, a single-valued probability of detection (scalar, 0 to 1); Cmap, a threshold-normalized contrast map, where Cmax = 1 corresponds to the detection threshold (Pdet = 0.5); Cmax, the maximum threshold-normalized contrast, likewise normalized so that Cmax = 1 corresponds to the detection threshold; Q, the quality correlate, which is 10 for the best quality and decreases with quality (Q can be negative for very large differences); and QJOD, the quality correlate in units of just-objectionable differences, where 1 JOD means that 75% of observers prefer the reference image over the test image in a 2-alternative-forced-choice test (Fig. 6).

Fig. 6

Block diagram: multiscale decomposition of HDR images using the edge-preserving filters

5 Results and Discussion

Multiscale decomposition of the HDR images was performed in MATLAB R2020a using the edge-preserving filters: the bilateral filter, the guided filter, and the WLS filter. The outputs were compared with the PSNR, mPSNR, and HDR-VDP-3 metrics; the results are shown in Tables 3, 4, 5, and 6. The outputs were evaluated on two different images, "smallOffice.hdr" and "SpheronNiceo9E0.hdr". The tables show the PSNR and mPSNR values of the base layer and detail layer compared with the input HDR image, and the further tables list the output arguments of HDR-VDP-3, where preference is given to the image quality. The quality of the detail layer is found to be best with the WLS filter.

Table 3 Base layer 3 of “smallOffice.hdr”
Table 4 Detail layer of “smallOffice.hdr”
Table 5 Base layer 3 of “SpheronNiceo9E0.hdr”
Table 6 Detail layer 3 of “SpheronNiceo9E0.hdr”

6 Conclusion

In this paper, filters used for the multiscale decomposition of an HDR image into base and detail layers were studied. There are two types of filters. (i) Local filtering-based edge-preserving smoothing filters, including anisotropic filters and their variants, bilateral filters and their variants, and the guided filter and its variants. Anisotropic and bilateral filters are slow, and they suffer from the stair-casing effect. Although the guided filter is fast, it suffers from halo effects, like all other local filtering-based edge-preserving smoothing filters. AnisGF brings anisotropy as an added capability to the existing guided filter, but it is still not immune to artifact formation and presents limitations related to density dependence. (ii) Global optimization-based filters such as WLS are best at edge-preserving smoothing, but their drawback is computational cost. WGIF and GDGIF take advantage of both local filtering-based edge-preserving filters and global optimization-based filters: they preserve edges better, and their complexity is the same as that of a Gaussian filter. The edge-aware factor in WGIF is single scale, whereas in GDGIF it is multiscale, so GDGIF is better at preserving the edges of an HDR image. The embedded BF [34] has the advantages of both local and global filters; it preserves HDR image edges better by avoiding halo artifacts and gradient reversals, and it is almost ten times faster than weighted least squares filters. It can therefore be concluded that a filter incorporating features from both local and global smoothing filters, and performing multiscale decomposition, is best suited for HDR image decomposition and tone mapping. Future research should focus on implementing an efficient filter with both local and global optimization features for the multiscale decomposition. The contrast of the base layer can be compressed efficiently with a new compression technique, and the detail layer can be enhanced further to preserve the edge information. A new framework can be designed for the tone mapping process.