Abstract
The contrast of images captured in poor weather (i.e., haze, fog, rain, and clouds) degrades due to the optical and physical properties of light and atmospheric particles. This degradation reduces the performance of computer vision assisted systems. The amount of degradation at a pixel depends upon the depth of the pixel from the camera. Therefore, accurate depth estimation of a pixel is essential to improve the quality of the image. This paper presents a color attenuation prior-based depth approximation model to approximate the depth of a pixel from the camera using a single degraded image. The proposed method is based on the observation that the depth of a pixel from the camera is directly proportional to the difference between the sum of brightness and hue and the saturation. Visual quality and quantitative metrics are used to compare the results of the proposed method with prominent existing methods in the literature.
1 Introduction
The performance of surveillance systems, object recognition in outdoor systems, intelligent transportation systems, and traffic monitoring using outdoor vision systems is severely degraded by poor weather. Computer vision assisted systems work effectively only if the input is noiseless. An image captured in poor weather degrades due to the absorption and scattering of light by particles present in the atmosphere (such as fog, haze, and smoke), which fade the color and contrast of the captured image. The camera receives the sum of reflected light and scattered light as the radiance of a pixel [2]. It is observed that the amount of scattered light is more than the absorbed light [28]. Thus, removal of fog or haze (dehazing) is highly essential. The main task of dehazing is to improve the quality of the degraded image using the atmospheric scattering model [2, 21]. The amount of degradation at a scene point depends upon the depth of the scene point from the camera [2].
Earlier, histogram equalization and contrast stretching were generally used to enhance degraded images. However, these methods focus on the improvement of brightness and contrast, which makes them unable to produce ideal dehazed images [4, 5, 19].
The methods in [2, 22, 23, 29] are based on the atmospheric scattering model and generate quality results. However, their performance depends upon the filtering approach. Due to inappropriate filtering or inexact depth estimation, these methods do not restore degraded edges.
It is observed that the depth of a pixel from the camera is directly proportional to the difference between the sum of brightness and hue and the saturation. Therefore, an enhanced model to approximate the depth of each pixel is proposed. Original edges are preserved, and degraded edges are restored, by the proposed method. The quality of the results obtained by the proposed method is validated qualitatively and quantitatively.
Related work is presented in Sect. 2. The atmospheric scattering model and the problem formulation are introduced in Sect. 3. The mathematical foundation of the proposed work is discussed in Sect. 4, and the proposed work is modeled mathematically in Sect. 5. The process to restore the original clear day image from a hazy image is given in Sect. 6. Result analysis based on visual quality and quantitative metrics is discussed in Sect. 7, and the conclusion is presented in Sect. 8.
2 Related Work
The concentration of haze changes with the unknown depth of each scene point, which makes dehazing a challenging task. The objective of dehazing is to recover the unknown scene depth using the degraded image. Dehazing methods are mainly classified into three categories based on the type of input: (1) extra information based [14, 20], (2) multiple images based [11,12,13, 17, 18], and (3) single image based [1, 2, 21, 22, 29].
Extra information-based methods need additional input, such as depth cues obtained through different camera positions [14, 20]. Due to this extra requirement, these methods are not suitable for real-time applications. Multiple image-based methods [17, 18] require more than one image of a scene, taken at varying degrees of polarization. However, these methods need extra hardware, which increases the hardware cost and other expenses.
Thus, single image-based methods [1, 2, 21, 22, 29] have been proposed, which solve the problem of dehazing by imposing various constraints. The performance of these methods depends upon strong assumptions and priors. In [21], a method to maximize local contrast is proposed, based on the observation that haze-free images have more contrast than hazy images. However, this method produces blocky artifacts due to window-based filtering. In [1], it is assumed that transmission and local surface shading are not correlated. However, the method in [1] performs poorly in dense haze.
A most prominent work is proposed in [2], where it was observed that one of the color channels of outdoor haze-free images has very low intensity in non-sky regions; this observation is used to estimate the transmission. However, this method performs incorrectly in the presence of a sky region or an object brighter than the atmospheric light.
A faster method is presented in [29], which estimates the scene depth to recover the scene transmission. This method is fast and handles sky regions to a certain extent. However, it loses a few edges and does not recover degraded edges in certain hazy conditions due to inaccurately estimated depth.
3 Problem Formulation
The color and contrast of images change due to the scattering, by atmospheric particles, of the light reflected from a scene point. The type, size, orientation, and distribution of the particles decide the severity of scattering [10]. Figure 1 shows the process of image formation in an outdoor environment. A light beam reflected from the surface of an object is attenuated due to atmospheric scattering. The camera receives a fraction of the non-attenuated light (direct attenuation) plus the attenuated light (airlight), which is described by the atmospheric scattering model [10, 13]. The mathematical expression of the atmospheric scattering model is given in Eq. 1:

\(I_2(y) = I_1(y)\,\text{Tr}(y) + \text{Ar}\,(1 - \text{Tr}(y))\)    (1)
where y is the position (usually the coordinate) of a scene point, the intensity of the degraded image at position y is \(I_2(y)\), the intensity of the original clear day image at position y is \(I_1(y)\), \(\text{Ar}\) is the atmospheric light, and the transmission at position y is represented by \(\text{Tr}(y)\) and given by Eq. 2:

\(\text{Tr}(y) = e^{-\gamma\,\text{Dep}(y)}\)    (2)
where \(\gamma\) is the scattering coefficient and \(\text{Dep}(y)\) is the depth of the pixel at position y from the camera. If the atmosphere contains homogeneous particles of small size, then \(\gamma\) and \(\text{Ar}\) will be constant. The approximation of \(I_1(y)\), \(\text{Tr}(y)\), and \(\text{Ar}\) from the single degraded image \(I_2(y)\) is the main goal of dehazing. If \(\text{Dep}(y)\) is known, then \(\text{Tr}(y)\) can be obtained using Eq. 2, and the obtained \(\text{Tr}(y)\) can recover the image \(I_1(y)\) using Eq. 1.
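The model of Eqs. 1 and 2 can be sketched in a few lines of NumPy (an illustrative round trip on synthetic data; the image, depth map, and parameter values are invented for demonstration):

```python
import numpy as np

def transmission(depth, gamma=1.0):
    """Eq. 2: Tr(y) = exp(-gamma * Dep(y))."""
    return np.exp(-gamma * depth)

def degrade(clear, depth, airlight=0.8, gamma=1.0):
    """Eq. 1: I2 = I1 * Tr + Ar * (1 - Tr)."""
    tr = transmission(depth, gamma)[..., None]  # broadcast over color channels
    return clear * tr + airlight * (1.0 - tr)

def restore(hazy, depth, airlight=0.8, gamma=1.0, t_min=0.1):
    """Invert Eq. 1: I1 = (I2 - Ar) / max(Tr, t_min) + Ar."""
    tr = np.maximum(transmission(depth, gamma), t_min)[..., None]
    return (hazy - airlight) / tr + airlight

# Round trip on a synthetic scene: with the depth known, restoration
# recovers the clear image.
rng = np.random.default_rng(0)
clear = rng.uniform(0.0, 1.0, size=(4, 4, 3))
depth = rng.uniform(0.0, 1.0, size=(4, 4))
hazy = degrade(clear, depth)
recovered = restore(hazy, depth)
print(np.allclose(recovered, clear))
```

With the depth known, inverting Eq. 1 recovers the clear image exactly; dehazing is hard precisely because \(\text{Dep}(y)\) must be estimated from \(I_2(y)\) alone.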
4 Mathematical Foundation of the Proposed Model
A single image does not contain information about the depth of a scene point. Thus, dehazing is challenging due to the unknown depth, and depth recovery is essential to restore the clear day image. The proposed work is influenced by the color attenuation prior [29] to approximate depth. According to [29], the depth of a pixel increases with the increased difference between brightness and saturation at the same pixel. The results produced by [29] are good; however, the method loses existing edges due to a lack of accuracy in depth estimation. This inspired the proposed work to introduce an enhanced depth approximation model. The method in [29] works in the HSV color space, which is described by the following transformation equations [7].
where \([r^{\prime}(y),g^{\prime}(y),b^{\prime}(y)]\) are the normalized triplets representing the r, g, b color intensities of the input image at location y, \(\text{MAX}(y)\) and \(\text{MIN}(y)\) are their maximum and minimum, and \(\delta(y)=\text{MAX}(y)-\text{MIN}(y)\). The saturation, brightness, and hue of the input image at location y are represented by S(y), V(y), and H(y), respectively.
Equation 1 shows that the original clear day image \(I_1(y)\) is degraded due to (1) the multiplicative reduced transmission \(\text{Tr}(y)\) and (2) the additive airlight \(\text{Ar}(1-\text{Tr}(y))\). The transmission \(\text{Tr}(y)\) depends upon the depth \(\text{Dep}(y)\) of each scene point; thus, the degradation increases with depth. Therefore, the additive airlight increases with depth, due to which \(\text{MAX}(y)\) and \(\text{MIN}(y)\) approach the airlight. At a long distance, the transmission will be zero and the airlight will be maximum; therefore, \(\text{MAX}(y)\) and \(\text{MIN}(y)\) will be almost the same. Thus, \(\delta(y)\) decreases with depth and can be expressed as a function of the depth \(\text{Dep}(y)\), as shown in Fig. 2.
Figure 2 shows \(\delta(y)\) as a function of the depth \(\text{Dep}(y)\). Two solid lines represent the values of \(\text{MIN}(y)\) and \(\text{MAX}(y)\) at varying levels of depth \(\text{Dep}(y)\), and their difference \(\delta(y)\) is represented by dashed lines. It can be observed that increased airlight causes the \(\text{MAX}(y)\) and \(\text{MIN}(y)\) intensities to increase. However, at a very long distance, \(\text{MAX}(y)\) and \(\text{MIN}(y)\) will be almost equal due to the reduced transmission and increased airlight. Thus, \(\delta(y)\) decreases with depth, which causes the saturation S(y) to decrease and the brightness V(y) to increase. Therefore, [29] considered the difference of brightness and saturation as a function of depth.
However, it can be observed from Eq. 3 that hue H(y) increases with depth due to reduction in \(\delta (y)\). This implies that hue H(y) and brightness V(y) are positively correlated with depth while saturation S(y) is negatively correlated.
Consider objects that are neither too close to nor too far from the camera. For these objects, the maximum value of \(V(y)-S(y)\) will be one, which does not truly estimate the depth in [29]. Thus, the difference of brightness and saturation alone is unable to estimate the true depth. However, the value of \(H(y)+V(y)-S(y)\) will be larger for the same objects. Therefore, the model in [29] and the proposed model are combined to estimate the scene depth more accurately. This combination is best represented by a linear combination of hue, brightness, and saturation as:
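The observation above can be sketched as a raw per-pixel depth proxy (a minimal illustration; the HSV conversion is hand-rolled, and the unit-coefficient combination \(H(y)+V(y)-S(y)\) stands in for the fitted linear model of Sect. 5):

```python
import numpy as np

def rgb_to_hsv(img):
    """Minimal HSV conversion for an RGB image normalized to [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    delta = mx - mn
    # Hue in [0, 1): piecewise on which channel holds the maximum.
    h = np.zeros_like(mx)
    safe = delta > 0
    idx = safe & (mx == r)
    h[idx] = ((g - b)[idx] / delta[idx]) % 6
    idx = safe & (mx == g) & (mx != r)
    h[idx] = (b - r)[idx] / delta[idx] + 2
    idx = safe & (mx == b) & (mx != r) & (mx != g)
    h[idx] = (r - g)[idx] / delta[idx] + 4
    h /= 6.0
    s = np.where(mx > 0, delta / np.where(mx > 0, mx, 1), 0.0)
    v = mx
    return h, s, v

def raw_depth_proxy(img):
    """Depth proxy: hue + brightness - saturation (unit coefficients)."""
    h, s, v = rgb_to_hsv(img)
    return h + v - s

# A washed-out (hazy-looking) pixel scores deeper than a vivid one.
vivid = np.array([[[0.9, 0.2, 0.1]]])    # saturated: reads as close
washed = np.array([[[0.8, 0.75, 0.7]]])  # bright, desaturated: reads as deep
print(raw_depth_proxy(vivid), raw_depth_proxy(washed))
```

The bright, desaturated pixel receives the larger proxy value, matching the intuition that airlight pushes distant pixels toward low saturation and high brightness.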
5 Mathematical Modeling
The mathematical model of the proposed enhanced depth approximation model is described as:

\(D(y) = \frac{1}{\alpha}\left(c_1 + c_2 B(y) + c_3 H(y) - c_4 S(y)\right) + \epsilon(y)\)    (6)

where D(y) is the depth, B(y) is the brightness, H(y) is the hue, S(y) is the saturation, \(c_1, c_2, c_3\), and \(c_4\) are linear coefficients, \(\alpha\) is a normalization constant, and \(\epsilon(y)\) is a random image representing the random error of the model.
5.1 Computation of Linear Coefficients
The linear coefficients are computed with the objective of improving the structural similarity index (ssim), which measures the combined effect of the contrast, structure, and luminance of two images [16, 26]:

\(\text{ssim}(j,k) = l(j,k)\, c(j,k)\, s(j,k)\)    (7)

where l(j, k) is the luminance, c(j, k) is the contrast, and s(j, k) defines the structure of the two images j and k. The values of l(j, k), c(j, k), and s(j, k) are computed as:

\(l(j,k) = \frac{2\mu_j \mu_k}{\mu_j^2 + \mu_k^2}, \quad c(j,k) = \frac{2\sigma_j \sigma_k}{\sigma_j^2 + \sigma_k^2}, \quad s(j,k) = \frac{\sigma_{jk}}{\sigma_j \sigma_k}\)    (8)
where \(\mu_j, \mu_k\) are the local means of images j and k, respectively, \(\sigma_j, \sigma_k\) are the standard deviations of images j and k, respectively, and \(\sigma_{jk}\) is their cross covariance. The value \(ssim(j,k)=1\) is possible if and only if \(l(j,k)=1\), \(c(j,k)=1\), and \(s(j,k)=1\); thus, \(l(j,k)=1\), \(c(j,k)=1\), and \(s(j,k)=1\) are equated.
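The index described above can be sketched as follows (a single-window illustration; production implementations average SSIM over local windows, and the small stabilizing constants here follow common practice rather than the paper):

```python
import numpy as np

def ssim_global(j, k, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM: product of luminance, contrast, structure terms."""
    c1 = (k1 * data_range) ** 2  # stabilizers, common-practice values
    c2 = (k2 * data_range) ** 2
    mu_j, mu_k = j.mean(), k.mean()
    var_j, var_k = j.var(), k.var()
    cov_jk = ((j - mu_j) * (k - mu_k)).mean()
    lum = (2 * mu_j * mu_k + c1) / (mu_j**2 + mu_k**2 + c1)
    con = (2 * np.sqrt(var_j) * np.sqrt(var_k) + c2) / (var_j + var_k + c2)
    struct = (cov_jk + c2 / 2) / (np.sqrt(var_j) * np.sqrt(var_k) + c2 / 2)
    return lum * con * struct

rng = np.random.default_rng(1)
img = rng.uniform(size=(32, 32))
print(ssim_global(img, img))        # identical images: index is 1
print(ssim_global(img, 1.0 - img))  # inverted image: structure term is negative
```

Identical images give all three terms equal to one, which is exactly the condition equated above to derive the least-squares objective.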
It can be inferred from Eq. 9 that the ssim between two images will be high if the squared difference between those images is low. Thus, the linear coefficients are computed such that the squared difference (error) is minimized; ordinary least squares (OLS) estimation is used to compute them. The generalized regression for Eq. 6 can be expressed as:

\(D_i(y) = \frac{c_1}{\alpha} + \frac{c_2}{\alpha} B_i(y) + \frac{c_3}{\alpha} H_i(y) - \frac{c_4}{\alpha} S_i(y) + \epsilon_i(y)\)    (10)
where \(D_i(y)\) is the dependent variable of the regression and represents the random depth of the ith sample. The brightness, hue, and saturation of the ith sample are represented by \(B_i(y), H_i(y)\), and \(S_i(y)\), respectively. The random error of the ith sample is \(\epsilon_i(y)\).
It is assumed that the random error \(\epsilon_i(y)\) follows a normal distribution with variance \(\sigma^2\). If \(\frac{c_1}{\alpha}\), \(\frac{c_2}{\alpha}\), \(\frac{c_3}{\alpha}\), and \(\frac{c_4}{\alpha}\) are replaced by \(\beta_0, \beta_1, \beta_2\), and \(\beta_3\), respectively, in Eq. 10, then

\(D_i(y) = \beta_0 + \beta_1 B_i(y) + \beta_2 H_i(y) - \beta_3 S_i(y) + \epsilon_i(y)\)    (11)
where \(\beta_i \le 1\) for \(i = 0, 1, 2, 3\). Using Eq. 11, the sum of squared errors s is given by:

\(s = \sum_{i=1}^{\text{num}} \left(D_i(y) - \beta_0 - \beta_1 B_i(y) - \beta_2 H_i(y) + \beta_3 S_i(y)\right)^2\)    (12)
where \(\text{num}\) is the number of samples used in OLS. Equations 13–16 are derived by equating the partial derivatives of Eq. 12 to zero (i.e., \(\frac{\partial s}{\partial \beta_0}=0\), \(\frac{\partial s}{\partial \beta_1}=0\), \(\frac{\partial s}{\partial \beta_2}=0\), and \(\frac{\partial s}{\partial \beta_3}=0\)).
The solution of Eqs. 13, 14, 15, and 16 gives the values of \(\beta_0, \beta_1, \beta_2\), and \(\beta_3\). The linear coefficients \(c_1, c_2, c_3\), and \(c_4\) can then be obtained if \(\alpha\) is known. As discussed, \(\alpha\) normalizes the depth; thus, it is obtained as:

\(\alpha = d_{\text{max}}\)    (17)
where \(d_{\text {max}}\) is maximum of scene depth D(y).
5.2 Data Preparation for Regression Analysis Using Ordinary Least Square Estimation Method
The ground truth of depth is unavailable due to a natural constraint (the depth of an object in a scene may change over time). Thus, 200 outdoor images (mountains, animals, trees, etc.) captured in fine weather are used to prepare the sample space. For each sample image, the depth \(D_i(y)\) is generated randomly using a Gaussian distribution with parameters \(\mu=0, \sigma^2=0.5\). Experiments were conducted to select a value of \(\sigma^2\) that brings proper diversity to the depth map; \(\sigma^2=0.5\) was found sufficient. The atmospheric light Ar is drawn randomly from a standard uniform distribution, and a hazy image for each clear day image is prepared using Eqs. 1 and 2. The curve of Eq. 11 is fitted on this sample space using Eqs. 13, 14, 15, and 16, which gives the values of the linear coefficients (\(c_1=0.0122\), \(c_2=0.9592\), \(c_3=0.9839\), and \(c_4=0.7743\)).
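The regression step can be sketched with synthetic data (a hypothetical setup: random features and a known linear model stand in for the real sample space; `numpy.linalg.lstsq` solves the same normal equations as Eqs. 13–16, and the sign convention on saturation is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
num = 200  # number of synthetic samples, mirroring the 200 sample images

# Hypothetical per-sample features and an invented "true" linear model.
B = rng.uniform(size=num)   # brightness
H = rng.uniform(size=num)   # hue
S = rng.uniform(size=num)   # saturation
true_beta = np.array([0.01, 0.96, 0.98, -0.77])  # intercept, B, H, S weights
X = np.column_stack([np.ones(num), B, H, S])
D = X @ true_beta + rng.normal(0.0, 0.05, size=num)  # noisy depth targets

# OLS: minimizing the sum of squared errors (Eq. 12) yields the normal
# equations; lstsq solves them directly.
beta_hat, *_ = np.linalg.lstsq(X, D, rcond=None)
print(beta_hat)  # close to true_beta
```

With enough samples relative to the noise level, the recovered coefficients converge to the generating ones, which is why the fitted \(c_1\)–\(c_4\) above can be treated as stable constants.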
6 Properties of the Enhanced Depth Approximation Model and Scene Restoration
6.1 Edge Preserving Property
Existing edges are preserved, and degraded edges are recovered, by the enhanced depth approximation model. The gradient of Eq. 6 is given in Eq. 18 [9]:

\(\nabla D(y) = \frac{1}{\alpha}\left(c_2 \nabla B(y) + c_3 \nabla H(y) - c_4 \nabla S(y)\right)\)    (18)

where \(\nabla \epsilon = 0\) according to the principle of OLS. Equation 18 proves that the gradient of D(y) depends upon the gradients of B(y), H(y), and S(y), which indicates the presence of an edge in D(y) if and only if there is an edge in B(y), H(y), or S(y). Figure 3 shows the effect of hue on the color attenuation prior. It can be noticed from Fig. 3f, g that the edges preserved by the proposed model are more accurate. The peak signal-to-noise ratio (psnr) obtained by the proposed model confirms its accuracy.
6.2 White Regions Handling
Due to the additive airlight, the amount of whiteness in a degraded image increases. Thus, differentiating real white objects from atmospheric light becomes difficult, and the proposed method may approximate a wrong transmission in the presence of white objects in the scene, since white objects result in increased brightness, low saturation, and moderate hue.
In [2, 29], the problem of white regions is solved by assuming that pixels are locally at the same depth, and a minimum filter is used to refine the depth map locally. However, the minimum filter results in a loss of existing edges, as shown in Fig. 3f. Therefore, a median filter is used by the proposed method, which mitigates the problem of white objects while preserving existing edges. Thus, the refined transmission is expressed as:

\(T_r(y) = \underset{x \in \omega_{r}(y)}{\text{median}}\; \text{Tr}(x)\)    (19)
where \(T_r(y)\) is the refined transmission, \(\omega_{r}(y)\) is a window of size \(r \times r\) centered at y, and \(\text{Tr}(x)\) is the approximated transmission. To reduce the blocking artifacts introduced by the window-based operation, the refined depth map is further smoothed using the guided filter [3].
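The refinement of Eq. 19 can be sketched as a sliding-window median (a minimal implementation; in practice `scipy.ndimage.median_filter` and the subsequent guided-filter smoothing would be used):

```python
import numpy as np

def median_refine(tr, radius=1):
    """Refine a transmission map with a (2*radius+1)^2 median window."""
    pad = np.pad(tr, radius, mode='edge')  # replicate borders
    out = np.empty_like(tr)
    rows, cols = tr.shape
    for i in range(rows):
        for j in range(cols):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.median(window)
    return out

# A lone white-object outlier in the transmission map is suppressed,
# while the surrounding smooth region is left untouched.
tr = np.full((5, 5), 0.4)
tr[2, 2] = 0.05  # white object misread as very distant (low transmission)
refined = median_refine(tr)
print(refined[2, 2])  # replaced by the neighborhood median
```

Unlike a minimum filter, the median replaces isolated outliers with the local majority value instead of propagating extreme values, which is why it disturbs genuine edges less.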
6.3 Restoration of \(I_1(y)\)
The atmospheric light \(A^c\) in each color channel c is estimated using the method of [2], where \(c \in \{R, G, B\}\). The proposed method then takes the minimum of the atmospheric light over the color channels as the global atmospheric light, which is given by Eq. 20:

\(\text{Ar} = \min(A^R, A^G, A^B)\)    (20)
where min is the function computing the minimum of the given values and Ar is the global atmospheric light. Equation 6 is used to approximate the scene depth D(y); the transmission is obtained using Eq. 2 and refined using Eq. 19. Once Ar and Tr(y) are obtained, Eq. 1 can be used to restore the image \(I_1(y)\). The value of \(\gamma\) is critical in the restoration of \(I_1(y)\): a low value of \(\gamma\) leaves residual haze, while a high value increases the dehazing level, which darkens the dehazed image. Thus, a proper value of \(\gamma\) is vital for restoration. The proposed work considers \(\gamma=1\).
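The restoration steps of this section can be chained into a toy pipeline (illustrative only: a crude brightness-minus-saturation depth proxy stands in for the fitted model of Sect. 5, the airlight estimate of [2] is replaced by a simple per-channel maximum, and all parameter values are assumptions):

```python
import numpy as np

def dehaze(hazy, gamma=1.0, t_min=0.1):
    """Toy pipeline: depth proxy -> transmission (Eq. 2) -> invert Eq. 1."""
    mx = hazy.max(axis=-1)
    mn = hazy.min(axis=-1)
    delta = mx - mn
    # Crude V - S style proxy (hue omitted for brevity): bright,
    # desaturated pixels read as deep.
    sat = np.where(mx > 0, delta / np.where(mx > 0, mx, 1), 0.0)
    depth = np.clip(mx - sat, 0.0, 1.0)
    tr = np.maximum(np.exp(-gamma * depth), t_min)
    # Global airlight: minimum over channels of each channel's maximum,
    # in the spirit of Eq. 20.
    ar = hazy.reshape(-1, 3).max(axis=0).min()
    restored = (hazy - ar) / tr[..., None] + ar
    return np.clip(restored, 0.0, 1.0)

rng = np.random.default_rng(3)
hazy = 0.5 * rng.uniform(size=(8, 8, 3)) + 0.5  # washed-out synthetic input
restored = dehaze(hazy)
# Dehazing should spread intensities out, i.e., increase contrast.
print(restored.std() > hazy.std())
```

Dividing the deviation from the airlight by a transmission below one amplifies it, which is exactly the contrast recovery that Eq. 1 inverts; the darkening effect of large \(\gamma\) discussed above follows from the same division.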
7 Experimental Analysis
MATLAB R2014a is used to implement the proposed method. The Waterloo IVC dehazed image data set (WID) [6], which consists of 25 hazy images of outdoor scenes, is used to verify the effectiveness of the proposed method. The results obtained by the proposed method are compared with the methods in [2, 8, 22, 23, 25, 29].
7.1 Qualitative Evaluation
Figure 4 shows a comparison of the results based on visual quality. Figure 4b shows that the method of [2] generates artifacts near depth discontinuities and color distortion in the sky region. Figure 4c shows the results obtained by [23]; these results are dark due to overestimation of the transmission. The results obtained by [8] are promising, as shown in Fig. 4d; however, the method of [8] is time consuming due to regularization. The method of [22] obtains better results, shown in Fig. 4e, but produces wrong results in dense haze. The method of [29] is fast; however, it wrongly estimates depth in the presence of white objects, as shown in Fig. 4f. The method in [25] is fast in comparison with the other methods, but its results are not visually pleasant, as shown in Fig. 4g. The proposed method obtains better visual results, as shown in Fig. 4h; it restores natural colors in the sky region as well as in non-sky regions.
Furthermore, Fig. 5 shows the effect of varying \(\gamma\) on the restored images. The image shown in Fig. 5a is restored with \(\gamma = 1, 1.2\), and 1.5 using the proposed method; the restored images are shown in Fig. 5b–d. It can be observed that the darkness of the restored images increases with the value of \(\gamma\). Thus, an adaptive \(\gamma\) is essential for accurate dehazing.
7.2 Quantitative Evaluation
The proposed method is validated using quantitative metrics that measure its strength on the basis of restored edges, structure, and texture. The metrics e and \(\overline{r}\), which quantify the strength of a method to restore and preserve edges [24, 27], are computed on the WID data set. Increasing values of e and \(\overline{r}\) indicate improved quality of the results.
The obtained values of the metrics e and \(\overline{r}\) for the images shown in Fig. 6 are given in Tables 1 and 2. On the basis of these values, the proposed method performs well in comparison with the methods in [2, 8, 22, 23, 29].
8 Conclusions
An enhanced depth approximation model has been proposed, based on the observation that the depth of a pixel from the camera is directly proportional to the difference between the sum of brightness and hue and the saturation. The transmission obtained by the proposed method is further refined using local median filtering, which helps preserve existing edges. The accuracy of the proposed model has been demonstrated on the basis of visual quality and quantitative metrics. However, the proposed method assumes homogeneous scattering of light; this issue will be part of future work.
References
Fattal R (2008) Single image dehazing. In: Proceedings of ACM SIGGRAPH, pp 72:1–72:9
He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 33(12):2341–2353
He K, Sun J, Tang X (2012) Guided image filtering. IEEE Transaction on Pattern Analysis and Machine Intelligence 35(6):1397–1409
Kim JY, Kim LS, Hwang SH (2001) An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Trans Circ Syst Video Technol 11(4):475–484
Kim TK, Paik JK, Kang BS (1998) Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans Consum Electron 44(1):82–87
Ma K, Liu W, Wang Z (2015) Perceptual evaluation of single image dehazing algorithms. In: IEEE International Conference on Image Processing, Quebec City, QC, Canada, pp 3600–3604, Sept 2015. https://doi.org/10.1109/ICIP.2015.7351475
MathWorks T, Color space conversion (2017), https://in.mathworks.com/help/vision/ref/colorspaceconversion.html
Meng G, Wang Y, Duan J, Xiang S, Pan C (2013) Efficient image dehazing with boundary constraint and contextual regularization. In: IEEE international conference on computer vision, pp 617–624
Mi Z, Zhou H, Zheng Y, Wang M (2016) Single image dehazing via multi-scale gradient domain contrast enhancement. IET Image Process 10(3):206–214. https://doi.org/10.1049/iet-ipr.2015.0112
Narasimhan SG (2004) Models and algorithms for vision through the atmosphere. Ph.D. thesis, New York, NY, USA
Narasimhan SG, Nayar SK (2000) Chromatic framework for vision in bad weather. In: IEEE conference on computer vision and pattern recognition, vol 1, pp 598–605
Narasimhan SG, Nayar SK (2003) Contrast restoration of weather degraded images. IEEE Trans Pattern Anal Mach Intell 25(6):713–724
Nayar SK, Narasimhan SG (1999) Vision in bad weather. IEEE Conf Compu Vis 2:820–827
Nayar SK, Narasimhan SG (2003) Interactive deweathering of an image using physical models. In: IEEE workshop on color and photometric methods in computer vision in conjunction with IEEE conference on computer vision, Oct 2003
Raikwar SC, Tapaswi S (2017) An improved linear depth model for single image fog removal. Multimedia Tools Appl 77(15):19719–19744
Raikwar SC, Tapaswi S (2018) Tight lower bound on transmission for single image dehazing. The Visual Computer. https://doi.org/10.1007/s00371-018-1596-5
Schechner YY, Narasimhan SG, Nayar SK (2001) Instant dehazing of images using polarization. IEEE Conf Comput Vis Pattern Recogn 1:325–332
Shwartz S, Namer E, Schechner YY (2006) Blind haze separation. IEEE Conf Comput Vis Pattern Recogn 2:1984–1991
Stark JA (2000) Adaptive image contrast enhancement using generalizations of histogram equalization. IEEE Trans Image Process 9(5):889–896
Tan K, Oakley JP (2000) Enhancement of color images in poor visibility conditions. IEEE Conf Image Process 2:788–791
Tan R (2008) Visibility in bad weather from a single image. In: IEEE conference on computer vision and pattern recognition, pp 24–26
Tang K, Yang J, Wang J (2014) Investigating haze-relevant features in a learning framework for image dehazing. In: IEEE international conference on computer vision and pattern recognition, pp. 2995–3002
Tarel JP, Hautière N (2009) Fast visibility restoration from a single color or gray level image. In: Proceedings of IEEE international conference on computer vision. pp 2201–2208, Sept 2009
Wang R, Li R, Sun H (2016) Haze removal based on multiple scattering model with superpixel algorithm. J Signal Process 127(C):24–36
Wang W, Yuan X, Wu X, Liu Y (2017) Fast image dehazing method based on linear transformation. IEEE Trans Multimedia 19(6):1142–1155. https://doi.org/10.1109/TMM.2017.2652069
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
Xu Y, Wen J, Fei L, Zhang Z (2015) Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access 4:165–188
Zhang YQ, Ding Y, Xiao JS, Liu J, Guo Z (2012) Visibility enhancement using an image filtering approach. EURASIP J Adv Signal Process 2012(1):220–225
Zhu Q, Mai J, Shao L (2015) A fast single image haze removal algorithm using color attenuation prior. IEEE Trans Image Process 24(11):3522–3533
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Raikwar, S., Tapaswi, S. (2021). An Enhanced Depth Approximation Model for Haze Removal Using Single Image. In: Favorskaya, M.N., Mekhilef, S., Pandey, R.K., Singh, N. (eds) Innovations in Electrical and Electronic Engineering. Lecture Notes in Electrical Engineering, vol 661. Springer, Singapore. https://doi.org/10.1007/978-981-15-4692-1_52
Print ISBN: 978-981-15-4691-4
Online ISBN: 978-981-15-4692-1