1 Introduction

The computer vision industry is growing rapidly, with fields such as autonomous driving, aerospace photography, and urban planning relying heavily on computer vision technologies. Computer vision systems aim to replicate the visual capabilities of the human eye by perceiving images captured by photographic equipment, so the quality of the acquired images directly impacts the effectiveness of computer vision applications. Image quality is influenced by both the imaging equipment and the external environment. While controlling the accuracy of imaging equipment is relatively straightforward, the frequent occurrence of hazy weather poses a significant challenge. Haze degrades image details and destroys valuable information, severely impacting image acquisition and quality, which in turn affects the accuracy of image processing and the extraction of essential image information.

Image defogging methods can generally be categorized into two main approaches. The first is image defogging based on a physical model. This approach relies on the physical model of foggy-image formation and incorporates certain prior estimates to achieve the desired defogging effect. However, because real scenes are complex and varied, the effectiveness of the prior information is often limited, which poses challenges for physical model-based defogging. For instance, Tan proposed a defogging method that expands the local contrast of the recovered image, but it is prone to oversaturation [1]. Fattal introduced a technique that converts the image from RGB space to YUV space for processing, effectively eliminating light scattering and enhancing image contrast; however, this algorithm is less effective for images with dense fog [2]. He proposed a defogging model based on the dark channel prior (DCP), but the Laplace operator used in this approach can be inefficient and may distort the sky region [3]. The second approach is image defogging based on image enhancement. This approach modifies the color, brightness, or saturation of the image to enhance its visual effect and to reduce or eliminate color shifts and distortions, thereby achieving defogging. However, it often neglects the physics of foggy-image formation, leading to potential loss of image details and less-than-ideal defogging outcomes. Common enhancement-based defogging methods include histogram equalization, the Retinex algorithm, wavelet algorithms, and homomorphic filtering. For instance, Chen utilized homomorphic filtering for image contrast enhancement, which effectively achieved defogging; however, the global nature of this processing can cause the loss of fine details [4].
Jiang combined multi-scale Retinex (MSR) and homomorphic filtering to effectively process images containing thin clouds and dense fog. Nevertheless, this algorithm is relatively complex [5]. Fan combined wavelet and homomorphic filtering, using homomorphic filtering to process wavelet transform coefficients, enabling fast processing of fog-containing images [6]. Ma adopted the Retinex model to achieve fast image defogging with promising results [7]. Zhang combined guided filtering and Retinex to develop a new algorithm that can better process image brightness information, thereby eliminating the influence of halos and noise on the image [8]. Sang compared image enhancement methods with physical model-based image restoration methods and found that while histogram equalization offers fast imaging quality, its performance is inferior to image restoration methods [9].

To leverage the advantages of both physical model and image enhancement, some researchers have investigated hybrid image defogging methods that integrate both approaches [10,11,12,13,14,15,16,17,18,19,20]. For instance, Yang employed homomorphic filtering to enhance the minimum color component of the image and optimized the transmittance map in the DCP algorithm using the minimum color component as a guide image. This approach effectively addressed the slow defogging speed and significant color bias in the sky region encountered in dark channel algorithm [15]. Li combined homomorphic filtering with an improved DCP algorithm, utilizing homomorphic filtering to obtain a uniformly distributed haze image, followed by fog removal using the improved DCP algorithm, which proved effective for processing dense fog images [16]. Chen combined homomorphic filtering with restricted contrast adaptive histogram equalization based on DCP, resulting in improved fog removal performance, particularly for nighttime fog scenarios [17]. Huang utilized the histogram of the haze image to adjust the color scale of the image, enhancing the output quality of defogging images based on DCP and significantly improving the visual appearance [18]. Yu employed the color line model and homomorphic filtering to enhance the clarity of underwater images, which shares similarities with image defogging by utilizing the physical model and homomorphic filtering to enhance image quality [19]. Zhang used homomorphic filtering to enhance the image with uneven illumination before obtaining the dark channel image, and combined it with the DCP algorithm for image defogging. This approach effectively improved the clarity of defogged images and mitigated blurring issues [20].

Homomorphic filtering is widely used, but we found that its application involves empirically determining several parameters, and selecting appropriate empirical values is often difficult, leading to uncertainty about the image quality after defogging. Ad hoc tests for parameter selection also seriously affect the timeliness of image-defogging processing [21, 22]. Moreover, there is no effective control over the quality of the output image. Therefore, this paper proposes an optimal selection method for the key parameters of homomorphic filtering based on information entropy. The method establishes an information entropy-based model of the relationship between the slope sharpening and cutoff frequency parameters (the C-D0 information entropy model). This model utilizes a priori image information and employs the least squares method to rapidly determine the optimized parameters C and D0, ensuring that the information entropy of the defogged image remains at a high level.

2 Image defogging method based on homomorphic filtering

Image defogging using homomorphic filtering is an image enhancement technique that aims to improve contrast and adjust color to achieve a visual defogging effect. The imaging process of an object can be understood as the combined impact of the irradiation component and the reflection component. The irradiation component represents the total amount of light incident on the observed scene from the light source, while the reflection component refers to the total amount of light reflected by the objects within the scene [23]. The relationship can be expressed as:

$$f(x,y)=i(x,y)\cdot r(x,y)$$
(1)

Where \(i(x,y)\) denotes the irradiation component and \(r(x,y)\) denotes the reflection component. Generally, \(0<i(x,y)<\infty\), and \(0<r(x,y)<1\).

Taking the logarithm of Eq. (1) separates the irradiation and reflection components, after which the Fourier transform is applied. Multiplying by the homomorphic filtering transfer function \(H(u,v)\), we obtain the following expression:

$$H(u,v)F(u,v)=H(u,v)I(u,v)+H(u,v)R(u,v)$$
(2)

Where \(F(u,v)\), \(I(u,v)\) and \(R(u,v)\) are the Fourier transforms of \(\ln f(x,y)\), \(\ln i(x,y)\) and \(\ln r(x,y)\), respectively; the three terms in Eq. (2), from left to right, can be denoted \(G(u,v)\), \({G_I}(u,v)\) and \({G_R}(u,v)\).

The homomorphic filtering transfer function in Eq. (2) is as follows:

$$H(u,v)=({\gamma _H} - {\gamma _L})\left[ {1 - {e^{ - C \cdot \frac{{{D^2}(u,v)}}{{D_{0}^{2}}}}}} \right]+{\gamma _L}$$
(3)

Where \({\gamma _H}\) and \({\gamma _L}\) are the high-frequency and low-frequency gains, respectively; generally, \({\gamma _H}>1\) and \({\gamma _L}<1\). C is the slope sharpening control parameter, and D0 is the cutoff frequency.

The Fourier inverse transformation of Eq. (2) is applied, and exponential operations are performed on both sides of the equation simultaneously, resulting in the generation of the final output image:

$${g_0}(x,y)=\exp (g(x,y))=\exp [{g_i}(x,y)+{g_r}(x,y)]={i_0}(x,y) \cdot {r_0}(x,y)$$
(4)

Where \({g_0}(x,y)\) is the output defogged image; \({i_0}(x,y)\) is the processed irradiation component; \({r_0}(x,y)\) is the processed reflection component; and \(g(x,y)\), \({g_i}(x,y)\), \({g_r}(x,y)\) are the inverse Fourier transforms of \(G(u,v)\), \({G_I}(u,v)\), \({G_R}(u,v)\), respectively.

The homomorphic filtering process is shown in Fig. 1.

Fig. 1

Process of homomorphic filtering
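As a concrete illustration of the process in Fig. 1, the filtering chain of Eqs. (1)-(4) can be sketched in NumPy as follows. The function name and parameter defaults are illustrative choices, not values from the paper:

```python
import numpy as np

def homomorphic_filter(img, gamma_h=2.0, gamma_l=0.5, C=20.0, D0=10.0):
    """Sketch of homomorphic filtering per Eqs. (1)-(4).

    `img` is a 2-D float array with values in (0, 1]; the parameter
    defaults are illustrative, not the paper's tuned values.
    """
    rows, cols = img.shape
    # Eq. (1): take the log to separate illumination and reflectance.
    log_img = np.log(img + 1e-6)          # small epsilon avoids log(0)
    F = np.fft.fftshift(np.fft.fft2(log_img))
    # Squared distance D^2(u, v) from the centre of the shifted spectrum.
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # Eq. (3): high-emphasis transfer function.
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-C * D2 / D0 ** 2)) + gamma_l
    # Eq. (2): filter in the frequency domain; Eq. (4): invert and exponentiate.
    g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    return np.exp(g)
```

The output may exceed the input range and would normally be rescaled for display.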

3 Slope sharpening versus cutoff frequency model (C-D0 information entropy model)

From Eq. (3), it can be seen that the value of D0 affects the degree of detail retention in the image as well as the brightness of the output image. Different combinations of C and D0 values are employed in the homomorphic filtering calculation, resulting in the output images depicted in Fig. 2. The first row shows the original images; the second row shows the output images with D0 fixed at 10 and C incremented from 15 to 19 in steps of 1; the third row shows the output images with C fixed at 20 and D0 incremented from 10 to 14 in steps of 1.

Fig. 2

Output images using different D0 and C

Analyzing the results in Fig. 2, it becomes evident that when the other three parameters are held fixed, decreasing D0 leads to more pronounced detail enhancement and a brighter output image. Likewise, with the other three parameters fixed, a larger value of C leads to more pronounced enhancement of image details and increased brightness in the output image. This analysis indicates that the output image quality is influenced by the selection of both C and D0. The difficulty in using homomorphic filtering is that the optimal parameters vary over a wide range for different images, and there is a certain interaction between the parameters. Consequently, achieving optimized image quality is challenging when relying on empirical values of C and D0, while relying solely on a large number of experiments to identify the optimal parameters is time-consuming.

To solve this problem, information entropy is introduced as an image quality evaluation index, so that the quality of the output image can be assessed numerically and quantitatively. When a foggy image is defogged, the level of detail in the output image increases significantly; information entropy, which measures the amount of information contained in the image, can therefore be used to evaluate image quality by comparing the entropy of the input and output images. The calculation of information entropy is shown in the following equations:

$${H_{G{\text{ray}}}}= - \sum\limits_{{i=0}}^{{255}} {p(i){{\log }_2}\;p(i)}$$
(5)
$$\ln H=\frac{1}{3}\sqrt {\ln H_{{{\text{Gray}}R}}^{2}+\ln H_{{{\text{Gray}}G}}^{2}+\ln H_{{{\text{Gray}}B}}^{2}}$$
(6)

Where HGray is the information entropy of a grayscale image (or a single color channel); H is the information entropy of a color image; and p(i) denotes the probability of occurrence of pixel value i in the image.
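Eqs. (5) and (6) can be computed directly; the sketch below assumes 8-bit channels and implements Eq. (6) exactly as printed (the function names are our own):

```python
import numpy as np

def gray_entropy(channel):
    """Eq. (5): Shannon entropy of an 8-bit single-channel image."""
    hist = np.bincount(channel.ravel().astype(np.uint8), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def color_entropy(rgb):
    """Eq. (6) as printed: ln H = (1/3) * sqrt(sum of squared ln H_channel)."""
    lnH = [np.log(gray_entropy(rgb[..., k])) for k in range(3)]
    return np.exp(np.sqrt(sum(x ** 2 for x in lnH)) / 3.0)
```

A channel in which all 256 gray levels occur equally often attains the maximum entropy of 8 bits.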

Through experiments on a large number of images, it has been observed that when homomorphic filtering is applied for image defogging, there is always an upper limit to the information entropy of the output image, regardless of the chosen values of C and D0, when the high-frequency and low-frequency gains are kept fixed. For the sample fog maps in Fig. 3, the relationship between C, D0, and information entropy is shown in Fig. 4, where the X-axis represents the slope sharpening parameter, the Y-axis the cutoff frequency, and the Z-axis the information entropy of the output image.

Fig. 3

Example of sample fog maps

Fig. 4

Comparison of peak information entropy of output images

From Fig. 4, it is apparent that when various C and D0 values are used for homomorphic filtering across different images, the information entropy of the output image consistently reaches a continuous ridge of equal peak values, although the specific peak value differs for each image. Furthermore, for any given C (or D0) value, there exists a corresponding D0 (or C) value that brings the information entropy of the output image to its peak. Based on this, a finite interval from the information entropy of the original image to the peak information entropy can be obtained. By assessing the relative position of the output image's information entropy within this interval, the quality of the output image can be evaluated. The information entropy obtained by traditional empirical fixed-parameter homomorphic filtering will lie at some point within Fig. 4.

The aforementioned analysis consistently demonstrates a strong correlation between the C and D0 values that achieve the optimal image defogging effect. This correlation is visually depicted in Fig. 4; its top view, shown in Fig. 5, provides a more intuitive understanding of the relationship between C and D0 values in achieving optimal defogging results.

Fig. 5

Strong correlation between slope sharpening parameters and cutoff frequency parameters for different images

In Fig. 5, it is evident that despite the variations among input images, the homomorphic filtering operation attains the optimal result when the value of information entropy reaches its maximum. Notably, the curvilinear relationship between C and D0 appears remarkably similar across all images during this optimal state. To describe this phenomenon, this paper defines a function model that establishes the relationship between C and D0: \({D_0}=\varphi \left( C \right)\).

To capture the relationship between the parameters C and D0, the least squares method is used to estimate the parameters of a polynomial model, yielding the C-D0 information entropy model. Least squares polynomial curve fitting fits discrete data based on the principle of minimizing the sum of squared deviations. When m sets of discrete data exist, the criterion can be expressed as Eq. (7) [24].

$$\mathop {\hbox{min} }\limits_{\varphi } \sum\limits_{{i=1}}^{m} {\delta _{i}^{2}} =\sum\limits_{{i=1}}^{m} {{{\left[ {\varphi ({C_i}) - {D_{0i}}} \right]}^2}}$$
(7)

Where \({\delta _i}\) is the vertical distance from the discrete point D0i to the fitted polynomial; and \(\varphi ({C_i})\) is the fitted polynomial.

Usually, the fitted polynomial can be expressed as:

$$\varphi (C)={a_0}+{a_1}C+{a_2}{C^2}+ \cdots +{a_n}{C^n}$$
(8)

Substituting Eq. (8) into Eq. (7), taking partial derivatives with respect to each coefficient, and setting them to zero yields the normal equations in Eq. (9).

$$\left\{\begin{array}{l}\sum\limits_{i=1}^mD_{0i}=a_0m+a_1\sum\limits_{i=1}^mC_i+\cdots+a_n\sum\limits_{i=1}^mC_i^n\\\sum\limits_{i=1}^mC_iD_{0i}=a_0\sum\limits_{i=1}^mC_i+a_1\sum\limits_{i=1}^mC_i^2+\cdots+a_n\sum\limits_{i=1}^mC_i^{n+1}\\\cdots\\\sum\limits_{i=1}^mC_i^nD_{0i}=a_0\sum\limits_{i=1}^mC_i^n+a_1\sum\limits_{i=1}^mC_i^{n+1}+\cdots+a_n\sum\limits_{i=1}^mC_i^{2n}\end{array}\right.$$
(9)

Equation (9) is expressed in matrix form and subsequently simplified to derive Eq. (10).

$$C \times A={\hat {D}_0}$$
(10)

Where C is the matrix formed from the powers of the sampled slope sharpening values; \({\hat {D}_0}\) is the vector of sampled cutoff frequency values; and A is the vector of polynomial coefficients. The polynomial coefficients are determined by substituting the discrete data for C and D0, and the C-D0 information entropy model is thereby constructed.
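The normal-equation solution of Eq. (10) is equivalent to an ordinary least squares fit; a minimal NumPy sketch (the function names are ours):

```python
import numpy as np

def fit_cd0_model(C_samples, D0_samples, order=3):
    """Solve Eq. (10) for the polynomial coefficients A of Eq. (8).

    Builds the Vandermonde-style design matrix from the sampled C
    values and solves the least squares problem directly.
    """
    X = np.vander(C_samples, N=order + 1, increasing=True)  # columns 1, C, C^2, ...
    A, *_ = np.linalg.lstsq(X, D0_samples, rcond=None)
    return A                       # a0, a1, ..., an of Eq. (8)

def predict_d0(A, C):
    """Evaluate the fitted polynomial phi(C) of Eq. (8)."""
    return sum(a * C ** k for k, a in enumerate(A))
```

Fitting noiseless cubic data recovers the generating coefficients to machine precision.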

In the process of fitting polynomial curves using the least squares method, the quality of the curve fit is influenced by the polynomial order n. In this paper, the selection of the fitted polynomial order is based on the calculation of the root mean square error (RMSE) of the fitted curve. The RMSE quantifies the overall fit between the fitted curve and the discrete points, and it can be mathematically expressed as [25]:

$$\sigma {\text{=}}\sqrt {\frac{1}{n}\sum\limits_{{i=1}}^{n} {{{({D_{0i}} - {{\hat {D}}_{0i}})}^2}} }$$
(11)

Where n is the number of data points; and \({\hat {D}_{0i}}\) is the fitted value of the cutoff frequency.
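Order selection by RMSE, as described above, can be sketched as follows; `np.polyfit`/`np.polyval` are used here in place of the explicit normal equations:

```python
import numpy as np

def order_by_rmse(C_samples, D0_samples, orders=range(2, 11)):
    """Pick the polynomial order with the smallest RMSE (Eq. (11))."""
    best_order, best_rmse = None, np.inf
    for n in orders:
        coeffs = np.polyfit(C_samples, D0_samples, n)
        fitted = np.polyval(coeffs, C_samples)
        rmse = np.sqrt(np.mean((D0_samples - fitted) ** 2))
        if rmse < best_rmse:
            best_order, best_rmse = n, rmse
    return best_order, best_rmse
```

In practice a low-order fit with near-minimal RMSE is preferred over a high-order one, since high orders risk overfitting the sampled (C, D0) data.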

Through the aforementioned process, the C-D0 information entropy model for a single image can be calculated. In Fig. 5, it can be seen that the distribution area of the C-D0 information entropy model of different images is also different. Merely calculating the C-D0 information entropy model of a single image fails to generalize to the majority of images effectively. However, through the calculation on a large number of samples, a pattern emerges in the distribution of the C-D0 information entropy model. The C-D0 information entropy models of different images can be categorized into three regions. Consequently, it is possible to cluster and partition the projection lines of multiple images, followed by fitting a polynomial function to each clustered region. This approach can obtain the C-D0 information entropy model that is applicable to a broader range of images.

Based on the aforementioned analysis, the homomorphic filtering algorithm based on C-D0 information entropy model can be implemented following these steps:

  • Step1: Calculate the single-image C-D0 information entropy model for each of multiple images;

  • Step2: Cluster and partition the obtained C-D0 information entropy models, and fit a polynomial for each region based on the distribution of the single-image models;

  • Step3: Solve for the parameters using the C-D0 information entropy model to obtain the corresponding parameter sets. Apply each set to the homomorphic filtering process, calculate the output information entropy for each, and select the image with the highest output information entropy as the final output.
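The three steps above can be sketched as follows. The coefficients in `REGION_MODELS` are placeholders only, since the paper's fitted values (Table 2) are not reproduced here; the filter and entropy helpers follow Sects. 2-3:

```python
import numpy as np

# Placeholder region polynomials (a0..a3 of Eq. (12)); the paper's actual
# fitted coefficients live in Table 2 and are NOT reproduced here.
REGION_MODELS = [
    np.array([30.0, 0.8, -0.004, 1e-5]),   # upper region (hypothetical)
    np.array([15.0, 0.5, -0.002, 5e-6]),   # middle region (hypothetical)
    np.array([5.0,  0.3, -0.001, 2e-6]),   # lower region (hypothetical)
]

def entropy(img):
    """Eq. (5) on an 8-bit quantisation of `img` (float values in (0, 1])."""
    hist = np.bincount((img * 255).clip(0, 255).astype(np.uint8).ravel(),
                       minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def homomorphic(img, C, D0, gh=2.0, gl=0.5):
    """Eqs. (1)-(4) in brief; see Sect. 2."""
    r, c = img.shape
    F = np.fft.fftshift(np.fft.fft2(np.log(img + 1e-6)))
    u = np.arange(r) - r // 2
    v = np.arange(c) - c // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gh - gl) * (1 - np.exp(-C * D2 / D0 ** 2)) + gl
    return np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(H * F))))

def defog(img, C):
    """Step 3: try the D0 from each region model, keep the output
    with the highest information entropy."""
    best, best_h = None, -np.inf
    for A in REGION_MODELS:
        D0 = float(np.polyval(A[::-1], C))   # A is (a0..a3); polyval wants a3..a0
        if D0 <= 0:
            continue
        out = homomorphic(img, C, D0)
        h = entropy(out)
        if h > best_h:
            best, best_h = out, h
    return best, best_h
```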

4 Experimental results and analysis

4.1 Determination of parameters of C-D0 information entropy model

To select the polynomial order and study the distribution of the polynomial curves of the constructed C-D0 information entropy model, the parameters of the model are determined in this experiment. A large amount of discrete data is required to construct the C-D0 information entropy model. The discrete data should be selected such that, when the chosen C and D0 are used for defogging, the output information entropy is as close as possible to the peak information entropy of the image; this ensures that the values of C and D0 used for model construction give a well-optimized defogging result. To obtain enough sample data, a total of 40 images are selected, and C and D0 are sampled at a spacing of 0.1 starting from 0, so that each image has 500 × 800 sets of sample data, which ensures that the amount of data remaining after screening is sufficient for fitting the polynomial.

The high-frequency and low-frequency gains are set to 2 and 0.5, respectively. The fitted polynomial is constructed using C as the independent variable and D0 as the dependent variable. Based on Eqs. (10) and (11), the polynomial curves from the 2nd to the 10th order are calculated, along with the root mean square error (RMSE) for each image. The results are shown in Table 1. Among them, the 3rd order polynomial can be expressed in the form of Eq. (12).

$${\hat {D}_0}={A_0}+{A_1}C+{A_2}{C^2}+{A_3}{C^3}$$
(12)
Table 1 RMSE for different polynomial orders
Fig. 6

Information entropy peak curve aggregation graph

According to the fitting results in Table 1, it is found that the 3rd order polynomial has the highest accuracy.

For the purpose of display, the interval of C-values of the fitted polynomial is set to [0,100]. The corresponding fitted curves for all test images are shown in Fig. 6.

Upon observing Fig. 6, it becomes apparent that the peak information entropy curves of the images are grouped into three regions: top, middle, and bottom. Based on the regions where the fitted curves are situated, all the data are organized and partitioned to create a comprehensive fit for each region. The coefficients of the fitted curves for the three regions are shown in Table 2, and the corresponding fitted curve model is shown in Fig. 7. In Fig. 7, the red line indicates the fitted curve for the upper region, the green line indicates the fitted curve for the middle region, and the blue line indicates the fitted curve for the lower region.

Table 2 Final fitted curve coefficients
Fig. 7

Line graph of polynomial fit

Based on the established third-order fitting polynomial, a connection between C and D0 can be established. Using this model, the most probable value of D0 corresponding to any value of C can be quickly determined. Consequently, this simplifies the process of parameter selection.

4.2 Experiments

4.2.1 Experimental project

  • Project 1

The purpose of this experimental project is to verify the correctness of the C-D0 information entropy model. An image is selected, values of C are chosen at random, the corresponding values of D0 are calculated from the C-D0 information entropy model, and each resulting (C, D0) pair is used to defog the image. The information entropy of the defogged image is then compared with that of the original image, and the percentage of the information entropy improvement within the information entropy optimization interval is calculated.

  • Project 2

The purpose of this experimental project is to test the optimization capability of the C-D0 information entropy model. An image is selected, a value of C is chosen at random, the corresponding value of D0 is calculated from the C-D0 information entropy model, and this (C, D0) pair is used to defog the image. The procedure is repeated 800 times to obtain the information entropy of each optimized image; the percentage of runs in which the foggy image is improved is counted, as well as the average rate of improvement of information entropy. These data are then analyzed to validate the experiment.

4.2.2 Experimental results and data analysis

(1) Project 1

To validate the effectiveness of the C-D0 information entropy model, a total of 22 images are selected for experimentation. The value of C is randomly selected using a random function, and the corresponding value of D0 is calculated by the C-D0 information entropy model. The experimental results for these 22 images are shown in Table 3.

Table 3 Comparison of the information entropy obtained from the C-D0 information entropy model in the case of random values of C

In Table 3, the maximum information entropy (Hmax) is the peak information entropy of the test image, the output information entropy (Hout) is the result calculated using the C-D0 information entropy model, and the information entropy enhancement (\(\Delta H\)) is the difference between the output information entropy and that of the original image. The enhancement rate (Rh) is the ratio of this difference to the information entropy interval of the original image, as shown in the following equation:

$${R_h}=\frac{{{H_{out}} - {H_{in}}}}{{{H_{max}} - {H_{in}}}}$$
(13)
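Eq. (13) can be computed directly; a minimal sketch:

```python
def enhancement_rate(h_in, h_out, h_max):
    """Eq. (13): relative position of the output entropy within the
    optimization interval [H_in, H_max]."""
    return (h_out - h_in) / (h_max - h_in)
```

An output whose entropy reaches the peak gives Rh = 1 (100%).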

Table 3 shows an average Rh of 91.02% for the set of 22 images. Notably, 21 of these images exhibit an enhancement rate exceeding 80%; the highest enhancement rate reaches 100%, and the lowest is 54.99%. The results in Table 3 demonstrate a consistent improvement in the information entropy of the images processed by the model, indicating the model's effective defogging capability for foggy images.

From the overall data analysis, the average information entropy improvement rate across all images is 91.02%, signifying a notable enhancement in image quality and a better recovery of image information. The proportion of images in Table 3 with an enhancement rate exceeding 80% is 95.45%, which indicates that the model can largely match the best values that would otherwise require lengthy empirical trial-and-error. Additionally, the values of C in Table 3 show that each image corresponds to a different optimal value of D0 for each selected C. The C-D0 information entropy model allows the most probable D0 value for each C value to be identified swiftly, thus meeting the requirements of image defogging. These findings collectively affirm the effectiveness of the model constructed in this paper.

Five images are selected from Table 3 for display, showing the original and defogged images. The five images correspond to the maximum enhancement rate, the minimum enhancement rate, and rates in between, i.e., serial numbers 10, 11, 14, 18 and 19, as shown in Fig. 8. In Fig. 8, (a) is the original unprocessed image, and (b) is the homomorphic-filtered defogged image obtained after calculating the parameters using the C-D0 information entropy model.

Fig. 8

Comparison graphs of before and after processing of random C value images

After applying homomorphic filtering and defogging with the parameters calculated using the C-D0 information entropy model, it is obvious from the images that the details become more prominent and noticeable, and the interference caused by fog in the field of view is significantly reduced compared to the original image. The improvement is most pronounced at the nearer depth of field, with a certain effect at the farther depth of field as well. Combining the data in Table 3, it can be observed that the information entropy enhancement rate mainly depends on the density of the fog. The lowest Rh value in Table 3 is 54.99%, corresponding to Fig. 8(14a), a case with high fog density; the image contains less information at the farther depth of field. The corresponding processed image shows a significant improvement at the nearer depth of field but a rather weak one at the farther depth of field, which leads to the low enhancement rate. At the same time, the low enhancement rate does not affect the output effect of image processing; the output image is still optimized compared to the original. The fact that the model can optimize images even under the complex conditions of dense fog demonstrates the robust anti-interference capability of the C-D0 information entropy model. The analysis of Fig. 8 as a whole reflects the following two phenomena.

  • From the images presented in Fig. 8, it is evident that the output image shows a remarkable reduction in fog compared to the original. Moreover, the color recovery in the output image is notably improved, particularly at the nearer depth of field, without any signs of color distortion. In the sky area of (19b), there is a localized occurrence of overexposure; however, it is confined to a small area and does not interfere with the main information of the image.

  • Upon observing the detailed portion of the image in Fig. 8, it becomes evident that significant optimization has occurred in the fine details. The textures within the processed image are now clearer, and the edges of the scenery appear more prominent. The distinction between light and dark regions in various locations of the image is more pronounced, resulting in a noticeable contrast enhancement. These improvements greatly enhance the overall visual experience.

Based on the aforementioned analysis, it is evident that the C-D0 information entropy model is highly effective in defogging images, thereby confirming the model’s efficacy. Specifically, for images with a thin layer of fog, the model successfully eliminates the negative effects caused by fog more comprehensively. In the case of images with a thicker fog, the model exhibits better recovery of the scenic information in the closer depth of field. Moreover, there is also a noticeable recovery effect for the farther depth of field.

(2) Project 2

For each image, 800 values of C are randomly selected in the interval [0,80], and the image is defogged 800 times using the D0 parameters provided by the C-D0 information entropy model. The percentage of runs that improve the foggy image (Rnum_improve) and the average information entropy enhancement rate (Rh_ave) are counted for each image, and the results are shown in Table 4. The formulae for calculating Rnum_improve and Rh_ave are as follows:

$${R_{num\_improve}}=\frac{n}{N} \times 100\%$$
(14)
$${R_{h\_ave}}=\frac{{\sum\limits_{{i=1}}^{N} {{{({R_h})}_i}} }}{N} \times 100\%$$
(15)

Where n is the number of optimizations per image; N is the total number of calculations; and (Rh)i is the information entropy enhancement rate of the image in the ith operation.
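Eqs. (14) and (15) can be evaluated over the 800 recorded runs as follows; we assume here that a run counts as an "improvement" when its R_h is positive:

```python
import numpy as np

def project2_stats(rh_values):
    """Eqs. (14)-(15): fraction of runs that improved the image and
    the average enhancement rate over all N runs.

    `rh_values` holds the R_h of each of the N runs; a run counts as
    an improvement when its R_h is positive (our assumption).
    """
    rh = np.asarray(rh_values, dtype=float)
    N = rh.size
    r_num_improve = (rh > 0).sum() / N * 100.0   # Eq. (14), in percent
    r_h_ave = rh.sum() / N * 100.0               # Eq. (15), in percent
    return r_num_improve, r_h_ave
```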

Table 4 Results of C-value calculation for randomized 800 groups

In Table 4, only image 14 exhibits an improved fog map percentage of less than 80%, while 17 images demonstrate an enhanced fog map percentage exceeding 90%. Specifically, image 14 corresponds to Fig. 8(14a) and represents a densely foggy image. The presence of dense fog significantly obscures a substantial amount of information within the image, resulting in poorer calculation results. Repeated experiments have revealed that simple homomorphic filtering techniques yield relatively weak results for dense fog images, and selecting optimal parameter values is more difficult compared to images with thinner layers of fog. To address this issue, the C-D0 information entropy model is consistently employed to obtain the optimal parameter solution. It is evident that the values of C and D0 selected from the parameter calculation using the C-D0 information entropy model can be effectively applied to homomorphic filtered image defogging. Furthermore, the model can provide the most suitable value of D0 for each given C, ensuring optimal image defogging corresponding to specific C values.

Table 4 shows that, for images No. 1 to No. 22, the average improvement of information entropy exceeds 80% for most images over the 800 runs, with the exception of dense-fog image No. 14, whose average improvement of information entropy is 38.81%. At the same time, the output images exhibit significant enhancements in clarity and the amount of information contained, surpassing the original images. Notably, the highest average information entropy improvement rate reaches 95.91%, and the mean over the 22 images is 80.62%. These findings indicate that, even when the value of C is given randomly, the corresponding value of D0 calculated by the C-D0 model can be effectively applied to image defogging, resulting in a superior defogging effect and information recovery. Thus, the model demonstrates a good optimization capability and provides a wide applicable interval for the selection of C.

Because the model selects parameter values that yield an optimal defogging effect for most values of C, the C-D0 information entropy model can effectively guide parameter selection in homomorphic filtering, enabling rapid determination of suitable D0 values. The model thereby avoids determining parameter values through extensive trial-and-error experiments, in which the chosen values may not yield optimal results. Additionally, the wide parameter selection interval it provides adapts more readily to the needs of different imaging tasks.
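The rapid determination of D0 described above amounts to a lookup on the fitted C-D0 curve. A minimal sketch, where the sample points are hypothetical stand-ins for the experimentally measured entropy-optimal (C, D0) pairs used to build the model:

```python
import numpy as np

# Hypothetical (C, D0) optima; in practice these points come from the
# entropy-maximising experiments that construct the C-D0 model.
c_samples = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
d0_samples = np.array([55.0, 42.0, 34.0, 29.0, 26.0])

def lookup_d0(c):
    """Interpolate an entropy-optimal D0 for an arbitrary C value."""
    return float(np.interp(c, c_samples, d0_samples))
```

Denser sampling of (C, D0) pairs makes the interpolated curve more accurate, which mirrors the observation that the model improves as the amount of discrete data grows.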

5 Conclusions

In this paper, we propose an information entropy-based model, named the C-D0 information entropy model, which captures the relationship between the slope-sharpening and cutoff-frequency parameters in homomorphic filtering and expresses that relationship visually. The primary objective of this model is to expedite the selection of the slope-sharpening control parameter C and the cutoff-frequency parameter D0, reducing the time required for empirical parameter determination while preserving high-quality homomorphic filtering output.

The C-D0 information entropy model, constructed with information entropy as an intermediate variable, visually illustrates the correlation between C and D0, facilitating rapid selection of appropriate values for both parameters. Using information entropy as the intermediate quantity allows the model to select C and D0 values whose homomorphic filtering output is close to the best attainable. From the experiments, the following conclusions are drawn.

(1) The C-D0 information entropy model effectively improves the timeliness of homomorphic filtering. Traditional homomorphic filtering selects parameters by empirical fixed values and cannot effectively verify the quality of the output image. With the C-D0 information entropy model, the corresponding D0 value can be obtained quickly for any input C value, and the resulting parameters keep the output image quality at a high level.

(2) The C-D0 information entropy model proves effective in parameter selection. The parameters determined with the model yield an average improvement of 91.02% in image information entropy. This enhances the quality of the input fog map, producing output images of superior quality in terms of both human visual perception and information entropy, while preserving fine image details. The model achieves its best quality on thin-fog images; even for dense fog it still performs well, particularly for images with a closer depth of field.

(3) The C-D0 information entropy model demonstrates superior optimization performance. Among the 800 sets of randomly selected values of C, 92.12% of the parameter sets achieve the optimization effect, and the optimized images show an average information entropy improvement of 80.62% over the originals. This wide range of usable C values offers flexibility in adapting to various environmental conditions.

Moreover, constructing the C-D0 information entropy model requires a large amount of discrete data, and the model becomes more accurate as the amount of data increases. This also opens a more diversified channel for combining homomorphic filtering with deep learning.