1 Introduction

Stochastic resonance-based noise-aided image processing has gained the attention of several researchers in the past two decades due to its remarkable counter-intuitiveness—the concept of utilizing noise to enhance the performance of a nonlinear system. The first experimental work on visualization of stochastic resonance was reported in [29] through a psychophysics experiment. In the past decade, several works on the application of stochastic resonance for grayscale image or edge enhancement have been reported in the literature in the spatial [4, 6, 10, 11, 23, 27, 33, 34], frequency [12, 13, 24], wavelet [5, 25], and hybrid [7] domains. The broad framework of each of these works is to add a controlled amount of noise to the image (values or coefficients) so as to increase its contrast and comprehensibility at optimal noise intensities.

A large number of techniques have focused on the enhancement of grayscale images in the spatial domain, such as classical adaptive histogram equalization, gamma correction, and retinex-based enhancement [15, 16]. A multiresolution fusion-based enhancement algorithm for night-vision applications is reported by Liu and Laganiere [18], where the visual image is first enhanced based on a corresponding infrared image, and the final result is obtained by fusion of the two images. A steerable pyramid transform-based contrast enhancement algorithm is reported by Cherifi et al. [3], where contrast is amplified using a selective nonlinear function that simulates the nonlinear response of the human visual system toward luminance stimuli. Another interesting contextual and variational contrast enhancement (CVC) algorithm is reported by Celik and Tjahjadi [2]. This algorithm is based on interpixel contextual information and maps the diagonal elements of a 2D input histogram to a target histogram. In 2012, Rivera et al. [26] reported a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The enhancement algorithm reported by Hasikin et al. [9] presents an adaptive fuzzy intensity measure that selectively enhances dark regions of an image without brightening the bright regions. Lim et al. [17] have reported an extended histogram equalization-based method that first separates the test image histogram into two sub-histograms and then modifies the individual sub-histograms through plateau limits and subsequent histogram equalization. Another method, reported by Santhi and Banu [28], uses a modified octagon histogram to address the drawbacks of conventional histogram equalization.

When operating in frequency domains, the Fourier and cosine transforms (DFT and DCT) provide spectral separation, which makes it possible to enhance features by treating different frequency components differently. A popular conventional technique called alpha rooting [1] increases the contrast of the image by raising the magnitude of each transform coefficient to a power \(\alpha , 0 < \alpha < 1\), while leaving the sign or phase of the coefficient unchanged. Many algorithms designed for both colored and grayscale images in the block DCT domain can be found in the literature [13, 21, 30]. However, there are disadvantages in processing images using block DCT: because the blocks are processed independently in most cases, blocking artifacts may become more visible in the processed data. Therefore, in this paper, we address the problem of enhancement in the frequency domain by processing the global DFT coefficients using an iterative algorithm.
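For concreteness, alpha rooting can be sketched in a few lines of NumPy; the function name and the default value of \(\alpha \) below are illustrative only, not prescribed by [1]:

```python
import numpy as np

def alpha_root(image, alpha=0.9):
    """Alpha rooting [1]: raise the magnitude of every DFT coefficient
    to a power alpha (0 < alpha <= 1) while leaving its phase unchanged."""
    F = np.fft.fft2(np.asarray(image, dtype=float))
    enhanced = np.abs(F) ** alpha * np.exp(1j * np.angle(F))
    # Back-project; the imaginary residue is numerical noise, so take |.|
    return np.abs(np.fft.ifft2(enhanced))
```

With \(\alpha = 1\) the image is returned unchanged (up to floating-point error); smaller \(\alpha \) compresses the dominant low-frequency magnitudes relative to the rest, which raises contrast.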

The contribution of the proposed technique is to explore the Fourier domain for contrast enhancement using an existing iterative noise-aided algorithm. Parameters are selected so as to reduce the noise in the SR-enhanced image by a large factor. The difference between the current approach and existing SR-based approaches is as follows. The wavelet- and Fourier-based SR algorithms reported in [24, 25] used externally added noise to induce stochastic resonance and chose parameters experimentally from a range of values for best results, while the current approach uses internal noise and chooses parameters so as to achieve maximum noise suppression in the enhanced image. The present approach also differs from those reported in [5, 6, 11, 13] in the selection of bistable parameters. Parameters in [5, 11, 13], and [6] were selected by maximizing the signal-to-noise ratio and the stability of a dynamic SR system, while here they have been chosen to ensure maximum noise suppression in the processed image. The approach used here is based on dynamic stochastic resonance, where nonlinearity is introduced in the form of a barrier potential, while one of our earlier works [14] uses non-dynamic suprathreshold SR (thresholding and successive averaging of noisy image frames) to obtain enhancement.

The rest of the paper is organized as follows. Section 2 describes the concept of dynamic stochastic resonance (DSR) in brief. Section 3 describes the approach for selection of DSR parameters in the current context. Section 4 explains the mathematical formulation of the algorithm and the mechanism of coefficient rooting achieved by the SR-based iterative processing to produce contrast enhancement. Section 5 presents the proposed enhancement algorithm and a brief description of the quantitative metrics. Section 6 presents qualitative and quantitative results of the proposed algorithm on several test images and discusses various aspects like comparative analysis and computation costs. Section 7 outlines the findings of the work.

2 Dynamic stochastic resonance

It has been observed in one-dimensional signals that at an optimum ‘resonant’ value of noise, the signal crosses the threshold and transits into another (enhanced) state [19]. To establish the principles of stochastic resonance (SR) in image processing applications, the discrete image pixels are treated as discrete particles, whereby the gray value of an image pixel (or a transformed coefficient) corresponds to a specific kinetic parameter of a physical particle in Brownian motion.

Dynamic SR can be explained using the motion dynamics of a particle placed in a bistable potential system in the presence of a weak signal and noise. A particle (image) originally placed in one of the stable wells is subjected to tunable noise, while the system is fed with a weak periodic signal. At an optimum intrinsic noise density (or optimum number of oscillations), resonance between the noise and the signal causes the particle to make a transition into the other well. The relatively large excursions in the particle’s position show up as an output that displays strong periodic effects of the weak signal. This is reflected as an increase in the contrast of the image after a certain number of oscillations. Details of the mathematical formulation of the theory of dynamic stochastic resonance for images may be found in [6].

In the proposed analogy, this optimum amount of noise is achieved by stochastic approximation using a corresponding discrete iterative equation.

$$\begin{aligned} {x(n+1)} = {x(n)} + \Delta {t}\left[ ax(n)-bx^{3}(n) + \hbox {Input}\right] \end{aligned}$$
(1)

where \(a\), \(b\), and \(\Delta {t}\) are the double-well parameters. Note that \(\hbox {Input}=B\sin (\omega {t}) + \sqrt{D}\xi (t)\) denotes the sequence of input signal and noise, which here comprises the DFT coefficients. This identification can be made because the low-contrast image contains internal noise due to lack of illumination [11, 13]. This noise is inherent in its transformed coefficients, and therefore these (DFT) coefficients can be treated as a continuum of signal (image information) and noise. The final enhanced output is obtained after a certain number of iterations. Here, \(\Delta {t}\) is the discrete sampling time applied to the system, or the individual iteration step size used in numerical simulation. Finally, the image is reconstructed in the spatial domain by taking the absolute value after the inverse DFT operation.
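A single step of Eq. 1 is straightforward to express in code. The sketch below assumes the input coefficients are supplied as `coeff`; the function name is ours, and the parameter defaults follow Sect. 3:

```python
def dsr_step(x, coeff, a=0.01, b=1e-8, dt=0.001):
    """One discrete DSR iteration (Eq. 1):
    x(n+1) = x(n) + dt * [a*x(n) - b*x(n)^3 + Input],
    where `coeff` plays the role of Input, i.e., the DFT coefficients
    carrying both the image information and the internal noise."""
    return x + dt * (a * x - b * x ** 3 + coeff)
```

The same expression applies elementwise when `x` and `coeff` are NumPy arrays of (complex) DFT coefficients.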

3 Selection of optimal DSR parameters for image enhancement

Double-well parameters for the iterative procedure have been selected by an approach adapted from the one proposed in [34]. The relationship between the parameters, as reported in [34], is

$$\begin{aligned} \frac{a}{b} = \frac{{\sigma _1}^2}{{\sigma _0}^2} \end{aligned}$$
(2)

where \(\sigma _1\) and \(\sigma _0\), respectively, denote the standard deviations of noise in the input and SR-enhanced images. Here, the noise deviation in the input image is analogous to the deviation of its DFT coefficients, as explained earlier. To ensure that the noise variance in the enhanced image is less than that in the input low-contrast image by a large factor, the ratio of \({\sigma _1}^2\) to \({\sigma _0}^2\) should be very high, say of the order of \(10^6\). Parameter \(a\) was set to 0.01 after experimentation on around twenty-five images of different contrast qualities. In order to obtain a noise reduction factor of about \(10^6\), parameter \(b\) was found from Eq. 2 to be \(10^{-8}\). However, this selection of parameters is not rigid and may vary according to the requirements of the application area. The increment step parameter \(\Delta {t}\) (which in the continuous domain represents the sampling period) was taken to be 0.001. A smaller value of \(\Delta {t}\) allows incremental refinement to be observed after each iteration, but may require a larger number of iterations to achieve the same target performance.
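The parameter choice above reduces to one line of arithmetic; the helper name below is illustrative:

```python
def dsr_parameters(a=0.01, noise_suppression=1e6, dt=0.001):
    """From Eq. 2, a/b equals the target input-to-output noise-variance
    ratio, so for a given suppression factor b = a / factor."""
    b = a / noise_suppression  # 0.01 / 1e6 = 1e-8, as used in the paper
    return a, b, dt
```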

4 Mathematical formulation of DSR–DFT enhancement

Let \(I(p,q)\) be an input low-contrast image, with \((p,q)\) as the spatial coordinates. Let \(F(u,v)\) denote its Fourier transform, with \(u\) and \(v\) as frequency variables. We know that

$$\begin{aligned} F(u,v)={\sum _{p=0}^{M-1}\sum _{q=0}^{N-1}} I(p,q) e^{-j2{\pi }\left( \frac{up}{M} + \frac{vq}{N}\right) } \end{aligned}$$
(3)

In a two-dimensional dataset like a digital image, the DSR-iterative equation (Eq. 1) may be rewritten as:

$$\begin{aligned} \begin{aligned} x(u,v,n+1)&= x(u,v,n)\\&\quad {}+\Delta {t} \left[ a x(u,v,n) - b x^{3}{(u,v,n)} + F(u,v) \right] \end{aligned} \end{aligned}$$
(4)

Here, \(a\), \(b\), and \(\Delta {t}\) are chosen as described in Sect. 3. Taking the inverse DFT at any iteration, for \(n > 0\),

$$\begin{aligned} I_\mathrm{DSR}(p,q,n)=\left| \frac{1}{MN} {\sum _{u=0}^{M-1}\sum _{v=0}^{N-1}} x(u,v,n) e^{j2\pi \left( \frac{up}{M}+\frac{vq}{N}\right) } \right| \end{aligned}$$
(5)

This intermediate output image, \(I_\mathrm{DSR}\), is used to compute various performance metrics (as described later).

4.1 Mechanism of coefficient rooting using DSR on DFT magnitude spectrum

Figure 2 displays the mechanism of SR iteration on the real and imaginary parts, as well as on the magnitude, of the DFT coefficients, and shows how it affects the contrast of the respective back-projected images. It can be inferred from Fig. 2 (and from the plot for three test images shown in Fig. 1 of Online Resource 1) that the variance (spread) of the Fourier coefficient distribution (real, imaginary, and magnitude) increases with increasing iteration count. Iterative scaling of the Fourier coefficients leads to an increase in their mean, which in other words implies an increase in the average brightness of the image. Moreover, it is a general assumption that noisy DFT coefficients follow a complex Gaussian distribution [8]. As such, the magnitude of a standard complex normal random variable has a Rayleigh distribution, while Gaussian mixture distributions have also been fitted to discrete Fourier transform magnitude spectra [35]. Another important point to note is that raising the Fourier coefficients to an exponent less than unity leads to an increase in contrast, commonly called coefficient rooting or alpha rooting [1]. It is proved below that increasing the variance of a Gaussian distribution is, in a nonlinear fashion, equivalent to raising the distribution to an exponent less than unity.

Definition

Let \(\sigma _{0} = k \sigma \), where \(k > 1\), implying \(\sigma _{0} > \sigma \). Let \(f_{\sigma _{0}}(x)\) be a Gaussian function with standard deviation \(\sigma _{0}\) and mean \(\mu \).

Lemma

An increase in the variance of a Gaussian distribution is equivalent to raising the Gaussian distribution to an exponent less than unity.

Proof

$$\begin{aligned} f_{\sigma _0}(x)&= \frac{1}{\sqrt{2 \pi {\sigma _0}^2}} \exp \left( \frac{-{(x-\mu )}^2}{2 {\sigma _0}^2}\right) \end{aligned}$$
(6)
$$\begin{aligned}&= \frac{1}{\sqrt{2 \pi ({k \sigma })^2}} \exp \left( \frac{-{(x-\mu )}^2}{2 {(k\sigma })^2}\right) \end{aligned}$$
(7)

Simplifying after multiplying and dividing with \(({\frac{1}{\sqrt{2\pi {\sigma ^2}}}})^{\frac{1}{k^2}}\),

$$\begin{aligned} f_{\sigma _0}(x)= p~\times \left( \frac{1}{\sqrt{2 \pi {\sigma }^2}} ~\hbox {e}^{\frac{-{(x-\mu )}^2}{2 {\sigma }^2}}\right) ^{\frac{1}{k^2}} \end{aligned}$$
(8)

where \(p = \frac{(\sqrt{2 \pi \sigma ^2})^{\frac{1}{k^2}-1}}{k}\). Therefore,

$$\begin{aligned} f_{\sigma _0}(x)= p \times {\left[ f_\sigma (x)\right] }^{\frac{1}{k^2}} \end{aligned}$$
(9)

where \(f_{\sigma }(x)\) is a Gaussian function with standard deviation \(\sigma \). \(\square \)

This means that an increase in the variance of the (Gaussian-approximated) Fourier coefficient magnitude distribution is nonlinearly equivalent to raising the coefficient distribution to an exponent less than 1, in turn leading to an increase in contrast. An added advantage of the iterative SR equation is the observation of incremental refinement in contrast and perceptual quality, which helps a user obtain a target output performance. The increase in the variance also leads to an increase in the overall energy of the image, since all frequency components are processed iteratively.
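The lemma is also easy to verify numerically. The short check below evaluates both sides of Eq. 9 on a grid; the parameter values are arbitrary test choices:

```python
import numpy as np

def gaussian_pdf(x, mu, s):
    """Gaussian density with mean mu and standard deviation s."""
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)

mu, sigma, k = 0.5, 1.5, 2.0               # arbitrary test values, k > 1
x = np.linspace(-6.0, 6.0, 201)
p = np.sqrt(2 * np.pi * sigma ** 2) ** (1 / k ** 2 - 1) / k
lhs = gaussian_pdf(x, mu, k * sigma)       # widened Gaussian, sigma0 = k*sigma
rhs = p * gaussian_pdf(x, mu, sigma) ** (1 / k ** 2)  # rooted Gaussian (Eq. 9)
assert np.allclose(lhs, rhs)               # both sides agree pointwise
```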

Thus, it can now be stated that the SR-iterative equation scales the coefficients (internal noise) in every iteration and produces a noise-induced transition of the image from a state of poor contrast to one of high contrast, using the analogy presented in [6].

Fig. 1

Experimental results using DFT-based SR on low-contrast images, a low-contrast image, Flower, b DSR-enhanced image, \(n=13\), c low-contrast image, Grass, d DSR-enhanced image, \(n=13\)

Fig. 2

Effect of DSR iterations on probability density function (pdf) of DFT coefficients (real, imaginary, magnitude) of a dark image. a Input, b \(n=20\), c \(n=40\), d \(n=60\), e \(n=80\), f \(n=100\), g Input, h \(n=20\), i \(n=40\), j \(n=60\), k \(n=80\), l \(n=100\)

Fig. 3

Experimental results using DFT-based SR on dark and low-contrast images, a naturally dark image, b DSR-enhanced image, \(n=17\), c low-contrast image, d DSR-enhanced image, \(n=34\), e low-contrast image, f DSR-enhanced image, \(n=15\)

5 Proposed algorithm for SR-based contrast enhancement

The proposed algorithm performs contrast enhancement on low-contrast images by iteratively applying DSR to the DFT coefficients of the image in question. If the input image is colored, the processing is carried out on the DFT coefficients of the value (V) component of the image in Hue-Saturation-Value (HSV) space, to ensure that its hue remains undisturbed.

To gauge the performance of the algorithm, it is important to have metrics that denote the quality of an image in terms of various aspects. Performance measures such as peak signal-to-noise ratio (PSNR), mean-squared error (MSE), structural similarity index measure (SSIM), and quality index are not suitable for our purpose, as they require a distortion-free reference image. A metric of contrast enhancement can instead be based on the global variance and mean of the original and enhanced images [25]. Therefore, a descriptor called the image quality index, Q, has been used, such that \(Q={\sigma ^{2}}/{\mu }\), where \(\sigma \) and \(\mu \) are, respectively, the standard deviation and mean of the image. The relative contrast enhancement factor, F, is computed as the ratio of the post-enhancement \((Q_{B})\) to pre-enhancement \((Q_{A})\) image quality indices. For evaluation of perceptual quality, we have used a no-reference metric, which we shall refer to as the perceptual quality metric (PQM) [31]; for good perceptual quality, the value of PQM should be close to 10 [21]. Here, perceptual quality is calculated from a model that takes into account the activity, blurriness, and blockiness of an image. If the image is colored, one would also be interested in observing the enhancement in terms of colorfulness, and therefore a metric for color enhancement factor (CEF) [21] has been used. For good color and contrast enhancement, the respective values of CEF and F should be greater than 1. Code for computing PQM was obtained from [20]. Enhancement is also gauged subjectively using mean opinion scores (MOS) on a scale of 0–5 given by twenty subjects (with the following code: 0—Very poor, 1—Poor, 2—Average, 3—Good, 4—Very good, 5—Excellent). Thirteen of the subjects were image processing graduate students, while seven were naive; all belonged to the age group of 22–40 years and had normal vision. Scores were obtained by showing the enhanced images (blind) on an LED screen in paired comparisons w.r.t. the input.
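The two reference-free contrast metrics, Q and F, are simple to compute; a minimal NumPy sketch (function names are ours) is:

```python
import numpy as np

def quality_index(image):
    """Image quality index Q = sigma^2 / mu (variance over mean) [25]."""
    image = np.asarray(image, dtype=float)
    return np.var(image) / np.mean(image)

def contrast_enhancement_factor(before, after):
    """Relative contrast enhancement factor F = Q_B / Q_A; F > 1
    indicates an increase in contrast relative to the input."""
    return quality_index(after) / quality_index(before)
```

For example, uniformly doubling all intensities quadruples the variance but only doubles the mean, so Q doubles and F = 2.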

The basic steps of the SR-based algorithm are as follows:

Step 1:

Compute the two-dimensional fast Fourier transform (FFT) of the value component, V.

Step 2:

With double-well parameters, \(a=0.01, b=10^{-8}\), and \(\Delta {t}=0.001\), compute the DSR-tuned set of coefficients using Eq. 4, assuming initial x as a zero matrix for mathematical convenience.

Step 3:

Inverse FFT transformation is performed, and the performance metrics F, PQM, and CEF (for colored input) are computed after each iteration. For convergence, iterations are performed up to the iteration count at which \(F(n)+\hbox {CEF}(n)\) becomes maximum, subject to the constraint that \(\hbox {PQM}(n)\) lies in the vicinity of the value 10.
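A minimal grayscale NumPy sketch of Steps 1–3 follows. For brevity, the stopping rule here tracks only the contrast enhancement factor F; the PQM and CEF computations of the full method (and the HSV handling for color input) are not reproduced, and the function name is ours:

```python
import numpy as np

def enhance_dsr_dft(image, max_iter=50, a=0.01, b=1e-8, dt=0.001):
    """Iterate the DSR equation (Eq. 4) on the global FFT coefficients,
    reconstruct via |inverse FFT| after every iteration (Eq. 5), and
    return the output with the highest contrast enhancement factor F."""
    image = np.asarray(image, dtype=float)
    F_dft = np.fft.fft2(image)
    x = np.zeros_like(F_dft)                 # initial x is a zero matrix (Step 2)
    q_in = np.var(image) / np.mean(image)    # pre-enhancement quality index
    best_f, best_out = -np.inf, image
    for _ in range(max_iter):
        x = x + dt * (a * x - b * x ** 3 + F_dft)   # DSR update (Eq. 4)
        out = np.abs(np.fft.ifft2(x))               # reconstruction (Eq. 5)
        f = (np.var(out) / np.mean(out)) / q_in     # relative contrast factor
        if f > best_f:
            best_f, best_out = f, out
    return best_out, best_f
```

For color input, the same loop would be applied to the FFT of the HSV value channel only, as described above.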

6 Results and discussion

Results of the software implementation (in MATLAB v. 7.10) of the proposed algorithm are shown in Figs. 1, 2, and 3, along with the respective required iteration counts. Figures 1a, c and 2a have been made low contrast by manipulation for testing. Figures 3a and 7a are naturally dark, low-contrast images captured under poor illumination. Figure 3c, e shows standard images. Figure 5 shows the performance characterization of the proposed technique for Fig. 1a in terms of the contrast enhancement factor (F), color enhancement factor (CEF), and perceptual quality measure (PQM), with respect to iteration count. The target output is obtained in the close vicinity of \(\hbox {PQM} = 10\), i.e., \(10 \pm 0.5\), and corresponds to iteration 13 for Fig. 1. Further analysis is discussed through Figs. 4 and 6.

Table 1 Comparative performance of the proposed technique with various existing techniques using performance metrics F, PQM, CEF, and mean opinion score (MOS) for three test images
Fig. 4

Zoomed-in portions of the outputs of DSR-intensity and DSR–DFT algorithms for the test image, Flower, a zoomed-in DSR-intensity output, b zoomed-in DSR–DFT output

Comparison of empirical results of the proposed technique with various other SR-based and non-SR-based techniques is shown in Fig. 7 (another set of results appears in Fig. 3 of Online Resource 1). Comparative performance for Figs. 1a, c and 7a, in terms of the metrics, is tabulated in Table 1. Comparative analysis has been performed with the non-SR-based techniques adaptive histogram equalization (AHE), gamma correction (Gamma), single-scale retinex (Retinex) [16], multi-scale retinex (MSR) [15], modified high-pass filtering (MHPF) [32], alpha rooting (AR) [1], multicontrast enhancement (MCE) [30], and color enhancement using scaling (CES) [21]. Comparisons also include recent state-of-the-art techniques such as contextual and variational contrast enhancement (CVC) [2], content-aware enhancement using channel division (CACD) [26], and equalization of hybrid SV-DWT values (E-SVD–DWT) [22]. Since the proposed technique is an automatic algorithm, a comparison has also been made with the output of the ‘Auto Contrast’ control of Adobe Photoshop CS2 (Photoshop). Among SR-based techniques, comparisons have been made with singular-value-based DSR (SVD–DSR) [11], DCT–DSR [13], DWT-DSR [5], intensity-domain DSR [6], hybrid-domain DSR (SV-DWT-DSR) [7], and a suprathreshold SR-based technique (SSR) [14]. A comparison is also made with earlier wavelet- and Fourier-based medical imaging techniques [24, 25]. Closer analysis is discussed through Fig. 4.

It may be noted that among the frequency-domain SR-based algorithms, the Fourier-domain results require the fewest iterations (second only to intensity-based spatial-domain results), while the degree of enhancement and colorfulness is comparable to other DSR-based methods. In comparison with non-DSR-based methods, the proposed technique gives noteworthy enhancement and is second only to Adobe Photoshop. On average, taking into consideration the overall performance, i.e., preservation of visual quality along with substantial enhancement in contrast and colorfulness, the proposed DFT–DSR algorithm is always among the top three scorers. SV-DWT-DSR also competes closely with DFT–DSR in terms of visual quality, MOS scores, and iteration counts. Subjectively, DFT–DSR is ranked between average and good visual quality by the twenty subjects, second to intensity-DSR and Photoshop in some instances. Thus, among frequency-domain algorithms, DFT–DSR displays notably better performance than the other techniques.

Figure 4 shows zoomed-in regions of the DSR-intensity and DFT–DSR outputs for the test image, Flower. While intensity-DSR converges in fewer iterations and yields higher F, CEF, and MOS values, the PQM values of DFT–DSR are closer to 10 than those of intensity-DSR. Closer analysis of the zoomed-in (dark) portions of the images shows that while the local contrast of these regions is lower in DFT–DSR, the enhancement in intensity-DSR has led to quantization, i.e., large intensity contours that produce artifacts. These artifacts are less visible in the output of DFT–DSR, and hence the enhanced image looks smoother.

Fig. 5

Performance characterization using the proposed DSR–DFT technique for Fig. 1a. Here, \(n_0\) is the iteration count at which optimal performance is obtained, i.e., \(\hbox {PQM} \sim 10\), and maximum value of \(F + \hbox {CEF}\)

Fig. 6

Zoomed-in portions of the input and DSR–DFT-output of the test image, Grass, a zoomed-in input, b zoomed-in DSR–DFT output, c zoomed-in input, d zoomed-in DSR–DFT output

As shown in Table 1, although the overall visual quality of the output for the test image, Grass, is not very poor, the quantitative PQM values are quite low. This is explained through Fig. 6, which shows two different zoomed-in regions of the input and DFT–DSR-enhanced output of the test image, Grass. As visible in Fig. 6a, the input itself contains some blocking artifacts. Note that PQM is computed using the blockiness, blurriness, and activity measures of local neighborhoods in an image. Since the processing is not blockwise and does not include any special additional step to remove blockiness, the result is enhanced blocking artifacts in the output. This is registered as degradation in terms of PQM, and as a result its value is low. However, this low value is due not to the mechanism of the algorithm but to the blocky nature of the input image. Since this blocking gradient is not high in the input, the PQM of the input is not too low. Also, as shown in Fig. 6c, the very dark (\({\sim } 0\)) regions of the image (with intensities lying between, say, 0 and 5) display a contouring effect after enhancement, as 0 is mapped to 0 in the enhanced image while 5 is mapped to a much higher value. While this contouring may not be visible in the overall appearance of the image, in images with very dark areas (nearly 0-intensity regions) this effect may cause a decrease in overall PQM. It is important to note that PQM may not always agree with subjective evaluations, and therefore a more suitable quality metric may also be used.

A comparison with other recent state-of-the-art algorithms, such as enhancement in hybrid domains by equalization (E-SVD–DWT), contextual and variational enhancement (CVC), and content-aware enhancement with channel division (CACD), may also be analyzed. While for the Flower image, the output of DFT–DSR is marginally better than CACD and CVC, the visual qualities are comparable. However, DFT–DSR fares substantially better for the Grass and Chair images, both quantitatively and visually.

Fig. 7

Comparison of DFT–DSR-based approach with other SR-based and non-SR-based techniques, a low-contrast image, b DFT–DSR, c SVD–DSR [11], d DCT–DSR [13], e DWT-DSR [5], f intensity-DSR [6], g SV-DWT–DSR [7], h SSR [14], i AR (\(\alpha =0.98\)) [1], j AHE, k Gamma, \(\gamma \)=1.9, l Photoshop Auto-Contrast, m Retinex [16], n MSR [15], o MHPF [32], p MCE [30], q CES [21], r CVC [2], s CACD [26], t E-SVD–DWT [22]

Results of [24, 25] were computed using zero-mean Gaussian noise of variance 0.01 and by iterating until a maximum relative contrast factor (F) was obtained. Results for the test images Flower and Grass are shown in Table I and Fig. 2 of Online Resource 1. In both [24, 25], the best-enhanced image (on the basis of contrast quality and just-noticeable difference) was chosen from a matrix of images obtained using different values of a and b. In this paper, on the other hand, the bistable parameters are chosen with the consideration of maximal noise suppression. It is important to note that although a larger range of parameters makes it more probable to obtain better results using [25] than those reported in this paper, the process is quite computationally intensive. Externally added noise in the wavelet domain resulted in noteworthy enhancement, comparable to that obtained using internal noise.

It can be observed from the visual outputs and performance metrics that the proposed DFT-domain technique gives noteworthy improvement in the contrast, perceptual quality, and colorfulness of the images in question. Though the color is preserved due to processing in HSV color space, the increase in colorfulness is due to scaling of the value vector in HSV color space, corresponding to a marginal change in saturation. All the DSR-based techniques give comparable results, better in terms of perceptual quality than many of the techniques they were compared with. Also, while the focus of earlier DSR-based techniques such as [5, 11, 13] was on dark images, here DFT–DSR is observed to perform well on low-contrast images as well.

6.1 Computational time

The computation time of the algorithm is governed by two factors: the optimal iteration count, \(n_0\), and the complexity of the transformation. For each iteration from 1 to \(n_0\), the computation for the DSR step is \(O(MN)\), while that of the inverse FFT step is \(O(MN \lg (MN))\). Since an inverse transformation is done after each iteration, the total complexity, including the one-time forward Fourier transformation, is \(O(MN \lg (MN)) + n_0 \times O(MN + MN \lg (MN))\). Note that this computation cost excludes the computation of performance metrics after each iteration.

An average dark \(512 \times 512\) image takes around 1–5 s to reach the target iteration count on an Intel(R) Core(TM)2 Duo processor running at 2.53 GHz with 2 GB of RAM. To serve as a platform-independent comparison of computation speeds, the computation costs of various DSR-based algorithms in big-O notation for a standard \(512 \times 512\) grayscale image were studied (and are shown in Table II of Online Resource 1). It is important to note that there is no deterministic guarantee that the same F value will be reached for each individual DSR-based method, because (as explained in the manuscript) the optimal output of each algorithm is bound by constraints of both perceptual quality and enhancement factors. Therefore, gauging speed via only one constraint would not adequately represent the speed of the algorithm (which is also image dependent). We call it ‘image dependent’ because the selection of parameters a and b is done so as to produce maximum noise suppression; therefore, to achieve a fixed suppression ratio, images of different original contrasts may take different numbers of iterations to reach an optimal output.

The time to reach the optimal output in seconds for a sample \(512 \times 512\) grayscale (Pepper) image is also displayed in Table II, Online Resource 1. Due to the different underlying transformations, one iteration of one method may take longer or shorter than one iteration of another. For example, while SVD–DSR gives an optimal iteration count of 5, it takes 12 s to reach it, whereas DFT–DSR reaches its optimal iteration count, 13, in just 1.04 s. As is apparent, the speed of DFT-based processing is second only to intensity-based SR processing.

7 Concluding remarks

A Fourier-based contrast enhancement technique for low-contrast images was studied and presented in this paper. Internal noise due to insufficient illumination was utilized to produce a noise-induced transition of the Fourier coefficients of a low-contrast image into a state of good contrast. The increase in contrast was achieved by iterative scaling of DFT coefficients using an iterative equation that emulates the motion of a particle in a bistable double-well system. The overall effect of the increase in the variance of coefficient magnitudes was analytically shown to be similar to that of coefficient rooting. Performance was gauged in terms of the relative contrast enhancement factor, perceptual quality measure, colorfulness, and subjective quality of the enhanced image. It was found that the proposed DSR-based technique gives improved enhancement over many other existing techniques for both dark and low-contrast images. The approach currently works with experimental values of parameters a and \(\Delta {t}\), but may be extended to a more analytical selection criterion where all parameters are input dependent, following a mathematical model derived in the image domain.