Abstract
High-quality displays tend to consume significant power in mobile devices. Currently, transmissive liquid crystal displays are among the most common. They are non-emissive and rely on a backlight behind the display panel. Brightness compensation is a signal processing technique that adaptively dims the backlight to reduce power consumption, while increasing the pixel values to preserve the visual quality of the images or even enhance their contrast. Fast computation of the brightness compensation algorithm is essential for practical use. In this paper, we show that a state-of-the-art brightness compensation algorithm, which requires iterations and is computationally demanding, can in fact be solved with a closed-form solution. We also demonstrate with experimental results that we can achieve approximately an 800-fold speedup, while providing effectively identical images to those obtained by the original method.
1 Introduction
Due to recent advances in display technology, mobile devices, including smartphones and tablet computers, can display high-quality images. However, the high power consumption of these devices, with the display consuming most of the power [1, 2], shortens their battery life, which is a major disadvantage. Furthermore, as a bigger display generally consumes more power, reducing the power consumption of devices with bigger displays is also an important issue. Therefore, the development of image processing algorithms that reduce the power consumption of display panels has been an important research topic, with much recent research carried out in this area [2–12].
Image processing algorithms for power reduction have to consider the different characteristics of display panels. For example, recent flat panel displays can be categorized into emissive displays and non-emissive displays, depending on the light source [10]. Organic light-emitting diodes (OLEDs) and liquid crystal displays (LCDs) are the most representative examples of emissive and non-emissive displays, respectively. As transmissive LCDs, which use a backlight at the back of the panel, are the most widely used in the commercial market, most image processing techniques for power reduction have been developed for transmissive LCDs [2–9].
The backlight in a transmissive LCD consumes a significant amount of power, and the power consumption is proportional to the intensity of the backlight. Thus, power reduction algorithms dim the backlight while increasing pixel values to preserve the same level of perceived quality. This technique is referred to as brightness compensation. Among a variety of brightness compensation algorithms, one of the most effective methods is the optimized brightness-compensated contrast enhancement (BCCE) algorithm [8]. Given a backlight dimming factor, BCCE formulates the brightness compensation and the contrast enhancement simultaneously as a constrained optimization problem. Then, using convex optimization techniques, the BCCE algorithm obtains the optimal solution that maximizes the brightness-compensated image contrast subject to constraints on visual distortion.
1.1 Motivation and contributions
Although the BCCE algorithm provides high image quality, even when the backlight intensity is reduced to save power, its real-world applications are limited. In particular, as it employs an iterative method to solve the optimization problem, its computational complexity is too high for real-time applications. In this work, to address this limitation of the BCCE algorithm, we develop a computationally efficient brightness compensation algorithm for transmissive LCDs, making it applicable to a wide range of practical applications, e.g., devices with limited computational resources. Specifically, our main contributions are as follows. First, we derive a mathematically rigorous closed-form solution to the BCCE formulation that enables both a significant speedup and a complexity analysis, neither of which is feasible with the iterative solution. Second, we show experimentally that the proposed closed-form solution drastically reduces the computation time, while providing practically identical results to those obtained with the iterative method.
The rest of this paper is organized as follows. Section 2 provides a succinct review of the BCCE algorithm. Section 3 then presents the detailed derivation of the closed-form solution, with the experimental results given in Sect. 4. Concluding remarks are given in Sect. 5.
2 The brightness-compensated contrast enhancement algorithm
In this section, we review the BCCE formulation in [8], on which the proposed algorithm is based.
2.1 Problem
The luminance of a transmissive LCD device, perceived by the human visual system, is determined by the backlight intensity and the transmittance [2]. The transmittance t(k) for pixel value k is modeled by
where \(\omega _1\), \(\omega _2\), and \(\gamma\) are device-dependent parameters [5]. Also, the perceived luminance L of a pixel with value k is given by
where \(B_{\max }\) is the maximum backlight intensity.
During brightness compensation, when the backlight intensity \(B_{\mathrm{max}}\) is reduced with the dimming factor \(b \in [0,1]\) for power reduction, the pixel value k is increased to \(y_k\) to maintain the perceived luminance. In [1], it was experimentally shown with a specific device that the power consumption is an approximately linear function of the backlight intensity, which is controlled by parameter b. From (2), the perceived luminance \(L'\) after brightness compensation is given by \(L' = b \cdot B_{\max } \cdot t(y_k)\). The brightness compensation maintains the perceived luminance, i.e., \(L = L'\), which yields the formula
Then, the transformation function that maps input pixel value k to output pixel value \(y_k\) can be written as \(\mathbf{y}= [y_0, y_1, \ldots , y_{255}]^T\) in a vector notation.
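As a concrete sketch, the compensation curve \(\mathbf{y}\) can be computed as below. Since (1) is device dependent and not reproduced here, the code assumes the transmittance model \(t(k) = (\omega_1 + \omega_2 k/255)^{\gamma}\), consistent with the parameters reported in [5]; solving \(b \cdot t(y_k) = t(k)\) under this assumption gives the closed form in the function body.

```python
import numpy as np

def compensation_curve(b, w1=0.057, w2=1.224, gamma=1.691):
    """Brightness-compensation curve y_k for dimming factor b in (0, 1].

    Assumes t(k) = (w1 + w2 * k / 255) ** gamma (an assumption; the exact
    model is given by (1)). Then b * t(y_k) = t(k) yields the closed form
    below. Note that y_255 may exceed 255; this is precisely the source of
    the clipping loss handled later by the BCCE formulation.
    """
    k = np.arange(256)
    return 255.0 / w2 * (b ** (-1.0 / gamma) * (w1 + w2 * k / 255.0) - w1)
```

For \(b = 1\) (no dimming) the curve reduces to the identity mapping \(y_k = k\), and for \(b < 1\) every pixel value is raised, as expected.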
In addition to brightness compensation, a histogram modification-based global contrast enhancement technique can be used [8]. Specifically, let a column vector \(\mathbf{h}\) represent the histogram, in which the kth element \(h_k\) is the number of pixels with the value k. Then, we convert the input histogram \(\mathbf{h}\) to \(\mathbf{m}= f(\mathbf{h})\), where \(\mathbf{m}= [m_0, m_1, \ldots , m_{255}]^T\) denotes the modified histogram, and \(f: {\mathbb {R}}^{256} \rightarrow {\mathbb {R}}^{256}\) is a vector-converting function. Finally, the transformation function \(\mathbf{x}= [x_0, x_1, \ldots , x_{255}]^T\) is obtained by solving
where \(\mathbf{R}\in {\mathbb {R}}^{256 \times 256}\) is the bidiagonal differential matrix
and the normalized column vector of \(\mathbf{m}\) is given by
Here, the normalized vector \({\bar{\mathbf{m}}}_b\) is scaled up to the range \([0,y_{255}]\), so that the transformation functions \(\mathbf{x}\) and \(\mathbf{y}\) in (4) and (3), respectively, can exploit the same dynamic range.
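The two ingredients above can be sketched as follows. The first-row convention \((\mathbf{R}\mathbf{x})_0 = x_0\) and the normalization of \(\bar{\mathbf{m}}_b\) to sum to \(y_{255}\) are assumptions made for illustration, since the displayed definitions of (5) and (6) are not reproduced here.

```python
import numpy as np

def diff_matrix(n=256):
    # Bidiagonal differential matrix: (R x)_k = x_k - x_{k-1} for k >= 1,
    # with (R x)_0 = x_0 (first-row convention assumed here).
    return np.eye(n) - np.eye(n, k=-1)

def normalized_histogram(m, y255):
    # Scale the modified histogram m so that its entries sum to y255,
    # i.e., so that x and y share the same dynamic range [0, y_255].
    m = np.asarray(m, dtype=float)
    return y255 * m / m.sum()
```

With this convention, \(\mathbf{R}\mathbf{x}\) is the vector of successive differences of the transformation function, so driving \(\mathbf{R}\mathbf{x}\) toward \(\bar{\mathbf{m}}_b\) spreads output levels in proportion to the modified histogram.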
Note that, while we use the histogram modification (HM) technique to explain the BCCE formulation for convenience [8], any global contrast enhancement technique, e.g., [10, 13–18], can be used to obtain \(\mathbf{m}\) in (6), depending on the application and/or the characteristics of input images.
Then, two objectives, the brightness compensation and the contrast enhancement, can be achieved simultaneously by solving the optimization problem
where \(\alpha\) controls the relative importance between the two objectives. Here, \(\Vert \mathbf{R}\mathbf{x}-\mathbf{R}\mathbf{y}\Vert ^2\) is used instead of \(\Vert \mathbf{x}-\mathbf{y}\Vert ^2\), so that it is of the same order of magnitude as \(\Vert \mathbf{Rx} - \bar{\mathbf{m}}_b\Vert ^2\).
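The combined objective in (7) is straightforward to evaluate; the sketch below (function name and interface are illustrative) shows the two quadratic terms explicitly.

```python
import numpy as np

def bcce_objective(x, y, m_bar, R, alpha=0.5):
    # alpha * ||R x - m_bar||^2  +  (1 - alpha) * ||R x - R y||^2
    # (contrast enhancement term)   (brightness compensation term)
    e1 = R @ x - m_bar
    e2 = R @ (x - y)
    return alpha * (e1 @ e1) + (1.0 - alpha) * (e2 @ e2)
```

At \(\alpha = 1\) the problem reduces to pure histogram-based enhancement, and at \(\alpha = 0\) it reduces to pure brightness compensation, i.e., \(\mathbf{x} = \mathbf{y}\).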
2.2 Formulation
The transformation function \(\mathbf{x}\), obtained by solving (7), may produce pixel values greater than the maximum displayable value, e.g., 255 for an 8-bit system. In such a case, they should be restricted to the displayable range, leading to visual information loss. The information loss \(x_{c,k}\) for an output value \(x_k\) is quantified by the pixel value difference, i.e., \(x_{c,k} = \max \{ x_k - 255, 0 \}\) [8]. Then, the total information loss \(D(\mathbf{x})\) when displaying an output image is defined as
where \(\mathbf{x}_c = [x_{c,0}, x_{c,1}, \ldots , x_{c,255}]^T\), and \(\mathbf{H}= \text{ diag }(\mathbf{h})\).
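The information loss in (8) is a histogram-weighted sum of squared clipping amounts and can be computed directly:

```python
import numpy as np

def information_loss(x, h):
    # D(x) = x_c^T H x_c, where x_c = max(x - 255, 0) element-wise
    # and H = diag(h) weights each level by its pixel count.
    xc = np.maximum(np.asarray(x, dtype=float) - 255.0, 0.0)
    h = np.asarray(h, dtype=float)
    return float(xc @ (h * xc))
```

The loss is zero whenever all output values stay within the displayable range, and it grows quadratically with the amount of clipping at each populated level.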
Then, the goal is to solve the optimization in (7) subject to the constraint on the information loss \(D(\mathbf{x})\) in (8). This can be formulated as a constrained optimization problem, given by
where \(\lambda\) controls the trade-off between the BCCE and the information loss, and \(\succeq\) denotes the element-wise inequality between two vectors. In (9), the equality constraints \(x_0 = y_0\) and \(x_{255} = y_{255}\) restrict the output range of \(\mathbf{x}\) to the full dynamic range of the brightness compensation in (3), whereas the inequality constraint imposes the monotonicity of \(\mathbf{x}\).
Previously, the optimization problem in (9) was solved using the interior-point method [19]; however, as the process is iterative, it requires significant computational complexity and is unsuitable for real-time applications [8]. An approximation also exists in [20], but it may not satisfy all the constraints in (9), potentially causing degradations in image quality.
3 Closed-form solution
In this section, we derive a closed-form solution to the constrained optimization problem in (9), which is computationally efficient while satisfying all three constraints.
3.1 Convexity
We first show that the optimization in (9) is a convex optimization problem, which ensures that a closed-form solution, if it can be found, is a global solution. Specifically, since the first and second terms of the cost function in (9) are quadratic, we only show the convexity of the third term \(\mathbf{x}_c^T\mathbf{H}\mathbf{x}_c\).
Proof
We decompose the function \(D(\mathbf{x}) = \mathbf{x}_c^T\mathbf{H}\mathbf{x}_c= g(f(\mathbf{x})) = g(f_0(\mathbf{x}), \ldots , f_{255}(\mathbf{x}))\) with \(g(\mathbf{x}) = \Vert \mathbf{x}\Vert ^2\) and \(f(\mathbf{x}) = \mathbf{H}^{\frac{1}{2}}\mathbf{x}_c\), where the kth diagonal element of \(\mathbf{H}^{\frac{1}{2}}\) is \(h_k^{\frac{1}{2}}\). The function g is convex and non-decreasing in each argument. Also, the functions \(f_k(\mathbf{x})\)’s are convex, since each \(f_k(\mathbf{x})\) satisfies, for \(0 \le \theta \le 1\) and \(0 \le k \le 255\),
Therefore, \(D(\mathbf{x}) = g(f(\mathbf{x}))\) is a convex function of \(\mathbf{x}\), since the composition of a convex non-decreasing function and a convex function is convex [21]. Then, as the cost function is a convex function and the feasible constraint sets are convex, the problem in (9) is a convex optimization problem. \(\square\)
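The convexity of \(D\) can also be verified numerically: along any segment between two points, \(D\) must satisfy the defining inequality \(D(\theta \mathbf{x} + (1-\theta)\mathbf{z}) \le \theta D(\mathbf{x}) + (1-\theta) D(\mathbf{z})\). A small illustrative check (the random histogram and test points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.integers(0, 100, size=256).astype(float)  # arbitrary histogram

def D(x):
    # Information loss D(x) = x_c^T H x_c with x_c = max(x - 255, 0).
    xc = np.maximum(x - 255.0, 0.0)
    return xc @ (h * xc)

# Jensen-type convexity check along a segment between two random points.
x = rng.uniform(0.0, 400.0, size=256)
z = rng.uniform(0.0, 400.0, size=256)
for theta in np.linspace(0.0, 1.0, 11):
    lhs = D(theta * x + (1.0 - theta) * z)
    rhs = theta * D(x) + (1.0 - theta) * D(z)
    assert lhs <= rhs + 1e-9  # convexity inequality holds
```

Such a check is of course no substitute for the proof above; it merely illustrates the inequality on concrete data.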
3.2 Closed-form solution
To solve the optimization problem, we define the Lagrangian \({\mathscr {L}}:{\mathbb {R}}^{256} \times {\mathbb {R}}^{256} \times {\mathbb {R}}^{256} \rightarrow {\mathbb {R}}\) associated with the problem in (9) as
where \({\varvec{\mu }}= [\mu _0, \mu _1, \ldots , \mu _{255}]^T \in {\mathbb {R}}^{256}\) and \({\varvec{\nu }}= [\nu _0, \nu _1, \ldots , \nu _{255}]^T \in {\mathbb {R}}^{256}\) are Lagrange multiplier vectors for the constraints, and \(\mathbf{I}_0 \in {\mathbb {R}}^{256 \times 256}\) is a matrix in which the first and last diagonal elements are one and the other elements are zero. Then, we can write the Karush–Kuhn–Tucker (KKT) conditions [21] for (11) as
where \(\mathbf{N}= \text{ diag }({\varvec{\nu }})\) and \(\mathbf{d}= 2\alpha {\bar{\mathbf{m}}}_b + 2(1-\alpha )\mathbf{R}\mathbf{y}\). The stationary condition in (16) is obtained by multiplying both sides of \(\frac{\partial {\mathscr {L}}(\mathbf{x},{\varvec{\mu }},{\varvec{\nu }})}{\partial \mathbf{x}} = \mathbf{0}\) by \(\mathbf{R}^{-T}\), because the matrix \(\mathbf{R}^T\) has full rank. Also, note that, in (16), since \(\mathbf{x}_c^T\mathbf{H}\mathbf{x}_c\) is not differentiable with respect to \(\mathbf{x}\), we use its subgradient \(2\mathbf{H}\mathbf{x}_c\) instead of the gradient.
We first expand the vector notations in (16) to obtain a system of equations and subtract the kth equation from the \((k+1)\)th one to eliminate \(\mu _{255}\). Then, we have a recursive system, i.e.,
Here, we denote \(\mu = \mu _0\) for notational simplicity. Note that, since \(x_0\) is fixed to a known value \(y_0\) by the minimum-value constraint in (9), we treat \(x_0\) as a constant instead of the optimization variable. By substituting \(x_k - x_{k-1}\) on the right-hand side recursively with the previous equations, we can rewrite (17) as
We can eliminate all \(\nu _k\) values for \(k \ge 1\) from (18) using (13)–(15) and express \(x_k\) values as a closed-form formula, which is a function of a single variable \(\mu\), provided \(\{x_0,\ldots ,x_{k-1}\}\) are given, i.e.,
Proof
We rewrite the KKT conditions in (13)–(15) as
for all \(k \ge 0\), and \(x_0 \ge 0\) and \(\nu _0 x_0 = 0\). We first assume that \(x_0 > 0\), thus \(\nu _0 = 0\). However, even when \(x_0 = 0\), we can similarly derive the solution by denoting \(\mu = \mu _0 - \nu _0\) in (17).
Let us assume that we have \(x_k\) for \(k \ge 0\). Then, from (18), we have
We consider two cases:
-
Case 1: \(x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} > 0\).
-
Case 2: \(x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} \le 0\).
In Case 1, \(x_{k+1} - x_k > 0\). Then, \(\nu _{k+1} = 0\) from (22), and
In Case 2, since \(x_{k+1} - x_k \le \frac{\nu _{k+1}}{2}\), \((x_{k+1} - x_k)^2 \le \frac{\nu _{k+1}}{2}(x_{k+1} - x_k) = 0\). Therefore,
Finally, combining (24) and (25) and re-indexing \(k+1\) as \(k\), we obtain the closed-form formula in (19). \(\square\)
Notice that each \(x_k\) in (19) is a monotonically increasing function of the single variable \(\mu\), which we denote by \(x_k = g_k(\mu )\). Then, we can find the \(\mu\) that satisfies the maximum-value constraint in (9). Specifically, we define a function
and find the solution to \(f(\mu ) = 0\). Since \(f(\mu )\) is a monotonically increasing function, there exists a unique solution to \(f(\mu ) = 0\). In this work, we employ the secant method [22] to find the unique solution iteratively as similarly done in [10]. Let \(\mu ^{(n)}\) denote the value of \(\mu\) at the nth iteration. Then, we obtain the solution \(\mu\) by applying the secant formula
until convergence. More specifically, we define the convergence measure \(\xi ^{(n)} = \big |\mu ^{(n)} - \mu ^{(n-1)}\big |\) and run the iteration until either \(\xi ^{(n)} < 10^{-5}\) or \(\big |f\big (\mu ^{(n)}\big )\big | < 10^{-10}\). Finally, from \(\mu\), we compute all elements of the optimal transformation function \(\mathbf{x}\) via (19), which achieves the brightness-compensated contrast enhancement and minimizes the information loss simultaneously, subject to the minimum-value, maximum-value, and monotonicity constraints in (9).
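The secant iteration with the two stopping rules above can be sketched as follows. A toy monotone function stands in for \(f(\mu) = g_{255}(\mu) - y_{255}\), since evaluating the true \(f\) requires the recursion in (19); the function name and starting points are illustrative.

```python
def secant_root(f, mu0, mu1, tol_step=1e-5, tol_val=1e-10, max_iter=100):
    """Secant iteration for a monotone f, mirroring the stopping rules
    |mu^(n) - mu^(n-1)| < tol_step  or  |f(mu^(n))| < tol_val."""
    f0, f1 = f(mu0), f(mu1)
    for _ in range(max_iter):
        if f1 == f0:  # flat secant; cannot update further
            break
        # mu^(n+1) = mu^(n) - f(mu^(n)) (mu^(n) - mu^(n-1)) / (f(mu^(n)) - f(mu^(n-1)))
        mu2 = mu1 - f1 * (mu1 - mu0) / (f1 - f0)
        f2 = f(mu2)
        if abs(mu2 - mu1) < tol_step or abs(f2) < tol_val:
            return mu2
        mu0, f0, mu1, f1 = mu1, f1, mu2, f2
    return mu1
```

Because \(f\) is monotonically increasing with a unique root, the iteration converges quickly in practice, matching the small iteration counts reported in Sect. 4.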
4 Experimental results
We evaluate the performance of the proposed solution on 24 images from the Kodak Lossless True Color Image Suite [23], including the Building, Hats, Window, and Stream images in Fig. 1. The test images have a resolution of \(768 \times 512\). We simulate the case when the backlight intensity is halved, i.e., \(b = 0.5\), and employ histogram equalization for global contrast enhancement by setting \(\mathbf{m}= \mathbf{h}\) in (6). However, as mentioned in Sect. 2.1, any global contrast enhancement technique can be used to obtain \(\mathbf{m}\). The parameters \(\alpha\) and \(\lambda\) in (9) are fixed to 0.5 and \(\frac{1}{\mathbf{1}^T \mathbf{h}}\), respectively, in all experiments as done in [8]. Also, the device parameters \(\omega _1\), \(\omega _2\), and \(\gamma\) in (1) are set to 0.057, 1.224, and 1.691, respectively, which were found experimentally in [5]. For reproducibility, we provide MATLAB code on our project website.
Figure 1 compares the brightness-compensated images, corresponding to the perceived luminance levels, obtained by the proposed closed-form solution with those by Tsai et al.’s algorithm [2], the iterative solution [8], and the approximate solution [20]. In Fig. 1b, Tsai et al.’s algorithm improves local contrast but darkens the images compared with the original images in Fig. 1a. On the contrary, the iterative solution [8], the approximate solution [20], and the closed-form solution in Fig. 1c–e, respectively, yield output images of high quality, better preserving brightness and enhancing global image contrast.
Figure 2 compares the transformation functions that yield the output images in Fig. 1. Tsai et al.’s algorithm produces identical transformation functions for images with different characteristics, i.e., Building, Window, and Stream, which indicates that the algorithm is less adaptive. The approximate solutions darken the output images in low-intensity regions and violate the minimum-value constraint \(x_0 = y_0\) in (9), because the constraint is omitted in the approximation for simplicity. For example, for all images, the approximate solutions map the input pixel range [0, 50] to output pixel values that are lower by about 10 than those of the iterative and closed-form solutions. These lowered transformation functions may cause perceptual detail losses in dark regions of the images, e.g., the doors in the Building image and the shadows in the Hats image. In contrast, the transformation functions obtained by the iterative solution and the closed-form solution are almost identical, which confirms the accuracy of the closed-form solution.
In addition to subjective assessment, we compare the proposed closed-form solution with the conventional algorithms using three objective quality metrics: absolute mean brightness error (AMBE) [24], discrete entropy (DE) [25], and measure of enhancement (EME) [26]. Table 1 lists the average performance over all test images [23]. A low AMBE indicates that the algorithm well preserves the brightness of an input image, while high DE and EME imply that the image contains more details. The results demonstrate that the closed-form solution provides better performance than Tsai et al.’s algorithm and is comparable to the iterative and approximate solutions.
Table 2 compares the computational complexities and lists the average computation times to process all test images in [23] on a PC with a 2.6 GHz CPU and 8 GB RAM. The iterative solution requires the longest computation time; the approximate solution shortens the time significantly but sacrifices image quality. The closed-form solution reduces the time drastically further, while producing transformation functions identical to those of the iterative solution. Specifically, the closed-form solution runs about 796 and 8.9 times faster than the iterative and approximate solutions, respectively. Note that, since big \({\mathscr {O}}\) notation ignores coefficients, the closed-form solution requires less time than Tsai et al.’s algorithm despite the constant M. We apply the secant formula in (27) iteratively to find the solution to \(f(\mu ) = 0\); the average number of iterations over all test images is only 8.79. Therefore, the proposed closed-form solution is efficient enough to be employed in a wide range of applications, even in devices with limited computational resources. Moreover, if it were implemented in hardware, such as a field-programmable gate array (FPGA), its computation time would be reduced significantly further.
We also examine how the image size affects the computation time of each method. Table 3 compares the average computation times over 10 trials of the proposed closed-form solution with those of Tsai et al.’s algorithm and the iterative and approximate solutions. For this comparison, we resize an input image with resolution \(3840 \times 2160\), which corresponds to 4K ultra-high-definition (UHD), with scaling factors from 0.2 to 1, and then apply each method. We observe that the computation times of the iterative solution remain nearly constant, while those of Tsai et al.’s algorithm, the approximate solution, and the closed-form solution increase with the image resolution. This is because the optimization procedure is dominant for the iterative solution, whereas the complexities of the contrast enhancement and the optimization are comparable for the approximate and closed-form solutions. Also, since the contrast enhancement is dominant for Tsai et al.’s algorithm, its complexity grows substantially as the image resolution increases. However, the computation times of the closed-form solution are significantly lower than those of the competing algorithms, even for large images. For example, for images with 4K resolution, which corresponds to the relative resolution of 1.0, the closed-form solution runs about 76.9, 162.9, and 3.34 times faster than Tsai et al.’s algorithm, the iterative solution, and the approximate solution, respectively. Thus, the closed-form solution derived in this work is applicable to a wide range of devices, from mobile devices with small displays to large televisions.
Finally, in addition to the image size, we compare how the bit depth of an image affects the computation times of the iterative, approximate, and closed-form solutions. In this test, we take a test image of resolution \(692 \times 462\) with 12-bit depth as input, reduce the bit depth by one bit recursively, and then apply the BCCE algorithm using each method. Table 4 shows the computation times for each method. As the bit depth increases, the computation times of the iterative and approximate solutions grow prohibitively high for practical applications. On the contrary, the computation times of the closed-form solution remain significantly lower than those of the iterative and approximate solutions, even at high bit depths. Furthermore, the times of the iterative and approximate solutions relative to the closed-form solution increase with the bit depth. For example, at 8-bit depth, the ratio of the computation times of the iterative, approximate, and closed-form solutions is 617:8:1, but it increases to 10008:305:1 at 12-bit depth. Therefore, from this test, we conclude that the proposed closed-form solution is applicable to devices not only with larger displays but also with higher bit depths, compared with the conventional methods [8, 20].
5 Conclusions
We developed a computationally efficient brightness compensation algorithm for transmissive LCDs in this work. We first demonstrated that the approach given in [8] leads to a convex optimization problem. Then, we derived a closed-form solution for the optimization, as opposed to the iterative solution given in that work. Experimental results show that the closed-form solution runs about 800 and 9 times faster than the iterative and approximate solutions in [8, 20], respectively, while providing identical or better results. In addition, we showed that the closed-form solution can also be applied to devices with high-resolution and higher bit depth displays, e.g., 4K displays. An important direction for future work is to incorporate more effective local contrast enhancement techniques [27, 28] into the formulation for brightness compensation.
References
Carroll, A., Heiser, G.: An analysis of power consumption in a smartphone. In: Proceedings of the USENIX Annual Technical Conference (2010)
Tsai, P.S., Liang, C.K., Huang, T.H., Chen, H.H.: Image enhancement for backlight-scaled TFT-LCD displays. IEEE Trans. Circuits Syst. Video Technol. 19(4), 574–583 (2009)
Chang, N., Choi, I., Shim, H.: DLS: dynamic backlight luminance scaling of liquid crystal display. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 12(8), 837–846 (2004)
Iranli, A., Lee, W., Pedram, M.: HVS-aware dynamic backlight scaling in TFT-LCDs. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 14(10), 1103–1116 (2006)
Ruggiero, M., Bartolini, A., Benini, L.: DBS4video: Dynamic luminance backlight scaling based on multi-histogram frame characterization for video streaming application. In: Proceedings of the ACM International Conference on Embedded Software, pp. 109–118 (2008)
Cho, S.I., Kang, S.J., Kim, Y.H.: Image quality-aware backlight dimming with color and detail enhancement techniques. J. Display Technol. 9(2), 112–121 (2013)
Huang, T.H., Shih, K.T., Yeh, S.L., Chen, H.H.: Enhancement of backlight-scaled images. IEEE Trans. Image Process. 22(12), 4587–4597 (2013)
Lee, C., Kim, J.H., Lee, C., Kim, C.S.: Optimized brightness compensation and contrast enhancement for transmissive liquid crystal displays. IEEE Trans. Circuits Syst. Video Technol. 24(4), 576–590 (2014)
Shih, K.T., Chen, H.H.: Exploiting perceptual anchoring for color image enhancement. IEEE Trans. Multimedia 18(2), 300–310 (2016)
Lee, C., Lee, C., Lee, Y.Y., Kim, C.S.: Power-constrained contrast enhancement for emissive displays based on histogram equalization. IEEE Trans. Image Process. 21(1), 80–93 (2012)
Dong, M., Zhong, L.: Power modeling and optimization for OLED displays. IEEE Trans. Mobile Comput. 11(9), 1587–1599 (2012)
Nam, Y.O., Choi, D.Y., Song, B.C.: Power-constrained contrast enhancement algorithm using multiscale retinex for OLED display. IEEE Trans. Image Process. 23(8), 3308–3320 (2014)
Wang, Q., Ward, R.K.: Fast image/video contrast enhancement based on weighted thresholded histogram equalization. IEEE Trans. Consum. Electron. 53(2), 757–764 (2007)
Arici, T., Dikbas, S., Altunbasak, Y.: A histogram modification framework and its application for image contrast enhancement. IEEE Trans. Image Process. 18(9), 1921–1935 (2009)
Celik, T., Tjahjadi, T.: Contextual and variational contrast enhancement. IEEE Trans. Image Process. 20(12), 3431–3441 (2011)
Cheng, F.C., Huang, S.C.: Efficient histogram modification using bilateral Bezier curve for the contrast enhancement. J. Display Technol. 9(1), 44–50 (2013)
Huang, S.C., Cheng, F.C., Chiu, Y.S.: Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Trans. Image Process. 22(3), 1032–1041 (2013)
Huang, S.C., Chen, W.C.: A new hardware-efficient algorithm and reconfigurable architecture for image contrast enhancement. IEEE Trans. Image Process. 23(10), 4426–4437 (2014)
Byrd, R.H., Hribar, M.E., Nocedal, J.: An interior point algorithm for large-scale nonlinear programming. SIAM J. Optim. 9(4), 877–900 (1999)
Lee, C., Kim, J.H., Lee, C., Kim, C.S.: Power-constrained backlight scaling and contrast enhancement for TFT-LCD displays. In: Proceedings of the IEEE International Conference on Image Processing, pp. 2793–2796 (2012)
Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge Univ. Press, Cambridge, U.K. (2004)
Press, W., Teukolsky, S., Vetterling, W., Flannery, B.: Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. Cambridge Univ. Press, Cambridge, U.K. (1992)
Kodak Lossless True Color Image Suite. http://r0k.us/graphics/kodak/
Chen, S.D., Ramli, A.R.: Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans. Consum. Electron. 49(4), 1310–1319 (2003)
Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 623–656 (1948)
Agaian, S.S., Silver, B., Panetta, K.A.: Transform coefficient histogram-based image enhancement algorithms using contrast entropy. IEEE Trans. Image Process. 16(3), 741–758 (2007)
Ke, W.M., Chen, C.R., Chiu, C.T.: BiTA/SWCE: image enhancement with bilateral tone adjustment and saliency weighted contrast enhancement. IEEE Trans. Circuits Syst. Video Technol. 21(3), 360–364 (2011)
Wang, S., Gu, K., Ma, S., Lin, W., Liu, X., Gao, W.: Guided image contrast enhancement based on retrieved images in cloud. IEEE Trans. Multimedia 18(2), 219–232 (2016)
Acknowledgements
This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2016R1C1B2010319) and in part by the Pukyong National University Research Fund in 2015 (CD20151264).
Lee, C., Lam, E.Y. Computationally efficient brightness compensation and contrast enhancement for transmissive liquid crystal displays. J Real-Time Image Proc 14, 733–741 (2018). https://doi.org/10.1007/s11554-016-0665-0