1 Introduction

Due to recent advancements in display technology, mobile devices, including smartphones and tablet computers, can display high-quality images. However, these devices consume a large amount of power, most of it in the display [1, 2], and the resulting short battery lifespan is a major disadvantage. Furthermore, as a bigger display generally consumes more power, reducing the power consumption of devices with bigger displays is also an important issue. Therefore, the development of image processing algorithms that can reduce the power consumption of display panels has been an important research topic, with much recent research carried out in this area [2–12].

Image processing algorithms for power reduction have to consider the different characteristics of display panels. For example, recent flat panel displays can be categorized into emissive and non-emissive displays, depending on the light source [10]. Organic light-emitting diodes (OLEDs) and liquid crystal displays (LCDs) are the most representative examples of emissive and non-emissive displays, respectively. As transmissive LCDs, which use a backlight behind the panel, are the most widely used in the commercial market, most image processing techniques for power reduction have been developed for transmissive LCDs [2–9].

The backlight in a transmissive LCD consumes a significant amount of power, and the power consumption is proportional to the intensity of the backlight. Thus, power reduction algorithms dim the backlight while increasing pixel values to preserve the same level of perceived quality. This technique is referred to as brightness compensation. Among a variety of brightness compensation algorithms, one of the most effective methods is the optimized brightness-compensated contrast enhancement (BCCE) algorithm [8]. Given a backlight dimming factor, BCCE formulates the brightness compensation and the contrast enhancement simultaneously as a constrained optimization problem. Then, using convex optimization techniques, the BCCE algorithm obtains the optimal solution that maximizes the brightness-compensated image contrast subject to constraints on visual distortion.

1.1 Motivation and contributions

Although the BCCE algorithm provides high image quality, even when the backlight intensity is reduced to save power, its real-world applications are limited. In particular, as it employs an iterative method to solve the optimization problem, its computational complexity is too high for real-time applications. In this work, to address this limitation of the BCCE algorithm, we develop a computationally efficient brightness compensation algorithm for transmissive LCDs, making it applicable to a wide range of practical applications, e.g., devices with limited computational resources. Specifically, our main contributions in this paper are as follows: First, we derive a mathematically rigorous closed-form solution to the BCCE formulation that enables both a significant improvement in speed and a complexity analysis, neither of which is feasible with the iterative solution. Second, we experimentally show that the proposed closed-form solution drastically reduces the computation time, while providing practically identical results to those obtained with the iterative method.

The rest of this paper is organized as follows. Section 2 provides a succinct review of the BCCE algorithm. Section 3 then presents the detailed derivation of the closed-form solution, with the experimental results given in Sect. 4. Concluding remarks are given in Sect. 5.

2 The brightness-compensated contrast enhancement algorithm

In this section, we review the BCCE formulation in [8], on which the proposed algorithm is based.

2.1 Problem

The luminance of a transmissive LCD device, perceived by the human visual system, is determined by the backlight intensity and the transmittance [2]. The transmittance t(k) for pixel value k is modeled by

$$\begin{aligned} t(k) = \omega _1 + \omega _2 \cdot (k/255)^\gamma , \end{aligned}$$
(1)

where \(\omega _1\), \(\omega _2\), and \(\gamma\) are device-dependent parameters [5]. Also, the perceived luminance L of a pixel with value k is given by

$$\begin{aligned} L = B_{\max } \cdot t(k), \end{aligned}$$
(2)

where \(B_{\max }\) is the maximum backlight intensity.
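For concreteness, the display model in (1) and (2) can be sketched in a few lines of Python (an illustrative helper of our own, not part of the original work; the parameter values are the device-dependent ones reported in [5]):

```python
import numpy as np

# Device-dependent parameters reported in [5]; used here for illustration only.
OMEGA1, OMEGA2, GAMMA = 0.057, 1.224, 1.691

def transmittance(k):
    """Eq. (1): transmittance t(k) for pixel value k in [0, 255]."""
    return OMEGA1 + OMEGA2 * (np.asarray(k, dtype=float) / 255.0) ** GAMMA

def perceived_luminance(k, b_max=1.0, b=1.0):
    """Eq. (2) with dimming factor b: L = b * B_max * t(k)."""
    return b * b_max * transmittance(k)
```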

During brightness compensation, when the backlight intensity \(B_{\mathrm{max}}\) is reduced with the dimming factor \(b \in [0,1]\) for power reduction, the pixel value k is increased to \(y_k\) to maintain the perceived luminance. In [1], it was experimentally shown with a specific device that the power consumption is an approximately linear function of the backlight intensity, which is controlled by parameter b. From (2), the perceived luminance \(L'\) after brightness compensation is given by \(L' = b \cdot B_{\max } \cdot t(y_k)\). The brightness compensation maintains the perceived luminance, i.e., \(L = L'\), which yields the formula

$$\begin{aligned} y_k = 255 \cdot \left( \frac{\omega _1 (1- b) + \omega _2 (k/255)^\gamma }{\omega _2 b} \right) ^{1/\gamma }. \end{aligned}$$
(3)

Then, the transformation function that maps input pixel value k to output pixel value \(y_k\) can be written in vector notation as \(\mathbf{y}= [y_0, y_1, \ldots , y_{255}]^T\).
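As a rough sketch (assuming NumPy; the function name `compensate` is ours), the compensated transformation in (3) can be computed for all 256 pixel values at once:

```python
import numpy as np

def compensate(k, b, w1=0.057, w2=1.224, gamma=1.691):
    """Eq. (3): compensated pixel value y_k that preserves the perceived
    luminance when the backlight is dimmed by factor b in (0, 1]."""
    k = np.asarray(k, dtype=float)
    return 255.0 * ((w1 * (1.0 - b) + w2 * (k / 255.0) ** gamma)
                    / (w2 * b)) ** (1.0 / gamma)

y = compensate(np.arange(256), b=0.5)  # transformation vector y
```

Note that (3) reduces to the identity mapping at \(b = 1\), and for \(b < 1\) the compensated values \(y_k\) may exceed 255, which motivates the information loss term discussed in Sect. 2.2.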

In addition to brightness compensation, a histogram modification-based global contrast enhancement technique can be used [8]. Specifically, let a column vector \(\mathbf{h}\) represent the histogram, in which the kth element \(h_k\) is the number of pixels with the value k. Then, we convert the input histogram \(\mathbf{h}\) to \(\mathbf{m}= f(\mathbf{h})\), where \(\mathbf{m}= [m_0, m_1, \ldots , m_{255}]^T\) denotes the modified histogram, and \(f: {\mathbb {R}}^{256} \rightarrow {\mathbb {R}}^{256}\) is a vector-converting function. Finally, the transformation function \(\mathbf{x}= [x_0, x_1, \ldots , x_{255}]^T\) is obtained by solving

$$\begin{aligned} \mathbf{R}\mathbf{x}= {\bar{\mathbf{m}}}_b, \end{aligned}$$
(4)

where \(\mathbf{R}\in {\mathbb {R}}^{256 \times 256}\) is the bidiagonal differential matrix

$$\begin{aligned} \mathbf{R}= \begin{bmatrix} 1&0&0&\cdots&0&0 \\ -1&1&0&\cdots&0&0 \\ 0&-1&1&\cdots&0&0 \\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots \\ 0&0&0&\cdots&1&0 \\ 0&0&0&\cdots&-1&1 \\ \end{bmatrix} \end{aligned}$$
(5)

and the normalized column vector of \(\mathbf{m}\) is given by

$$\begin{aligned} {\bar{\mathbf{m}}}_b = \frac{y_{255}}{\mathbf{1}^T \mathbf{m}} \mathbf{m}. \end{aligned}$$
(6)

Here, the normalized vector \({\bar{\mathbf{m}}}_b\) is scaled up to the range \([0,y_{255}]\), so that the transformation functions \(\mathbf{x}\) and \(\mathbf{y}\) in (4) and (3), respectively, can exploit the same dynamic range.
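The construction in (4)–(6) might be implemented as follows (a non-authoritative Python sketch with our own function names; since \(\mathbf{R}\) is lower bidiagonal with unit diagonal, solving (4) reduces to a cumulative sum):

```python
import numpy as np

def differential_matrix(n=256):
    """Eq. (5): lower-bidiagonal R with 1 on the diagonal and -1 below it."""
    return np.eye(n) - np.eye(n, k=-1)

def normalized_histogram(m, y_255):
    """Eq. (6): scale the modified histogram m so its elements sum to y_255."""
    m = np.asarray(m, dtype=float)
    return (y_255 / m.sum()) * m

def solve_transformation(m_bar):
    """Solve R x = m_bar in Eq. (4); with R as in Eq. (5), the solution
    is simply the cumulative sum of m_bar."""
    return np.cumsum(m_bar)
```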

Note that, while we use histogram modification (HM) techniques to explain the BCCE formulation for convenience [8], any global contrast enhancement technique, e.g., [10, 13–18], can be used to obtain \(\mathbf{m}\) in (6), depending on the application and/or the characteristics of input images.

Then, two objectives, the brightness compensation and the contrast enhancement, can be achieved simultaneously by solving the optimization problem

$$\begin{aligned} \underset{\mathbf{x}}{\text{ minimize }} \;\; \alpha \Vert \mathbf{R}\mathbf{x}-{\bar{\mathbf{m}}}_b\Vert ^2 + (1-\alpha )\Vert \mathbf{R}\mathbf{x}-\mathbf{R}\mathbf{y}\Vert ^2, \end{aligned}$$
(7)

where \(\alpha\) controls the relative importance of the two objectives. Here, \(\Vert \mathbf{R}\mathbf{x}-\mathbf{R}\mathbf{y}\Vert ^2\) is used instead of \(\Vert \mathbf{x}-\mathbf{y}\Vert ^2\), so that it is of the same order of magnitude as \(\Vert \mathbf{Rx} - \bar{\mathbf{m}}_b\Vert ^2\).
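The objective in (7) can be evaluated directly from the transformation functions; the sketch below (our own illustration, assuming NumPy) uses the fact that \(\mathbf{R}\mathbf{x}\) is the vector of first differences \([x_0, x_1 - x_0, \ldots ]^T\):

```python
import numpy as np

def bcce_cost(x, y, m_bar, alpha=0.5):
    """Eq. (7): alpha * ||R x - m_bar||^2 + (1 - alpha) * ||R x - R y||^2,
    where R x is the first-difference vector [x_0, x_1 - x_0, ...]."""
    rx = np.diff(x, prepend=0.0)
    ry = np.diff(y, prepend=0.0)
    return alpha * np.sum((rx - m_bar) ** 2) + (1.0 - alpha) * np.sum((rx - ry) ** 2)
```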

2.2 Formulation

The transformation function \(\mathbf{x}\), obtained by solving (7), may produce pixel values greater than the maximum displayable value, e.g., 255 for an 8-bit system. In such a case, they should be restricted to the displayable range, leading to visual information loss. The information loss \(x_{c,k}\) for an output value \(x_k\) is quantified by the pixel value difference, i.e., \(x_{c,k} = \max \{ x_k - 255, 0 \}\) [8]. Then, the total information loss \(D(\mathbf{x})\) when displaying an output image is defined as

$$\begin{aligned} D(\mathbf{x}) = \sum _{k=0}^{255} h_k x_{c,k}^2 = \mathbf{x}_c^T \mathbf{H}\mathbf{x}_c, \end{aligned}$$
(8)

where \(\mathbf{x}_c = [x_{c,0}, x_{c,1}, \ldots , x_{c,255}]^T\), and \(\mathbf{H}= \text{ diag }(\mathbf{h})\).
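The information loss in (8) is straightforward to compute; a minimal Python sketch (our own naming) is:

```python
import numpy as np

def information_loss(x, h):
    """Eq. (8): D(x) = sum_k h_k * max(x_k - 255, 0)^2."""
    xc = np.maximum(np.asarray(x, dtype=float) - 255.0, 0.0)
    return float(np.sum(np.asarray(h, dtype=float) * xc ** 2))
```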

Then, the goal is to solve the optimization in (7) subject to the constraint on the information loss \(D(\mathbf{x})\) in (8). This can be formulated as a constrained optimization problem, given by

$$\begin{aligned} \begin{array}{ll} \underset{\mathbf{x}}{\text{ minimize }} &{} \; \alpha \Vert \mathbf{R}\mathbf{x}-{\bar{\mathbf{m}}}_b\Vert ^2 + (1-\alpha )\Vert \mathbf{R}\mathbf{x}-\mathbf{R}\mathbf{y}\Vert ^2 + \lambda \mathbf{x}_c^T\mathbf{H}\mathbf{x}_c\\ \text{ subject } \text{ to } &{} \; x_0 = y_0, \\ &{} \; x_{255} = y_{255}, \\ &{} \; \mathbf{R}\mathbf{x}\succeq \mathbf{0}, \end{array} \end{aligned}$$
(9)

where \(\lambda\) controls the trade-off between the BCCE and the information loss, and \(\succeq\) denotes the element-wise inequality between two vectors. In (9), the equality constraints \(x_0 = y_0\) and \(x_{255} = y_{255}\) restrict the output range of \(\mathbf{x}\) to the full dynamic range of the brightness compensation in (3), whereas the inequality constraint imposes the monotonicity of \(\mathbf{x}\).

Previously, the optimization problem in (9) was solved using the interior-point method [19]; however, as the process is iterative, it incurs a high computational cost and is unsuitable for real-time applications [8]. An approximation also exists in [20], but it may not satisfy all the constraints in (9), potentially degrading image quality.

3 Closed-form solution

In this section, we derive a closed-form solution to the constrained optimization problem in (9), which is computationally efficient while satisfying all three constraints.

3.1 Convexity

We first show that the problem in (9) is a convex optimization problem, which ensures that a closed-form solution, if one can be found, is a globally optimal solution. Since the first and second terms of the cost function in (9) are convex quadratics, we need only show the convexity of the third term \(\mathbf{x}_c^T\mathbf{H}\mathbf{x}_c\).

Proof

We decompose the function \(D(\mathbf{x}) = \mathbf{x}_c^T\mathbf{H}\mathbf{x}_c= g(f(\mathbf{x})) = g(f_0(\mathbf{x}), \ldots , f_{255}(\mathbf{x}))\) with \(g(\mathbf{x}) = \Vert \mathbf{x}\Vert ^2\) and \(f(\mathbf{x}) = \mathbf{H}^{\frac{1}{2}}\mathbf{x}_c\), where the kth diagonal element of \(\mathbf{H}^{\frac{1}{2}}\) is \(h_k^{\frac{1}{2}}\). The function g is convex and non-decreasing in each argument. Also, the functions \(f_k(\mathbf{x})\)’s are convex, since each \(f_k(\mathbf{x})\) satisfies, for \(0 \le \theta \le 1\) and \(0 \le k \le 255\),

$$\begin{aligned}&f_k(\theta \mathbf{x}+ (1-\theta )\mathbf{y}) \nonumber \\&{} \quad = h_k^\frac{1}{2} \max \{\theta x_k + (1-\theta ) y_k - 255, 0\} \nonumber \\&{} \quad = h_k^\frac{1}{2} \max \{\theta (x_k - 255) + (1-\theta )(y_k - 255), 0\} \nonumber \\&{} \quad \le \theta h_k^\frac{1}{2} \max \{x_k- 255, 0\} + (1-\theta ) h_k^\frac{1}{2} \max \{y_k - 255, 0\} \nonumber \\&{} \quad = \theta f_k(\mathbf{x}) + (1-\theta )f_k(\mathbf{y}). \end{aligned}$$
(10)

Therefore, \(D(\mathbf{x}) = g(f(\mathbf{x}))\) is a convex function of \(\mathbf{x}\), since the composition of a convex non-decreasing function and a convex function is convex [21]. Then, as the cost function is a convex function and the feasible constraint sets are convex, the problem in (9) is a convex optimization problem. \(\square\)
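The convexity argument above can also be spot-checked numerically; the sketch below (our own, assuming NumPy) verifies the defining inequality \(D(\theta \mathbf{x}+ (1-\theta )\mathbf{y}) \le \theta D(\mathbf{x}) + (1-\theta )D(\mathbf{y})\) on random inputs:

```python
import numpy as np

def D(x, h):
    """Information loss D(x) from Eq. (8)."""
    xc = np.maximum(x - 255.0, 0.0)
    return np.sum(h * xc ** 2)

rng = np.random.default_rng(0)
h = rng.integers(0, 100, size=256).astype(float)
for _ in range(1000):
    x = rng.uniform(0.0, 400.0, size=256)
    y = rng.uniform(0.0, 400.0, size=256)
    t = rng.uniform()
    lhs = D(t * x + (1.0 - t) * y, h)
    rhs = t * D(x, h) + (1.0 - t) * D(y, h)
    # Convexity: D(t x + (1-t) y) <= t D(x) + (1-t) D(y), up to rounding.
    assert lhs <= rhs * (1.0 + 1e-9) + 1e-9
```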

3.2 Closed-form solution

To solve the optimization problem, we define the Lagrangian \({\mathscr {L}}:{\mathbb {R}}^{256} \times {\mathbb {R}}^{256} \times {\mathbb {R}}^{256} \rightarrow {\mathbb {R}}\) associated with the problem in (9) as

$$\begin{aligned} {\mathscr {L}}(\mathbf{x}, {\varvec{\mu }}, {\varvec{\nu }})&= \alpha \Vert \mathbf{R}\mathbf{x}-{\bar{\mathbf{m}}}_b\Vert ^2 + (1-\alpha )\Vert \mathbf{R}\mathbf{x}-\mathbf{R}\mathbf{y}\Vert ^2 \nonumber \\&{}\quad \, + \lambda \mathbf{x}_c^T\mathbf{H}\mathbf{x}_c+ {\varvec{\mu }}^T(\mathbf{I}_0\mathbf{x}- \mathbf{I}_0\mathbf{y}) - {\varvec{\nu }}^T\mathbf{R}\mathbf{x}, \end{aligned}$$
(11)

where \({\varvec{\mu }}= [\mu _0, \mu _1, \ldots , \mu _{255}]^T \in {\mathbb {R}}^{256}\) and \({\varvec{\nu }}= [\nu _0, \nu _1, \ldots , \nu _{255}]^T \in {\mathbb {R}}^{256}\) are Lagrange multiplier vectors for the constraints, and \(\mathbf{I}_0 \in {\mathbb {R}}^{256 \times 256}\) is a matrix in which the first and last diagonal elements are one and the other elements are zero. Then, we can write the Karush–Kuhn–Tucker (KKT) conditions [21] for (11) as

$$\begin{aligned} \mathbf{I}_0 \mathbf{x}- \mathbf{I}_0\mathbf{y}&= \mathbf{0}, \end{aligned}$$
(12)
$$\begin{aligned} \mathbf{R}\mathbf{x}&\succeq \mathbf{0}, \end{aligned}$$
(13)
$$\begin{aligned} {\varvec{\nu }}&\succeq \mathbf{0}, \end{aligned}$$
(14)
$$\begin{aligned} \mathbf{N}\mathbf{R}\mathbf{x}&= \mathbf{0}, \end{aligned}$$
(15)
$$\begin{aligned} 2\mathbf{R}\mathbf{x}- \mathbf{d}+ 2\lambda \mathbf{R}^{-T}\mathbf{H}\mathbf{x}_c+ \mathbf{R}^{-T}{} \mathbf{I}_0 {\varvec{\mu }}- {\varvec{\nu }}&= \mathbf{0}, \end{aligned}$$
(16)

where \(\mathbf{N}= \text{ diag }({\varvec{\nu }})\) and \(\mathbf{d}= 2\alpha {\bar{\mathbf{m}}}_b + 2(1-\alpha )\mathbf{R}\mathbf{y}\). The stationary condition in (16) is obtained by multiplying both sides of \(\frac{\partial {\mathscr {L}}(\mathbf{x},{\varvec{\mu }},{\varvec{\nu }})}{\partial \mathbf{x}} = \mathbf{0}\) by \(\mathbf{R}^{-T}\), which is valid because the matrix \(\mathbf{R}^T\) has full rank. Also, note that, in (16), since \(\mathbf{x}_c^T\mathbf{H}\mathbf{x}_c\) is not differentiable with respect to \(\mathbf{x}\), we use its subgradient \(2\mathbf{H}\mathbf{x}_c\) instead of the gradient.

We first expand the vector notations in (16) to obtain a system of equations and subtract the kth equation from the \((k+1)\)th one to eliminate \(\mu _{255}\). Then, we have a recursive system, i.e.,

$$\begin{aligned} \left\{ \begin{array}{ll} x_1 - x_0 = x_0 + \frac{d_1 - d_0}{2} + \lambda h_0 x_{c,0} + \frac{\mu }{2} + \frac{\nu _1 - \nu _0}{2}, &{} \hbox { if}\ k = 0, \\ x_{k+1} - x_k = x_k - x_{k-1} + \frac{d_{k+1} - d_k}{2} + \lambda h_k x_{c,k} &{}\\ + \frac{\nu _{k+1} - \nu _k}{2}, &{} \hbox { if}\ k > 0.\\ \end{array}\right. \end{aligned}$$
(17)

Here, we denote \(\mu = \mu _0\) for notational simplicity. Note that, since \(x_0\) is fixed to a known value \(y_0\) by the minimum-value constraint in (9), we treat \(x_0\) as a constant instead of the optimization variable. By substituting \(x_k - x_{k-1}\) on the right-hand side recursively with the previous equations, we can rewrite (17) as

$$\begin{aligned} x_{k+1} - x_k & {}= x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} + \frac{\nu _{k+1} -\nu _0}{2}, \nonumber \\&\hbox { for}\ k \ge 0. \end{aligned}$$
(18)

We can eliminate all \(\nu _k\) values for \(k \ge 1\) from (18) using (13)–(15) and express \(x_k\) values as a closed-form formula, which is a function of a single variable \(\mu\), provided \(\{x_0,\ldots ,x_{k-1}\}\) are given, i.e.,

$$\begin{aligned} x_k =\, & {} x_{k-1} + \max \left( x_0 + \frac{d_k - d_0}{2} + \lambda \sum _{i=0}^{k-1}h_i x_{c,i} + \frac{\mu }{2}, 0 \right) ,\nonumber \\&\hbox { for}\ k \ge 1. \end{aligned}$$
(19)

Proof

We rewrite the KKT conditions in (13)–(15) as

$$\begin{aligned} x_{k+1} - x_k&\ge 0, \end{aligned}$$
(20)
$$\begin{aligned} \nu _k&\ge 0, \end{aligned}$$
(21)
$$\begin{aligned} \nu _{k+1} (x_{k+1} - x_k)&= 0, \end{aligned}$$
(22)

for all \(k \ge 0\), together with \(x_0 \ge 0\) and \(\nu _0 x_0 = 0\). We first assume that \(x_0 > 0\), so that \(\nu _0 = 0\); even when \(x_0 = 0\), we can derive the solution similarly by denoting \(\mu = \mu _0 - \nu _0\) in (17).

Let us assume that we have \(x_k\) for \(k \ge 0\). Then, from (18), we have

$$\begin{aligned} x_{k+1} - x_k = x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} + \frac{\nu _{k+1}}{2}. \end{aligned}$$
(23)

We consider two cases:

  • Case 1: \(x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} > 0\).

  • Case 2: \(x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2} \le 0\).

In Case 1, \(x_{k+1} - x_k > 0\). Then, \(\nu _{k+1} = 0\) from (22), and

$$\begin{aligned} x_{k+1} = x_k + x_0 + \frac{d_{k+1} - d_0}{2} + \lambda \sum _{i=0}^{k}h_i x_{c,i} + \frac{\mu }{2}. \end{aligned}$$
(24)

In Case 2, since \(x_{k+1} - x_k \le \frac{\nu _{k+1}}{2}\), \((x_{k+1} - x_k)^2 \le \frac{\nu _{k+1}}{2}(x_{k+1} - x_k) = 0\). Therefore,

$$\begin{aligned} x_{k+1} = x_k. \end{aligned}$$
(25)

Finally, combining (24) and (25) and shifting the index from \(k+1\) to k, we obtain the closed-form formula in (19). \(\square\)
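For a given \(\mu\), the recursion in (19) can be evaluated in a single forward pass; the Python sketch below (our own illustration; function and variable names are assumptions) makes this concrete:

```python
import numpy as np

def closed_form_x(mu, d, h, lam, x0):
    """Eq. (19): forward recursion computing x_k = g_k(mu), with x_0 = y_0
    treated as a known constant."""
    n = len(d)
    x = np.empty(n)
    x[0] = x0
    loss = 0.0  # running value of lam * sum_{i <= k-1} h_i * x_{c,i}
    for k in range(1, n):
        loss += lam * h[k - 1] * max(x[k - 1] - 255.0, 0.0)
        x[k] = x[k - 1] + max(x0 + (d[k] - d[0]) / 2.0 + loss + mu / 2.0, 0.0)
    return x
```

By construction, each step adds a non-negative increment, so the monotonicity constraint \(\mathbf{R}\mathbf{x}\succeq \mathbf{0}\) holds for any \(\mu\).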

Notice that each \(x_k\) in (19) is a monotonically increasing function of the single variable \(\mu\), which we write as \(x_k = g_k(\mu )\). Then, we can find the \(\mu\) that satisfies the maximum-value constraint in (9). Specifically, we define a function

$$\begin{aligned} f(\mu ) = x_{255} - y_{255} = g_{255}(\mu ) - y_{255} \end{aligned}$$
(26)

and find the solution to \(f(\mu ) = 0\). Since \(f(\mu )\) is a monotonically increasing function, there exists a unique solution to \(f(\mu ) = 0\). In this work, we employ the secant method [22] to find the unique solution iteratively, similarly to [10]. Let \(\mu ^{(n)}\) denote the value of \(\mu\) at the nth iteration. Then, we obtain the solution \(\mu\) by applying the secant formula

$$\begin{aligned} \mu ^{(n)} = \mu ^{(n-1)} - \frac{\mu ^{(n-1)} - \mu ^{(n-2)}}{f\big (\mu ^{(n-1)}\big ) - f\big (\mu ^{(n-2)}\big )} f\big (\mu ^{(n-1)}\big ), \nonumber \\ \quad n=2,3,\ldots \end{aligned}$$
(27)

until convergence. More specifically, we define the convergence measure \(\xi ^{(n)} = \big |\mu ^{(n)} - \mu ^{(n-1)}\big |\) and run the iteration until either \(\xi ^{(n)} < 10^{-5}\) or \(\big |f\big (\mu ^{(n)}\big )\big | < 10^{-10}\). Finally, from \(\mu\), we compute all elements of the optimal transformation function \(\mathbf{x}\) via (19), which achieves the brightness-compensated contrast enhancement and the minimization of the information loss simultaneously, subject to the minimum-value, maximum-value, and monotonicity constraints in (9).

4 Experimental results

Fig. 1
figure 1

Brightness-compensated contrast enhancement results on the Building, Hats, Window, Stream, Bikes, and Chalet images at \(b = 0.5\). The input images in (a) are compensated by Tsai et al.'s algorithm [2] in (b), the iterative solution [8] in (c), the approximate solution [20] in (d), and the closed-form solution in (e). For a better evaluation of image quality, we recommend viewing this figure on a display device rather than in print

We evaluate the performance of the proposed solution on 24 images from the Kodak Lossless True Color Image Suite [23], including the Building, Hats, Window, and Stream images in Fig. 1. The test images have a resolution of \(768 \times 512\). We simulate the case when the backlight intensity is halved, i.e., \(b = 0.5\), and employ histogram equalization for global contrast enhancement by setting \(\mathbf{m}= \mathbf{h}\) in (6). However, as mentioned in Sect. 2.1, any global contrast enhancement technique can be used to obtain \(\mathbf{m}\). The parameters \(\alpha\) and \(\lambda\) in (9) are fixed to 0.5 and \(\frac{1}{\mathbf{1}^T \mathbf{h}}\), respectively, in all experiments, as done in [8]. Also, the device parameters \(\omega _1\), \(\omega _2\), and \(\gamma\) in (1) are set to 0.057, 1.224, and 1.691, respectively, which were found experimentally in [5]. For reproducibility, we provide MATLAB code on our project website.

Figure 1 compares the brightness-compensated images, corresponding to perceived luminance levels, obtained by the proposed closed-form solution with those obtained by Tsai et al.'s algorithm [2], the iterative solution [8], and the approximate solution [20]. In Fig. 1b, Tsai et al.'s algorithm improves local contrast, but darkens the images compared with the original images in Fig. 1a. In contrast, the iterative solution [8], the approximate solution [20], and the closed-form solution in Fig. 1c–e, respectively, yield high-quality output images, better preserving brightness and enhancing global image contrast.

Fig. 2
figure 2

Comparison of the transformation functions obtained by different methods on a Building, b Hats, c Window, d Stream, e Bikes, and f Chalet

Figure 2 compares the transformation functions that yield the output images in Fig. 1. Tsai et al.'s algorithm produces identical transformation functions for images with different characteristics, i.e., Building, Window, and Stream, which indicates that their algorithm is less adaptive. The approximate solutions darken the output images in lower intensity regions and violate the minimum-value constraint \(x_0 = y_0\) in (9), because the constraint is omitted in the approximation for simplicity. For example, for all images, the approximate solutions map the input pixel range [0, 50] to output pixel values that are about 10 levels lower than those of the iterative and closed-form solutions. These lowered transformation functions may cause perceptual detail losses in dark regions of the images, e.g., the doors in the Building image and the shadows in the Hats image. In contrast, the transformation functions obtained by the iterative solution and the closed-form solution are almost identical, which confirms the accuracy of the closed-form solution.

Table 1 Objective assessment of the image contrast using three metrics: AMBE [24], DE [25], and EME [26]
Table 2 Computational complexity

In addition to subjective assessment, we compare the proposed closed-form solution with the conventional algorithms using three objective quality metrics: absolute mean brightness error (AMBE) [24], discrete entropy (DE) [25], and measure of enhancement (EME) [26]. Table 1 lists the average performance over all test images [23]. A low AMBE indicates that the algorithm well preserves the brightness of an input image, while high DE and EME imply that the image contains more details. The results demonstrate that the closed-form solution provides better performance than Tsai et al.’s algorithm and is comparable to the iterative and approximate solutions.

Table 3 Comparison of the average computation times in seconds to process the test images with various image resolutions

Table 2 compares the computational complexities and lists the average computation times to process all test images in [23] on a PC with a 2.6 GHz CPU and 8 GB RAM. The iterative solution requires the longest computation time, and the approximate solution shortens the time significantly, albeit at the cost of image quality. The closed-form solution drastically reduces the time further, while producing transformation functions identical to those of the iterative solution. Specifically, the closed-form solution runs about 796 and 8.9 times faster than the iterative and approximate solutions, respectively. Note that, since big \({\mathscr {O}}\) notation ignores constant factors, the closed-form solution requires less time than Tsai et al.'s algorithm despite the constant M. We iteratively apply the secant formula in (27) to find a solution to \(f(\mu ) = 0\); the average number of iterations over all test images is only 8.79. Therefore, the proposed closed-form solution is efficient enough to be employed in a wide range of applications, even in devices with limited computational resources. Moreover, if implemented in hardware, such as a field-programmable gate array (FPGA), its computation time would be reduced even further.

We also examine how the image size affects the computation time of each method. Table 3 compares the average computation times over 10 trials of the proposed closed-form solution with those of Tsai et al.'s algorithm and the iterative and approximate solutions. For this comparison, we resize the input image with the resolution \(3840 \times 2160\), which corresponds to 4K ultra-high-definition (UHD), with scaling factors from 0.2 to 1, and then apply each method. We observe that the computation times of the iterative solution remain nearly constant, while those of Tsai et al.'s algorithm, the approximate solution, and the closed-form solution increase as the image resolution gets higher. This is because the optimization procedure is dominant for the iterative solution, whereas the complexities of the contrast enhancement and optimization are comparable for the approximate and closed-form solutions. Also, since the contrast enhancement is dominant for Tsai et al.'s algorithm, its complexity grows substantially as the image resolution increases. Nevertheless, the computation times of the closed-form solution are significantly lower than those of the competing algorithms, even for large images. For example, for images with 4K resolution, which corresponds to the relative resolution of 1.0, the closed-form solution runs about 76.9, 162.9, and 3.34 times faster than Tsai et al.'s algorithm, the iterative solution, and the approximate solution, respectively. Thus, the closed-form solution derived in this work is applicable to a wide range of devices, e.g., from mobile devices with small displays to large televisions.

Table 4 Comparison of computation times in seconds for the test image with various values of bit depth

Finally, in addition to the image size, we also compare how the bit depth of an image affects the computation times of the iterative, approximate, and closed-form solutions. In this test, we take a test image of resolution \(692 \times 462\) with 12-bit depth as input, reduce the bit depth by one bit recursively, and then apply the BCCE algorithm using each method. Table 4 shows the computation times for each method. We see that, as the bit depth increases, the computation times of the iterative and approximate solutions become prohibitively high for practical applications. In contrast, the computation times of the closed-form solution remain significantly lower than those of the iterative and approximate solutions, even at high bit depths. Furthermore, the computation times of the iterative and approximate solutions relative to the closed-form solution grow as the bit depth increases. For example, at 8-bit depth, the ratio of the computation times of the iterative, approximate, and closed-form solutions is 617:8:1, but it increases to 10008:305:1 at 12-bit depth. Therefore, from this test, we conclude that the proposed closed-form solution is applicable to devices with not only larger but also higher bit depth displays, compared with the conventional methods [8, 20].

5 Conclusions

We developed a computationally efficient brightness compensation algorithm for transmissive LCDs in this work. We first demonstrated that the approach given in [8] leads to a convex optimization problem. Then, we derived a closed-form solution for the optimization, as opposed to the iterative solution given in that work. Experimental results show that the closed-form solution runs about 800 and 9 times faster than the iterative and approximate solutions in [8, 20], respectively, while providing identical or better results. In addition, we showed that the closed-form solution can also be applied to devices with high-resolution and higher bit depth displays, e.g., 4K displays. An important direction for future work is to incorporate more effective local contrast enhancement techniques [27, 28] into the formulation for brightness compensation.