1 Introduction

In MPEG-like hybrid video encoders, quantization plays a crucial role in coding rate-distortion (RD) performance. It not only determines the quantization distortion, but also has a prominent impact on the coding rate. Since video coding standards define only the inverse quantization, many works have explored how to efficiently quantize discrete cosine transform (DCT) coefficients while remaining compliant with the respective standards [1].

In early video codecs, DCT coefficients were generally quantized using a uniform scalar quantizer (USQ). Later, a USQ with deadzone (USQ + DZ) was adopted in MPEG-4 and in early H.264/AVC reference software [1]. In USQ + DZ, fixed rounding offsets are usually employed, determined in a heuristic and empirical way. Coefficient-independent rounding-based quantization, as in USQ and USQ + DZ, is the so-called hard-decision quantization (HDQ), in which the correlation among adjacent coefficients and its effect on quantization are not considered. The coefficient-wise processing in HDQ makes it friendly to hardwired video coders with parallel implementations. However, both plain HDQ and deadzone HDQ suffer from non-negligible rate-distortion performance loss.

Soft-decision quantization (SDQ) is a better alternative that achieves superior RD performance by fully exploiting inter-coefficient correlation. A popular SDQ implementation is Viterbi trellis search [3], which was realized for H.263+ [2] and for H.264/AVC [3]. However, running dynamic programming over the full trellis graph is computationally expensive. To get around this, a simplified suboptimal SDQ called rate-distortion optimized quantization (RDOQ) [4], which employs dynamic programming in a similar way, was adopted in the H.264/AVC and HEVC reference software JM and HM. SDQ achieves superior coding RD performance, approximately 6-8% bit rate saving over HDQ. In SDQ and RDOQ, multiple candidate quantization results compete and are chosen using rate-distortion optimization. As a result, a heavy computational burden is one major challenge for SDQ and RDOQ. Moreover, the dynamic-programming-based path search in SDQ results in severe sequential dependency, and the serial processing in CABAC aggravates the data dependency in CABAC-based SDQ [5].

Accounting for this issue, several works have made meaningful explorations to alleviate the computational burden of SDQ [5-7]. They reduce the computation of SDQ by reducing the number of candidate quantized results [5], employing fast rate-distortion computation for candidate coefficient levels [6, 7], and using fast bit rate evaluation [7]. These methods alleviate the computational burden of SDQ to a certain extent; however, they still suffer from sequential dependency or are mainly designed for software video coder optimization. In comparison, no data dependency appears in HDQ, such as the prevalent deadzone-based HDQ. Coefficient-level parallel processing can be achieved in coefficient-wise HDQ, with markedly increased throughput through hardwired pipelining. As a result, HDQ is well-suited for hardwired video coders in terms of throughput efficiency. Unfortunately, there is a nontrivial rate-distortion performance gap between HDQ and SDQ.

In summary, it is meaningful to further improve the RD performance of HDQ for hardwired video coders by taking inter-coefficient correlation into account, simulating the behavior pattern of SDQ. On one hand, the distribution characteristics of the DCT coefficients have a great influence on quantization results, so the DCT distribution parameter and the quantization parameter are considered at the macro level. On the other hand, context modelling is used in CABAC, and the number of possible significant coefficients plays an important role in determining the quantization results. At the micro level, the deadzone offset should therefore be tuned to account for inter-coefficient influence, according to the number of possible significant coefficients in the block.

According to the above analysis, this paper aims to optimize the rounding offset model of deadzone HDQ to improve coding RD performance. A Bayes classification method is used to derive a coefficient-wise deadzone offset model described as a function of three parameters: the quantization parameter, the component-wise DCT distribution parameter, and the number of possible significant coefficients prior to the current coefficient. In addition, the behavior pattern of SDQ is analyzed and used as guidance for offset modelling to improve the RD performance of the proposed offset-model-based HDQ. The proposed algorithm is well-suited for hardware coder design and achieves superior RD performance compared with deadzone HDQ, since it accounts for inter-coefficient correlation through offline analysis of the number of significant coefficients in the block.

The rest of this paper is organized as follows. Problem formulation is given in Sect. 2. The proposed HDQ algorithm is given in Sect. 3. Section 4 gives the experimental results. Finally, Sect. 5 concludes the paper.

2 Background and Problem Formulation

2.1 Difference Between Deadzone HDQ and SDQ

In CABAC-based SDQ, the output level of one coefficient not only depends on the levels of the preceding coefficients (backward dependency) but also influences the following coefficients (forward influence). Intrinsically, dynamic programming such as Viterbi search is required to track the inter-coefficient dependency in SDQ [5]. There are multiple sequentially scanned coefficients in one block, and each coefficient is described as a trellis stage in the graph [5]. Multiple candidate quantized levels are checked at each stage in SDQ, and they are described as candidate context states. The sequential coefficients and their candidate quantized levels form a trellis graph, so SDQ is actually the problem of searching for the path through the graph with minimum coding cost. The optimal path is composed of adjacent branches, one survivor branch per stage. The SDQ algorithm achieves superior coding performance, approximately 6-8% bit rate saving, at the cost of heavy sequential dependency caused by the Viterbi algorithm and CABAC.
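To make the trellis formulation concrete, the following minimal Python sketch runs a Viterbi-style dynamic program over one block. The rate term is a deliberately crude stand-in for CABAC context modelling (it merely makes nonzero levels cheaper as significant coefficients accumulate), so the sketch illustrates the search structure rather than the real codec cost model; all names and constants are illustrative only, not from JM/HM.

```python
def sdq_trellis(coeffs, q, lam):
    """Viterbi-style SDQ sketch: pick levels minimizing distortion + lam*rate.

    Each coefficient is a trellis stage; its candidate levels are 0, the
    hard-decision level l0, and l0 - 1. The state is the number of
    significant (nonzero) coefficients chosen so far, which feeds a toy
    context-dependent rate model.
    """
    states = {0: (0.0, [])}                        # n_sig -> (best cost, levels)
    for u in coeffs:
        l0 = int(abs(u) / q + 0.5)                 # hard-decision level
        candidates = sorted({0, max(l0 - 1, 0), l0})
        new_states = {}
        for n_sig, (cost, path) in states.items():
            for lvl in candidates:
                dist = (abs(u) - lvl * q) ** 2     # quantization distortion
                # toy rate: nonzero levels get cheaper as the context "warms up"
                rate = 0.0 if lvl == 0 else lvl + 2.0 / (1 + n_sig)
                nxt = n_sig + (1 if lvl > 0 else 0)
                cand = cost + dist + lam * rate
                if nxt not in new_states or cand < new_states[nxt][0]:
                    new_states[nxt] = (cand, path + [lvl])
        states = new_states
    return min(states.values())[1]                 # levels on the survivor path

print(sdq_trellis([35.0, -20.0, 8.0, 3.0], q=14.0, lam=50.0))
```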

The HDQ algorithm is coefficient-wise, based on a memoryless source assumption, i.e. no dependency among adjacent coefficients. As a result, HDQ is well-suited for parallel implementation. Compared with USQ, USQ + DZ achieves considerable RD performance improvement by exploiting the statistical characteristics of entropy coding [1]. However, fixed offsets, 1/3 for intra and 1/6 for inter modes, are used in the deadzone HDQ of the H.264 JM and HEVC HM codecs [8]. The memoryless source assumption does not hold for context-based entropy coding such as CABAC, which means that fixed-offset HDQ is not fully optimized in the rate-distortion sense compared with optimal SDQ.

It is therefore meaningful to investigate the inner characteristics of SDQ as guidance for an adaptive offset model for a new deadzone HDQ. The goal is to approach the RD performance of SDQ while maintaining the advantage of coefficient-wise processing in HDQ. The source distribution parameter and inter-coefficient influence will both be taken into consideration.

2.2 Challenge in Adaptive Coefficient-wise HDQ

A deadzone offset δ is employed to adjust the quantization result φ in deadzone HDQ, which can be described as follows.

$$ \phi = \left\lfloor \frac{|u|}{q} + \delta \right\rfloor $$
(1)

Here, q is the quantization step size determined by the quantization parameter Qp; in H.264/AVC and HEVC, q = 2^((Qp−4)/6). ⌊·⌋ denotes the floor operator, u is the DCT coefficient to be quantized, and |·| denotes the absolute value.
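For illustration, a minimal Python sketch of Eq. (1); the Qp-to-q mapping follows the 2^((Qp−4)/6) relation given above, and the example offset 1/3 is the fixed intra offset of JM/HM.

```python
import math

def deadzone_hdq(u, qp, delta):
    """Deadzone HDQ of one DCT coefficient per Eq. (1)."""
    q = 2.0 ** ((qp - 4) / 6.0)        # quantization step size from Qp
    return math.floor(abs(u) / q + delta)

# Example: coefficient 20 at Qp = 27 with the fixed intra offset 1/3.
print(deadzone_hdq(20.0, qp=27, delta=1.0 / 3.0))   # -> 1
```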

There are several factors to consider in designing an optimal coefficient-wise deadzone HDQ. Firstly, the deadzone offset δ should be determined in a coefficient-wise way. In general, a Laplacian distribution is used to model the DCT coefficients, and the probability density function is described as follows.

$$ f_{i}(u_{i}) = \frac{1}{2\Lambda_{i}} e^{-\frac{|u_{i}|}{\Lambda_{i}}} \quad \text{and} \quad \Lambda_{i} = \frac{\sigma_{i}}{\sqrt{2}} = \frac{1}{n}\sum_{j=1}^{n} |u_{ij}| $$
(2)

Ʌi and σi are the model parameter and the standard deviation of the ith frequency component, and uij is the jth DCT coefficient of the ith frequency component. Based on this Laplacian DCT distribution model, the deadzone offset δ of HDQ is typically determined as follows using rate-distortion optimization [1].

$$ \delta = \frac{q}{2} - \frac{\lambda}{2\Lambda \ln 2} $$
(3)

Here, λ is the Lagrange multiplier, which equals ln2 × ((q−δ)² − δ²)/q in [1]. Secondly, there is a chicken-and-egg problem, because the parameters λ and δ depend on each other. The coefficient-wise model in (3) is built in a macroscopic way based on statistical analysis, whereas the SDQ algorithm manipulates the quantization result in a microscopic way, specifically according to the probabilities of the contexts in CABAC. Moreover, with λ = ln2 × ((q−δ)² − δ²)/q, a coefficient-level solution of Eq. (3) is theoretically required, which is not easy to obtain.
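By contrast with solving Eq. (3), estimating the per-frequency Laplacian parameter Ʌi of Eq. (2) from observed coefficients is straightforward; a minimal numpy sketch (the 4 × 4 block layout and the synthetic test data are assumptions for illustration):

```python
import numpy as np

def laplacian_params(blocks):
    """Per-frequency Laplacian parameter of Eq. (2): Lambda_i = mean_j |u_ij|.

    `blocks` is an (n, 16) array of n flattened 4x4 DCT blocks in scan
    order, so column i collects the i-th frequency component.
    """
    return np.mean(np.abs(blocks), axis=0)

# Sanity check on synthetic Laplacian data with Lambda = 5.
rng = np.random.default_rng(0)
blocks = rng.laplace(scale=5.0, size=(10000, 16))
print(laplacian_params(blocks)[0])     # close to 5.0
```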

Thirdly, the adaptive deadzone offset in Eq. (3), derived from coefficient-level models without considering inter-coefficient influence, will unavoidably suffer RD performance degradation. In CABAC, the coding bits of a quantized coefficient are determined by the probability state of its context, which is modelled according to the numbers of coefficients with magnitude equal to 1 and larger than 1, i.e. Numeq1 and NumLg1, prior to the current coefficient. Under the rate-distortion optimization criterion, SDQ considers this inter-coefficient correlation when adjusting the quantized coefficient level. The code rate of a quantized coefficient is thus related to the number of nonzero (significant) quantized coefficients in the current block and in the adjacent block.

Therefore, this paper takes this factor into consideration in a microscopic way, i.e. by employing the number of significant coefficients in one block when determining the deadzone offset and quantization results. However, the number of significant coefficients affecting a certain coefficient is not available before the SDQ algorithm finishes its trellis search, so the accurate count cannot be measured in deadzone HDQ. As a compromise, we define the number of possible significant coefficients ηi for the ith coefficient according to the HDQ quantization results, as follows.

$$ \eta_{i} = \sum_{j=i+1}^{N} \varpi\left(|u_{j}| - \frac{q}{2}\right) \quad \text{and} \quad \varpi(x) = \begin{cases} 1, & x > 0 \\ 0, & x \le 0 \end{cases} $$
(4)

When the amplitude of a DCT coefficient is larger than q/2, the coefficient is considered a possible significant coefficient. ηi is the number of possible significant coefficients prior to the current ith coefficient. For all coefficients in one block, the ηi can be estimated in parallel thanks to the coefficient-wise processing in HDQ, as shown in (4). With N the index of the last DCT coefficient in the block, ηi is an integer ranging from 0 to N−i.
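A minimal numpy sketch of Eq. (4); since the test |u| > q/2 needs no quantization result, the whole vector of ηi can be computed in one pass of suffix sums (a 1-D block already in scan order is assumed):

```python
import numpy as np

def possible_significant_counts(block, q):
    """eta_i of Eq. (4): count coefficients with |u_j| > q/2 for j > i."""
    flags = (np.abs(block) > q / 2.0).astype(int)
    suffix = np.cumsum(flags[::-1])[::-1]          # suffix[i] = sum of flags[i:]
    return np.concatenate([suffix[1:], [0]])       # shift: sum over j = i+1 .. N

block = np.array([35.0, -20.0, 8.0, 3.0, -9.0, 1.0, 0.5, 0.0])
print(possible_significant_counts(block, q=14.0))  # [3 2 1 1 0 0 0 0]
```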

Here we first quantitatively evaluate the degree to which ηi affects the quantization result of the current ith coefficient. We use φHDQ(u) and φSDQ(u) to denote the quantized amplitude of u under HDQ and SDQ respectively. There are two possible cases, depending on whether φHDQ(u) equals φSDQ(u): φHDQ = φSDQ and φHDQ ≠ φSDQ. Taking the 4 × 4 transform block as an example, the statistical distribution of ηi is counted for each of the sequences in Table 1, and Fig. 1 shows the average statistical distribution of ηi over these sequences for the two cases. Comparing the two cases in Fig. 1, the distribution of ηi is more dispersed when φHDQ = φSDQ, while ηi is mainly concentrated in the vicinity of zero when φHDQ ≠ φSDQ. These statistical differences between the two opposite cases give us the insight that ηi can be employed to aid in deriving the deadzone offset model so as to simulate SDQ. This work will adjust the deadzone offset δ according to the actual distribution of ηi to simulate the SDQ decision mechanism as far as possible.

Table 1. The BD-PSNR loss of the deadzone HDQ algorithms using fixed-offset and the proposed adaptive offset compared with optimal SDQ
Fig. 1. The histogram results of ηi in the cases of φHDQ = φSDQ and φHDQ ≠ φSDQ

3 Improved HDQ with the Proposed Adaptive Deadzone Offset

3.1 Heuristic Deadzone Offset Modelling

As analyzed above, the deadzone offset model should be built adaptively for deadzone HDQ. Instead of the RD-model-based derivation of Eq. (3), this work estimates the optimal deadzone offset by simulating the behavior of SDQ, constructing an adaptive deadzone offset model through statistical analysis.

As shown in Fig. 2, statistical analysis and a heuristic method are employed for model derivation. We use the classification of Sect. 2.2 to distinguish the two kinds of situations. The "inlier" and "outlier" DCT coefficient samples, representing the two categories, are collected for offline deadzone offset modeling: the "inlier" samples are the DCT coefficients for which φHDQ = φSDQ, and the "outlier" samples are those for which φHDQ ≠ φSDQ. Taking inter-coefficient correlation into consideration as analyzed above, the deadzone offset δ is related to the quantization parameter Qp, the Laplacian distribution parameter Ʌ, and the number of possible significant coefficients prior to the current coefficient, i.e. ηi. As a result, the parameter combinations (Qp, Ʌ, η) of the "inlier" and "outlier" coefficients are collected for statistical analysis. In addition, the "inlier" offset range (δmin1, δmax1) and the "outlier" offset range (δmin2, δmax2), within which the HDQ results equal those of SDQ for the two kinds of samples, are recorded simultaneously. These statistical samples are then used for offline offset modeling, as sketched below.
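A sketch of the offline collection step, assuming the HDQ and SDQ levels, the Ʌi estimates, and the ηi counts have already been produced by an encoder run (none of these inputs is computed here; the code only shows the inlier/outlier labeling):

```python
def collect_samples(blocks, levels_hdq, levels_sdq, lam_params, etas, qp):
    """Label each coefficient sample as "inlier" (phi_HDQ == phi_SDQ) or
    "outlier" (phi_HDQ != phi_SDQ), keeping its (Qp, Lambda_i, eta_i, |u|)
    parameter combination for offline modeling."""
    inliers, outliers = [], []
    for b, block in enumerate(blocks):
        for i, u in enumerate(block):
            sample = (qp, lam_params[i], etas[b][i], abs(u))
            if levels_hdq[b][i] == levels_sdq[b][i]:
                inliers.append(sample)
            else:
                outliers.append(sample)
    return inliers, outliers
```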

Fig. 2. Heuristic deadzone offset model derivation.

3.2 Analysis on Optimal Offset Distribution

When HDQ fails to track the optimality of SDQ, the quantization results of HDQ differ from those of SDQ, i.e. φHDQ ≠ φSDQ. In these samples, we find that the probability of φHDQ(u) = φSDQ(u) + 1 is very close to 1. This phenomenon can be explained as follows: a smaller quantization level in SDQ, lower by exactly 1, yields a dominant rate saving that outweighs the increased distortion. The design of adaptive deadzone HDQ then reduces to finding an adaptive deadzone offset (δbest) that suits both kinds of samples.

The derivation of the ranges of possibly reasonable offsets proceeds as follows. A well-designed offset δ for deadzone HDQ may yield a correct classification, with a result identical to SDQ, or a wrong classification, with a result differing from SDQ. δbest will be determined from a large number of samples using a statistical analysis method, i.e. a Bayes method. Intuitively, we estimate the range of deadzone offsets that make HDQ achieve the identical result as SDQ. The suitable deadzone offset ranges of the two cases are estimated separately, and the lower and upper bounds for the two kinds of samples, i.e. (δmin1, δmax1) and (δmin2, δmax2), are estimated as follows.

$$ \begin{aligned} (\delta_{\min 1}, \delta_{\max 1}) &= \mathop{\arg}\limits_{\delta_{best}} \left( \left\lfloor \frac{|u|}{q} + \delta_{best} \right\rfloor = \phi_{HDQ}(u) \right) \\ (\delta_{\min 2}, \delta_{\max 2}) &= \mathop{\arg}\limits_{\delta_{best}} \left( \left\lfloor \frac{|u|}{q} + \delta_{best} \right\rfloor = \phi_{HDQ}(u) - 1 \right) \end{aligned} $$
(5)

Here, φHDQ(u) = ⌊|u|/q + 0.5⌋ is the HDQ quantization amplitude. For the "inlier" samples, we get the possible range of the optimal δ under the constraint that ⌊|u|/q + δbest⌋ equals φHDQ(u). For the "outlier" samples, we get the possible range of the optimal δ under the constraint that ⌊|u|/q + δbest⌋ equals φHDQ(u) − 1. In short, we estimate the possible ranges of the optimal δ under the constraint φHDQ(u) = φSDQ(u) for both kinds of samples.
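Eq. (5) has a closed form per sample: with x = |u|/q, ⌊x + δ⌋ = t exactly when δ ∈ [t − x, t + 1 − x), intersected with the admissible interval (0, 1). A small sketch:

```python
def offset_range(u, q, target_level):
    """Offsets delta in (0, 1) for which floor(|u|/q + delta) == target_level."""
    x = abs(u) / q
    lo = max(target_level - x, 0.0)
    hi = min(target_level + 1 - x, 1.0)
    return (lo, hi) if lo < hi else None           # None: no feasible offset

u, q = 22.0, 14.0
phi_hdq = int(abs(u) / q + 0.5)                    # = 2
print(offset_range(u, q, phi_hdq))                 # (delta_min1, delta_max1)
print(offset_range(u, q, phi_hdq - 1))             # (delta_min2, delta_max2)
```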

Offline statistical analysis is carried out over a large number of samples of both kinds together with their offset ranges. Histogram-based non-parametric analysis is employed to estimate the two probability density curves. The maximal possible solution range for (δmin, δmax), i.e. (0, 1), is partitioned into N segments, and the actual range (δmin, δmax) of each sample is compared against these segments for grouping and classification. If a segment falls within the range (δmin, δmax), its histogram count θy(k) is increased by ζ(k). θy(k) is expressed as follows.

$$ \theta_{y}(k) = \begin{cases} \theta_{y}(k) + \zeta(k), & \text{if } \left(\frac{k-1}{N}, \frac{k}{N}\right) \subset (\delta_{\min y}, \delta_{\max y}) \\ \theta_{y}(k), & \text{otherwise} \end{cases} \qquad y = 1 \text{ or } 2 $$
(6)

In this work, different weights are used in the histogram estimation to account for statistical characteristics. That is, a different weight ζ(k) is used for each segment in the δbest histogram analysis. A Gaussian function is used to model the weight ζ(k) of each segment, expressed as follows.

$$ \zeta(k) = \frac{1}{\sigma_{1}\sqrt{2\pi}} e^{-\frac{(k - \mu_{1})^{2}}{2\sigma_{1}^{2}}} \quad \text{and} \quad \mu_{1} = \frac{\delta_{\max} + \delta_{\min}}{2}, \; \sigma_{1} = \frac{\delta_{\max} - \delta_{\min}}{\alpha} $$
(7)

Here, α is equal to 6. The segment-wise histogram results of δbest, denoted cnty(Qp,Ʌ), are obtained independently for each combination of Qp and Ʌ; y = 1 and y = 2 correspond to the "inlier" and "outlier" samples. It is well known that Ʌ is related to the coefficient position index i. One example of cnty(Qp,Ʌ(i)) for different coefficients of the two kinds of samples is shown in Fig. 3.
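A sketch of the weighted histogram accumulation of Eqs. (6) and (7). Two simplifying assumptions: each segment's membership in (δmin, δmax) is tested at its center, and the Gaussian argument k is interpreted as that center offset value so that it is commensurate with μ1 and σ1.

```python
import numpy as np

def accumulate_histogram(ranges, n_seg=100, alpha=6.0):
    """theta_y(k) of Eq. (6) with the Gaussian segment weights of Eq. (7)."""
    theta = np.zeros(n_seg)
    centers = (np.arange(n_seg) + 0.5) / n_seg     # segment centers in (0, 1)
    for d_min, d_max in ranges:                    # one (delta_min, delta_max) per sample
        mu = 0.5 * (d_max + d_min)
        sigma = (d_max - d_min) / alpha
        w = np.exp(-((centers - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        inside = (centers > d_min) & (centers < d_max)
        theta[inside] += w[inside]                 # Eq. (6): add zeta(k) inside the range
    return theta
```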

Fig. 3. The histogram results of the possible δbest: cnt1(Qp,Ʌ) and cnt2(Qp,Ʌ)

3.3 The Adaptive Deadzone Offset Model

According to the statistical samples, we build an adaptive deadzone offset model δi(Qp, Ʌi) using a heuristic method. The model is constructed by maximizing the probability of correct judgment and minimizing the probability of wrong judgment. In fact, θ1(k) and θ2(k) reflect the correct-classification probabilities of the "inlier" and "outlier" coefficients respectively. We normalize the histogram results θy(k) to derive the conditional probabilities py, i.e. p1(δ) and p2(δ). Therefore, δbest can be determined by taking the peak value of the weighted histogram, as shown in Eq. (8). The schematic diagram and the actual statistical results are shown in Fig. 4.

Fig. 4. The sketch map of δbest and the statistical results of δbest

$$ \delta_{i} = \mathop{\arg\max}\limits_{\delta_{i}(\Lambda_{i}, Qp)} \{ p_{1}(\delta_{i}) + p_{2}(\delta_{i}) \} $$
(8)
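Given the two accumulated histograms, Eq. (8) amounts to normalizing them into p1 and p2 and taking the peak of their sum; a sketch continuing the histogram routine above:

```python
import numpy as np

def best_offset(theta1, theta2, n_seg=100):
    """delta_best per Eq. (8): peak of p1 + p2 over the offset segments."""
    p1 = theta1 / theta1.sum()                     # normalize to conditional probabilities
    p2 = theta2 / theta2.sum()
    k = int(np.argmax(p1 + p2))
    return (k + 0.5) / n_seg                       # report the segment center
```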

On the basis of the above model, we build a factor φ(ηi) that adjusts the deadzone offset according to the number of possible significant coefficients in the block. Here, i is the coefficient position index, and the factor φ(ηi) is expressed as follows.

$$ \varphi(\eta_{i}) = \beta \times \arctan\left(\eta_{i} - \frac{\eta_{i\max}}{\gamma}\right) \quad \text{and} \quad \eta_{i\max} = N - i $$
(9)

Here, N is the index of the last DCT coefficient in the block, and ηimax is equal to N − i. We evaluated the RD performance for different combinations of (β, γ). The simulation results indicate that the best RD performance appears when β = −0.03 and γ = 3, as shown in Fig. 5. Therefore, the adaptive deadzone offset model with ηi is expressed as follows.

Fig. 5. The RD performance of different combinations of β and γ

$$ \delta'_{i} = \mathop{\arg\max}\limits_{\delta_{i}(\Lambda_{i}, Qp)} \{ p_{1}(\delta_{i}) + p_{2}(\delta_{i}) \} + \varphi(\eta_{i}) $$
(10)
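Putting Eqs. (9) and (10) together, the final coefficient-wise offset is the statistically derived base offset plus the arctan adjustment. In this sketch the base offset value is a made-up placeholder, not a trained one, while β = −0.03 and γ = 3 follow the sweep of Fig. 5.

```python
import math

def adaptive_offset(delta_base, eta_i, eta_i_max, beta=-0.03, gamma=3.0):
    """delta'_i of Eq. (10): base offset delta_i(Lambda_i, Qp) + phi(eta_i)."""
    phi = beta * math.atan(eta_i - eta_i_max / gamma)   # Eq. (9)
    return delta_base + phi

# Example for a 4x4 block (N = 16), coefficient i = 6 with eta_6 = 4;
# delta_base = 0.38 is a hypothetical illustrative value.
print(adaptive_offset(delta_base=0.38, eta_i=4, eta_i_max=16 - 6))
```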

Following the modeling method above, one example of the resulting surface of δ1 for different combinations of Qp and Ʌ1, in the case of η1 = 10, is shown in Fig. 6.

Fig. 6. The surface plot of δbest

4 Experimental Results

The proposed adaptive offset model for deadzone HDQ is verified under the H.264 and H.265 standards. The fixed-offset HDQ algorithm and the optimal SDQ are taken as the performance comparison anchors. These quantization algorithms are applied both in the final mode coding and in the rate-distortion optimized mode decision loop. Standard D1, 720p, and 1080p video sequences are used for simulation. Rate control is turned off, and the quantization parameters 22, 27, 32 and 37 are used, covering low, medium and high bit rate applications. An IPBBPBB GOP structure is used, and 100 frames are tested for each sequence. The PSNR degradation (BD-PSNR) and the rate increment percentage (BD-RATE) are used for performance comparison [8].

The rate-distortion curves of the 1080p BasketballDrive sequence are taken as an example in Fig. 7, comparing the anchor optimal SDQ, the fixed-offset deadzone HDQ, and the proposed algorithm. In addition, Table 1 gives the detailed BD-PSNR and BD-RATE results [8]. Relatively larger RD performance improvements are observed on higher-resolution video sequences. The results show that for 1080p sequences the proposed algorithm has only 0.03921 dB BD-PSNR loss on average, with a 1.51% average BD-RATE increment, in comparison with the SDQ algorithm. In addition, the proposed algorithm achieves a 0.08836 dB BD-PSNR improvement on average, with 3.097% average rate saving (BD-RATE), in comparison with the fixed-offset deadzone HDQ algorithm. The proposed adaptive-offset HDQ algorithm is thus considerably superior to fixed-offset HDQ, and performs close to the optimal SDQ algorithm. Compared with SDQ, the proposed algorithm has much lower complexity and is well-suited for hardwired video coders in terms of throughput efficiency.

Fig. 7. The RD performance of the three quantization algorithms.

As for complexity, the additional computation of the proposed algorithm compared with the fixed-offset HDQ algorithm is just a simple function evaluation, as shown in Eq. (10), or a table lookup of the surface in Fig. 6, so it is almost negligible.
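For a hardwired coder, the trained model would typically be tabulated offline so that the per-coefficient cost reduces to a single lookup; a hypothetical sketch of such a table (the dimensions and binning are assumptions, not from the paper):

```python
import numpy as np

N_QP, N_LAMBDA_BINS, N_ETA = 52, 32, 16
offset_lut = np.zeros((N_QP, N_LAMBDA_BINS, N_ETA))
# Offline: fill offset_lut[qp, lam_bin, eta] from the trained model of Eq. (10).

def lookup_offset(qp, lam_bin, eta):
    """Single table lookup per coefficient; eta is clamped to the table size."""
    return offset_lut[qp, lam_bin, min(eta, N_ETA - 1)]
```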

5 Conclusions

Sequential processing hinders soft-decision quantization (SDQ) from effective hardware implementation, while hard-decision quantization (HDQ) suffers obvious coding performance loss compared with SDQ. Based on statistical analysis and heuristic modelling, this paper proposes a content-adaptive deadzone quantizer that minimizes the rate-distortion performance gap between deadzone HDQ and SDQ. An adaptive deadzone offset model is built according to the quantization parameter, the coefficient-wise DCT distribution parameter, and the number of possible significant coefficients in the block. Simulation results verify that, in comparison with fixed-offset HDQ, the proposed adaptive HDQ algorithm achieves a 0.08836 dB PSNR increment and 3.097% bit rate saving on 1080p sequences with almost negligible complexity increase. In comparison with SDQ, this work incurs only 0.03921 dB PSNR loss and a 1.51% bit rate increment.