Abstract
Turbo codes are among the best-performing error-correcting codes, but hardware implementation of their decoders requires a trade-off between performance and complexity. In this paper, a novel extrinsic information scaling scheme for the max-log-MAP decoder is proposed. It scales and combines the extrinsic information generated at successive iteration rounds. The proposed method is evaluated for 3GPP LTE turbo codes in terms of decoding performance, complexity, and convergence. Simulation results show that its decoding gain approaches that of log-MAP while its computational complexity remains almost the same as max-log-MAP, with only a slight increase in memory. Moreover, it retains the max-log-MAP algorithm's insensitivity to SNR estimation error. Compared with the conventional scaling scheme, it accelerates the extrinsic information exchange between the two constituent decoders, yielding better convergence and decoding performance.
1 Introduction
Turbo codes have been widely adopted in many wireless communication standards such as CCSDS and LTE due to their excellent error-correcting performance. The constituent soft-input, soft-output (SISO) decoders iteratively exchange their extrinsic information using the maximum a posteriori (MAP) algorithm. To reduce computational complexity, decoding is usually performed in the logarithmic domain, yielding the log-MAP (LM) algorithm. To further simplify hardware implementation, sub-optimal algorithms such as max-log-MAP (MLM) were proposed. The MLM algorithm has the lowest computational complexity and is insensitive to SNR mismatch [7, 12]. However, it suffers roughly 10% performance degradation relative to the optimal LM [8].
To reduce the performance loss of MLM, several trade-off modifications have been proposed, falling broadly into two categories. The first substitutes approximation variants for the correction function in LM [2, 6]. Although their performance can approach that of the optimal LM, they are susceptible to SNR mismatch [4], especially with high-order modulation, which limits their application in high-throughput systems [12]. The second is the extrinsic information (EI) scaling scheme [1, 11, 13]. It compensates by multiplying the EI at the output of the SISO decoder by a constant called the scaling factor (SF). It is as insensitive to SNR mismatch as the MLM algorithm [3, 4]. Moreover, compared with MLM, it provides an additional 0.2–0.4 dB decoding gain at almost the same computational complexity [11]. EI scaling promotes the mutual information exchange between the constituent decoders and thus accelerates the convergence of the turbo decoding process, which can be monitored with an extrinsic information transfer (EXIT) chart [5, 10].
In this paper, a novel EI scaling and combining scheme is proposed. Its key feature is the scaled combination of the EI from both the current and the previous iteration round. It retains the properties of the EI scaling algorithm and further improves the decoding performance. EXIT analysis indicates that it accelerates the convergence of the decoding process compared with the conventional scaling scheme. The paper is organized as follows. Section 2 reviews conventional MAP-family turbo decoding algorithms. The scaling and combining scheme is proposed in Sect. 3. Section 4 presents simulation results on performance, complexity, and convergence. Finally, conclusions are drawn in Sect. 5.
2 Review of MAP Family Turbo Decoding Algorithms
In this section, we briefly review some typical MAP-family decoding algorithms for turbo codes, including the optimal algorithm and its sub-optimal variants, and introduce the notation used in the rest of the paper.
2.1 The Log-MAP Algorithm
Define the log-likelihood ratio (LLR) for reliability estimation of the kth information bit \(u_k\) as:

$$L({u_k}) = \ln \frac{P({u_k} = 1 \mid Y)}{P({u_k} = 0 \mid Y)} \qquad (1)$$
where Y is the received sequence. Then, the a posteriori LLR of \(u_k\) can be computed as follows:

$$L({u_k}) = \mathop {\max }\limits_{({s^{\prime }},s):{u_k} = 1}^{*} \left[ {A_{k - 1}}({s^{\prime }}) + {M_k}({s^{\prime }},s) + {B_k}(s) \right] - \mathop {\max }\limits_{({s^{\prime }},s):{u_k} = 0}^{*} \left[ {A_{k - 1}}({s^{\prime }}) + {M_k}({s^{\prime }},s) + {B_k}(s) \right] \qquad (2)$$
where \({A_k}(s)\), \({B_k}(s)\), and \({M_k}({s^{\prime }},s)\) are the forward, backward, and branch metrics in the logarithmic domain. The recursive relations for the first two metrics are:

$${A_k}(s) = \mathop {\max }\limits_{{s^{\prime }}}^{*} \left[ {A_{k - 1}}({s^{\prime }}) + {M_k}({s^{\prime }},s) \right] \qquad (3)$$

$${B_{k - 1}}({s^{\prime }}) = \mathop {\max }\limits_{s}^{*} \left[ {B_k}(s) + {M_k}({s^{\prime }},s) \right] \qquad (4)$$
For the bivariable case, the \(\max ^{*}\) operation is defined as:

$$\max {}^{*}(a,b) = \ln \left( {e^a} + {e^b} \right) \qquad (5)$$
For multivariable cases, the computation is applied recursively. Using the Jacobian logarithmic simplification [6], it can be written as:

$$\max {}^{*}(a,b) = \max (a,b) + \ln \left( 1 + {e^{ - |a - b|}} \right) = \max (a,b) + {f_c}(a,b) \qquad (6)$$
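As an illustrative sketch (ours, not part of the original paper), the \(\max ^{*}\) operation and its recursive multivariable extension take only a few lines of Python; the function names are hypothetical:

```python
import math

def f_c(a, b):
    # correction function f_c(a, b) = ln(1 + e^{-|a - b|})
    return math.log1p(math.exp(-abs(a - b)))

def max_star(a, b):
    # exact max*(a, b) = ln(e^a + e^b), computed in the numerically
    # stable Jacobian form max(a, b) + f_c(a, b)
    return max(a, b) + f_c(a, b)

def max_star_multi(values):
    # multivariable case, evaluated recursively two values at a time
    acc = values[0]
    for v in values[1:]:
        acc = max_star(acc, v)
    return acc
```

Dropping the \(f_c\) term in `max_star` yields exactly the MLM approximation discussed next.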
The term \(f_c(a,b)\) is called the correction function. The LM algorithm is optimal in that it takes all trellis paths into account and computes the maximum a posteriori probability for each information bit. However, it is poorly suited to hardware implementation because of the nonlinear computation in the correction function. Moreover, it is susceptible to SNR mismatch.
2.2 The Max-Log-MAP Algorithm
Replacing the \(\max ^{*}\) function with the \(\max \) approximation in (2)–(4) yields the MLM algorithm. It neglects the correction term, so it has the lowest computational complexity but consequently becomes sub-optimal, with a performance degradation of 0.3–0.5 dB relative to the optimal LM. Nevertheless, it is the basic decoding algorithm in most hardware implementations, for two main reasons. First, MLM involves only addition (subtraction) and maximum-comparison selection, so an add-compare-select (ACS) unit is sufficient for its computation tasks [9]. Second, MLM is insensitive to the channel noise estimate because the susceptible correction-function part is discarded; the decoder can therefore work without accurate channel estimation.
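The SNR-insensitivity argument can be made concrete with a toy numerical sketch (ours, with made-up path metrics): an SNR mismatch scales all metrics by a common factor, and MLM is linear in its inputs, so its hard decisions are unchanged, whereas the nonlinear correction term of LM is distorted:

```python
import math

def max_star(a, b):
    # exact max*: ln(e^a + e^b)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def llr_two_path(m1, m0, op):
    # toy LLR: best metric among u=1 paths minus best among u=0 paths
    return op(*m1) - op(*m0)

m1, m0 = (2.0, 1.5), (1.0, 0.8)  # toy path metrics
c = 2.0                           # SNR mismatch acting as a common scale

mlm = llr_two_path(m1, m0, max)
mlm_c = llr_two_path((c * m1[0], c * m1[1]), (c * m0[0], c * m0[1]), max)
# MLM is linear: the LLR scales by c, so the sign (hard decision) is kept
assert abs(mlm_c - c * mlm) < 1e-12

lm = llr_two_path(m1, m0, max_star)
lm_c = llr_two_path((c * m1[0], c * m1[1]), (c * m0[0], c * m0[1]), max_star)
# log-MAP's correction term is nonlinear, so the mismatch distorts the LLR
assert abs(lm_c - c * lm) > 1e-3
```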
3 Scaling Max-Log-MAP Algorithm
After the SISO decoder generates the a posteriori information \(L_{apo} (u_k)\), the EI is extracted as follows:

$${L_{ext}}({u_k}) = {L_{apo}}({u_k}) - {L_{apr}}({u_k}) - {L_{int}}({u_k}) \qquad (7)$$
where \({L_{apo}}({u_k})\), \({L_{apr}}({u_k})\), and \({L_{int}}({u_k})\) on the right-hand side denote the a posteriori information, the a priori information (AI), and the channel intrinsic information of \(u_k\), respectively. In the turbo decoding process, the EI output of one constituent decoder becomes the AI input of the other; in this way, the EI is exchanged between the two SISO constituent decoders alternately. Finally, after a fixed number of iterations, the estimate of \(u_k\) is obtained from \(L_{apo} (u_k)\).
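A minimal sketch of the EI extraction step, in Python (the function and argument names are ours):

```python
def extract_extrinsic(L_apo, L_apr, L_int):
    # EI: a posteriori minus a priori minus channel intrinsic,
    # elementwise over the information bits
    return [apo - apr - li for apo, apr, li in zip(L_apo, L_apr, L_int)]
```

In each half-iteration the list returned here is handed to the other SISO decoder as its a priori input.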
The EI scaling scheme is an indirect modification of MLM. It is independent of the a posteriori information generation, so its computational complexity is nearly the same as that of MLM. For the conventional scaling scheme of MLM (S-MLM) [1, 11], the additional burden is merely a multiplier, while the decoding performance improves by 0.2–0.4 dB under the optimum scaling factor in the 3GPP case.
Since the information entropy of the EI is strongly related to its variance, increasing the variance appropriately helps the iteration converge faster [5]. Inspired by the Chase combining method of the HARQ mechanism, an EI scaling and combining scheme for MLM (SC-MLM) is proposed:
$$L_{ext}^{new}({u_k}) = s \left[ {L_{ext}}({u_k}) + L_{ext}^{last}({u_k}) \right] \qquad (8)$$

where s is called the scaling factor (SF) and \(L_{ext}^{last}({u_k})\) is the EI generated at the previous iteration; for the first iteration round it is initialized to 0. As shown in (8), both scaling and combining operations are used in the evolution of the EI.
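Reading the scheme as a single factor s applied to the sum of the current and the stored EI (our reading; the helper name is hypothetical), the update is a one-liner:

```python
def sc_mlm_update(L_ext, L_ext_last, s=0.7):
    # scale-and-combine: current EI plus the EI saved from the previous
    # iteration, scaled by s; L_ext_last is all zeros in the first
    # iteration round, where the update reduces to plain S-MLM
    return [s * (e + e_last) for e, e_last in zip(L_ext, L_ext_last)]
```

The returned list is both passed to the other decoder as AI and stored as `L_ext_last` for the next round, which accounts for the extra memory discussed below.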
The structure of the SC-MLM decoder is depicted in Fig. 1. The subscripts a and e denote AI and EI, respectively, while the superscript last denotes the EI generated at the previous iteration. Compared with a conventional turbo decoder, the EI must be saved temporarily (dotted line) for scaling and combining in the next iteration, so additional memory is required.
4 Simulation Results
Unless otherwise mentioned, the proposed method is evaluated on the (13,15)\(_\mathrm{{oct}}\) generator-polynomial turbo code of the 3GPP LTE standard. The code length is 2048 with rate 1/3. Frames are transmitted over an AWGN channel with QPSK modulation. All BER results are obtained from over 10\(^5\) transmitted frames. The maximum number of decoder iterations is 4, and no SNR mismatch is assumed. The comparison covers three aspects: decoding performance, computational complexity, and iteration convergence.
4.1 Performance Analysis
Figure 2a shows the BER performance of LM, MLM, linear LM (LLM) [2], and the two scaling MLM schemes versus \(E_b/N_0\) ranging from 0 to 2 dB, with SF set to 0.7. In practice, to meet throughput and latency requirements, the SISO decoder usually works in a parallel manner; the corresponding results are shown in Fig. 2b for a parallel degree of 8. Note that LM is no longer optimal under parallel decoding. The simulation results show that the improvement is considerably more evident under parallel decoding.
The SF is a crucial parameter of the scaling scheme. The BER performance of SC-MLM over SF values from 0.5 to 0.95 is shown in Fig. 3 at \(E_b/N_0\) of 1.2, 1.5, and 1.8 dB. The SF value yielding the lowest BER is considered optimum. The simulation results imply an irregular dependence of the optimum SF on SNR.
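The sweep itself is straightforward; the sketch below uses a toy BER table as a stand-in for the Monte-Carlo simulation (both the table values and the function name are hypothetical):

```python
def best_scaling_factor(ber_of_sf, candidates):
    # pick the SF with the lowest simulated BER over a candidate grid
    return min(candidates, key=ber_of_sf)

# toy stand-in; a real sweep would run the SC-MLM decoder over many
# frames at a fixed Eb/N0 for each candidate SF
toy_ber = {0.5: 2e-3, 0.6: 9e-4, 0.7: 5e-4, 0.8: 7e-4, 0.9: 3e-3}
sf_opt = best_scaling_factor(toy_ber.get, sorted(toy_ber))
```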
The sensitivity of MLM, SC-MLM, and LLM to SNR mismatch is shown in Fig. 4 at \(E_b/N_0 = 1.2\) dB. The x-axis label \(\Delta E_b/N_0\) denotes the SNR mismatch, where negative values represent underestimation and positive values overestimation. The results show that LLM is as sensitive to SNR mismatch as LM, especially under underestimation, whereas SC-MLM maintains the insensitivity of MLM.
4.2 Complexity Analysis
The computational complexity of both the recursive metrics and the EI is shown in Table 1, where N is the code length, M is the number of encoder shift registers, and \(X = N \cdot 2^M\). For simplicity, logarithmic and exponential operations are counted as identical, as are subtractions and additions. Since the scaling schemes only affect the EI computation, their operation counts are nearly equal.
4.3 Convergence Analysis
The EXIT chart and iterative decoding trajectories are given in Fig. 5, where \(I_A\) and \(I_E\) represent the mutual information of AI and EI, respectively. Vertical segments of the trajectories correspond to half-iterations in SISO decoder 1, while horizontal segments correspond to those in SISO decoder 2. The EXIT curve is the envelope of the optimal LM decoding trajectories. The decoding procedure starts at the origin and terminates at the top right after several half-iterations. Assuming the iteration terminates once either mutual information exceeds 0.9 (sufficient EI exchange), LM and SC-MLM need three rounds, while S-MLM requires four. The trajectories show that the SC-MLM scheme accelerates the EI exchange and makes the iterative decoding converge faster. This simulation is conducted under BPSK modulation at \(E_b/N_0 = 1\) dB.
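For reference, the mutual information plotted on an EXIT chart can be estimated from a batch of LLRs with the standard time-average estimator \(I \approx 1 - \mathrm{E}[\log_2(1 + e^{-xL})]\) used in EXIT analysis [5]; the sketch below (function name ours) assumes consistent LLRs:

```python
import math

def mutual_information(llrs, bits):
    # time-average MI estimate between bits x in {+1, -1} and their
    # LLRs L, assuming consistent LLRs: I ~ 1 - E[log2(1 + e^{-x*L})]
    total = sum(math.log2(1.0 + math.exp(-x * L))
                for L, x in zip(llrs, bits))
    return 1.0 - total / len(llrs)
```

Zero LLRs give I = 0; large, correctly signed LLRs drive I toward 1, the top-right corner of the chart.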
5 Conclusion
In this paper, a novel EI scaling and combining scheme for the MLM algorithm is proposed to improve turbo decoding performance. It promotes the EI exchange and makes decoding converge faster by scaling and combining the EI. Compared with conventional MLM, it provides an additional 0.2 dB decoding gain, which is more pronounced under parallel decoding, while maintaining almost the same computational complexity. Like MLM, and unlike LM, it is insensitive to SNR mismatch. Compared with S-MLM, it accelerates the EI exchange, allowing earlier iteration termination. Simulations were also conducted to find the optimum SF. In a hardware implementation, additional memory is needed to save the EI generated at each iteration round. In conclusion, it is a good trade-off between performance and complexity in turbo decoding.
References
Alvarado A, Nunez V, Szczecinski L, Agrell E. Correcting suboptimal metrics in iterative decoders. In: IEEE international conference on communications; 2009. p. 1–6.
Cheng JF, Ottosson T. Linearly approximated log-MAP algorithms for turbo decoding. In: Proceedings of IEEE vehicular technology conference (VTC 2000-Spring), Tokyo, vol 3; 2000. p. 2252–56.
El-Khamy M, Wu J, Lee J, Roh H, Kang I. Near-optimal turbo decoding in presence of SNR estimation error. In: Global communications conference; 2013. p. 3737–42.
El-Khamy M, Wu J, Lee J, Kang I. Online log-likelihood ratio scaling for robust turbo decoding. IET Commun. 2013;8(2):217–26.
Hagenauer J. The exit chart—introduction to extrinsic information transfer in iterative processing. In: 2004 European signal processing conference; 2004. p. 1541–48.
Liu Z, Wu B, Ye T. Improved turbo decoding with multivariable taylor series expansion. IEEE Commun Lett. 2017;99:1–1.
Papaharalabos S, Mathiopoulos PT, Masera G, Martina M. Non-recursive \({\max ^*}\) operator with reduced implementation complexity for turbo decoding. IET Commun. 2012;6(7):702–7.
Robertson P, Villebrun E, Hoeher P. A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain. In: IEEE international conference on communications, ICC’95 Seattle, gateway to globalization, vol 2; 1995. p. 1009–13.
Roth C, Belfanti S, Benkeser C, Huang Q. Efficient parallel turbo-decoding for high-throughput wireless systems. IEEE Trans Circ Syst I Regul Pap. 2014;61(6):1824–35.
Sapra R, Jagannatham AK. Exit chart based ber expressions for turbo decoding in fading MIMO wireless systems. IEEE Commun Lett. 2015;19(1):10–3.
Vogt J, Finger A. Improving the max-log-map turbo decoder. Electron Lett. 2000;36(23):1937–9.
Worm A, Hoeher P, Wehn N. Turbo-decoding without snr estimation. IEEE Commun Lett. 2000;4(6):193–5.
Yue DW, Nguyen HH. Unified scaling factor approach for turbo decoding algorithms. IET Commun. 2010;4(8):905–14.
© 2020 Springer Nature Singapore Pte Ltd.
Sun, L., Wang, H. (2020). Improved Max-Log-MAP Turbo Decoding by Extrinsic Information Scaling and Combining. In: Liang, Q., Liu, X., Na, Z., Wang, W., Mu, J., Zhang, B. (eds) Communications, Signal Processing, and Systems. CSPS 2018. Lecture Notes in Electrical Engineering, vol 516. Springer, Singapore. https://doi.org/10.1007/978-981-13-6504-1_42