1 Introduction

Turbo codes were introduced by Berrou et al. in 1993 and brought a dramatic advance in channel coding. They offer superior Forward Error Correction (FEC) performance compared with earlier codes and, by operating within half a decibel of the Shannon limit, provide reliable data transmission (Berrou and Glavieux 2003). Turbo codes have been adopted in several standards and in various generations of cellular systems such as 4G and 5G (Lin et al. 2018; Morgado et al. 2018; Schaich et al. 2016; Zhan et al. 2015).

Of the two algorithms commonly used for Turbo decoding, the Maximum A-posteriori Probability (MAP) algorithm is tedious to implement. A modified version, the Log-MAP algorithm, was developed to avoid its complex numerical computations, and a further simplification of Log-MAP using the 'max' operator yields the Max-Log-MAP algorithm. The second decoding algorithm, the Soft-Output Viterbi Algorithm (SOVA), has moderate complexity but suffers some degradation in performance (Aarthi et al. 2020).

This paper proposes an optimization based on a Correction Factor (CF) approach applied to the extrinsic information of a Turbo decoder. The CF-based optimization reduces the overestimation of the reliability values. The work also provides a mathematical relation between the CF and the SNR. The rest of the paper is structured as follows: Sect. 2 reviews related work on Turbo decoding algorithms. Section 3 describes the proposed correction factor approach for the DOCF-Max-Log-MAP algorithm. Simulation results and analysis of the proposed system are presented in Sect. 4, and Sect. 5 concludes the work.

2 Related works

Many research works have been carried out to improve the performance of the Log-MAP, Max-Log-MAP and SOVA algorithms. Among these, SOVA has received much attention because of its widespread applications (Sklar 1997), and various modifications have been developed. Of the different methods used to improve these practical decoding algorithms, correction factor (CF) methods are significant because they are easy to implement and effective. The traditional SOVA tends to overestimate the reliability values. Hence, normalization of the extrinsic information based on the CF approach was first undertaken by El Gamal and Hammons (2001), where the soft output of SOVA is assumed to be Gaussian distributed. Xiang et al. (2019) applied a CF to the extrinsic information that increases linearly with the number of decoding iterations. From a different perspective, Huang et al. (2016) showed that the optimal CF is a constant value. Sakthivel and Pradeep (2018) combined the method of Xiang et al. (2019) with the Gaussian assumption of El Gamal and Hammons (2001): the output of decoder 1 is based on the Gaussian assumption, while decoder 2 uses a normalized, constant CF. In another approach, Gnanasekaran et al. (2012) optimized the CF for decoder 1 and used a constant factor of 0.75 for decoder 2, with an analysis of CODEC parameters. A similar CF concept is used by Aarthi and Dhulipala (2020), but for the SOVA algorithm in a MIMO-OFDM system.

In the work of Pei et al. (2017), the CF was quantized to various levels, together with a normalization method based on a pseudo-median filtering technique; this CF method resulted in a robust SOVA decoder. The scaling factor concept was also applied to the MLMAP algorithm by Vogt and Finger (2000). However, all these correction techniques are based on the assumption that the distribution of the extrinsic information is Gaussian, whereas in practice it is not strictly Gaussian (Fowdur et al. 2016). Table 1 compares performance metrics such as BER, latency, complexity, parallel processing and optimization among closely related works in the literature, and highlights the main contribution of and the need for the proposed DOCF-MLMAP algorithm.

Table 1 Performance comparison among related works in the literature

2.1 Motivation and problem statement

In general, the Turbo decoder has a much higher complexity than the Turbo encoder, and this complexity depends on the decoding algorithm. The predominant Turbo decoding algorithms are MAP and SOVA. The MAP algorithm is computationally intense but gives optimal performance, which motivated the development of its variant, the Max-Log-MAP algorithm. Max-Log-MAP simplifies MAP by transferring the forward and backward recursions into the log domain and applying an approximation that dramatically reduces complexity; because of this approximation, its performance is sub-optimal. SOVA is the least complex of the SISO decoders, roughly half as complex as the Max-Log-MAP algorithm, but it is also the least accurate, performing about 0.7 dB worse than the MAP algorithm, which makes it unsuitable for wireless applications. Hence, lower decoding complexity comes at the cost of degraded, sub-optimal performance, and this necessitates an algorithm that provides near-optimal performance with low complexity. The complexity and hardware-implementation difficulties of the MAP algorithm led to the formulation of the MLMAP algorithm, which reduces complexity by using the max function and neglecting the correction term, resulting in an approximation error and hence sub-optimal performance. Two problems deteriorate the performance of the sub-optimal decoding algorithms:

  (i) The reliability of the extrinsic information is too optimistic.

  (ii) Correlation between the intrinsic and extrinsic information.

These two distortions result in the sub-optimal performance of the MLMAP algorithm. Conventional MLMAP produces highly optimistic reliability outputs because of incomplete path selection during decoding, which occurs when the competitor of the Maximum Likelihood (ML) path is not exploited. In general, traditional MLMAP tends to neglect the 'true' closest ML path; the remaining paths governing the reliability then have lower path metrics, causing incomplete path selection and the risk of losing the most likely member. Hence, the reliability decisions of MLMAP are too optimistic, which affects both the extrinsic information and the LLR values and yields sub-optimal performance.
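To make the source of the approximation error mentioned above concrete, the short sketch below contrasts the exact Jacobian logarithm (often written max*) used by Log-MAP with the plain max used by Max-Log-MAP. This is an illustration added here for clarity, not part of the original work; the identity \(\ln(e^{a}+e^{b}) = \max(a,b) + \ln(1+e^{-|a-b|})\) is the standard correction term that MLMAP drops.

```python
import math

def max_star(a: float, b: float) -> float:
    """Exact Jacobian logarithm used by Log-MAP: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_approx(a: float, b: float) -> float:
    """Max-Log-MAP approximation: the correction term is dropped."""
    return max(a, b)

# The dropped correction term is largest when the two path metrics are
# close, which is exactly when the reliability decision is most delicate.
for a, b in [(2.0, 1.9), (2.0, 0.0), (2.0, -3.0)]:
    print(f"metrics ({a}, {b}): max* = {max_star(a, b):.3f}, max = {max_approx(a, b):.3f}")
```

The discrepancy between the two functions is the approximation error that, accumulated over the recursions, produces the overly optimistic reliability values discussed above.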

2.2 Objective and contribution of the research work

This work aims to improve the performance of the Max-Log-MAP Turbo decoding algorithm, motivated as follows. Both the 4G and 5G mobile communication standards considered the LTE-based Enhanced Turbo code a promising channel coding scheme; it employs a scaled Max-Log-MAP algorithm with a fixed scaling factor of 0.75 and provides better error correction than LDPC and polar codes (Morgado et al. 2018). In terms of decoder complexity, polar decoders are less complicated than Turbo decoders, especially for short frames (Iscan et al. 2016; Aarthi et al. 2019; Bayrakdar et al. 2016, 2013; Bayrakdar 2020). Also, sensor networks used in industrial control applications require good BER performance, and error correction is critical for such networks. Research efforts have been made to reduce the decoding complexity and improve the BER performance of LTE-based systems, and scaling of the extrinsic information has been proposed for the MLMAP Turbo code (Lin et al. 2018; Fowdur et al. 2016). The scaling factor has also been calculated based on Pearson's correlation coefficient, but these factors are not optimal. To address these obstacles, modifications to the conventional MLMAP algorithm are proposed here, providing better BER with reduced complexity.

3 Correction factor for DOCF-Max-Log-MAP

Two distortions result in the sub-optimal performance of the MLMAP algorithm (Chaikalis et al. 2014): the over-optimistic reliability values and the correlation between the intrinsic and extrinsic information (Huang et al. 2016). Of the two, decoding performance is affected primarily by the over-optimistic effect and only slightly by the correlation effect. This paper focuses on reducing the overestimation of the reliability values, which depends on the SNR. To alleviate this distortion, \(L_{e} \left( {d_{k} } \right)\) is compensated by scaling the extrinsic information with an appropriate correction factor.

Figure 1 shows the Max-Log-MAP Turbo decoder with the Double Optimized Correction Factor (DOCF) and displays the architecture of the proposed Turbo decoder. There are two decoders, separated by an interleaver, matching the two constituent encoders. Each decoder has three inputs: the channel systematic bits, the encoder parity bits and the a-priori knowledge from the other decoder. The decoders run recursively and generate soft LLR outputs. During the first iteration, decoder 1 uses only the channel output and produces a soft output that is sent to decoder 2. Decoder 2 uses the decoder 1 data and the interleaved version of the channel output to generate a soft output, which is fed back to decoder 1. During the second iteration, decoder 1 decodes the channel output together with the decoder 2 output obtained in the first iteration; with this additional information, it can generate more precise soft outputs. The loop continues, and the outputs become more accurate with each iteration. The figure also shows how the standard Turbo decoder is configured with the proposed modification: the extrinsic information of each decoder is multiplied by CF1 and CF2, respectively. The pair of correction factors for the inner and outer decoders that gives the lowest BER at a given SNR is taken as the optimized CF pair.

Fig. 1 Turbo Decoder with DOCF
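The iterative exchange described above, with the correction factors applied to the extrinsic information of each constituent decoder, can be summarised structurally as follows. This is a minimal sketch, not the authors' implementation: `siso_decode`, `interleave` and `deinterleave` are hypothetical placeholder functions standing in for a Max-Log-MAP SISO constituent decoder and the interleaver of Fig. 1.

```python
def docf_turbo_decode(sys_llr, par1_llr, par2_llr, cf1, cf2, n_iterations=8,
                      siso_decode=None, interleave=None, deinterleave=None):
    """Structural sketch of the DOCF Turbo decoder of Fig. 1.

    sys_llr / par1_llr / par2_llr: channel LLRs of the systematic and parity
    streams. siso_decode(sys, par, apriori) -> extrinsic LLRs is a placeholder
    for a Max-Log-MAP constituent decoder (not defined in the paper).
    """
    apriori = [0.0] * len(sys_llr)          # no a-priori knowledge at start
    for _ in range(n_iterations):
        # Decoder 1: channel values plus (deinterleaved) feedback from decoder 2
        ext1 = siso_decode(sys_llr, par1_llr, apriori)
        ext1 = [cf1 * e for e in ext1]      # scale extrinsic information by CF1

        # Decoder 2 works on the interleaved systematic stream
        ext2 = siso_decode(interleave(sys_llr), par2_llr, interleave(ext1))
        ext2 = [cf2 * e for e in ext2]      # scale extrinsic information by CF2

        apriori = deinterleave(ext2)        # feedback to decoder 1
    # Final LLR: channel term plus the two CF-scaled extrinsic terms
    return [s + a + e for s, a, e in zip(sys_llr, ext1, apriori)]
```

The sign of each returned LLR then gives the hard decision for the corresponding information bit.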

Generally, the Log-Likelihood Ratio (LLR) for the ith data bit \(y_{i}\), for each decoder, is given as

$$l\left( {y_{i} } \right) = \log \left[ {\frac{{P\left( {y_{i} = \left. 1 \right|Y} \right)}}{{P\left( {y_{i} = \left. 0 \right|Y} \right)}}} \right]$$
(1)

This LLR can be separated into three terms,

$$l\left( {y_{i} } \right) = l_{apri} \left( {y_{i} } \right) + l_{c} \left( {y_{i} } \right) + l_{extrinsic} \left( {y_{i} } \right)$$
(2)

where \(l_{apri} \left( {y_{i} } \right)\) is the a-priori information of \(y_{i}\), \(l_{c} \left( {y_{i} } \right)\) is the channel measurement information and \(l_{extrinsic} \left( {y_{i} } \right)\) is the extrinsic information.

Equation (2) denotes the LLR of the optimal MAP decoding algorithm. The LLR has three parts: \(l_{c} \left( {y_{i} } \right)\) is the channel reliability value, a constant that depends on the channel model and the SNR; \(y_{i}\) is the channel output corresponding to information bit i; and \(l_{extrinsic} \left( {y_{i} } \right)\) is the extrinsic information of the optimal MAP algorithm.
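As a purely illustrative numerical example (the values below are hypothetical, not taken from the simulations), suppose that for a given bit \(l_{apri} \left( {y_{i} } \right) = 0.4\), \(l_{c} \left( {y_{i} } \right) = 1.2\) and \(l_{extrinsic} \left( {y_{i} } \right) = 0.9\); then

$$l\left( {y_{i} } \right) = 0.4 + 1.2 + 0.9 = 2.5 > 0$$

and the bit is decided as 1. If the extrinsic term is overestimated, this sum is inflated and the decision appears more reliable than it actually is, which is precisely the distortion the correction factors are meant to counter.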

Figure 1 shows that the extrinsic information corresponding to decoder 1 \(l_{extrinsic}^{1} \left( {y_{i} } \right)\) becomes the a-priori information \(l_{apri} \left( {y_{i} } \right)\) for decoder 2.

$$l_{extrinsic}^{1} \left( {y_{i} } \right) = l_{apri} \left( {y_{i} } \right){\text{ and}}$$
(3)
$$l_{extrinsic}^{1} \left( {y_{i} } \right) = l_{extrinsic}^{2} \left( {y_{i} } \right)$$
(4)

Now, LLR for MAP algorithm is given by

$$l(y_{i} ) = l_{c} \left( {y_{i} } \right) + l_{extrinsic}^{1} \left( {y_{i} } \right) + l_{extrinsic}^{2} \left( {y_{i} } \right)$$
(5)

For the Max-Log-MAP algorithm, the sub-optimal LLR is given in terms of estimates as follows

$$l\mathop {\left( {y_{i} } \right)}\limits^{ \wedge } = l_{apri} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) + l_{c} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) + l_{extrinsic} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)$$
(6)

Also,

$$l_{extrinsic}^{1} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) = l_{apri} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)$$
(7)
$$l_{extrinsic}^{1} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) = l_{extrinsic} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)$$
(8)

The symbol \(\wedge\) denotes estimated values for the sub-optimal Max-Log-MAP algorithm. In general, the sub-optimal LLR estimate \(l\mathop {\left( {y_{i} } \right)}\limits^{ \wedge }\) is less reliable than \(l\left( {y_{i} } \right)\). Consequently, \(l_{extrinsic}^{k} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)\) is less dependable than \(l_{extrinsic}^{k} \left( {y_{i} } \right)\), where k = 1, 2 corresponds to decoders 1 and 2. It is therefore proposed to apply correction terms, or correction factors (CF1 and CF2), to the extrinsic information \(l_{extrinsic}^{1} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)\) and \(l_{extrinsic}^{2} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)\), respectively.

This improves the overall reliability of \(l\mathop {\left( {y_{i} } \right)}\limits^{ \wedge }\). The LLR expression for Max-Log-MAP can therefore be modified using the CFs to approximate the true LLR:

$$l(\mathop y\limits^{ \wedge } _{i} ,CF_{1} ,CF_{2} ) = l_{c} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) + (CF_{1} )\left( {l^{1} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right) + (CF_{2} )\left( {l^{2} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)$$
(9)

The Minimum Mean Square Error (MMSE) criterion is used to obtain expressions for CF1 and CF2.

For a particular SNR, the Mean Square Difference (MSD) between the LLR of the optimal MAP algorithm and that of the proposed OCF-Max-Log-MAP algorithm with correction factors \(CF_{k}\) is

$$\alpha_{k} \left( {CF_{k} } \right) = E\left\{ {\left( {CF_{k} \,l_{extrinsic}^{k} \left( {\hat{y}_{i} } \right) - l_{extrinsic}^{k} \left( {y_{i} } \right)} \right)^{2} } \right\},\quad k = 1,\,2$$
(10)

To obtain exact soft outputs, the \(CF_{k}\) values should be recalculated every time the Turbo decoder yields new extrinsic data. Furthermore, for each extrinsic output produced by the decoder, the BER is calculated, and the pair of CFs providing the least BER is taken as the optimized pair.

The Mean Square Difference (MSD) describes the efficiency of the proposed algorithm: the smaller the MSD, the better the performance. If MSD = 0, the proposed sub-optimal algorithm performs as well as the optimal MAP algorithm. The optimized \(CF_{k}\) is therefore found by minimizing \(\alpha_{k} (CF_{k} )\), i.e. by setting \(d\alpha_{k} (CF_{k} )/dCF_{k} = 0\).

This gives

$$CF_{k} \,{\text{E}}\left( {l_{extrinsic}^{k} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)^{2} = {\text{E}}\left( {l_{extrinsic}^{k} \left( {y_{i} } \right)} \right)^{2}$$
(11)
$$CF_{k} = \frac{{{\text{E}}\left( {l_{extrinsic}^{k} \left( {y_{i} } \right)} \right)^{2} }}{{{\text{E}}\left( {l_{extrinsic}^{k} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)^{2} }},\quad {\text{where }}k = 1,\,2$$
(12)
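A minimal sketch of how Eq. (12) could be evaluated from simulation data is given below; it simply forms the ratio of the empirical second moments of the optimal-MAP and Max-Log-MAP extrinsic values for decoder k. The arrays are assumed to be collected from a reference MAP run and the sub-optimal run over the same received frames; this is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def correction_factor(ext_map: np.ndarray, ext_mlmap: np.ndarray) -> float:
    """Estimate CF_k per Eq. (12) as E[l_ext^2] / E[l_ext_hat^2].

    ext_map   : extrinsic LLRs of the optimal (Log-)MAP decoder
    ext_mlmap : extrinsic LLRs of the sub-optimal Max-Log-MAP decoder
    """
    cf = np.mean(ext_map ** 2) / np.mean(ext_mlmap ** 2)
    # Because the Max-Log-MAP values are overestimated (see Eq. 13 below),
    # the ratio should not exceed one.
    return min(float(cf), 1.0)
```

Such a moment-based estimate is only available offline, when a reference MAP run can be simulated alongside the Max-Log-MAP decoder.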

The distortion due to the overestimation of the reliability values causes

$${\text{E}}\left( {l_{{extrinsic}}^{k} \left( {\mathop {y_{i} }\limits^{{}} } \right)} \right)^{2} \le {\text{E}}\left( {l_{{extrinsic}}^{k} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)^{2}$$
(13)

So \(CF_{k}\) is always less than or equal to unity. This work proposes that the CFs depend on the SNR, for the following reason:

$$l\mathop {\left( {\mathop y\limits^{ \wedge } _{i} ,CF_{1} ,CF_{2} } \right)}\limits^{{}} \cong C_{m} \left( {SNR} \right) + (CF1)\left( {l^{1} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right) + (CF2)\left( {l^{2} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)$$
(14)

where

$$l_{c} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right) \cong C_{m} \left( {SNR} \right)$$

\(C_{m}\) is the Rayleigh fading amplitude; \(C_{m} = 1\) for the AWGN channel.

$$l(\mathop y\limits^{ \wedge } _{i} ,CF_{k} ) \cong C_{m} \left( {SNR} \right) + (CF_{k} )\left( {l^{k} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)$$
(15)
$$CF_{k} \cong \frac{{l(\mathop y\limits^{ \wedge } _{i} ,CF_{k} ) - C_{m} \left( {SNR} \right)}}{{\left( {l^{k} _{{extrinsic}} \left( {\mathop {y_{i} }\limits^{ \wedge } } \right)} \right)}}{\text{ where }}k = 1, 2$$
(16)

This expression shows the dependence of the optimized CFs on the SNR.

4 Simulation results and discussion

The simulation of the proposed DOCF-MLMAP algorithm employs QPSK modulation with a 2048-bit random interleaver (Benedetto and Montorsi 1996; Sadjadpour et al. 2001), code rate 1/2 and a frame limit of 500. Two channels, AWGN and Rayleigh fading, are considered for the analysis, with eight decoding iterations. The simulation parameters used to obtain the correction factors for the proposed DOCF-MLMAP are given in Table 2.

Table 2 Simulation parameters

The range of correction factors considered for the DOCF-MLMAP algorithm is from 0.05 to 0.95, and its performance is shown in Fig. 2. The optimized values of CF1 are taken from Gnanasekaran et al. (2012). Figure 2 demonstrates the basic technique for obtaining the BER-optimized CFs: since the CF values have to be less than one, values from 0.05 to 0.95 are tried in combination with the values obtained by Gnanasekaran et al. (2012), and the lowest BER is recorded for each decoder. In Fig. 2, a step size of 0.05 is used to increment the CF values. A smaller step size gives the decoder more precision, whereas with a large step size it becomes difficult to resolve the soft outputs and the configured CFs; therefore, 0.05 is chosen as the step size. The lowest BER is found for a given pair of CFs (CF1 and CF2). These optimal CF values reduce MLMAP's optimistic effect and thus increase its efficiency.

Fig. 2 Error plot (various correction factors CF2 vs SNR) for the DOCF-Max-Log-MAP algorithm
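The selection procedure illustrated in Fig. 2 amounts to a one-dimensional sweep per SNR point: with CF1 fixed to the optimized value taken from Gnanasekaran et al. (2012), CF2 is stepped from 0.05 to 0.95 in increments of 0.05 and the value giving the lowest simulated BER is retained. The sketch below only outlines this sweep; `simulate_ber` is a hypothetical placeholder for a full encode/channel/decode Monte-Carlo run and is not defined in the paper.

```python
def optimize_cf2(snr_db, cf1, simulate_ber,
                 cf_min=0.05, cf_max=0.95, step=0.05):
    """Sweep CF2 over [cf_min, cf_max] and return the (CF2, BER) pair with
    the lowest simulated BER for a given SNR and a fixed CF1.

    simulate_ber(snr_db, cf1, cf2) is assumed to run the DOCF-MLMAP decoder
    over enough frames to give a stable BER estimate.
    """
    best_cf2, best_ber = None, float("inf")
    cf2 = cf_min
    while cf2 <= cf_max + 1e-9:
        ber = simulate_ber(snr_db, cf1, cf2)
        if ber < best_ber:
            best_cf2, best_ber = cf2, ber
        cf2 = round(cf2 + step, 2)   # avoid floating-point drift in the grid
    return best_cf2, best_ber
```

The same sweep is repeated at every SNR point of Table 3, which is what makes the resulting CF pair adaptive rather than fixed.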

Table 3 gives the BER for Correction Factor (CF) values ranging from 0.05 to 0.95 and SNR between 2 and 16 dB for QPSK modulation. The CF shown here corresponds to decoder 2 and is used together with the CF for decoder 1 obtained from Gnanasekaran et al. (2012). In that Single Optimized Correction Factor MLMAP (SOCF-MLMAP) algorithm, CF1 is optimized while CF2 is fixed at 0.75. In this work, the fixed CF2 is also optimized for improved performance, so that both CFs are optimized and DOCF-MLMAP is obtained.

Table 3 BER for varying CF and SNR

Figure 2 and Table 4 show that, for each value of SNR, the correction factors are not only optimal but also adaptive. Vogt and Finger (2000) employed a constant CF, while Gnanasekaran et al. (2012) used a single adaptive correction factor (CF1) rather than a fixed CF for the MLMAP algorithm. The proposed DOCF-MLMAP decoding algorithm uses an optimized and adaptive CF pair (CF1 and CF2). The values of the optimized CF1 for various SNRs used in DOCF-MLMAP are taken from Gnanasekaran et al. (2012).

Table 4 Optimized Correction Factors and BER for varying SNR

Figure 3 shows that using the adaptive correction factor pair (CF1 and CF2) is superior to the earlier techniques that use a constant correction factor (Vogt and Finger 2000) or a single optimized CF (Gnanasekaran et al. 2012). The performance of the various CF techniques is given in Fig. 3 for the AWGN and Rayleigh fading channels.

Fig. 3 BER plot for the proposed DOCF-MLMAP algorithm in AWGN and Rayleigh fading channels with rate 1/2, 2048-bit random interleaver, code generator (7,5) and eight iterations

In the AWGN channel, Fig. 3 shows that for SNRs above 4 dB the BER curve of SOCF-MLMAP is clearly better than that of the constant-CF method, and the curve of the proposed DOCF-MLMAP with the optimized CF pair is better than the SOCF-MLMAP curve. At higher SNRs (> 6 dB), SOCF- and DOCF-MLMAP are much better than the constant-CF method: SOCF-MLMAP shows a performance improvement of 1.5 dB over the constant-CF method, and DOCF-MLMAP provides a further enhancement of 0.25 dB over SOCF-MLMAP at a BER of \(10^{-4}\).

The superiority of the proposed CF method over the traditional methods in an uncorrelated Rayleigh fading channel is also depicted in Fig. 3. All three CF methods show equal BER performance up to an SNR of 10 dB, but for higher SNR values (> 10 dB) SOCF-MLMAP gives a 1.5 dB performance improvement over the constant-CF method, and the proposed MLMAP algorithm provides an additional improvement of 0.6 dB over SOCF-MLMAP at a BER of \(10^{-4}\). Hence, in the Rayleigh fading channel as well, DOCF-MLMAP performs better than its two predecessors.

The performance of the proposed DOCF-Max-Log-MAP algorithm with the adaptive, optimal correction factor pair is compared with the SOCF-Max-Log-MAP algorithm in Fig. 4; both are in turn compared with their root algorithms, Log-MAP and Max-Log-MAP, for various numbers of iterations. The comparison is made in the AWGN channel at an SNR of 10 dB. The graph confirms that the proposed DOCF-Max-Log-MAP is better than SOCF-Max-Log-MAP in terms of BER. It is widely known that Log-MAP outperforms Max-Log-MAP in terms of BER because Log-MAP is optimal.

Fig. 4 BER of MLMAP, SOCF-MLMAP and the proposed DOCF-MLMAP with Log-MAP for a varying number of iterations; code generator (7,5), rate 1/2, 2048-bit random interleaver, at 10 dB in the AWGN channel

Interestingly, for more than six iterations, DOCF-MLMAP performs better than Log-MAP; this phenomenon has also been reported by Vogt and Finger (2000). DOCF-MLMAP also shows a better BER than SOCF-MLMAP: the introduction of the optimal, adaptive correction factor pair gives better performance than SOCF-MLMAP. For five iterations, the BER of DOCF-MLMAP is 2.5 × \(10^{-6}\), while that of SOCF-MLMAP is 5 × \(10^{-6}\).

The complexity of any decoding algorithm is proportional to the number of iterations used. It can be inferred from the plot that the Single Optimized CF and Double Optimized CF Max-Log-MAP algorithms reach their best BER performance by five iterations; beyond that, increasing the number of iterations does not change the BER. The complexity is therefore reduced by 37.5% for SOCF-MLMAP and DOCF-MLMAP compared with the MLMAP decoding algorithm. Reduced complexity is one of the desired criteria for good system design.

Moreover, the proposed DOCF-Max-Log-MAP algorithm achieves this complexity reduction with an even better BER. The number of iterations used, the BER and the percentage decrease in complexity for the Log-MAP, MLMAP, SOCF-MLMAP and DOCF-MLMAP decoding algorithms are given in Table 5. It can be seen that MLMAP reduces its complexity at the cost of BER, whereas DOCF-MLMAP reduces the complexity threefold compared with MLMAP while achieving an even better BER performance.

Table 5 Complexity reduction for each Decoding Algorithm

Complexity reduction in terms of percentage is calculated by the formula given below:

$${\text{Complexity reduction in }}\% = \left( {1 - \frac{{{\text{I}}_{{{\text{const}}}} }}{{{\text{I}}_{{{\text{total}}}} }}} \right) \times 100\%$$
(17)

where \({\text{I}}_{{{\text{const}}}}\) is the iteration from which the BER is constant and \({\text{I}}_{{{\text{total}}}} = 8\) is the total number of iterations considered in the simulation.
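For example, with the values reported above (the BER of SOCF- and DOCF-MLMAP settling from the fifth of the eight simulated iterations), \({\text{I}}_{{{\text{const}}}} = 5\) and \({\text{I}}_{{{\text{total}}}} = 8\), so that

$$\left( {1 - \frac{5}{8}} \right) \times 100\% = 37.5\%$$

which matches the complexity reduction quoted for SOCF-MLMAP and DOCF-MLMAP.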

The performance of the Log-MAP, MLMAP, SOCF-MLMAP and DOCF-MLMAP algorithms in the AWGN and fading channels is shown in Fig. 5, which plots the BER of the respective algorithms against SNR. Beyond an SNR of 6 dB, the proposed DOCF-MLMAP algorithm clearly outperforms its predecessors, SOCF-MLMAP and Max-Log-MAP, reaching a BER as low as 1 × \(10^{-6}\) at 12 dB in the AWGN channel.

Fig. 5 BER of Turbo decoding algorithms in AWGN and fading channels for eight iterations; code generator (7,5), rate 1/2, 2048-bit random interleaver

The performance analysis is also extended to the Rayleigh fading channel, as shown in Fig. 5. The proposed DOCF-MLMAP algorithm proves superior to the earlier algorithms in the fading channel as well, with a considerable improvement in BER beyond an SNR of 12 dB. At 14 dB the BER of DOCF-MLMAP reaches 2 × \(10^{-6}\), which validates its superiority. For both channel conditions, the proposed DOCF-MLMAP performs either close to or slightly better than the optimal Log-MAP decoding algorithm.

The following can be inferred from the above graphs: the Log-MAP algorithm is optimal but complex, which makes it unsuitable for practical implementation; the MLMAP algorithm is simple to implement but gives sub-optimal performance; and the DOCF-MLMAP algorithm is near-optimal and more straightforward than the SOCF-MLMAP decoding algorithm.

A plot of the correction factor CF2 against SNR for the DOCF-MLMAP algorithm is shown in Fig. 6. The graph depicts the relation between CF2 and SNR; this dependence makes the Turbo decoder adaptive, since the correction factor CF2 can be set according to the received SNR. By curve fitting, an expression showing the relational dependence between CF2 and SNR is obtained:

$$f(x) = -0.0001x^{5} + 0.0020x^{4} - 0.0077x^{3} - 0.0461x^{2} + 0.3406x + 0.3875$$
(18)

where \(f(x) = CF_{2}\) and \(x = SNR\).

Fig. 6 Curve-fitted plot between SNR and the optimal correction factor CF2 for the DOCF-MLMAP decoding algorithm

The above expression, obtained using the curve-fit tool, gives the relation between CF2 and SNR for the proposed DOCF-MLMAP algorithm.
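In an adaptive receiver, Eq. (18) could be evaluated directly to set CF2 from the SNR. The snippet below is a simple illustration of such a lookup using the fitted coefficients; clipping the result to the sweep range 0.05 to 0.95 is an added safeguard, not something stated in the paper, and `snr` is simply the variable x of Eq. (18).

```python
import numpy as np

# Coefficients of Eq. (18), highest power first.
_CF2_FIT = [-0.0001, 0.0020, -0.0077, -0.0461, 0.3406, 0.3875]

def cf2_from_snr(snr: float) -> float:
    """Adaptive CF2 from the curve-fitted polynomial of Eq. (18)."""
    cf2 = float(np.polyval(_CF2_FIT, snr))
    # Keep CF2 inside the range explored in the optimization sweep.
    return min(max(cf2, 0.05), 0.95)

# Example: CF2 suggested for an SNR value of 4 (illustrative input).
print(cf2_from_snr(4.0))
```

An adaptive receiver could thus update CF2 from its SNR estimate without storing the full table of optimized values.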

Table 6 shows a detailed BER analysis of the proposed algorithms with respect to various CODEC parameters such as code rate, generator polynomial and channel conditions. From Table 6, the best-performing code, i.e. the one with the lowest BER at the lowest SNR, can be identified. In general, performance in the fading channel is poorer than in AWGN, and the Rayleigh fading performance improves markedly only at higher SNR values. Among the decoding algorithms considered, MLMAP is a sub-optimal algorithm giving the poorest performance, while SOCF-MLMAP with a single optimized CF performs better than MLMAP. Comparing SOCF-MLMAP and DOCF-MLMAP over all CODEC parameters, the latter, with two optimized CFs, displays the best BER performance in both the AWGN and fading channels. Comparing the code rates, rate 1/3, with its increased redundancy, performs better than rate 1/2; hence, increasing the number of redundant bits gives more protection to the transmitted information.

Table 6 Performance analysis of MLMAP, SOCF-MLMAP and DOCF-MLMAP for various CODEC parameters

5 Conclusion

The sub-optimal performance of the MLMAP algorithm is mostly attributed to the optimistic estimation of the reliability values. This paper reduces this optimistic effect of MLMAP by using appropriate CF values for the constituent Turbo decoders. The performance of the sub-optimal Max-Log-MAP algorithm is improved by using the optimized, adaptive correction factor pair CF1 and CF2: a pair of appropriate correction factors scales the extrinsic information exchanged between the constituent decoders in every iteration. The selection of the correction factors depends on the Signal to Noise Ratio (SNR), and the CFs of both the inner and outer decoders are optimized for the lowest Bit Error Rate (BER). In the AWGN channel, the proposed DOCF-MLMAP algorithm delivers performance better than that of its predecessors, SOCF-MLMAP (which uses a single optimized correction factor) and MLMAP, reaching a BER as low as 1 × \(10^{-6}\) at 12 dB. The proposed DOCF-MLMAP algorithm also proves superior to the earlier algorithms in the fading channel, with a considerable improvement in BER beyond an SNR of 12 dB. The benchmark for any decoding algorithm is reduced BER with reduced complexity, and the proposed DOCF-Max-Log-MAP algorithm reduces the BER while remaining simple to implement. The existence of a relational dependence between the correction factor CF2 and the SNR has also been demonstrated.

The concept of introducing a correction factor can also be extended to the SOVA and Bidirectional SOVA decoding algorithms, which overcome the complexity issues of the MAP and Log-MAP algorithms. A unified correction factor approach could also be proposed, using appropriate correction factors that jointly overcome the effects of both distortions (the over-optimistic effect and the correlation effect).