
1 Introduction

Wireless communication has been progressing since the nineteenth century and has become one of the most important features of our everyday life. Technologies such as 3G, 4G, and 5G depend on Orthogonal Frequency Division Multiplexing (OFDM) as a medium for wireless communication (Vijay Kumar 2021). OFDM is a multicarrier modulation technique (Lavanya et al. 2019; Khosla et al. 2018) in which a high-rate data stream is divided into many low-rate streams that are modulated onto separate subcarriers. OFDM has high spectral efficiency, which is one of the main reasons it has been adopted by many communication standards such as Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB), WiMAX, and Long-Term Evolution (LTE). By using the Inverse Fast Fourier Transform (IFFT), OFDM provides advantages such as low complexity, high bandwidth efficiency, and high resistance to frequency-selective fading (Lavanya et al. 2019; Murugan and Sivakumar 2015). Unfortunately, OFDM has one main disadvantage: a high Peak to Average Power Ratio (PAPR), which can drive power amplifiers into their nonlinear region through large signal excursions around the peaks (Kaur 2015; Poonam 2015). A complex Digital to Analog Converter (DAC) together with a High Power Amplifier (HPA) can be used to counteract this issue, but the power consumption and cost of the system then increase. Intermodulation distortion may also occur due to loss of orthogonality, and both the Bit Error Rate (BER) and the battery life of the terminal degrade (Lavanya et al. 2019). It is therefore crucial to find the right PAPR reduction technique to avoid these issues. Some of the reduction techniques proposed in past years are clipping, coding schemes, phase optimization, nonlinear companding, tone reservation, tone injection, partial transmit sequence, and selective mapping (SLM) (Murugan and Sivakumar 2015; Kaur 2015). Some of these techniques lead to a high system cost and do not provide much reduction (Poonam 2015). In this paper, we therefore use selective mapping together with clipping to achieve an optimum reduction.

SLM was first introduced in 1996 and has been widely used as a PAPR reduction technique. In this method, the input data are multiplied by several phase sequences to generate new candidate signals. The IFFT then converts these signals into the time domain, and the signal with the lowest PAPR value is chosen as the output signal (Kaur 2015). One of the drawbacks of SLM is its high computational complexity (Sudha et al. 2015). In this paper, we propose to further reduce the PAPR by applying clipping after SLM. Clipping is one of the simplest and most widely used reduction techniques, with low computational complexity, and it takes place in the time domain (Dubey and Shrivastava 2015). The part of the signal that exceeds the threshold is clipped, so the receiver must account for this when recovering the transmitted data, which is difficult and can cause in-band and out-of-band distortion. These distortions degrade the BER and the spectral efficiency. While this issue can be mitigated by filtering, peak regrowth may occur (Murugan and Sivakumar 2015). To optimize the hybrid of these two reduction techniques, we use the firefly algorithm (FA). FA is a metaheuristic algorithm inspired by the behavior of fireflies and their flashing patterns. Three basic rules must be observed when using this algorithm (Zhang et al. 2016):

  1. Every firefly has the same sex.

  2. Firefly attractiveness increases with brightness.

  3. The brightness of a firefly is determined by the objective function.

In the next section, we discuss OFDM, PAPR, SLM, clipping, and FA mathematically, as well as how the proposed method is implemented. Simulation results are presented in Sect. 3, followed by conclusions in Sect. 4.

2 System Model

Subcarriers in OFDM overlap but are orthogonal to each other, which enables high data rates and bandwidth efficiency. Orthogonality also helps combat Inter-carrier Interference (ICI) and Inter-symbol Interference (ISI) (Kaur 2015). The frequency-selective channel is effectively turned into a set of flat-fading subchannels, making OFDM robust to multipath fading. Data are converted from the frequency domain to the time domain by the IFFT, which is used instead of the IDFT because of its computational efficiency. All these processes are reversed on the receiver side (Lavanya et al. 2019). Mathematically, a high-rate data stream is divided through a serial to parallel converter into N low-rate streams with equal subcarrier spacing \(\Delta f = 1/NT\) (Lavanya et al. 2019; Khosla et al. 2018). Every subcarrier is modulated by the chosen modulation technique. The input in the frequency domain can be described as (Lavanya et al. 2019) \(A = \left( {A_0 ,A_1 ,A_2 , \ldots ,A_{N - 1} } \right)^T\). After the IFFT, the signals can be represented as (Lavanya et al. 2019) \(y = \left( {a_0 ,a_1 ,a_2 , \ldots ,a_{N - 1} } \right)^T\). The orthogonality of the subcarriers requires (Lavanya et al. 2019) \(\int_0^T {a_1 \left( t \right)a_2 \left( t \right){\text{d}}t} = 0\), where the subcarrier frequencies satisfy the following equation (Lavanya et al. 2019):

$$f_n = \frac{n}{T} + f_{RF} ,\quad n = 0,1,2 \ldots N - 1,$$
(1)

where \(f_{RF}\) represents the radio frequency. The OFDM signal can be represented as (Lavanya et al. 2019)

$$x\left( t \right) = \sum\limits_{n = 0}^{N - 1} {A_n e^{j2\pi n\Delta ft} } ;\quad 0 \le t \le NT.$$
(2)

When N independently modulated sinc-shaped signals with non-constant envelopes are added together (Kaur 2015), peaks arise in the composite signal. This peak power can be very high compared with the average power, resulting in a high PAPR (Praveen Pawar 2016). PAPR is the ratio of peak power to average power (Poonam 2015) and can be defined as (Lavanya et al. 2019; Dubey and Shrivastava 2015)

$${\text{PAPR}}_{{\text{dB}}} = 10\log_{10} \left( {\frac{{P_{{\text{peak}}} }}{{P_{{\text{avg}}} }}} \right).$$
(3)

The performance of PAPR reduction can be analyzed using the Complementary Cumulative Distribution Function (CCDF) (Murugan and Sivakumar 2015; Dubey and Shrivastava 2015), which gives the probability that the PAPR exceeds a threshold value (Lata and Thakur 2015) and can be expressed as (Sudha et al. 2015)

$${\text{CCDF}} = \Pr \left( {{\text{PAPR}} > {\text{PAPR}}_0 } \right) = 1 - \left( {1 - e^{ - {\text{PAPR}}_0 } } \right)^N .$$
(4)
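
To make Eqs. (3) and (4) concrete, the minimal NumPy sketch below (an illustrative example, not the paper's simulation code; N = 64 subcarriers, 16-QAM, and the 8 dB threshold are assumed values) generates random OFDM symbols with the IFFT, computes each symbol's PAPR in dB, and estimates the empirical CCDF at the threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64            # number of subcarriers (assumed for illustration)
num_symbols = 10_000
papr0_db = 8.0    # example threshold PAPR0 in dB

# Random 16-QAM symbols on each subcarrier (levels {-3, -1, 1, 3}, unit power).
levels = 2 * rng.integers(0, 4, (num_symbols, N, 2)) - 3
qam = (levels[..., 0] + 1j * levels[..., 1]) / np.sqrt(10)

# OFDM modulation, Eq. (2): frequency domain -> time domain via the IFFT.
x = np.fft.ifft(qam, axis=1)

# PAPR per symbol, Eq. (3): peak power over average power, in dB.
power = np.abs(x) ** 2
papr_db = 10 * np.log10(power.max(axis=1) / power.mean(axis=1))

# Empirical CCDF, Pr(PAPR > PAPR0), cf. Eq. (4).
print(f"Pr(PAPR > {papr0_db} dB) ~ {np.mean(papr_db > papr0_db):.4f}")
```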

SLM belongs to the symbol scrambling class of techniques (Lata and Thakur 2015). The input data is divided into M sub-blocks, and once the serial to parallel conversion ends, the signals are multiplied by phase sequences \(P_u = \left( {P_1 ,P_2 ,P_3 , \ldots ,P_u } \right)\), where \(u = 0,1,2, \ldots ,U\), so that the OFDM data blocks are phase rotated (Kaur 2015). The OFDM signal with the lowest PAPR value can be represented as (Kumari and Chawla 2017)

$$X^u \left( t \right) = \frac{1}{\sqrt N }\sum\limits_{n = 0}^{N - 1} {X_n P_{u,n} e^{j2\pi f_n t} } .$$
(5)
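
A minimal sketch of the conventional SLM stage is given below, assuming phase factors drawn from {+1, -1, +j, -j} and U candidates (the helper names and default values are illustrative, not taken from the paper). Each candidate requires its own IFFT, which is where the complexity discussed next comes from, and the index of the chosen candidate is the side information sent to the receiver.

```python
import numpy as np

def papr_db(x):
    """PAPR of a time-domain symbol, Eq. (3)."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def slm(X, U=4, rng=np.random.default_rng(1)):
    """Return the lowest-PAPR candidate among U phase-rotated versions of X."""
    N = len(X)
    # Phase sequences from {+1, -1, +j, -j}; the first row is all ones so the
    # unrotated symbol is always one of the candidates.
    P = rng.choice([1, -1, 1j, -1j], size=(U, N))
    P[0, :] = 1
    candidates = np.fft.ifft(X * P, axis=1)      # one IFFT per candidate
    best = min(range(U), key=lambda u: papr_db(candidates[u]))
    return candidates[best], best                # best index = side information
```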

The computational complexity of this technique is due to the large number of IFFT operations. SLM also has to transmit side information so that the receiver can recover the transmitted data. The clipping amplitude can be described by the following equation (Kumari and Chawla 2017):

$$A\left( x \right) = \begin{cases} x, & \left| x \right| \le B \\ B, & \left| x \right| > B, \end{cases}$$
(6)

where \(B\) is the clipping level, \(x\) is the signal value, and \(A(x)\) is the amplitude function. At the receiver, two parameters must be considered to recover the information: the size and the location of the clip (Kumari and Chawla 2017). \(B\) can be calculated as (Tang et al. 2020) \(B = \gamma \sqrt {P_{{\text{av}}} }\), where \(P_{{\text{av}}}\) is the average power and \(\gamma\) is the clipping ratio. Only large signals are limited by the threshold, while smaller signals remain unchanged. Filtering is needed because of the out-of-band emission caused by clipping, and the filter can be defined as (Tang et al. 2020)

$$H\left( k \right) = \begin{cases} 1, & 0 \le k \le N - 1 \\ 0, & N \le k \le LN - 1. \end{cases}$$
(7)
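
The clipping and filtering operations of Eqs. (6) and (7) can be sketched as follows (a hedged example; the function names and the numerical guard are my own, and the input of `filter_oob` is assumed to be an L-times oversampled symbol of length LN). Amplitudes above \(B = \gamma \sqrt{P_{\text{av}}}\) are capped while the phase is preserved, and the filter passes the first N FFT bins and zeroes the remaining LN − N out-of-band bins exactly as written in Eq. (7).

```python
import numpy as np

def clip(x, gamma):
    """Amplitude clipping, Eq. (6): limit |x| to B = gamma * sqrt(P_av)."""
    B = gamma * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.maximum(np.abs(x), 1e-12)          # guard against division by zero
    return x * np.minimum(1.0, B / mag)         # cap magnitude, keep phase

def filter_oob(x, N, L):
    """Frequency-domain filtering, Eq. (7): pass bins 0..N-1 of an L-times
    oversampled symbol (length LN) and zero bins N..LN-1."""
    H = np.zeros(L * N)
    H[:N] = 1.0
    return np.fft.ifft(np.fft.fft(x) * H)
```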

Two important elements in FA are the formulation of the light intensity and the variation of attractiveness. In this simulation, the light intensity \(I\) changes with the distance \(r\) and the light absorption parameter \(\varphi\). This relationship is given by (Zhang et al. 2016) \(I = I_0 e^{ - \varphi r}\), where \(I_0\) is the light intensity at \(r = 0\). The attractiveness coefficient \(\beta\) can be defined in the same way as the light intensity. The distance between two fireflies can be calculated as (Zhang et al. 2016) \(r_{ij} = \sqrt {\sum_{k = 1}^d {(x_{i,k} - x_{j,k} )^2 } }\), where \(d\) is the number of dimensions. The movement of a firefly can be calculated as (Zhang et al. 2016) \(x_i = x_i + \beta_0 e^{ - \varphi r} \left( {x_j - x_i } \right) + \alpha \varepsilon_i\), where the third term is a random perturbation, with \(\varepsilon_i\) drawn from a chosen distribution.
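
A generic sketch of one firefly iteration under the three rules and the update equations above is given below (β₀ = 1, φ = 1, and α = 0.1 are illustrative values; in our setting the brightness array would hold the negated PAPR of each candidate so that a lower PAPR means a brighter firefly).

```python
import numpy as np

def firefly_step(pop, brightness, beta0=1.0, phi=1.0, alpha=0.1,
                 rng=np.random.default_rng(2)):
    """One FA iteration: every firefly i moves toward every brighter firefly j
    with attractiveness beta0 * exp(-phi * r_ij), plus a random term alpha * eps."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r = np.sqrt(np.sum((pop[i] - pop[j]) ** 2))   # distance r_ij
                beta = beta0 * np.exp(-phi * r)               # attractiveness
                new_pop[i] = (new_pop[i] + beta * (pop[j] - pop[i])
                              + alpha * rng.standard_normal(d))
    return new_pop
```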

2.1 Proposed Method

Once the parameters are initialized, the input signals are modulated using a 16-QAM constellation and converted into parallel signals using a serial to parallel converter. These signals are divided into several sub-blocks and multiplied by phase factors to produce new candidate signals, and the signal with the lowest PAPR is chosen. The amplitudes of the signals represent the population of fireflies. The light intensity corresponds to the objective function, which is to find the signal with the lowest PAPR value. The firefly that satisfies this requirement is therefore brighter and attracts other fireflies to it, and this movement can be expressed by the following equation (Singh and Patra 2018):

$$b_i = b_i + \beta \left( r \right)\left( {b_j - b_i } \right).$$
(8)

This new signal is then compared with a randomly chosen signal, and the one with the higher amplitude is clipped. If both signals have the same amplitude, a random walk step is applied to choose between the two. This process is repeated until the maximum number of iterations is reached. Figure 1 shows the flowchart of the implementation process, and Table 1 lists the simulation parameters.

Fig. 1

Start; parameter initialization; modulation; serial to parallel conversion; division into sub-blocks; multiplication with phase factors; generation of fireflies; evaluation of the objective function through light intensity; selection of the signal with the lowest PAPR; comparison of the amplitudes of two signals, clipping the one with the higher amplitude (choosing randomly when the amplitudes are equal); repeat until the maximum number of iterations is reached

Flowchart of the proposed method
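
For concreteness, one possible reading of the flowchart above is sketched below. It is not the authors' exact implementation: it reuses the hypothetical `papr_db`, `slm`, and `clip` helpers from the earlier sketches, uses a second SLM candidate as the "randomly chosen" rival signal, and simplifies the tie-breaking random walk to a coin flip.

```python
import numpy as np

def proposed_reduction(X, U=4, gamma=0.6, iterations=100,
                       rng=np.random.default_rng(3)):
    """Sketch of the SLM-clipping-FA loop of Fig. 1 (illustrative only).
    Assumes papr_db(), slm() and clip() from the earlier sketches are in scope."""
    x_best, _ = slm(X, U)                    # SLM stage: lowest-PAPR candidate
    for _ in range(iterations):
        x_rand, _ = slm(X, U)                # rival signal with random phases
        # Clip whichever signal currently has the larger peak amplitude;
        # on a tie, pick one of the two at random (the random-walk step).
        if np.abs(x_best).max() > np.abs(x_rand).max():
            x_best = clip(x_best, gamma)
        elif np.abs(x_rand).max() > np.abs(x_best).max():
            x_rand = clip(x_rand, gamma)
        elif rng.random() < 0.5:
            x_best = clip(x_best, gamma)
        else:
            x_rand = clip(x_rand, gamma)
        # The brighter firefly (lower PAPR) attracts the search, cf. Eq. (8).
        if papr_db(x_rand) < papr_db(x_best):
            x_best = x_rand
    return x_best
```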

Table 1 Proposed method simulation parameters

3 Results

The number of iterations, the clipping factor, and the number of sub-blocks used in Table 2 are 100, 0.6, and 44, respectively. For every signal, the PAPR increases as the number of subcarriers increases. It can also be seen that the PAPR performance improves as each additional method is added to the algorithm. The proposed method gives the best results of all the methods considered, reducing the PAPR by more than 80% compared with the original OFDM signal. The increase of PAPR with the number of subcarriers is due to the increased computational complexity and the limited number of phase weighting factors. Even with a high number of subcarriers, the proposed method can still effectively reduce the PAPR.

Table 2 PAPR performance with different numbers of subcarriers

The number of iterations, subcarriers, and sub-blocks in Table 3 are 100, 64, and 4, respectively. The clipping ratio clearly affects the PAPR performance: the PAPR increases with the clipping ratio. This is because, at a lower clipping ratio, the symbol error rate (SER) degradation caused by nonlinear distortion is counteracted by the energy saved at the power amplifier, and vice versa. Even so, the proposed method still provides better performance than the other methods, reducing the PAPR by almost half of its original value.

Table 3 PAPR performance with different clipping ratios

The clipping ratio, number of subcarriers, and number of sub-blocks in Fig. 2 are 0.6, 64, and 4, respectively. As the number of iterations increases, the PAPR decreases, and this is due to the lower computational complexity. A higher number of iterations increases the processing time and, consequently, the number of function evaluations.

Fig. 2

CCDF versus PAPR0 (dB) for K = 100, 200, 300, 400, and 500 iterations: all curves drop sharply, with K = 500 falling off at about 0.5 dB, followed by K = 400 at about 1.1 dB, K = 300 at about 1.38 dB, K = 200 at about 1.5 dB, and K = 100 last (values estimated)

PAPR performance with different numbers of iterations

4 Conclusions

A great deal of research has been done on finding the right technique to reduce the PAPR of OFDM. In this paper, a combination of SLM, clipping, and FA was proposed because of the effectiveness, simplicity, and efficiency of these techniques. The PAPR performance is shown to improve with a lower clipping ratio, a smaller number of subcarriers, and a higher number of iterations. It is also shown that the PAPR performance of SLM-clipping-FA is much better than that of conventional clipping or conventional SLM. The computational complexity has also been reduced by applying a higher number of iterations.