
1 Introduction

With the rapid development of wireless communication technology, spectrum resources have become increasingly scarce. The concept of cognitive radio was proposed in 1999 [1]. In short, a cognitive radio needs to make effective use of "spectrum holes" for communication [2].

One of its basic tasks is signal parameter identification. However, monitoring a wide band places great pressure on the ADC of the receiver. This paper therefore focuses on algorithms based on undersampling: the key idea is to estimate the bandwidth from a power spectrum recovered from undersampled data. From the viewpoint of cyclostationarity, Reference [3] systematically presents how to recover the cyclic spectrum of a signal within the compressed sensing (CS) framework.

In [4, 5], the problem of power spectrum estimation under such sub-Nyquist sampling frameworks is studied. In [6, 7], a linear algorithm combining power spectrum estimation with wavelet edge detection is studied.

The rest of this paper is organized as follows: Sect. 2 introduces the algorithm of power spectrum estimation based on undersampling. After that, the corresponding bandwidth estimation strategy is given in Sect. 3. In Sect. 4, numerical simulations are conducted to evaluate the performance of the proposed algorithm. Finally, conclusions are drawn in Sect. 5.

2 Power Spectrum Recovery Algorithm of Undersampling Signal

We regard the modulated signal as a wide-sense stationary random signal. According to [8], under the compressed sensing (CS) framework, the sampling rate of the CS receiver is \(f_{{{\text{s}},{\text{cs}}}} = \left( {M/N} \right)f_{{\text{s}}}\), where \(f_{{\text{s}}}\) is the Nyquist sampling rate and \(M/N \in (0,1]\) is the compression ratio. Such a linear compressive sensing process can be described as:

$$z = Ax$$
(1)

where \(A \in R^{M \times N}\) is the measurement matrix, \(x \in R^{N}\) is the vector of Nyquist-rate samples, and \(z \in R^{M}\) is the vector of compressed measurements. For the wide-sense stationary \(x(n)\), the autocorrelation function is independent of time:

$$r_{x} (n,\tau ) = r_{x} (\tau ),\forall n.$$
(2)

According to the Wiener–Khinchin theorem, we have

$$s_{x} = Fr_{x}$$
(3)

where \(s_{x} = \left[ {s_{x} \left( 0 \right), \ldots ,s_{x} \left( {N - 1} \right)} \right]^{T}\), \(r_{x} = \left[ {r_{x} \left( 0 \right), \ldots ,r_{x} \left( {N - 1} \right)} \right]^{T}\), and \(F\) is the \(N \times N\) DFT matrix.
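
As a quick numerical illustration of (3), the following Python sketch (the signal length and the sinc-shaped autocorrelation are purely hypothetical) builds an autocorrelation vector and verifies that multiplying by the DFT matrix reproduces the FFT-based power spectrum.

```python
import numpy as np

N = 64                              # number of lags (hypothetical)
F = np.fft.fft(np.eye(N))           # N x N DFT matrix

# Hypothetical autocorrelation sequence r_x(tau), tau = 0, ..., N-1
tau = np.arange(N)
r_x = np.sinc(0.25 * tau)

# Wiener-Khinchin relation (3): the power spectrum is the DFT of r_x
s_x = F @ r_x
print(np.allclose(s_x, np.fft.fft(r_x)))    # True
```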

Since \(x(n)\) is wide-sense stationary, its autocorrelation matrix \(R_{x} = E\left\{ {xx^{H} } \right\} \in R^{N \times N}\) is Toeplitz, and its entries map onto \(r_{x}(\tau)\) in (2) as follows:

$$\begin{aligned} & \left[ {{\text{vec}}\left\{ {R_{x} } \right\}} \right]_{{\left( {n - \tau } \right)N + n}} = \left[ {R_{x} } \right]_{{\left( {n,n - \tau } \right)}} = r_{x} \left( \tau \right) \\ & \left[ {{\text{vec}}\left\{ {R_{x} } \right\}} \right]_{nN + n - \tau } = \left[ {R_{x} } \right]_{{\left( {n - \tau ,n} \right)}} = r_{x} \left( \tau \right) \\ & \tau \in \left[ {0,N - 1} \right],n \in \left[ {\tau ,N - 1} \right] \\ \end{aligned}$$
(4)

Obviously, \({\text{vec}}\left\{ {R_{x} } \right\}\) can be linked directly with \(r_{x}\) by a linear mapping matrix \(P_{N} \in \left\{ {0,1} \right\}^{{N^{2} \times N}}\)

$${\text{vec}}\left\{ {R_{x} } \right\}{ = }P_{N} r_{x}$$
(5)

The specific form of \(P_{N}\) is:

$$\begin{aligned} P_{N} \left( {\left( {n - \tau } \right)N + n,\tau } \right) & = P_{N} \left( {nN + n - \tau ,\tau } \right) = 1 \\ \forall \tau & = 0, \ldots ,N - 1;\;n = \tau , \ldots ,N - 1 \\ \end{aligned}$$
(6)
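
A minimal Python sketch of (5)-(6) is given below; the helper name build_P and the small test autocorrelation are our own illustration, not code from the paper.

```python
import numpy as np

def build_P(N):
    """Build the 0/1 mapping matrix P_N of (5)-(6): vec{R_x} = P_N r_x."""
    P = np.zeros((N * N, N))
    for tau in range(N):
        for n in range(tau, N):
            P[(n - tau) * N + n, tau] = 1.0   # [R_x]_(n, n-tau) = r_x(tau)
            P[n * N + n - tau, tau] = 1.0     # [R_x]_(n-tau, n) = r_x(tau)
    return P

# Sanity check on a small Toeplitz autocorrelation matrix (hypothetical r_x)
N = 4
r_x = np.array([1.0, 0.5, 0.2, 0.1])
R_x = np.array([[r_x[abs(i - j)] for j in range(N)] for i in range(N)])
print(np.allclose(R_x.flatten(order='F'), build_P(N) @ r_x))   # True
```

Note that the column-major (Fortran-order) vectorization used here matches the indexing convention of (4).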

The autocorrelation matrix \(R_{z} { = }E\left\{ {zz^{H} } \right\} \in R^{M \times M}\) is symmetric and therefore has only \(M\left( {M + 1} \right)/2\) degrees of freedom. Using the same approach as above, we can write

$$r_{z} = Q_{M} {\text{vec}}\left\{ {R_{z} } \right\}$$
(7)
$$\begin{aligned} p\left( {n,\tau } \right) & = \tau M - \tau \left( {\tau - 1} \right)/2 + n \\ & \quad \forall \tau \in \left[ {0,M - 1} \right],\;0 \le n \le M - 1 - \tau \\ \end{aligned}$$
(8)
$$\begin{aligned} q_{1} \left( {n,\tau } \right) & = \left( {n + \tau } \right)M + n \\ q_{2} \left( {n,\tau } \right) & = nM + n + \tau \\ \end{aligned}$$
(9)
$$\left\{ {\begin{array}{*{20}l} {\left[ {Q_{M} } \right]_{{\left( {p\left( {n,\tau } \right),q_{1} \left( {n,\tau } \right)} \right)}} = \left[ {Q_{M} } \right]_{{\left( {p\left( {n,\tau } \right),q_{2} \left( {n,\tau } \right)} \right)}} = \frac{1}{2} + \frac{1}{2}\delta_{\tau ,0} } \hfill \\ {\left[ {Q_{M} } \right]_{{\left( {p\left( {n,\tau } \right),q} \right)}} = 0,\forall q \in \left[ {0,M^{2} - 1} \right]\backslash \left\{ {q_{1} \left( {n,\tau } \right),q_{2} \left( {n,\tau } \right)} \right\}} \hfill \\ \end{array} } \right.$$
(10)
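
Analogously, the selection matrix \(Q_{M}\) of (8)-(10) can be built as follows; again this is only an illustrative sketch, with build_Q a hypothetical helper name.

```python
import numpy as np

def build_Q(M):
    """Build the selection matrix Q_M of (8)-(10): r_z = Q_M vec{R_z}."""
    Q = np.zeros((M * (M + 1) // 2, M * M))
    for tau in range(M):
        for n in range(M - tau):                      # n = 0, ..., M-1-tau
            p = tau * M - tau * (tau - 1) // 2 + n    # row index, eq. (8)
            q1 = (n + tau) * M + n                    # picks [R_z]_(n, n+tau)
            q2 = n * M + n + tau                      # picks [R_z]_(n+tau, n)
            Q[p, q1] = 0.5 + 0.5 * (tau == 0)
            Q[p, q2] = 0.5 + 0.5 * (tau == 0)
    return Q

# Each row of Q_M averages the two symmetric entries of R_z at a given lag
M = 3
R_z = np.arange(9.0).reshape(M, M)
print(build_Q(M) @ R_z.flatten(order='F'))
```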

Since \(z = Ax\), the relationship between \(R_{z}\) and \(R_{x}\) is:

$$R_{z} = AR_{x} A^{H}$$
(11)

Using the matrix identity \({\text{vec}}\left\{ {UXV} \right\} = \left( {V^{T} \otimes U} \right){\text{vec}}\left\{ X \right\}\), where \(\otimes\) denotes the Kronecker product, together with (5), we can get:

$$\begin{aligned} r_{z} & = Q_{M} {\text{vec}}\left\{ {AR_{x} A^{H} } \right\} = Q_{M} \left( {A \otimes A} \right){\text{vec}}\left\{ {R_{x} } \right\} \\ & = Q_{M} \left( {A \otimes A} \right)P_{N} r_{x} = \Phi r_{x} \\ \end{aligned}$$
(12)

where \(\Phi = Q_{M} \left( {A \otimes A} \right)P_{N}\). Substituting (3), the linear relationship between \(r_{z}\) and the power spectrum \(s_{x}\) is:

$$r_{z} = \Phi F^{ - 1} s_{x} = \Psi s_{x}$$
(13)

where \(\Psi { = }\Phi F^{ - 1}\). The size of \(\Psi\) is \(M\left( {M + 1} \right)/2 \times N\). For (13) to have at least as many equations as unknowns, we need \(M\left( {M + 1} \right)/2 \ge N\), which gives the minimum compression ratio:

$$\left( {\frac{M}{N}} \right)_{{\min }} = \frac{{\sqrt {8N + 1} - 1}}{{2N}}\xrightarrow{{N \to \infty }}\sqrt {\frac{2}{N}}$$
(14)
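
Putting the pieces together, a sketch of (12)-(14) could look as follows; it reuses the hypothetical helpers build_P and build_Q from the sketches above, and the random 0-1 sampling matrix and the sizes N = 64, M = 16 are our own assumptions (a random 0-1 matrix is the choice recommended in Sect. 4.3).

```python
import numpy as np

N, M = 64, 16                                      # hypothetical sizes, M/N = 0.25
rng = np.random.default_rng(0)
A = (rng.random((M, N)) < 0.5).astype(float)       # random 0-1 sampling matrix

P_N = build_P(N)                                   # sketch after (6)
Q_M = build_Q(M)                                   # sketch after (10)
F = np.fft.fft(np.eye(N))                          # N x N DFT matrix

Phi = Q_M @ np.kron(A, A) @ P_N                    # eq. (12)
Psi = Phi @ np.linalg.inv(F)                       # eq. (13), size M(M+1)/2 x N
print(Psi.shape)                                   # (136, 64)

# Minimum compression ratio of (14): M(M+1)/2 >= N
min_ratio = (np.sqrt(8 * N + 1) - 1) / (2 * N)
print(M / N >= min_ratio)                          # True
```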

If the received signal is sparse in the frequency domain, an \(\ell_{1}\)-norm regularization term can be introduced, which yields a convex problem that enforces sparsity:

$$\hat{s}_{x} = \mathop {\arg \min }\limits_{{s_{x} }} \left\| {r_{z} - \Psi s_{x} } \right\|_{2}^{2} + \lambda \left\| {s_{x} } \right\|_{1}$$
(15)

This convex problem can be solved with existing convex optimization toolkits (such as the MATLAB-based CVX toolbox).
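
As a minimal sketch of the two recovery options compared in Sect. 4 (not the authors' MATLAB/CVX code), a plain least-squares solution and a basic ISTA iteration for (15) could look as follows; Psi and r_z are assumed to be the quantities constructed above, and lam and n_iter are hypothetical tuning parameters.

```python
import numpy as np

def ls_spectrum(Psi, r_z):
    """Least-squares estimate of s_x, i.e. (15) without the l1 term."""
    s_x, *_ = np.linalg.lstsq(Psi, r_z, rcond=None)
    return s_x

def ista_spectrum(Psi, r_z, lam=0.1, n_iter=500):
    """Solve the l1-regularized problem (15) with plain ISTA iterations
    (a simple stand-in for a CVX-style solver)."""
    Lip = 2.0 * np.linalg.norm(Psi, 2) ** 2            # Lipschitz constant
    s = np.zeros(Psi.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = 2.0 * Psi.conj().T @ (Psi @ s - r_z)    # gradient of the data term
        g = s - grad / Lip                             # gradient step
        shrink = np.maximum(1.0 - (lam / Lip) / np.maximum(np.abs(g), 1e-12), 0.0)
        s = g * shrink                                 # soft-thresholding step
    return s
```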

3 Bandwidth Estimation Strategy Based on Recovered Power Spectrum

In this section, we mainly consider how to estimate the bandwidth from the recovered power spectrum. The signal in Fig. 1 is a BPSK signal with a signal-to-noise ratio of 5 dB, a sampling rate of 40 kHz, a carrier frequency of 16 kHz, a bit rate of 1 kHz, and a hyperparameter of 2. The strategy for directly estimating the bandwidth is summarized in Table 1.

Fig. 1 Recovery of the undersampled power spectrum

Table 1 Undersampling bandwidth estimation strategy
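
Since the entries of Table 1 are not reproduced here, the following is only a minimal sketch of one common threshold-based strategy consistent with the description above; the threshold fraction and the function name are assumptions, not necessarily the exact rule given in Table 1.

```python
import numpy as np

def estimate_bandwidth(s_x, fs, thresh_frac=0.5):
    """Crude occupied-bandwidth estimate from a recovered power spectrum.
    Counts all frequency bins (positive and negative) whose magnitude
    exceeds thresh_frac times the peak and multiplies by the bin spacing."""
    psd = np.abs(s_x)
    occupied = psd >= thresh_frac * psd.max()
    return occupied.sum() * fs / len(psd)

# Usage sketch with the parameters of Fig. 1 (fs = 40 kHz); s_x is assumed
# to come from the recovery step of Sect. 2:
# bw = estimate_bandwidth(s_x, fs=40e3)
```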

4 Numerical Simulations

4.1 The Relationship Between Bit Rate and Bandwidth Estimation

The modulated signal simulated in this section is a BPSK signal. Most of the simulation conditions are the same as in Fig. 1. According to Fig. 2, the computational complexity of the least-squares method is much lower than that of the convex optimization method, and its estimates can also be more accurate.

Fig. 2 Relationship between bit rate and bandwidth estimation

4.2 The Performance of Bandwidth Estimation Against Noise

Most of the simulation conditions are the same as in Fig. 1, and the bit rate is 1.6 kHz.

According to Fig. 3, convex optimization not only has higher computational complexity (as shown in Table 2) but also performs worse. When the SNR is low, as shown in Fig. 4, the power spectrum estimate deviates substantially with high probability. Therefore, the least-squares method is used in the subsequent simulations.

Fig. 3 Performance against noise

Table 2 Computational complexity comparison
Fig. 4 Poor performance of power spectrum estimation under low SNR

4.3 The Choice of Undersampling Matrix

We analyze three candidate sampling matrices: a random 0-1 matrix, a Bernoulli matrix, and a Gaussian matrix. Most of the simulation conditions are the same as in Fig. 2. According to Fig. 5, only the random 0-1 matrix gives reasonable bandwidth estimation results. In addition, the random 0-1 matrix is the easiest to implement in hardware.

Fig. 5 Performance of different sampling matrices

5 Conclusion

In this paper, we first derive an undersampled power spectrum estimation algorithm under the assumption of wide-sense stationarity and then give a bandwidth decision strategy. We analyze the performance of the algorithm with respect to bit rate, SNR, sampling matrix, and compression ratio.

The simulation results show that the least-squares method has clear advantages in both computational cost and estimation accuracy, which means that a sparsity constraint is unnecessary for bandwidth estimation. Meanwhile, the random 0-1 matrix is a suitable undersampling matrix.