1 Introduction

Underdetermined blind source separation (UBSS) is the process of restoring a set of unknown source signals from their linear instantaneous mixture. The problem can be described as follows:

$$\begin{aligned} \mathbf{x}(t)=\mathbf{As}(t),\;t=1,\;2,\;\ldots ,\;L, \end{aligned}$$
(1)

where \(\mathbf{x}(t)=[x_1 (t),\;x_2 (t),\;\ldots ,\;x_M (t)]^{\mathrm {T}}\) and \(\mathbf{s}(t)=[s_1 (t),\;s_2 (t),\;\ldots ,\;s_N (t)]^{\mathrm {T}}\) are the observed signals and the unknown source signals, respectively, and t is the discrete time index. \(\mathbf{A}=(a_{ij} )_{M\times N} \in R^{M\times N}\) represents an unknown mixing matrix assumed to have full row rank, L is the length of the signals, and \(M<N\ll L\). In UBSS, identification of the mixing matrix and restoration of the sources are two distinct problems. For the linear memoryless mixture model (1), even if the mixing matrix is perfectly known, there exist infinitely many solutions [10]; priors on the sources are therefore necessary for restoration. Sparse signal representation is an effective approach, requiring sources that are sparse or can be decomposed into a combination of sparse components [15, 21]. Sparsity of the source signals implies that, in each column of \(\mathbf{s}\), only a few significant values (active sources) exist, whereas most of the other elements are almost zero (inactive sources).
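As a concrete illustration of model (1), the following Python sketch generates sparse sources and their mixtures; the Gaussian entries of \(\mathbf{A}\), the signal sizes, and the choice of exactly K active sources per sample are illustrative assumptions, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, L, K = 2, 3, 128, 2   # mixtures, sources, signal length, per-sample sparsity

# Unknown mixing matrix; columns normalized to unit norm (an assumption used later)
A = rng.standard_normal((M, N))
A /= np.linalg.norm(A, axis=0)

# Sparse sources: at each time t, at most K of the N sources are active
S = np.zeros((N, L))
for t in range(L):
    active = rng.choice(N, size=K, replace=False)
    S[active, t] = rng.standard_normal(K)

X = A @ S   # observed mixtures, Eq. (1)
```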

Compressed sensing (CS) is a new framework for signal recovery that has attracted considerable interest over the past few years [6, 23]. This framework assumes that the signal is sparse or compressible.

The CS model can be described as follows:

$$\begin{aligned} \mathbf{y}=\varvec{\Phi \theta }, \end{aligned}$$
(2)

where \(\varvec{\Phi }\) denotes the \(M_1 \times N_1\) (\(M_1 <N_1 \)) matrix known as the measurement matrix and \(\varvec{\uptheta }\) is a K-sparse signal. Although reconstruction of the signal \(\varvec{\uptheta }\) from \(\mathbf{y}\) is an ill-posed problem, prior knowledge of signal sparsity allows recovery of \(\varvec{\uptheta }\) from only \(M_1 =O(K\log N_1 )\) samples [2].

The search for the solution with minimal \(l_0 \)-norm, i.e., the minimum number of nonzero components, is a natural and straightforward reconstruction method. However, this \(l_0 \)-norm minimization problem becomes intractable as the dimension increases; it is also very sensitive to noise and therefore unsuitable for practical applications [2]. To overcome these severe drawbacks, significant attention has been given to a family of algorithms that includes thresholding [18], orthogonal matching pursuit (OMP) [14], stagewise OMP (StOMP) [5], regularized OMP (ROMP) [12], compressive sampling matching pursuit (CoSaMP) [11], subspace pursuit (SP) [21], basis pursuit [19], and generalized OMP (gOMP) [16]. To recover signals successfully, all of these approaches require that the measurement matrix \(\varvec{\Phi }\) follow a uniform uncertainty principle, i.e., each submatrix of \(\varvec{\Phi }\) must satisfy the restricted isometry property (RIP) with a constant parameter [2, 17]. A measurement matrix \(\varvec{\Phi }\) satisfies the RIP of order K if there exists a constant \(\delta _K \in (0,1)\) such that

$$\begin{aligned} (1-\delta _K )\parallel \,\varvec{\uptheta }\parallel _2^2 \le \parallel \varvec{\Phi \theta }\parallel _2^2 \le (1+\delta _K )\parallel \varvec{\uptheta }\parallel _2^2 ,\;\varvec{\uptheta }\in \Sigma _K , \end{aligned}$$
(3)

for any K-sparse vector \(\varvec{\uptheta }\). In particular, the minimum of all constants \(\delta _K \) satisfying model (3) is called the restricted isometry constant, denoted by \(\delta _K (\varvec{\Phi })\).
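By definition, computing \(\delta _K (\varvec{\Phi })\) exactly requires examining every size-K support, which is feasible only for small matrices. A brute-force Python sketch (for illustration only; the cost is exponential in K):

```python
import itertools
import numpy as np

def restricted_isometry_constant(Phi, K):
    """Exact delta_K(Phi) by enumerating all size-K supports (small cases only)."""
    N = Phi.shape[1]
    delta = 0.0
    for support in itertools.combinations(range(N), K):
        sub = Phi[:, support]
        eigs = np.linalg.eigvalsh(sub.T @ sub)   # eigenvalues of the Gramian
        delta = max(delta, eigs[-1] - 1.0, 1.0 - eigs[0])
    return delta
```

For example, a matrix with orthonormal columns has \(\delta _K =0\) for every K.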

Various measurement matrices have been investigated in recent years. The first family consists of random Gaussian, Bernoulli, and sub-Gaussian matrices, as well as basis transformations [8]. Although these matrices each have their own advantages, their common drawback is randomness: in source separation, the measuring system is not normally under our control, so a random measurement matrix cannot simply be imposed.

Starting from the RIP condition for model (2), a bound on \(\delta _K (\varvec{\Phi })\) was given in [1]. The CS methods reported in [20, 22] can then be used to recover the sources efficiently when the UBSS model is interleaved into the CS model. These methods have demonstrated good performance in numerical experiments. However, the use of CS methods within the UBSS model (1) has not been clearly elucidated.

In 2007, a polynomial deterministic matrix method was proposed for compressed sensing [3]. The method affords good reconstruction properties similar to the Gaussian random matrix, but the number of measurements is inflexible [9]. Therefore, the blocked polynomial deterministic matrix was constructed in [9]. However, when the CS method is applied to the UBSS model, the measurement matrix does not meet the requirements of a blocked polynomial deterministic matrix, i.e., that the elements be either 0 or 1, rendering this method inapplicable to UBSS.

In this study, we extend the result from the blocked polynomial deterministic matrix to a more general case without any restriction on the elements of the matrix. Throughout, \(\mathbf{A}\) is assumed to be known.

The remainder of this paper is organized as follows: In Sect. 2, we discuss the UBSS and CS models, present the construction of the measurement matrix, and describe the notations used herein. In Sect. 3, we illustrate the proof of a theorem that the measurement matrix must satisfy. In Sect. 4, the proposed algorithm for sparse signal recovery is presented. Section 5 presents the simulation results, and concluding remarks are presented in Sect. 6.

2 UBSS and CS

2.1 Notations

The notations used in this study are as follows:

  • \(\parallel \cdot \parallel _p \) denotes the \(l_p \)-norm.

  • \({\#}(\Gamma )\) denotes the number of elements in set \(\Gamma \).

  • \(\varvec{\uptheta }_i =[s_1 (i),\;s_2 (i),\;\ldots ,\;s_N (i)]^{\mathrm {T}},\;i=1,\;2,\;\ldots ,L\), denotes the vector of source values at time i.
  • \(\Sigma _K \) denotes the set of all vectors \(\mathbf{s}\in R^{N}\) such that at most K coordinates of \(\mathbf{s}\) are nonzero.

  • \(\Gamma _i \subset \{(i-1)N+1,\;(i-1)N+2,\;\ldots ,\;iN\}\) denotes the set corresponding to \(\varvec{\Pi }_i \) and \({\#}(\Gamma _i )=K_i \).

  • \(\varvec{\Pi }_{\Gamma _i } \) represents the \(M\times K_i \) matrix formed by the columns of \(\varvec{\Pi }_i \) with indices from \(\Gamma _i \).

2.2 Relationship between UBSS and CS

The UBSS problem can be formulated as a compressed sensing model when the mixing matrix \(\mathbf{A}=(a_{ij} )_{M\times N} \) is known.

The relationship between UBSS and compressed sensing becomes evident if we interleave observations and sources into vectors as follows:

$$\begin{aligned} \mathbf{y}=[x_1 (1),\;\ldots ,\;x_M (1),\;x_1 (2),\;\ldots ,\;x_M (2),\;\ldots ,\;x_1 (L),\;\ldots ,\;x_M (L)]^{\mathrm {T}} \end{aligned}$$
(4)

and

$$\begin{aligned} \varvec{\uptheta }=[s_1 (1),\;\ldots ,\;s_N (1),\;s_1 (2),\;\ldots ,\;s_N (2),\;\ldots ,\;s_1 (L),\;\ldots ,\;s_N (L)]^{\mathrm {T}}. \end{aligned}$$
(5)

Then, the mixing system in Eq. (1) can be expressed as

$$\begin{aligned} \mathbf{y}=\varvec{\Pi \theta }. \end{aligned}$$
(6)

This is identical to the compressed sampling measurement equation (2), with the block-diagonal measurement matrix

$$\begin{aligned} \varvec{\Pi }=\left( {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \mathbf{A}&{} \mathbf{0}&{} \cdots &{} \mathbf{0} \\ \mathbf{0}&{} \mathbf{A}&{} \cdots &{} \mathbf{0} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{0}&{} \mathbf{0}&{} \cdots &{} \mathbf{A} \\ \end{array} }} \right) _{ML\times NL} \buildrel \textstyle .\over = \left( {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\varvec{\Pi }_1 }&{} \mathbf{0}&{} \cdots &{} \mathbf{0} \\ \mathbf{0}&{} {\varvec{\Pi }_2 }&{} \cdots &{} \mathbf{0} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \mathbf{0}&{} \mathbf{0}&{} \cdots &{} {\varvec{\Pi }_L } \\ \end{array} }} \right) , \end{aligned}$$
(7)

where \(\varvec{\Pi }_i \) denotes the i-th matrix \(\mathbf{A}\) in \(\varvec{\Pi }\).
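In numpy, the interleaving of Eqs. (4) and (5) and the block-diagonal matrix of Eq. (7) can be written compactly; the following small sketch (with illustrative dimensions) builds \(\varvec{\Pi }\) from L copies of \(\mathbf{A}\) via a Kronecker product:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, L = 2, 3, 4
A = rng.standard_normal((M, N))
S = rng.standard_normal((N, L))
X = A @ S                      # mixtures, Eq. (1)

# Interleave the time samples into long column vectors, Eqs. (4)-(5)
y = X.T.reshape(-1)            # [x_1(1), ..., x_M(1), x_1(2), ...]
theta = S.T.reshape(-1)        # [s_1(1), ..., s_N(1), s_1(2), ...]

# Block-diagonal measurement matrix of Eq. (7): L copies of A on the diagonal
Pi = np.kron(np.eye(L), A)

assert np.allclose(Pi @ theta, y)   # Eq. (6)
```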

2.3 Deterministic Construction of Matrix

To motivate the construction, we first review some important results on the compressed sensing model (2) and the RIP [2, 17]. Consider the \(M_1 \times {\#}(\Gamma )\) matrices \(\varvec{\Phi }_\Gamma \) formed by the columns of \(\varvec{\Phi }\) with indices from \(\Gamma \), where \(\Gamma \subseteq \{1,2,\ldots ,N_1 \}\). Then, Eq. (3) shows that the Gramian matrices

$$\begin{aligned} \mathbf{B}_\Gamma =\varvec{\Phi }_\Gamma ^{\mathrm{T}} \varvec{\Phi }_\Gamma , {\#}(\Gamma )=K \end{aligned}$$
(8)

are bounded in the \(l_2 \)-norm, uniformly over all \(\Gamma \) with \({\#}(\Gamma )=K\) [3]. The matrix \(\mathbf{B}_\Gamma \) is symmetric and nonnegative definite, and Eq. (3) implies that its eigenvalues lie in \([1-\delta _K ,\;1+\delta _K ]\):

$$\begin{aligned} 1-\delta _K \le \parallel \mathbf{B}_\Gamma \parallel _2 \le 1+\delta _K . \end{aligned}$$
(9)

A deterministic construction of matrices that satisfy the RIP is presented in [3].

Theorem 1

Let \(P_r \) denote the set of polynomials \(Q(x)=b_0 +b_1 x+\cdots +b_r x^{r}\) of degree \(\le r\) \((0<r<p)\) over the finite field \(Z_p \), where \(b_0 ,\;b_1 ,\;\ldots ,\;b_r \in Z_p \) are the coefficients. Let \(\varvec{\Phi }_0 \) be the \(M_1 \times N_1 \) matrix with columns \(v_Q ,\;Q\in P_r \), ordered lexicographically with respect to the coefficients of the polynomials, where \(M_1 =p^{2}\) and \(N_1 =p^{r+1}\). Then, the matrix \(\varvec{\Phi }=\frac{1}{\sqrt{p}}\varvec{\Phi }_0 \) satisfies the RIP with \(\delta =\frac{(k-1)r}{p}\) for any \(k<\frac{p}{r}+1\) [3].
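For concreteness, the binary matrix \(\varvec{\Phi }_0 \) of Theorem 1 can be generated directly: rows are indexed by pairs \((x,\;y)\in Z_p \times Z_p \), and the column of a polynomial Q has a 1 in row \((x,\;y)\) exactly when \(Q(x)=y \pmod p\). The sketch below is an illustrative implementation of this construction, not code from [3]:

```python
import itertools
import numpy as np

def devore_matrix(p, r):
    """Deterministic p^2 x p^(r+1) CS matrix of Theorem 1, scaled by 1/sqrt(p)."""
    cols = []
    # Coefficient tuples (b_0, ..., b_r) in lexicographic order
    for coeffs in itertools.product(range(p), repeat=r + 1):
        v = np.zeros(p * p)
        for x in range(p):
            y = sum(b * pow(x, k, p) for k, b in enumerate(coeffs)) % p
            v[x * p + y] = 1.0          # entry is 1 iff Q(x) = y (mod p)
        cols.append(v)
    return np.column_stack(cols) / np.sqrt(p)
```

Each column contains exactly p ones, so after scaling every column has unit norm, and two distinct columns overlap in at most r positions, giving coherence at most r/p; this is the source of the RIP constant in Theorem 1.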

In this theorem, the size and value of \(\varvec{\Phi }\) are fixed, which is not feasible for practical applications. In the next section, we extend this theoretical result to the general case. The size and value of \(\varvec{\Phi }\) are more flexible, and this result can be applied to UBSS.

3 Theoretical Result for UBSS Based on CS

In UBSS, it is assumed that each signal \(\mathbf{s}(t)\in R^{N},t=1,2,\ldots ,L\) can be sparsely represented by K (usually \(K\ll N\)) or fewer nonzero (significant) components. For simplicity but without loss of generality, in this study, \(\mathbf{s}(t)\) is considered to be sparse in the time domain.

Here, three assumptions are presented:

  (i)

    The columns of \(\mathbf{A}\) are normalized, i.e., \(\sum \limits _{i=1}^M {a_{ij}^2 } =1,\;j=1,\;2,\;\ldots ,\;N\), and any \(M\times M\) submatrix of the mixing matrix \(\mathbf{A}\) is full rank [4, 7].

  (ii)

    The sparsity of \(\varvec{\uptheta }\), as defined in Eq. (5), is K, and the sparsity of \(\varvec{\uptheta }_i \) (i.e., of the i-th time sample \(\mathbf{s}(i)\)) is \(K_i ,\;i=1,\;2,\;\ldots ,\;L\), which implies that \(\sum \limits _{i=1}^L {K_i } =K\).

  (iii)

    Let \(q=\mathop {\max }\limits _{i\in \{1,2,\ldots ,L\}} (K_i )\) and \(\Theta \) be the index set of arbitrarily chosen q columns from \(\mathbf{A}\). We suppose that \(\sum \limits _{i_1 , i_2 \in \Theta , i_1 \ne i_2 } {\langle \mathbf{a}_{i_1 } ,\;\mathbf{a}_{i_2 } \rangle } <1\), where \(\mathbf{a}_k \) represents the k-th column of \(\mathbf{A}\).

We now provide a deterministic construction of matrices that satisfy the RIP in UBSS.

Theorem 2

Let \(\varvec{\Pi }\) denote a matrix constructed according to Eq. (7) that satisfies assumptions (i), (ii), and (iii). If

$$\begin{aligned} \delta _i =(K_i -1)\mathop {\max }\limits _{k, h\in \Gamma _i , k\ne h} \{\mathbf{D}_\Gamma (k,\;h)\}<1, \end{aligned}$$
(10)

where \(\mathbf{D}_\Gamma (k,\;h)\) is the k-th row and h-th column element of the Gramian matrix of \(\varvec{\Pi }_\Gamma \), which represents an \(ML\times K\) matrix formed by the K columns of \(\varvec{\Pi }\), then matrix \(\varvec{\Pi }\) satisfies the RIP with \(\delta =\mathop {\max }\limits _{i=1,2,\ldots ,L} \delta _i\).

Proof

The proof is given in the Appendix.
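Because \(\varvec{\Pi }\) is block diagonal with identical blocks \(\mathbf{A}\), the quantity in Eq. (10) can be evaluated from the Gramian \(\mathbf{G}=\mathbf{A}^{\mathrm {T}}\mathbf{A}\) alone. A possible numpy sketch of this check, using the worst-case block sparsity \(\max (K_i )\):

```python
import numpy as np

def theorem2_delta(A, K_max):
    """delta of Eq. (10) for a column-normalized A, with worst-case K_i = K_max.

    Following Eq. (10) literally, only the largest off-diagonal entry of
    G = A^T A is needed; the condition of Theorem 2 is delta < 1."""
    G = A.T @ A
    np.fill_diagonal(G, -np.inf)        # exclude the unit diagonal entries
    return (K_max - 1) * G.max()
```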

Remark 1

The RIP is a sufficient condition to guarantee that a measurement matrix has good source recovery performance.

Remark 2

The RIP is a condition on the spectral norm of matrices \(\mathbf{D}_\Gamma =\varvec{\Pi }_\Gamma ^{\mathrm{T}} \varvec{\Pi }_\Gamma \). We have bounded the spectral norm by bounding the \(l_1 \)- and \(l_\infty \)-norms. Direct estimation of the spectral norm may lead to a stronger result than that of the \(l_1 \)- and \(l_\infty \)-norms.

4 Algorithm

In this section, we summarize the algorithm, which involves two main steps: first, check whether the measurement matrix satisfies the RIP based on Theorem 2; then, use the OMP algorithm to reconstruct the sources. The algorithm can be outlined as follows:

[Algorithm 1: check the condition of Theorem 2 for \(\varvec{\Pi }\); if it holds, reconstruct \(\varvec{\uptheta }\) by OMP and de-interleave it into the source signals]
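The reconstruction step can be realized with a textbook OMP routine; the sketch below is an illustrative numpy implementation of OMP [14], not the authors' exact code:

```python
import numpy as np

def omp(Phi, y, K, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then refit all selected atoms by least squares."""
    N = Phi.shape[1]
    support = []
    coef = np.zeros(0)
    residual = y.astype(float).copy()
    for _ in range(K):
        if np.linalg.norm(residual) <= tol:
            break
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    theta = np.zeros(N)
    theta[support] = coef
    return theta
```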

5 Simulation Results

Two experiments were performed to evaluate the separation performance of the algorithm relative to the shortest path method [21]. These experiments demonstrate the recovery capability in the time and frequency domains for different problem sizes.

To evaluate the reconstruction results, the signal-to-interference ratio (SIR) was defined as a reconstruction index [7],

$$\begin{aligned} \hbox {SIR}_i =10\log _{10} \left( {\frac{\sum \limits _{t=1}^L {\mathbf{s}_i^2 } (t)}{\sum \limits _{t=1}^L {(\varvec{\hat{{s}}}_i (} t)-\mathbf{s}_i (t))^{2}}} \right) ,\;i=1,\;2,\;\ldots ,\;N, \end{aligned}$$
(11)

where \(\mathbf{s}_i \) is the source signal and \(\hat{\varvec{{s}}}_i \) is the corresponding reconstructed signal. Thus, the larger the SIR, the better the performance of the algorithm.
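Eq. (11) translates directly into code; a minimal sketch:

```python
import numpy as np

def sir_db(s, s_hat):
    """Signal-to-interference ratio of Eq. (11), in dB."""
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s_hat - s) ** 2))
```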

Experiment 1

\(M=2,\;N=3,\;\max (K_i )=2\).

In this experiment, we considered a blind source separation scenario with \(M=2\) mixtures and \(N=3\) source signals \(\mathbf{s}_1 ,\mathbf{s}_2 ,\mathbf{s}_3 \), which means that the dimension of the mixing matrix \(\mathbf{A}\) is \(2\times 3\). Three sparse source signals were generated with length \(L=128\) and \(\max (K_i )=2\). The waveforms of the three original signals are shown in Fig. 1.

Fig. 1

Three source signals \(\mathbf{s}_1, \mathbf{s}_2 , \mathbf{s}_3 \)

The mixing matrix is as follows:

$$\begin{aligned} \mathbf{A}=\left[ {{\begin{array}{c@{\quad }c@{\quad }c} {0.8192}&{} {0.6727}&{} {0.4657} \\ {0.5735}&{} {0.7399}&{} {0.8849} \\ \end{array} }} \right] , \end{aligned}$$

where \(\mathbf{A}\) was randomly generated, column-normalized, and assumed known in the experiment. The detailed generation process was as follows: First, we randomly generated a matrix \(\mathbf{{A}'}=({a}'_{ij} )_{M\times N} \); then, we normalized \(\mathbf{{A}'}\) to \(\mathbf{A}=(a_{ij} )_{M\times N} \hbox { s.t. }\sum \limits _{i=1}^M {a_{ij}^2 } =1,\;j=1,\;2,\;\ldots ,\;N\). Note that any \(M\times M\) submatrix of the mixing matrix \(\mathbf{A}\) is full rank.

Figure 2a shows the waveform of the observed signals. The observed signals were then interleaved into a column vector as indicated in Eq. (4). The waveform is displayed in Fig. 2b.

Fig. 2

Waveforms of observed signals: a waveforms of observed signals \(\mathbf{x}_1 ,\mathbf{x}_2 \) based on Eq. (1), b waveform after transforming the observed sources into a column vector \(\mathbf{y}\)

According to Theorem 2, we computed \(\mathbf{A}^{\mathrm {T}}{} \mathbf{A}\) as follows:

$$\begin{aligned} \mathbf{G}=\mathbf{A}^{\mathrm {T}}{} \mathbf{A}=\left[ {{\begin{array}{c@{\quad }c@{\quad }c} {1.0000}&{} {0.9754}&{} {0.8890} \\ {0.9754}&{} {1.0000}&{} {0.9681} \\ {0.8890}&{} {0.9681}&{} {1.0000} \\ \end{array} }} \right] . \end{aligned}$$

Then, all off-diagonal entries satisfy \(\mathbf{D}_\Gamma (k,\;h)=\mathbf{G}(k,\;h)<1,\;k\ne h\), so that

$$\begin{aligned} \delta _i =(K_i -1)\mathop {\max }\limits _{k, h\in \Gamma _i , k\ne h} \{\mathbf{D}_\Gamma (k,\;h)\}=0.9754<1. \end{aligned}$$

Therefore, the original sources can be estimated by using an appropriate CS algorithm. Figure 3a shows the separation results obtained by OMP. Then, the resultant single vector was split into multiple separation vectors, which are displayed in Fig. 3b.

Fig. 3

Separation results: a separation results obtained by OMP (red dots original sources \(\theta \), blue circle corresponding estimations \(\hat{{\theta }}\)), b waveform after splitting the single vector \(\hat{{\theta }}\) into multiple separation vectors \(\hat{{\mathbf{s}}}_1 ,\hat{{\mathbf{s}}}_2 ,\hat{{\mathbf{s}}}_3 \). For color references in this figure, please refer to the online version of the paper (Color figure online)

We performed 100 trials to evaluate the reconstruction results. The average SIR was used to evaluate the reconstruction errors. Table 1 records the average SIR compared with the shortest path method, showing that UBSS based on OMP obtains results similar to those of the shortest path method when the measurement matrix satisfies Theorem 2.

Experiment 2

\(M=3,\;N=4,\;\max (K_i )=2\).

In practical applications, not all signals are sparse in the time domain, but many are sparse in a transformed domain. In this experiment, we created four source signals, \(\mathbf{s}_1 , \mathbf{s}_2, \mathbf{s}_3, \mathbf{s}_4 \), that are not sparse in the time domain but are sparse in the Fourier-transform domain, with transforms \({{\tilde{\mathbf{s}}}}_1, {{\tilde{\mathbf{s}}}}_2, {{\tilde{\mathbf{s}}}}_3, {{\tilde{\mathbf{s}}}}_4 \), length \(L=128\), \(M=3\), \(N=4\), and \(\max (K_i )=2\). The mixing matrix is as follows:

$$\begin{aligned} A=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {0.8256}&{} {0.3828}&{} {0.4857}&{} {0.4053} \\ {0.2008}&{} {0.7021}&{} {0.0197}&{} {0.5610} \\ {0.5273}&{} {0.6004}&{} {0.8739}&{} {0.7218} \\ \end{array} }} \right] , \end{aligned}$$

which was randomly generated and normalized.

Fast Fourier transformation (FFT) was applied to obtain a sparse representation of each signal. The waveforms of the four original signals and their corresponding FFT coefficients are shown in Fig. 4. Here, we process the real and imaginary parts separately using the OMP method; the inverse FFT is then applied to the reconstructed data comprising the real and imaginary parts of the separation vectors. Figure 4a shows the source signals, Fig. 4b shows the amplitude spectrum of the sources, and Fig. 4c, d depict the real and imaginary parts of the FFT coefficients of the sources, respectively.
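Since \(\mathbf{A}\) is real, taking the FFT along the time axis gives \(\tilde{\mathbf{X}}=\mathbf{A}\tilde{\mathbf{S}}\), so the real and imaginary parts can be reconstructed independently with the same \(\varvec{\Pi }\). A hypothetical end-to-end sketch, assuming a reconstruction routine with signature solver(Pi, y, K) (e.g., OMP) is available:

```python
import numpy as np

def separate_fft(X, A, K, solver):
    """Recover sources that are sparse in the FFT domain from mixtures X = A S.

    The real and imaginary parts of the FFT coefficients are reconstructed
    separately, then recombined and inverted; K is the per-sample sparsity."""
    M, L = X.shape
    N = A.shape[1]
    Pi = np.kron(np.eye(L), A)           # block-diagonal matrix of Eq. (7)
    Xf = np.fft.fft(X, axis=1)           # A is real, so Re/Im parts decouple
    th_re = solver(Pi, Xf.real.T.reshape(-1), K * L)   # K*L: total sparsity budget
    th_im = solver(Pi, Xf.imag.T.reshape(-1), K * L)
    Sf = (th_re + 1j * th_im).reshape(L, N).T          # undo the interleaving
    return np.fft.ifft(Sf, axis=1).real
```

In the determined case this reduces to ordinary inversion; in the underdetermined case, the sparsity of the FFT coefficients is what makes the two solver calls well posed.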

Table 1 Average SIR of sources and separation signals in experiment 1
Fig. 4

Source signals and FFT coefficients: a source signals, b amplitude spectrum of sources with FFT, c real part of FFT domains of sources, and d imaginary part of FFT domains of sources. k stands for the number of FFT points

Fig. 5

Interleaving real and imaginary parts of FFT domains of source signals into a column: a interleaving the real parts of the FFT coefficients of sources into a column \(real(\theta )\), b interleaving the imaginary parts of the FFT coefficients of sources into a column \(imag(\theta )\). k stands for the number of FFT points

First, the real and imaginary parts of the FFT domains of the sources were interleaved into a column as indicated by Eq. (5). Corresponding scatterplots are shown in Fig. 5a, b.

Figure 6a shows the waveform of the observed signals. Then, the real and imaginary parts of the FFT domains of the observed signals were interleaved into a column vector as indicated in Eq. (4). The waveform is displayed in Fig. 6b, c.

Fig. 6

Waveforms of observed signals and their real and imaginary parts of FFT domains: a the observed signals \(x_1 ,x_2 ,x_3 \), b interleaving the FFT coefficients of observed signals into vectors with real part real(y), c interleaving the FFT coefficients of observed signals into vectors with imaginary part imag(y). k stands for the number of FFT points

According to Theorem 2, we computed \(\mathbf{A}^{\mathrm {T}}{} \mathbf{A}\) as follows:

$$\begin{aligned} \mathbf{G}=\mathbf{A}^{\mathrm {T}}{} \mathbf{A}=\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {1.0000}&{} {0.7736}&{} {0.8658}&{} {0.8279} \\ {0.7736}&{} {1.0000}&{} {0.7245}&{} {0.9824} \\ {0.8658}&{} {0.7245}&{} {1.0000}&{} {0.8387} \\ {0.8279}&{} {0.9824}&{} {0.8387}&{} {1.0000} \\ \end{array} }} \right] . \end{aligned}$$

Then,

$$\begin{aligned} \delta _i =(K_i -1)\mathop {\max }\limits _{k, h\in \Gamma _i , k\ne h} \{\mathbf{D}_\Gamma (k,\;h)\}=0.9824<1. \end{aligned}$$

Therefore, the real and imaginary parts of the FFT domains of the sources can be exactly reconstructed by using the OMP algorithm. Figure 7a and b present the separation results for the real and imaginary parts, respectively. All sparse representations were reconstructed properly.

Fig. 7

Separation results using the OMP method for real and imaginary parts of FFT domains of observed signals: a separation results for real parts (red dots real parts \(real(\theta )\) of original sources, blue circle corresponding real parts \(real(\hat{{\theta }})\) of estimations), b separation results for imaginary parts (red dots imaginary parts \(imag(\theta )\) of original sources, blue circle corresponding imaginary parts \(imag(\hat{{\theta }})\) of estimations). k stands for the number of FFT points. For color references in this figure, please refer to the online version of the paper (Color figure online)

The single vector was split into multiple separation vectors. Finally, the inverse FFT method was applied to the reconstructed data comprising the real and imaginary parts of the separation vectors, and the four original signals were recovered. The final results are shown in Fig. 8.

Fig. 8

Recovered signals and their FFT spectrum: a recovered signals \(\hat{{\mathbf{s}}}_1, \hat{{\mathbf{s}}}_2, \hat{{\mathbf{s}}}_3, \hat{{\mathbf{s}}}_4 \), b amplitude spectrum \(\tilde{\hat{{\mathbf{s}}}}_1 ,\tilde{\hat{{\mathbf{s}}}}_2 ,\tilde{\hat{{\mathbf{s}}}}_3 ,\tilde{\hat{{\mathbf{s}}}}_4 \) of the recovered signals with FFT. k stands for the number of FFT points

After 100 trials, the average SIR relative to that obtained using the shortest path method is recorded in Table 2, showing that UBSS based on OMP produces results comparable to those of the shortest path method when the measurement matrices satisfy Theorem 2.

Table 2 Average SIR of sources and separation signals in experiment 2

6 Conclusions

Separation of underdetermined mixtures is usually addressed in the framework of sparse signal representation. In recent years, compressed sensing theory has been adopted for signals that permit a sparse representation. The major contribution of this paper is the extension of a recent theoretical result. Numerical experiments demonstrate the separation performance of the proposed theory. The result can be directly extended to higher-dimensional as well as dependent source separation.

However, the proposed theory has a limitation: the linear transform that makes the sources sparse and the maximum sparsity of all the blocks must be known a priori. Extending the proposed results to more general cases (such as convolutive mixtures, or unknown sparsity) is left for future work.