1 Introduction

Most signals, such as audio and images, are accompanied by metadata. Metadata can be signal-based, describing quantitative properties of the signal such as its sampling rate, or semantic, describing, for example, contextual properties. In speech processing, semantic metadata could consist of the speaker’s language or gender. Whether signal-based or semantic, including metadata as a secondary input to neural network models may provide relevant information, translating into savings in training time and model parameters as well as greater flexibility. However, metadata typically has a different dimensionality than the input signals, making its incorporation into such models non-trivial.

The main focus of this paper is to study the effectiveness of schemes for jointly processing signals and metadata using neural network models. We focus on the task of Sound Source Localization (SSL) [1] using distributed microphone arrays to demonstrate the effectiveness of our proposed approach. In the context of SSL, relevant metadata exploited by classical methods includes the microphone positions, which can be acquired by manual measurement or using self-calibration methods [2]. Other relevant metadata includes the room dimensions and its reverberation time.

SSL refers to the task of estimating the spatial location of a sound source, such as a human talker or a loudspeaker. In this scenario, metadata refers to properties of the acoustic scene such as the coordinates of the microphones, the dimensions of the room and the reflection coefficients of the walls. SSL has many applications, including noise reduction and speech enhancement [3], camera steering [4] and acoustic Simultaneous Localization and Mapping (SLAM) [5]. In turn, distributed microphone arrays have become an active research topic in the signal processing community due to their versatility. Such arrays may be composed of multiple network-connected devices, including everyday devices such as cell phones, smart assistants and laptops. The array and its constituent devices may be configured as a Wireless Acoustic Sensor Network (WASN) [6].

SSL approaches may be divided into classical signal processing-based and data-driven neural network-based methods. By explicitly exploiting metadata describing microphone positions and room dimensions, classical approaches may be applied to different rooms and microphone configurations. Conversely, neural network approaches have recently achieved state-of-the-art results for source localization [7,8,9], at the expense of requiring one network to be trained for every microphone topology. One reason current neural approaches do not incorporate the microphones’ positional information is that the microphones’ signals and positional data are very different from one another in nature and dimension.

Previous work discussing the joint processing of signals and metadata includes [10], where a single-input neural network is used to process metadata in conjunction with a low-dimensional physical signal. However, unlike our work, the method of [10] is restricted to multilayer perceptron architectures and to one-dimensional input and metadata, limiting its application in practical scenarios.

Another related field is multimodal fusion [11, 12], although it is usually concerned with learning representations from two types of signals, such as audio-visual data. The simultaneous processing of signals and metadata has also been explored using non-neural models for sound source separation [13], where the metadata consists of information about the type of sound (speech, music) and how the sources were mixed. However, none of the existing work discusses effective schemes for incorporating and evaluating signals and metadata of different dimensionality.

Our main contribution is the DI-NN neural network architecture, which is capable of processing high-dimensional signals, namely spectrograms, along with a relevant metadata vector of lower dimensionality. An overview diagram of our approach is shown in Fig. 1, which will be discussed in Section 3.2. We compare our method against three baselines for the task of Positional Sound Source Localization (PSSL), namely a metadata-unaware Convolutional Recurrent Neural Network (CRNN), a metadata-aware classical signal processing approach and an alternative metadata-aware neural network. Our proposed method outperforms all baselines by a large margin in realistic scenarios. In contrast to previous approaches [9, 14], our network dispenses with the need to train a network for each scenario, broadening our method’s applicability.

Fig. 1 Overview of the Dual-Input Neural Network (DI-NN) approach

The remainder of this work is organized as follows. In Section 2, an overview of neural and non-neural SSL methods is given. The approach for training our proposed DI-NN for SSL is described together with several baseline methods in Section 3. In Section 4, the experiments comparing our approach with the baselines on multiple datasets are described. Results are discussed in Section 5, and conclusions are drawn in Section 6.

2 Prior art on sound source localization

2.1 Neural-based methods

In recent years, deep neural networks have been widely adopted for the task of sound source localization. The various approaches differ in the input features used, the network architectures and output strategies. Most studies focus on the task of Direction-of-Arrival (DOA) estimation, i.e., estimating the angle between the propagation direction of the acoustic wavefront due to the source and a reference axis of the array.

Practitioners have experimented with many types of neural input features, such as the raw audio samples of the microphone signals [9], their frequency-domain representation through the Short Time Fourier Transform (STFT) [15], their cross-spectra [16] or their cross-correlation [8]. Multiple architectures have also been tested, including the Multi-layer Perceptron (MLP) [8], Convolutional Neural Networks (CNNs) [17] and residual networks [18]. In this work, we focus on the Convolutional Recurrent Neural Network (CRNN) architecture, which has received widespread adoption in the field [7, 19, 20]. Finally, approaches differ in terms of the network’s output strategy. While regression-based approaches directly estimate the source’s coordinates, classification-based approaches discretize the source locations into a grid of available positions. We refer to [21] for a discussion of the merits of both approaches, and to [22] for a substantial survey of neural SSL methods.

In this paper, we focus on the task of estimating the absolute Cartesian coordinates of the source, which we shall refer to as Positional Sound Source Localization (PSSL) and which has applications in robot navigation [5] and noise reduction [23]. The PSSL task has been much less studied using neural methods. To the best of our knowledge, only [14] and [9] focus on PSSL. However, both of these approaches only work for the same room and fixed relative microphone positions. We believe this shortage of studies to be at least in part due to the lack of an architecture capable of incorporating the scene’s metadata, which is addressed by our proposed DI-NN. We also refer to the recent L3DAS22 challenge [24], where practitioners were invited to develop 3D PSSL algorithms for a realistic office environment containing a pair of microphone arrays.

2.2 Classical signal processing methods

Classical approaches to SSL have been widely studied within the signal processing community. In classical PSSL approaches, the source’s coordinates are estimated using a model involving signal processing, physics and geometry. By measuring differences in the microphone signals’ amplitudes and phases, distance metrics between the microphones and the source can be estimated. These estimates can in turn be combined to estimate the source’s coordinates [1]. Besides the microphone signals, the positions of the microphones are usually needed for the position of the source to be estimated. Available approaches for SSL may be classified as delay-based [1, 25], energy-based [26, 27], subspace-based [28] and beamforming-based [29, 30]. We shall focus on delay-based approaches and provide background for our baseline method.

Delay-based SSL methods usually rely on computing the Time-Difference-of-Arrival (TDOA) between each microphone pair within the system, which corresponds to the difference in the time taken for the source signal to propagate to the different microphones. The locus of candidate source positions sharing the same TDOA with respect to a microphone pair is, when considering planar coordinates, a hyperbola [1, 25]. The source is located at the intersection of the hyperbolae defined by all microphone pairs. The multiple TDOAs can be combined using a Least-Squares (LS) framework [31], or using a Maximum Likelihood (ML) approach if some noise properties of the system are known [1]. In general, TDOAs are estimated using cross-correlation-based methods such as Generalized Cross-Correlation with Phase Transform (GCC-PHAT) [32], which have been shown to be somewhat robust to the reflections produced in a room by, for example, the walls, ceiling and furniture, i.e. reverberation [33].
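As an illustration, a minimal NumPy sketch of GCC-PHAT-based TDOA estimation between a pair of microphone signals could look as follows; the function name and the regularization constant are ours, not the implementation used in this paper.

```python
import numpy as np

def gcc_phat(x_i, x_j, fs, max_tau=None):
    """Estimate the TDOA (in seconds) of x_i relative to x_j with GCC-PHAT."""
    n = len(x_i) + len(x_j)
    X_i = np.fft.rfft(x_i, n=n)
    X_j = np.fft.rfft(x_j, n=n)
    # Cross-spectrum, whitened by its magnitude (the PHAT weighting)
    cross = X_i * np.conj(X_j)
    cross /= np.abs(cross) + 1e-12
    cc = np.fft.irfft(cross, n=n)
    # Re-centre the correlation so that lag 0 sits in the middle
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```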

3 Method

3.1 Signal model and scope of this work

Our scope is restricted to the localization of a single static source at the planar coordinates \(\varvec{p}_s = [p_s^x, p_s^y]^T\). The source emits an intermittent signal s(t) at time t. In our experiments, s(t) may consist of White Gaussian Noise (WGN) or of speech utterances. In addition, M static microphones with known positions are present in the room, each placed at coordinates \(\varvec{m}_i = [m_i^x, m_i^y]^T\). Both the source and the microphones are enclosed in a room of planar dimensions \(\varvec{d}=[d^{x}, d^{y}]^T\). The amount of reverberation in the room is modeled by its reverberation time r, the time it takes for a sound to decay by 60 dB from its original level. The signal \(y_i\) received at microphone i is

$$\begin{aligned} y_i(t) = a_i s(t - \tau _i) + \epsilon _i(t) \;. \end{aligned}$$
(1)

In (1), \(a_i\) is a scaling factor representing the attenuation suffered by the wave propagating from \(\varvec{p}_s\) to \(\varvec{m}_i\). We assume that the gains between the microphones are approximately calibrated, although we show in Section 4.3 that our method is robust to uncalibrated microphones of the same kind. \(\tau _i\) is the time taken for a sound wave to propagate from the source to microphone i, and \(\epsilon _i(t)\) models the noise. We assume \(\tau _i\) to be equal to \(\Vert \varvec{m}_i - \varvec{p}_s \Vert _2 /c\), where \(\Vert \varvec{m}_i - \varvec{p}_s\Vert _2\) is the Euclidean distance between the source and the microphone located at \(\varvec{m}_i\), c is the speed of sound and \(\Vert \cdot \Vert _2\) represents the \(L_2\)-norm.
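To make the model in (1) concrete, the following NumPy sketch simulates one microphone signal under free-field propagation; the 1/r attenuation law, sampling rate and noise level are illustrative assumptions rather than the paper’s exact settings.

```python
import numpy as np

c = 343.0   # speed of sound in m/s (assumed)
fs = 16000  # sampling rate in Hz (assumed)

def simulate_microphone(s, p_s, m_i, snr_db=30.0):
    """Free-field version of Eq. (1): attenuate, delay and add sensor noise."""
    d = np.linalg.norm(m_i - p_s)      # source-microphone distance
    a_i = 1.0 / max(d, 1e-3)           # spherical 1/r attenuation (assumption)
    delay = int(round(fs * d / c))     # tau_i expressed in samples
    y = a_i * np.concatenate((np.zeros(delay), s))
    noise_power = np.mean(y**2) / (10 ** (snr_db / 10))
    return y + np.random.randn(len(y)) * np.sqrt(noise_power)
```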

We also define \(\varvec{y}(t) = [y_1(t), \dotsc , y_M(t)]^T\) as the vector containing all microphone signals at discrete time index t. The Short Time Fourier Transform (STFT) of \(y_i(t)\) is \(Y_i(\ell ,f)\), for frequency f and time frame \(\ell\), and \(\varvec{Y}(\ell ,f) = [Y_1(\ell ,f), \dotsc , Y_M(\ell ,f)]^T\). The STFT [34] represents the frequency content of a signal over time, and is a widely used feature for source localization using neural networks [15, 19]. Figure 2 shows the magnitude representation of \(\varvec{Y}\) at the input.

Fig. 2 Detailed DI-NN architecture for the task of PSSL

Finally, the metadata vector \(\varvec{\phi } \in \mathbb {R}^{N_{\phi }}\) is the concatenation of the coordinates of the microphones, the room dimensions and the reverberation time, as shown in Fig. 2. We chose these three types of metadata because the room dimensions and microphone coordinates are explicitly exploited in classical localization methods such as the LS. Furthermore, we included the reverberation time as additional metadata to verify whether its knowledge can reduce the detrimental effect of reverberation on localization methods. However, other metadata could also have been exploited, such as the energy ratios between the microphone signals or the absorption coefficients of the walls.
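For illustration, the metadata vector \(\varvec{\phi }\) could be assembled as in the minimal sketch below; the helper name and the ordering of the entries are ours.

```python
import numpy as np

def build_metadata_vector(mic_coords, room_dims, rt60):
    """Flatten microphone coordinates, room dimensions and reverberation time
    into a single metadata vector phi (shapes follow Section 3.1)."""
    # mic_coords: (M, 2) array, room_dims: (2,) array, rt60: scalar in seconds
    return np.concatenate([np.asarray(mic_coords).ravel(),
                           np.asarray(room_dims).ravel(),
                           [rt60]]).astype(np.float32)
```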

3.2 Proposed method: dual input neural network

Our proposed DI-NN architecture comprises two neural networks, a feature extraction network and a metadata fusion network, as can be seen in Fig. 1. An additional third network, called the metadata embedding network, is used in the alternative DI-NN-Embedding model, which will be presented in Section 3.3.

The input of the network consists of the STFT of the microphone signals as defined in Section 3.1. Instead of using the complex representation generated by the STFT, we split the real and imaginary parts of the STFT \(\varvec{Y}\) and use them as separate channels as in [19], giving rise to \(2M\) input channels. The role of the feature extraction network is to transform this high-dimensional tensor into a one-dimensional feature vector which compactly represents information relevant to the task at hand. In our experiments, we adopt a CRNN [35] as our feature extraction network, due to its wide adoption for SSL [7, 20, 36].
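A minimal PyTorch sketch of this input preparation is given below; the window length and hop size are placeholders, not the paper’s values.

```python
import torch

def stft_features(y, n_fft=1024, hop=512):
    """Turn an (M, T) multichannel waveform into a (2M, L, F) real tensor
    by stacking the real and imaginary STFT parts as separate channels."""
    Y = torch.stft(y, n_fft=n_fft, hop_length=hop,
                   window=torch.hann_window(n_fft), return_complex=True)
    # Y has shape (M, F, L); move frequency last and split real/imag
    Y = Y.transpose(1, 2)                      # (M, L, F)
    return torch.cat([Y.real, Y.imag], dim=0)  # (2M, L, F)
```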

This metadata-unaware vector is then concatenated with the available metadata, thus creating a metadata-aware feature vector. For our application, the metadata is a one-dimensional vector consisting of the positions of the microphones, the dimensions of the room and its reverberation time. This metadata-aware feature vector is then fed to a metadata fusion network, whose role is to merge the metadata and the feature vector to produce the result. In our experiments, we adopt a two-layer Fully Connected Neural Network (FC-NN) which maps the metadata-aware features to a two-dimensional vector corresponding to the estimated coordinates of the source.
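A sketch of such a fusion head is shown below; the hidden-layer width is an assumption, not the paper’s configuration.

```python
import torch
import torch.nn as nn

class MetadataFusion(nn.Module):
    """Two-layer fully connected fusion head (hidden size is an assumption)."""
    def __init__(self, n_features, n_metadata, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + n_metadata, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),   # planar source coordinates (x, y)
        )

    def forward(self, features, metadata):
        # Concatenate the metadata-unaware features with the metadata vector
        return self.net(torch.cat([features, metadata], dim=-1))
```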

Our feature extractor CRNN is divided into two sequential sub-networks: a CNN block, responsible for extracting local patterns from the input data, and a Recurrent Neural Network (RNN), responsible for combining these patterns into global, time-independent features. A diagram representing the components of the DI-NN is shown in Fig. 2.

The convolutional block receives a tensor of shape (M, L, F) representing a multi-channel complex STFT, where M represents the number of audio channels, L represents the number of time frames generated by the STFT, and F is the number of frequency bins used. The role of this block is two-fold: firstly, to combine local information across all microphone channels, and secondly to reduce the dimensionality of the data to make it more tractable for the RNN layer.

The convolutional block consists of four sequential layers, where each performs three sequential operations. Firstly, a set of K convolutional filters is applied to the input signal, resulting in K output channels. Secondly, a non-linear activation function is applied to the result. Finally, an average pooling operation is applied to the width and height of the activations, generating an output of reduced size. After passing the input through the four convolutional layers, we perform a global average pooling operation across all frequencies, generating a two-dimensional output matrix.
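The following PyTorch sketch illustrates one possible realization of this convolutional block; kernel sizes, channel counts and pooling factors are illustrative assumptions.

```python
import torch.nn as nn

def conv_layer(in_ch, out_ch, pool=(2, 2)):
    """One convolutional layer: convolution -> activation -> average pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AvgPool2d(pool),
    )

class ConvBlock(nn.Module):
    """Four convolutional layers followed by global pooling over frequency."""
    def __init__(self, in_ch, k=64):
        super().__init__()
        self.layers = nn.Sequential(
            conv_layer(in_ch, k), conv_layer(k, k),
            conv_layer(k, k), conv_layer(k, k),
        )

    def forward(self, x):            # x: (batch, channels, L, F)
        x = self.layers(x)           # (batch, K, L', F')
        return x.mean(dim=-1)        # global average pooling over frequency
```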

After the convolutional block, the resulting matrix serves as input to a bidirectional, gated recurrent unit neural network (GRU-RNN) [37]. As sound may not be present throughout the whole duration of the audio signal, such as during speech pauses, the RNN is important for propagating location information to silent time-steps. After this network, we reduce the dimensions of the features once again by performing average pooling on the time dimension, resulting in a vector of time-independent features.
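A corresponding sketch of the recurrent part, again with an assumed hidden size, could be:

```python
import torch.nn as nn

class RecurrentBlock(nn.Module):
    """Bidirectional GRU followed by average pooling over time (sizes assumed)."""
    def __init__(self, in_features, hidden=128):
        super().__init__()
        self.gru = nn.GRU(in_features, hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, x):            # x: (batch, K, L') from the conv block
        x = x.transpose(1, 2)        # (batch, L', K): one feature vector per frame
        x, _ = self.gru(x)           # (batch, L', 2 * hidden)
        return x.mean(dim=1)         # time-average pooling -> (batch, 2 * hidden)
```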

The output of the feature extraction network is then concatenated with the available metadata and serves as input to the metadata fusion network. This network consists of two fully connected layers which map the metadata-aware features to a two-dimensional vector corresponding to the estimated Cartesian coordinates of the active source. We jointly train both networks using the same loss function, defined as the \(L_1\)-norm, i.e., the sum of the absolute errors between the network’s estimate of the source coordinates \(\hat{\varvec{p}}_s\) and the target \(\varvec{p}_s\), given by

$$\begin{aligned} \mathcal {L}(\varvec{p}_s, \hat{\varvec{p}}_s) = \Vert \varvec{p}_s - \hat{\varvec{p}}_s \Vert _1 \;. \end{aligned}$$
(2)

We also considered using the more common squared error loss. Although both losses yielded similar results in our experiments, we chose the absolute error for its easier interpretability, since it corresponds to the distance in metres between target and estimated coordinates.
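For concreteness, a minimal training step using the loss in (2) might look as follows; here `model` is assumed to take the STFT tensor and the metadata vector and return the estimated coordinates, and PyTorch’s mean-reduced L1 loss is used, which differs from the sum in (2) only by a constant factor.

```python
import torch.nn.functional as F

def training_step(model, batch, optimizer):
    """One optimisation step with the L1 loss of Eq. (2)."""
    stft, metadata, target = batch           # target: true source coordinates
    optimizer.zero_grad()
    estimate = model(stft, metadata)
    loss = F.l1_loss(estimate, target)       # mean absolute error in metres
    loss.backward()
    optimizer.step()
    return loss.item()
```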

3.3 DI-NN-Embedding

To test whether it is advantageous to process the metadata before combining it with the microphone features, we also propose a variant of the DI-NN model, where the metadata \(\varvec{\phi }\) is processed by a metadata embedding network to produce an embedding, which is then concatenated to the microphone features. This network is represented by the metadata embedding network block in Fig. 1.

3.4 Baseline: least-squares based source localization

Our final comparative baseline is the Least-Squares (LS) algorithm [1] which uses the signal model defined in Section 3.1. We provide an overview of the algorithm below. We define the theoretical TDOA between microphones i and j with respect to the source coordinates \(\varvec{p}_s\) as

$$\begin{aligned} \tau _{ij}(\varvec{p}_s) \triangleq \frac{\Vert \varvec{m}_i - \varvec{p}_s \Vert _2 - \Vert \varvec{m}_j - \varvec{p}_s \Vert _2}{c} \;, \end{aligned}$$
(3)

where c is the speed of sound. Next, the measured TDOA \(\hat{\tau }_{ij}\) between microphones i and j is estimated from the peak of the cross-correlation between the received signals according to

$$\begin{aligned} \hat{\tau }_{ij} \triangleq \underset{t}{\text {arg}\,\text {max}}\ (\mathcal {C}(t; y_i, y_j)) \;, \end{aligned}$$
(4)

where \(\mathcal {C}\) denotes the cross-correlation operator, usually computed in the frequency domain using the GCC-PHAT algorithm [32]. We then aggregate the total error for all microphone pairs using

$$\begin{aligned} E(\varvec{p}_s) \triangleq \sum _{i=1}^{M} \sum _{j \ne i} E_{ij}(\varvec{p}_s) \;, \end{aligned}$$
(5)

where \(E_{ij}(\varvec{p}_s) \triangleq |\tau _{ij}(\varvec{p}_s) - \hat{\tau }_{ij}|^2\) is the squared difference between the theoretical and measured TDOAs of each microphone pair in (3) and (4), respectively. To estimate the source’s location, we compute the values of E for a set of candidate locations \(\varvec{p}_s\) within the room. In the absence of noise and reverberation, the location with the minimum error corresponds to the true position of the source [1]. Figure 3 shows the heatmaps, or error grids, generated using the LS algorithm in an anechoic and a reverberant room. The position of the source is estimated by selecting the position that minimizes the total error,

$$\begin{aligned} \hat{\varvec{p}}_s = \underset{\varvec{p}_s}{\text {arg}\,\text {min}}\ E(\varvec{p}_s) \;. \end{aligned}$$
(6)
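A brute-force sketch of this grid search is given below, assuming the TDOA estimates are indexed by microphone pair; the grid resolution and speed of sound are assumptions.

```python
import numpy as np

def ls_localize(mic_coords, measured_tdoas, room_dims, c=343.0, grid_step=0.05):
    """Grid search over candidate source positions minimising Eq. (5).

    measured_tdoas[(i, j)] holds the GCC-PHAT estimate for microphones i, j.
    """
    xs = np.arange(0.0, room_dims[0], grid_step)
    ys = np.arange(0.0, room_dims[1], grid_step)
    best_err, best_pos = np.inf, None
    for x in xs:
        for y in ys:
            p = np.array([x, y])
            dists = np.linalg.norm(mic_coords - p, axis=1)
            err = 0.0
            for (i, j), tau_hat in measured_tdoas.items():
                tau_ij = (dists[i] - dists[j]) / c       # Eq. (3)
                err += (tau_ij - tau_hat) ** 2           # Eq. (5) term
            if err < best_err:
                best_err, best_pos = err, p
    return best_pos, best_err
```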
Fig. 3 Error grid produced by the LS algorithm for an anechoic and a reverberant room of the same dimensions and microphone coordinates

Figure 3 illustrates the limitations of the LS algorithm when the reverberation time is large. The two figures show the results of the algorithm for two simulations, where one source and four microphones are placed in rooms of the same dimensions. When the room is simulated to be anechoic, i.e., all reflections are absorbed, the algorithm produces a sharp blue peak in the heatmap. Conversely, when the simulated room is reverberant, the peak becomes much more dispersed. An explanation for this is that the model used by the LS method assumes anechoic propagation between the source and microphones, i.e., no reflections are assumed. Conversely, we will show that the DI-NN model is able to localize sources in reverberant environments, as it is trained using a reverberant dataset. A study conducted in [38] shows that speech intelligibility is maximized in rooms with a reverberation time between 0.4 and 0.5 s, limiting the practical application of the LS method in such environments.

4 Experimentation

This section describes our experiments with DI-NNs on three SSL datasets representing scenarios of varying difficulty. For each dataset, our approach is compared to two other methods. The first is a CRNN with the same architecture but which does not use the available metadata, i.e., without the “Concatenate” block in Fig. 2. By comparing this network’s performance to the DI-NN’s, we can assess the performance gains of our proposed method. The second comparative method is the classical LS source localization method described in Section 3.4. The experiments are described below.

All of our experiments consisted of randomly placing one source and four microphones within a room. The heights of the microphones, source and room were fixed for all experiments. For each experiment, the goal of the proposed method and baselines was to estimate the planar coordinates of the source within the room using a one-second multichannel audio signal as well as the positions of the microphones. We emphasize that the training and testing samples do not overlap, which demonstrates our method’s effectiveness in handling unseen scenes and metadata. We refer the reader to Appendix A for a discussion of the independence of our datasets.

To simulate sound propagation in a reverberant room, we used the image source method [39] implemented by the Pyroomacoustics Python library (MIT license) [40]. We trained our neural networks using PyTorch (BSD license) [41] along with the PyTorch Lightning (Apache 2.0 license) library [42]. The models were trained using a single NVIDIA P100 GPU with 16 GB of memory. The configuration of our experiments is managed using the Hydra (MIT license) library [43]. We release the code used for generating the data and training the networks on GitHub, as well as a Kaggle notebook, to allow reproduction of the experiments without the need for any local software installation. The hyperparameters used for training the proposed method and baselines are shown in Table 1.
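As an example of how one reverberant sample could be generated with Pyroomacoustics, consider the following sketch; the parameter values and 3D positions (with fixed heights) are illustrative and do not reproduce our exact configuration.

```python
import numpy as np
import pyroomacoustics as pra

def simulate_sample(room_dims, mic_coords, source_pos, signal, rt60, fs=16000):
    """Simulate one reverberant dataset sample with the image source method."""
    # Derive absorption and reflection order from the target reverberation time
    e_absorption, max_order = pra.inverse_sabine(rt60, room_dims)
    room = pra.ShoeBox(room_dims, fs=fs,
                       materials=pra.Material(e_absorption),
                       max_order=max_order)
    room.add_source(source_pos, signal=signal)
    room.add_microphone_array(np.asarray(mic_coords).T)  # shape (3, M)
    room.simulate(snr=30)                                # 30 dB sensor noise
    return room.mic_array.signals                        # (M, n_samples)
```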

Table 1 Hyperparameters

4.1 Simulated anechoic rooms

The goal of this experiment is to evaluate the performance of the DI-NN and baselines across multiple rooms and microphone positions in the absence of reverberation. Our dataset generation procedure is shown in Fig. 4a. For each dataset sample, we randomly draw two numbers from a uniform distribution on the interval [3, 6] m representing the room’s width and length. The height of the rooms is fixed at 3 m. Next, we randomly place one microphone along a line segment 0.5 m away from and parallel to each of the room’s walls. We chose to place the microphones close to the walls as a simplified localization scenario, as our main goal is to test the effectiveness of our metadata fusion procedure. Nonetheless, this scenario is realistic in the context of smart rooms, where microphones are usually placed in or near the walls.

Fig. 4 Experimental setup. a For the anechoic and reverberant simulations, each of the four microphones \(m_i\) is placed on a random point along the coloured arrows, while the source s is randomly placed on a point within the rectangle defined by them. b The sampling procedure for Section 4.3, where the positions of the microphones and source are randomly drawn from each differently coloured set of points

Finally, the source is randomly placed in the room, following a uniform distribution while respecting a minimum margin of 0.5 m from the walls. In this experiment, the source signal is WGN, and sensor noise, also simulated using WGN, is added to each microphone at a Signal-to-Noise Ratio (SNR) of 30 dB. A dataset of 15,000 samples is generated, from which 10,000 samples are used for training, 2,500 for validation and 2,500 for testing.
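The sampling procedure described above can be summarized by the following sketch in planar coordinates; the helper name and the ordering of the walls are ours.

```python
import numpy as np

rng = np.random.default_rng()

def sample_geometry(margin=0.5):
    """Draw one random room, four wall-adjacent microphones and a source."""
    w, l = rng.uniform(3.0, 6.0, size=2)          # room width and length in m
    # One microphone on a line 0.5 m away from and parallel to each wall
    mics = np.array([
        [rng.uniform(margin, w - margin), margin],       # south wall
        [rng.uniform(margin, w - margin), l - margin],   # north wall
        [margin, rng.uniform(margin, l - margin)],       # west wall
        [w - margin, rng.uniform(margin, l - margin)],   # east wall
    ])
    source = rng.uniform([margin, margin], [w - margin, l - margin])
    return np.array([w, l]), mics, source
```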

4.2 Simulated reverberant rooms

The data for the simulated reverberant rooms experiment is generated similarly to that of the anechoic experiment. However, instead of simulating sound propagation in an anechoic environment, each dataset sample is randomly assigned a reverberation time for its corresponding room, drawn from a uniform distribution on the range [0.3, 0.6] s. This value is used to simulate reverberation using the image source method [39]. For the source signal, we use speech recordings from the VCTK corpus [46]. The numbers of training, validation and testing samples are the same as in Section 4.1.

4.3 Real recordings

For this experiment, instead of simulations, we use measurements from the LibriAdhoc40 dataset [47] (GPL3 license). The signals were recorded in a highly reverberant room containing a grid of forty microphones and a single loudspeaker, which was placed in one of four available locations. The microphones recorded speech sentences from the Librispeech corpus [48], which were played back through the loudspeaker. The reverberation time measured by the dataset authors was approximately 900 ms.

To generate each dataset sample, we subselect four of the forty available microphones. We restrict our microphone selection to the outermost microphones of the grid, where one microphone per side is selected. A visual explanation of our microphone selection procedure is provided in Fig. 4b. There are four available positions for the microphones near each of the west and east walls and seven positions near each of the north and south walls. Furthermore, there are four available source positions. There are, therefore, \(4\times 4 \times 7\times 7 \times 4 =\) 3,136 source/microphone combinations available for selection. Finally, we randomly select four speech utterances for each combination, resulting in a dataset of 12,544 samples. We use 50% of those combinations for training, 25% for validation and 25% for testing. To create the training dataset for this experiment, we augment the aforementioned training split with the training data of the reverberant dataset described in Section 4.2, resulting in a dataset consisting of 10,000 \(+\) 6,272 \(=\) 16,272 signals.

4.4 Metadata sensitivity study

In practical scenarios, the metadata, e.g., the microphone coordinates and room reverberation time in PSSL, is uncertain because it is typically estimated or measured. To investigate the robustness of our approach to such uncertainties, we conducted a sensitivity study using the test dataset of Section 4.2. We modify the dataset by introducing different levels of perturbation to the input metadata, followed by a computation of the mean localization error for each level using the model trained in Section 4.2.

Our first three studies consist of perturbing the microphone coordinates of the testing dataset with increasing levels of random Gaussian noise. The reported precision of microphone coordinates measured optically is under a millimetre [49]. Conversely, when these are estimated using self-localization algorithms, the reported errors are under 7 cm [50, 51]. We therefore set the standard deviation of the introduced noise to 1, 10 and 50 cm. In our fourth study, we introduced random Gaussian noise to the reverberation time with a standard deviation of 200 ms, based on reported errors of reverberation time estimation procedures [52, 53].
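The perturbations can be reproduced with a sketch such as the following, shown here for one of the studied noise levels; the function name is ours.

```python
import numpy as np

rng = np.random.default_rng()

def perturb_metadata(mic_coords, rt60, coord_std=0.1, rt60_std=0.2):
    """Add Gaussian noise to the test-set metadata.

    Standard deviations are in metres and seconds; 0.1 m and 0.2 s correspond
    to one of the levels studied in this section."""
    noisy_mics = mic_coords + rng.normal(0.0, coord_std, size=np.shape(mic_coords))
    noisy_rt60 = rt60 + rng.normal(0.0, rt60_std)
    return noisy_mics, noisy_rt60
```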

4.5 Metadata relevance study

To quantify the contribution of each metadata category to the improvement in localization performance, we conducted a metadata relevance study where we trained the DI-NN network using six different combinations of the microphone positions, room dimensions and reverberation time. The results are summarized in Table 3.

5 Results and discussions

5.1 Results

Figure 5a compares the average error of our proposed DI-NN and DI-NN-Embedding methods to those of the CRNN and LS baselines. To obtain statistically significant results, we train the DI-NN, DI-NN-Embedding and CRNN models four times independently for each experiment using random initial network parameters. The results shown in Fig. 5 are averaged across the four runs, with error bars showing the standard deviation across runs. Conversely, as the LS method is deterministic, it does not require multiple runs.

Fig. 5 a Mean localization error for DI-NNs and baselines on different datasets. b Normalized histogram comparison between the DI-NN and the CRNN baseline on the recorded dataset. c Cumulative version of (b)

A first remark is that although the LS approach is very effective in the anechoic scenario, its performance is degraded on the other datasets, indicating its sensitivity to reverberation. The CRNN outperforms the LS method in reverberant scenarios even without knowledge of the microphones’ coordinates. Interestingly, the CRNN baseline also obtains good localization performance on the recorded dataset, indicating that the network is able to infer the metadata to an extent when trained on a single room.

However, by exploiting the microphone coordinates, the DI-NN is shown to significantly improve performance compared to the CRNN. The most significant difference is observed in the anechoic case, where a nearly threefold improvement is obtained. In this case, the microphone coordinates are more useful, as this information cannot be derived from the signals. In a reverberant room, however, the network might be able to use reflections to its advantage, as discussed in [54], to infer the microphone coordinates, making the metadata less useful. Figure 5a also shows that the errors obtained using the alternative DI-NN-Embedding architecture were similar to those of the DI-NN in all scenarios, indicating no advantage in the proposed embedding, although it still allows the network to exploit the metadata.

In turn, Fig. 5b compares the normalized error histograms of our approach and the CRNN baseline on the real recordings test dataset. The mode of the DI-NN’s error falls in the 0-15 cm bin, compared to the 15-30 cm bin for the CRNN’s error. In other words, only the DI-NN is median-unbiased. The cumulative distribution for the same data is shown in Fig. 5c. While the DI-NN is shown to locate over 50% and 80% of the dataset samples with less than 15 and 45 cm of error, respectively, the CRNN achieves the same errors for less than 20% and 60% of the data.

The results of the sensitivity study conducted in Section 4.4 are displayed in Table 2. The last column refers to the relative error increase between the perturbed case and the noiseless experiment conducted in Section 4.2. The results show that our approach is robust to the uncertainty inherent in practical measurements of the microphone coordinates and reverberation time estimates. The case where the microphone coordinates are disturbed by an extreme error of 0.5 m (more than five times above typical errors) was included to demonstrate the impact of including microphone coordinates for PSSL, reiterating the importance of metadata and the improved performance of our proposed fusion approach.

Table 2 Metadata sensitivity analysis

Finally, the results of the metadata relevance analysis study described in Section 4.5 are displayed in Table 3. Each line represents a version of the DI-NN model trained on the reverberant dataset. The first three columns describe which metadata types are used in the model, and the last column shows the model performance relative to the model using all metadata, represented in the first line. The results show that the microphone coordinates are the most relevant for the model. In fact, using the microphone coordinates alone provides the best results. The results also indicate that the room dimensions are more relevant than the reverberation time in the absence of the microphone coordinates.

Table 3 Metadata relevance analysis

5.2 Limitations and extensions

Our approach exploits metadata such as the microphone coordinates and reverberation time, and therefore this data must be known a priori or somehow measured. We have, however, shown that using this additional information is justified by a significant improvement in performance. While we have also assumed that the gains of the microphones are calibrated in our experiments, which may not be verifiable in practical scenarios, we have shown in Section 4.3 that our model can perform well even when using uncalibrated microphones of the same kind. If calibration cannot be ensured, extracting gain-invariant features from the signal pairs, such as the cross-spectra [16], may be used as a preprocessing step.

We have also limited our scope to the localization of one static sound source using static microphones in order to focus on metadata fusion. However, extensions to moving sources and microphones could be possible by using smaller processing frames, for example. Another extension would be to estimate the three-dimensional coordinates of the source. Finally, a possible extension for multiple-source localization is expanding the output of the DI-NN to a vector of size 2N, where N is the maximum number of sources, and performing Permutation Invariant Training (PIT) [55].

6 Conclusion

In this work, we proposed DI-NN, a simple yet effective way of jointly processing signals and relevant metadata using neural networks. Our results for the task of SSL on multiple simulated and recorded scenarios indicate that the DI-NN is able to successfully exploit the metadata, as its inclusion reduced the mean localization error by a factor of at least two compared to the CRNN baseline, while also significantly improving localization results in comparison with the classical LS algorithm in reverberant environments. Additional relevance and sensitivity studies revealed that the microphone coordinates are the most important metadata, and that the DI-NN is robust to realistic noise in the metadata.