
1 Introduction

Depth estimation is an important task in computer vision, since it forms the basis of many algorithms in applications such as 3D scene reconstruction [2, 38, 39, 47, 55] or autonomous driving [52, 57, 60], among others. Inferring depth from a single image is an inherently ill-posed problem due to a scale ambiguity: an object in an image would appear the same if it were twice as large and placed twice as far away [20]. Nevertheless, deep neural networks are able to provide reliable, dense depth estimates by learning relative object sizes from data [10]. To this end, there are two main learning paradigms: supervised training from dense [12, 49, 51] or sparse [34] ground truth depth maps, e.g. obtained by a Time-of-Flight [22, 43] sensor such as LiDAR [6], and self-supervised training, which exploits 3D geometric constraints to construct an auxiliary task of photometric consistency between different views of the same scene [13, 16, 17, 62]. The latter approach is particularly useful as it does not require ground truth depth images and can be applied to sequences of frames taken by an ordinary, off-the-shelf monocular camera.

Fig. 1.

(a) Top to bottom: input images from ScanNet [7] (scene 0019_00), predicted depths (far → close), predicted uncertainties (low → high); (b) Top: reconstructed scene from all dense depth predictions, bottom: reconstructed scene from filtered depths. Notice how the predicted uncertainty highlights regions (circled in red) for which the network would not have received a meaningful error signal during self-supervised training, and is therefore susceptible to mistakes. These can then be filtered using thresholded uncertainties as masks, leading to a sparser but more accurate scene reconstruction (Color figure online)

To reliably make use of the estimated depth in downstream tasks, a dense quantification of the uncertainty associated with the predictions is essential [28]. Consider the example given in Fig. 1, where the depth predictions for the overexposed, blank white walls are compromised (see red markings in Fig. 1a), leading to a noisy scene reconstruction as shown in the top part of Fig. 1b. To mitigate this, one can use the uncertainty maps to filter the potentially erroneous depth pixels and produce a sparser but more accurate mesh, cf. bottom of Fig. 1b. However, obtaining meaningful confidence values from a single image in a fully self-supervised learning setting is an especially challenging task, as the depth is only indirectly learnt. Consequently, the majority of existing uncertainty-aware methods are either trained in a supervised fashion [4, 31, 35, 50], assume that multiple views are available at test time [26], or model other types of uncertainty, e.g. on the photometric error [58].

The goal of our work is to extend self-supervised depth training with principled uncertainty estimation. To that end, we present Variational Depth Networks (VDN)—an entirely monocular, probabilistically motivated approach to depth uncertainty. It builds upon established self-supervised methods and leverages advancements in approximate Bayesian learning. Specifically, VDN extends MonoDepth2 [17] to model the depth as a continuous distribution, whose parameters are optimised using the framework of variational inference [18, 30]. As a result, the network learns to assign high uncertainty to regions for which the depth can vary a lot without significantly increasing the photometric error, and low uncertainty otherwise. Building on this idea, in Sect. 4 we also present a new method to quantitatively evaluate the utility of the uncertainty maps in a 3D reconstruction task using the ScanNet dataset [7], and benchmark the quality of the 2D depth predictions on the KITTI dataset [14]. In summary, our main contributions are as follows:

  • We propose VDN as a novel probabilistic framework for monocular, self-supervised depth estimation, which uses approximate Bayesian inference to learn a continuous, parametric distribution over the depth. The uncertainty is then expressed as the variance of this distribution.

  • We show qualitatively that the obtained uncertainty is more interpretable as it highlights regions in the image which are difficult to learn in a self-supervised setting.

  • We also demonstrate that high confidence predictions are more likely to be accurate. For that, we propose an evaluation scheme based on the task of 3D scene reconstruction, where the depth uncertainty is used to filter unreliable predictions before fusion.

2 Related Work

Self-supervised Uncertainty. Self-supervised learning for monocular depth estimation was originally proposed by Zhou et al. [62]. Their core idea is that a network that predicts the depth and relative pose of a video frame can be optimised by using the photometric consistency with warped neighbouring frames as a loss function. They also include an explainability mask in their network to account for moving objects and non-Lambertian surfaces, which can be interpreted as a form of uncertainty estimation. Later, Godard et al. [17] consolidated several improvements into a conceptually simple method called MonoDepth2, which did not include the explainability mask since it did not have a significant impact on the accuracy of the estimated depth in practice.

Klodt and Vedaldi [31] were the first to probabilistically model the depth, pose and photometric error, and to use the estimated uncertainties to down-weight regions in the image that violate the colour constancy assumption made by the photometric objective function. The depth and poses are modelled through Laplacian distributions, where the likelihoods of target depth and pose, obtained from a classical Structure-from-Motion system [36], are maximised. In contrast to their method, ours is self-contained, i.e. it does not rely on external teachers, and therefore its performance is not bounded by their quality. In an analogous way, Yang et al. [58] also model the photometric error as a Laplacian distribution, and show that its variance can be used to improve the downstream task of visual odometry [59].

Alternatively, depth estimation can be reframed as a discrete classification problem, as proposed by Johnston and Carneiro [24], which allows for computing the variance without any additional prediction head in the network. However, their approach does not have strong guarantees on the quality of the output distribution [19], and in practice the variance appears to correlate mostly inversely with the predicted disparity, except for the furthest regions in the image. On the other hand, Poggi et al. [42] present a comprehensive summary of various depth uncertainty estimation techniques for self-supervised learning and propose a combination of ensembling and self-teaching methods as an effective way to improve depth accuracy. They also propose evaluation metrics based on sparsification, which can be used to assess the quality of the predicted uncertainty. In our work we compare to baselines from both [24] and [42].

Last, the shortcomings of photometric uncertainty estimation in the context of Multi-View Stereo [44] are addressed by Xu et al. [56] with the goal of directly improving the predicted depth. In contrast, we aim for a monocular method with interpretable uncertainty values.

Supervised Uncertainty. A fully supervised probabilistic approach is taken by Liu et al. [35], where the authors update a discrete depth probability volume (DPV) for each image by fusing information from consecutive frames in an iterative Bayesian filtering fashion. Due to the discrete nature of the DPV, arbitrary distributions can be expressed; however, to obtain an initial estimate, one needs to compute a cost volume from a number of frames in a video sequence. Moreover, their confidence maps show banding artefacts originating from the discrete depth representation in the cost volume.

Whereas most prior work uses a Laplacian or Gaussian distribution to model the depth and its uncertainty, ProbDepthNet by Brickwedde et al. [4] uses a Gaussian mixture model (GMM). The main benefit of GMMs is that they can represent multi-modal distributions, which can occur in cases of foreground-background ambiguity. Walz et al. [50] propose a method for depth estimation on gated images and model the aleatoric depth uncertainty. Ke et al. [26] aim to improve scene reconstruction using depth uncertainty in a two-stage method: (i) predict rough depth and uncertainty estimates using optical flow and triangulation from multiple frames; and (ii) refine the outputs of the first stage in an iterative procedure based on recurrent neural networks.

3 Methods

3.1 Background and Motivation

Fundamentals. Let \(\mathcal {D}= \{I_t\}_{t=1\dots N}\) be a sequence of image frames and \(T_{t\rightarrow s}\) the corresponding 3D camera motion from a target frame t to a source frame s. Further, let K denote the camera intrinsic matrix, projecting from 3D camera coordinates to 2D pixel coordinates \(x \in \mathcal {X}\). Then, by exploiting 3D geometric constraints, one can cast the task of learning a depth map \(D_t\) for a frame \(I_t\) as a photometric consistency optimisation problem between the target and the warped source frames [13, 17, 62]:

$$\begin{aligned} \mathcal {L}_\textrm{photo}\left( I_t, D_t\right) = \sum _{x \in \mathcal {X}} \Vert I_s\langle K T_{t \rightarrow s} D_t(x) K^{-1}x\rangle - I_t(x) \Vert , \end{aligned}$$
(1)

where \(I_s\langle \cdot \rangle \) stands for a (bilinear) interpolation on the source frame \(I_s\), following the notation of [17]. For the sake of notational brevity, here and throughout the rest of the paper we omit the dependencies on K as well as on \(I_s\) and \(T_{t \rightarrow s}\) in the losses.

The estimated depth \(D_t\) is usually expressed as the inverse of the disparity output of a deterministic convolutional neural network \(\mu _\theta \), parametrised by weights \(\theta \):

$$\begin{aligned} D_t = {\mu _\theta (I_t)}^{-1}. \end{aligned}$$
(2)

For numerical reasons, the disparity output is activated by a sigmoid non-linearity and stretched to a predefined \(\left[ d_\textrm{max}^{-1}, d_\textrm{min}^{-1}\right] \) range. In practice, the loss from Eq. (1) is also extended to account for multiple source frames (e.g. using the minimum reprojection error [17]) and combined with other terms such as structural similarity [53] or smoothness regularisation [16, 17]. In this paper we will refer to the extended loss as \(\mathcal {L}_\textrm{photo}\) and to the full model as MonoDepth2. Importantly, this will serve as the base framework for monocular, self-supervised depth learning, upon which we introduce a probabilistic extension in Sect. 3.2.
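
To make the warping operation in Eq. (1) concrete, the following is a minimal PyTorch sketch of the back-projection, rigid transformation, re-projection and bilinear interpolation steps; the tensor shapes, function name and plain L1 penalty are illustrative assumptions rather than the exact MonoDepth2 implementation.

```python
# A minimal sketch of the per-pixel photometric loss in Eq. (1).
import torch
import torch.nn.functional as F

def photometric_loss(I_t, I_s, depth, K, T_t2s):
    """L1 photometric error between I_t and the warped source frame I_s.

    I_t, I_s: (B, 3, H, W) images; depth: (B, 1, H, W);
    K: (B, 3, 3) intrinsics; T_t2s: (B, 4, 4) relative pose.
    """
    B, _, H, W = I_t.shape
    # Pixel grid in homogeneous coordinates, shape (B, 3, H*W).
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(3, -1)
    pix = pix.to(I_t.device).unsqueeze(0).expand(B, -1, -1)

    # Back-project to 3D camera coordinates: X = D * K^{-1} x.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=cam.device)], dim=1)

    # Rigid transform into the source frame and pinhole re-projection.
    cam_s = (T_t2s @ cam_h)[:, :3]
    proj = K @ cam_s
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)

    # Normalise to [-1, 1] for grid_sample (the <.> interpolation in Eq. (1)).
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    I_s_warped = F.grid_sample(I_s, grid, align_corners=True, padding_mode="border")

    return (I_s_warped - I_t).abs().mean()
```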

Fig. 2.

(a) A sample input image from ScanNet [7] (scene 0000_00); (b) Photometric uncertainty; (c) Variational depth uncertainty; (low → high)

Uncertainty Estimation. Despite its widespread popularity, MonoDepth2 is not designed to account for the uncertainty associated with \(D_t\). Following the paradigm of modelling the aleatoric uncertainty explicitly [28], one can reframe the loss from Eq. (1) into an exponential family likelihood with a learnable variance \(\hat{\sigma }_\theta \):

$$\begin{aligned} p(I_t \mid D_t) \propto \frac{1}{\hat{\sigma }_\theta (I_t)} \exp {\left( -\frac{\mathcal {L}_\textrm{photo}\left( I_t, D_t\right) }{\hat{\sigma }_\theta (I_t)}\right) }, \end{aligned}$$
(3)

where we abuse the notation for the weights \(\theta \) and the neural network \(\hat{\sigma }\), which may share only some of its parameters with \(\mu \). At this point, it is important to clarify that \(\hat{\sigma }_\theta (I_t)\), as used in Eq. (3), accounts merely for the variance in the photometric error, \(\mathcal {L}_\textrm{photo}\), and not in the predicted depth \(D_t\). To give an intuitive explanation why the two uncertainties are not interchangeable, consider the following thought experiment: let all pixels in \(I_t\) and \(I_s\) have the same colour value. Then, for any predicted \(D_t\) and arbitrary \(T_{t \rightarrow s}\) we have that \(I_t(x) = I_s\langle K T_{t \rightarrow s} D_t(x) K^{-1}x\rangle = I_s(x),\, \forall x \in \mathcal {X}\), and the likelihood from Eq. (3) is maximised with \(\hat{\sigma }_\theta (I_t) \rightarrow 0\). Thus the photometric variance will collapse, while the actual depth variance is large.

In reality, this scenario can occur at large textureless surfaces, such as walls or overexposed regions close to light sources. Figures 2a and 2b show an example input and the corresponding photometric uncertainty. Notice how the network confidence is highest in the aforementioned regions and lowest on their boundaries or in high-frequency patterned areas, where small changes in \(D_t\) can substantially increase \(\mathcal {L}_\textrm{photo}\). Thus, the photometric variance does not necessarily correlate with the uncertainty in the depth estimate, and in some cases it is even complementary to the latter. On the other hand, VDN is able to assign high depth variance to those regions, cf. Fig. 2c.

Despite that, the photometric uncertainty has been reported to quantitatively improve the depth estimates [42, 58]. We hypothesise that this can be attributed to the effect of loss attenuation: the supervisory signal is no longer dominated by noise stemming from difficult, depth-sensitive areas such as non-Lambertian objects, similarly to the observations made by [28] in a supervised depth regression setup. Nevertheless, there are real-world applications, such as 3D scene reconstruction, where proper depth uncertainty estimation is of greater importance, as we will show experimentally in Sect. 4.
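
For illustration, the negative log of the likelihood in Eq. (3) corresponds to the loss-attenuated objective sketched below; this is a minimal sketch assuming per-pixel tensors and a hypothetical log-variance head, not the exact formulation of [42, 58].

```python
# A minimal sketch of the loss-attenuated photometric objective implied by Eq. (3).
import torch

def attenuated_photometric_loss(photo_err, log_sigma_hat):
    """photo_err: (B, 1, H, W) per-pixel photometric residual;
    log_sigma_hat: (B, 1, H, W) predicted log-variance of that residual."""
    sigma_hat = torch.exp(log_sigma_hat)
    # Dividing by sigma_hat down-weights noisy pixels, while the log(sigma_hat)
    # term penalises assigning large variance everywhere.
    return (photo_err / sigma_hat + log_sigma_hat).mean()
```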

3.2 Variational Depth Networks

Objective.   In the following, we will introduce a probabilistic extension to the self-supervised depth learning pipeline, in which the variance of the predicted depth maps can be reliably estimated. Intuitively speaking, we will assume that \(D_t\) is a random variable following some conditional distribution and we will make the image warping transformation in Eq. (1) aware of the probabilistic nature of \(D_t\). We find this intuition to fit well into the Bayesian framework of reasoning and we will leverage approximate variational inference [18, 25, 30] to optimise a parametric distribution over \(D_t\).

In essence, it requires that we specify a likelihood \(p({I_t}\mid {D_t})\), a prior \(p({D_t})\) and a posterior distribution \(p({D_t}\mid {I_t})\), to which a tractable approximation \(q_{\theta }({D_t}\mid {I_t})\) is fit. Then, using \(q_\theta \) we can derive a lower bound on the marginal log-likelihood:

$$\begin{aligned} \mathbb {E}_{p_\mathcal {D}}{[\log {p_\theta ({I_t})}]}&= \mathbb {E}_{p_\mathcal {D}}{\bigg [\log {\mathbb {E}_{q_\theta }{\bigg [\frac{p{({I_t}\mid {D_t})} p{({D_t})}}{q_{\theta }({D_t}\mid {I_t})}\bigg ]}\bigg ]}} \end{aligned}$$
(4)
$$\begin{aligned}&\ge \mathbb {E}_{p_\mathcal {D},q_\theta }{\bigg [\log {\frac{p({I_t}\mid {D_t}) p{({D_t})}}{q_{\theta }({D_t}\mid {I_t})}}\bigg ]}. \end{aligned}$$
(5)

This can be further decomposed into a log-likelihood term and a KL-divergence term, yielding the so-called evidence lower bound:

$$\begin{aligned} \begin{aligned} \mathcal {L}_\textrm{ELBO}(I_t, D_t)&= \mathbb {E}_{p_\mathcal {D},q_\theta }{[\log {p({I_t}\mid {D_t})}]} \\&\quad - \mathbb {E}_{p_\mathcal {D}}{[{\text {KL}}{(q_{\theta }({D_t}\mid {I_t})}\mid \mid {p{(D_t)}})]}. \end{aligned} \end{aligned}$$
(6)

One can show that maximising \(\mathcal {L}_\textrm{ELBO}\) w.r.t. \(\theta \) is equivalent to minimising \(\mathbb {E}_{p_\mathcal {D}}{[{\text {KL}}{(q_{\theta }({D_t}\mid {I_t})}\mid \mid {p{({D_t}\mid {I_t})}})]}\), thus closing the gap between the approximation and the underlying true posterior [18, 25]. For the likelihood of VDN we choose an unnormalised density as in Eq. (3); however, throughout this work we do not model the photometric and the depth uncertainty simultaneously, so as to isolate the effects of our contribution. In the subsequent sections we specify the exact form of \(q_{\theta }({D_t}\mid {I_t})\) and \(p{({D_t})}\).
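
A Monte Carlo estimate of the negative ELBO in Eq. (6) could be structured as sketched below; the posterior and prior objects with `rsample`/`log_prob` methods, the `photo_nll` helper and all argument names are illustrative assumptions rather than the paper's exact implementation.

```python
# A minimal Monte Carlo sketch of Eq. (6).
import torch

def elbo_loss(q, prior, photo_nll, I_t, I_s, K, T_t2s, num_samples=10):
    """q: posterior over disparity; prior: distribution over disparity;
    photo_nll: photometric negative log-likelihood, e.g. as sketched earlier."""
    log_lik, kl = 0.0, 0.0
    for _ in range(num_samples):
        disp = q.rsample()                       # reparametrised disparity sample
        depth = 1.0 / disp                       # Eq. (2): depth is the inverse disparity
        log_lik = log_lik - photo_nll(I_t, I_s, depth, K, T_t2s)
        # Monte Carlo KL estimate; the exact KL to a mixture prior is intractable.
        kl = kl + (q.log_prob(disp) - prior.log_prob(disp)).mean()
    return -(log_lik - kl) / num_samples         # minimise the negative ELBO
```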

Approximate Posterior.   In the context of depth estimation, one has to take into account two considerations when choosing a suitable family of variational distributions. First, it has to have a positive, bounded support over \([d_\textrm{min}, d_\textrm{max}]\) and, second, it has to allow for reparametrisation so that the weights \(\theta \) can be learnt with backpropagation. One such candidate distribution is given by the truncated normal distribution [5], constrained to the aforementioned interval, whose location parameter is defined by the output of the neural network \(\mu _\theta \) and the scale by \(\sigma _\theta \), similarly to Eq. (3). Unlike the photometric variance \(\hat{\sigma }_\theta \), \(\sigma _\theta \) will have a direct relation to the variance of the estimated depth. For numerical reasons, however, it may be beneficial to express the approximate posterior over disparity instead of depth [17], and convert disparity samples to depth as per Eq. (2):

$$\begin{aligned} q_{\theta }({D_t^{-1}}\mid {I_t}) = \mathcal {N}_\textrm{tr}\left( D_t^{-1} \mid \mu _\theta (I_t), \sigma _\theta (I_t), d_\textrm{max}^{-1}, d_\textrm{min}^{-1} \right) . \end{aligned}$$
(7)

Backpropagating to \(\mu _\theta \) and \(\sigma _\theta \) is possible through a reparametrisation using the inverse CDF, which is readily implemented in TensorFlow [1, 11] and in third-party packages [40] for PyTorch [41].
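
As an illustration of this inverse-CDF reparametrisation, the following sketch draws differentiable samples from a pixelwise truncated normal using only standard PyTorch operations; the function name and tensor shapes are assumptions.

```python
# A minimal sketch of reparametrised sampling from the truncated normal of Eq. (7).
import math
import torch

def sample_truncated_normal(mu, sigma, low, high):
    """mu, sigma: (B, 1, H, W) location/scale maps; low, high: truncation bounds."""
    def cdf(z):      # standard normal CDF
        return 0.5 * (1.0 + torch.erf(z / math.sqrt(2.0)))

    def icdf(p):     # standard normal inverse CDF
        return math.sqrt(2.0) * torch.erfinv(2.0 * p - 1.0)

    alpha, beta = (low - mu) / sigma, (high - mu) / sigma
    u = torch.rand_like(mu)                        # external noise, keeps the path differentiable
    p = cdf(alpha) + u * (cdf(beta) - cdf(alpha))  # map the noise into the truncated range
    return mu + sigma * icdf(p.clamp(1e-6, 1.0 - 1e-6))
```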

Here we assume that \(q_\theta \) is a pixelwise factorised distribution and we obtain a disparity prediction using the mode, \(\mu _\theta (I_t)\). Since we have defined a distribution over the disparity, it is not straightforward to obtain the mode of the transformed distribution over the depth, \(q_\theta ^{-1}\). Fortunately, however, for the given truncated normal parametrisation and the reciprocal transformation, one can compute it analytically from the density of \(q_\theta ^{-1}\) using the change of variables formula, see Appendix A.2 for details:

$$\begin{aligned} \begin{aligned} \textrm{mode}\left( q_\theta ^{-1}\left( D_t\mid I_t\right) \right) =\min \left( \max \left( m, d_\textrm{min} \right) , d_\textrm{max} \right) , \\ \text {where}\quad m = \frac{\sqrt{\mu _\theta (I_t)^2 + 8\sigma _\theta (I_t)^2} - \mu _\theta (I_t)}{4\sigma _\theta (I_t)^2}. \end{aligned} \end{aligned}$$
(8)

Finally, to obtain the estimated pixelwise depth uncertainty, one can compute the sample variance of \(q_\theta ^{-1}\).
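
A hedged sketch of Eq. (8) and of the sample-based depth variance is given below; it reuses the illustrative sampler from the previous sketch and assumes elementwise tensors for the location and scale maps.

```python
# A minimal sketch of the depth mode of Eq. (8) and the sample-based uncertainty.
import torch

def depth_mode(mu, sigma, d_min, d_max):
    """Mode of the reciprocal of the truncated normal disparity posterior, Eq. (8)."""
    m = (torch.sqrt(mu ** 2 + 8.0 * sigma ** 2) - mu) / (4.0 * sigma ** 2)
    return m.clamp(min=d_min, max=d_max)

def depth_uncertainty(mu, sigma, d_min, d_max, num_samples=100):
    """Pixelwise variance of the depth, estimated from disparity samples."""
    samples = torch.stack([
        1.0 / sample_truncated_normal(mu, sigma, 1.0 / d_max, 1.0 / d_min)
        for _ in range(num_samples)
    ])
    return samples.var(dim=0)
```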

Fig. 3.

Model overview of VDN with an example input from ScanNet [7] (scene 0000_00). Given a target image \(I_t\), the subnetworks \(\mu _\theta \) and \(\sigma _\theta \) predict the pixelwise location and scale parameters of the approximate posterior, resulting in a factorised distribution over disparities. Then, multiple samples are drawn and the reciprocal of each is used independently in a warping transformation of a source image \(I_s\), assuming known intrinsics K and pose \(T_{t \rightarrow s}\). The warped and interpolated source frames are used to compute the likelihood. The prior is given by the predicted location and scale parameters from a set of pseudo-inputs \(U_i\) as per [48]. The arrow denotes a sampling operation

Prior. The choice of depth prior is particularly important for us because it can adversely bias the shape of the variational posterior. To understand the reason for that, one has to compare the VDN model with a regular VAE [30]: while both models encode the input image into a latent representation, a VDN does not use a learnable decoder to form the likelihood but rather a fixed warping transformation. This means that a bias in the latent space cannot be compensated for during decoding, resulting in hindered weight optimisation. For this reason we opt for a learnable prior given by the aggregated approximate posterior, which is provably the optimal prior for this task [46, 48], see Appendix A.1 for details:

$$\begin{aligned} p^*\left( D_t\right) = \sum _{I_t \in \mathcal {D}} q_{\theta }({D_t}\mid {I_t}) p_\mathcal {D}({I_t}). \end{aligned}$$
(9)

Unfortunately, however, the estimation of the aggregate posterior is computationally prohibitive for large, high-dimensional datasets. Therefore, we employ an approximation by Tomczak et al. [48], called VampPrior, where the prior is given as a mixture of the variational posteriors computed on a set of learnable pseudo inputs \(\{U_i\}_{i=1\dots k}\):

$$\begin{aligned} p{(D_t)} \approx \frac{1}{k}\sum _{i=1}^k q_{\theta }({D_t}\mid {U_i}). \end{aligned}$$
(10)

Earlier, we expressed the approximate posterior in disparity- rather than depth-space and consequently the prior becomes a mixture distribution over disparities too. Since the KL-divergence is invariant to continuous, invertible transformations [33] (such as the reciprocal relation of depth and disparity), one can compute \({\text {KL}}{(q_{\theta }({D_t^{-1}}\mid {I_t})}\mid \mid {p{(D_t^{-1})})}\) instead. In summary, all of the components of VDN are presented in Fig. 3.
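
For concreteness, the log-density of the VampPrior mixture in Eq. (10), as needed for a Monte Carlo KL estimate, could be evaluated as in the following sketch; the encoder interface and the `log_prob_truncated_normal` helper are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the VampPrior log-density of Eq. (10).
import math
import torch

def vampprior_log_prob(disp, encoder, pseudo_inputs, low, high):
    """disp: (B, 1, H, W) disparity sample; pseudo_inputs: (k, 3, H, W) learnable images.

    `encoder` is assumed to return (mu, sigma) maps and `log_prob_truncated_normal`
    the elementwise log-density of the truncated normal from Eq. (7)."""
    log_probs = []
    for u in pseudo_inputs:                       # one mixture component per pseudo-input
        mu_u, sigma_u = encoder(u.unsqueeze(0))   # component parameters q(. | U_i)
        log_probs.append(log_prob_truncated_normal(disp, mu_u, sigma_u, low, high))
    stacked = torch.stack(log_probs, dim=0)       # (k, B, 1, H, W)
    # Log of the equal-weight mixture: logsumexp over components minus log k.
    return torch.logsumexp(stacked, dim=0) - math.log(len(pseudo_inputs))
```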

4 Experiments

4.1 Setup

Datasets

ScanNet. The ScanNet [7] dataset contains 1513 video sequences collected in indoor environments, annotated with 3D poses, dense depth maps and reconstructed meshes. We use this dataset to evaluate the per-image depth and uncertainty estimation and to assess the utility of the uncertainty in 3D reconstruction. For training we only consider every \(10^{\text {th}}\) frame as target to reduce redundancy and, for each, we find a source frame both backwards and forwards in time with a relative translation of 5–10 cm and a relative rotation of at most 5\(^\circ \). All images are resized to \(384\times 256\) pixels. We use the ground-truth poses to compute the photometric error and do not train a network to predict the pose, since we want to focus our analysis in this experiment on the depth and uncertainty estimates only.

KITTI. The KITTI dataset [14] is an established benchmark dataset for depth estimation research and consists of 61 sequences collected from a vehicle. Following [17], we use the Eigen split [12], resize the input images to \(640\times 192\) and evaluate against LiDAR ground-truth capped at 80 m. Unlike the ScanNet experiments, here the camera poses are learnt the same way as in [17] so as to allow for fair comparison.

Metrics

3D. Previous works on 3D reconstruction [3, 37, 45] use point-to-point distances as the basis for comparing to ground-truth meshes. They convert each mesh to a point cloud by only considering its vertices, or by sampling points on the faces, essentially discarding the surface information of the mesh. However, if a predicted point lies on the surface of the ground-truth mesh it can still incur a non-zero error since only the distance to the closest vertex is considered. To mitigate this, we propose to use a cloud-to-mesh (\(\textrm{c}\rightarrow \textrm{m}\)) distance as a basis for our 3D reconstruction error computation, which is readily available in open-source software like CloudCompare [15]. Given a mesh \(\mathcal {M}=(\mathcal {V},\mathcal {F})\), where \(\mathcal {V}\) denotes the vertices and \(\mathcal {F}\) the faces, we compute the accuracy as the fraction of vertices for which the Euclidean distance to the closest face \(f' \in \mathcal {F}'\) in another mesh \(\mathcal {M}'\) is smaller than a threshold \(\epsilon \):

$$\begin{aligned} \textrm{acc}_{\textrm{c}\rightarrow \textrm{m}}\left( \mathcal {M}, \mathcal {M}'\right) = \frac{1}{\left|{\mathcal {V}}\right|}\sum _{v \in \mathcal {V}} \mathbbm {1}\left[ \min _{f' \in \mathcal {F}'}\textrm{dist}\left( v, f'\right) < \epsilon \right] . \end{aligned}$$
(11)

Here \(\mathbbm {1}[\cdot ]\) denotes the indicator function. Given predicted and ground-truth meshes, \(\mathcal {M}_\textrm{pred}\) and \(\mathcal {M}_\textrm{gt}\) respectively, we define the precision as \(\textrm{acc}_{\textrm{c}\rightarrow \textrm{m}}(\mathcal {M}_{\textrm{pred}}, \mathcal {M}_{\textrm{gt}})\) and the recall as \(\textrm{acc}_{\textrm{c}\rightarrow \textrm{m}}(\mathcal {M}_{\textrm{gt}}, \mathcal {M}_{\textrm{pred}})\). The F-score is the harmonic mean of the precision and recall [32]. Following standard practices in 3D reconstruction literature [37, 45] we use a threshold of \(\epsilon =5\,\text {cm}\) in all our evaluations.
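
Although we compute the cloud-to-mesh distances with CloudCompare [15], the metric of Eq. (11) and the resulting F-score can be sketched as follows, here assuming the trimesh library as an illustrative stand-in.

```python
# A hedged sketch of the cloud-to-mesh accuracy of Eq. (11) and the F-score.
import numpy as np
import trimesh

def acc_cloud_to_mesh(mesh_from, mesh_to, eps=0.05):
    """Fraction of vertices of mesh_from within eps metres of mesh_to's surface."""
    _, distances, _ = trimesh.proximity.closest_point(mesh_to, mesh_from.vertices)
    return float(np.mean(distances < eps))

def f_score(mesh_pred, mesh_gt, eps=0.05):
    precision = acc_cloud_to_mesh(mesh_pred, mesh_gt, eps)
    recall = acc_cloud_to_mesh(mesh_gt, mesh_pred, eps)
    return 2.0 * precision * recall / (precision + recall + 1e-12)
```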

2D. For the evaluation of the predicted 2D depth maps we compute the widely used metrics proposed by Eigen et al. [12]. Uncertainty is evaluated using sparsification curves [23] and the Area Under the Sparsification Error (AUSE) and Area Under the Random Gain (AURG) metrics, as proposed by Poggi et al. [42]. Note that AURG and AUSE are computed w.r.t. another 2D depth metric, and therefore a comparison among different models is fair only when they perform similarly on that metric too.
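
As a reference for how the sparsification-based metrics are constructed, the sketch below computes a sparsification curve from flattened per-pixel errors and uncertainties; the step count and array layout are illustrative assumptions.

```python
# A minimal sketch of a sparsification curve as used for AUSE/AURG.
import numpy as np

def sparsification_curve(errors, uncertainty, num_steps=20):
    """Mean error over the pixels kept after removing the most uncertain fraction."""
    order = np.argsort(-uncertainty)                 # most uncertain pixels first
    errors = errors[order]
    fractions = np.linspace(0.0, 0.99, num_steps)
    return np.array([errors[int(f * len(errors)):].mean() for f in fractions])

# AUSE is the area between this curve and the oracle curve (pixels sorted by the
# true error instead); AURG compares the curve against random pixel removal.
```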

Implementation Details

Network Architectures and Training Details. Even though our model architecture closely follows [17], there are a couple of deviations. In particular, to accommodate the prediction of the distribution location and scale parameters, we duplicate the original disparity decoder architecture and, for the scale parameter only, change the output activation to linear. To avoid numerical instability issues with the scale, we clip it to the \([10^{-6}, 3]\) interval. In all our experiments we use a ResNet-18 encoder [21], pretrained on ImageNet [9], and the Adam optimiser [29] with an initial learning rate of \(10^{-4}\), which we reduce by a factor of 10 after 30 epochs, for a total of 40 epochs. The VampPrior for our VDN models is computed as described in Sect. 3.2 with 20 pseudo-inputs, which we initialise by broadcasting a random colour value over the height and width dimensions. To estimate the loss \(\mathcal {L}_\textrm{ELBO}\) from Eq. (6), the approximate posterior is sampled 10 times.

3D Reconstruction. We use the TSDF-fusion algorithm implemented in Open3D [61] to reconstruct ScanNet [7] scenes. To speed up reconstruction, we only integrate every \(10^{\text {th}}\) frame and, during fusion, we use a voxel size of 5 cm and a truncation distance of 20 cm. For evaluation we use the ground-truth meshes provided with the dataset.

4.2 ScanNet: Uncertainty-Aware Reconstruction

To evaluate the usefulness of the predicted uncertainty we use the task of 3D reconstruction on ScanNet [7] scenes. In this experiment we leverage the depth uncertainty for measurement selection by masking out pixels with uncertainty above a preselected threshold during the integration process. We compare our method to several other recently proposed depth uncertainty estimation methods, all implemented on top of the same MonoDepth2 framework. Photometric uncertainty refers to Eq. (3), which is used by D3VO [58] to improve visual odometry. Self-teaching refers to the method proposed by Poggi et al. [42], where we use the model without uncertainty as a teacher for training the student network in a supervised way. Discrete depth predicts a discrete disparity volume [24], from which continuous depth and variance can be derived. Each of these methods constitutes a fair baseline, as all are fully self-supervised and monocular.
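
One possible realisation of the uncertainty-aware integration, assuming Open3D's legacy ScalableTSDFVolume and per-frame uncertainty maps, is sketched below; the threshold value and data layout are illustrative, not our exact pipeline.

```python
# A minimal sketch of uncertainty-masked TSDF fusion with Open3D.
import numpy as np
import open3d as o3d

def fuse(frames, intrinsic, threshold=0.1, voxel=0.05, trunc=0.2):
    """frames: iterable of (color, depth, uncertainty, extrinsic) per keyframe.
    color: (H, W, 3) uint8; depth, uncertainty: (H, W) float arrays in metres;
    intrinsic: o3d.camera.PinholeCameraIntrinsic; extrinsic: (4, 4) pose."""
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel, sdf_trunc=trunc,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for color, depth, uncertainty, extrinsic in frames:
        depth = depth.copy()
        depth[uncertainty > threshold] = 0.0          # zero-depth pixels are ignored by TSDF fusion
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(color), o3d.geometry.Image(depth.astype(np.float32)),
            depth_scale=1.0, depth_trunc=10.0, convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, extrinsic)
    return volume.extract_triangle_mesh()
```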

Fig. 4.

ScanNet: mean reconstruction precision (a) and recall (b) as well as 2D depth RMSE (c) curves on the validation set for various filtering thresholds on the uncertainty. A monotonically decreasing precision curve indicates that the uncertainty correlates well with the errors in the depth maps used for fusion while a higher recall means that smaller portions of the geometry are being removed

Table 1. ScanNet: 2D depth and 3D reconstruction metrics. All methods are based on the same MonoDepth2 architecture and are our own (re)implementations. \(\uparrow \) and \(\downarrow \) denote whether a higher or lower score is better

Table 1 summarises the results for the standard 2D depth and 3D reconstruction metrics. First, we note that the photometric uncertainty performs considerably worse than the other methods. Discrete depth performs generally on par with the No uncertainty baseline. VDN slightly outperforms all baselines on most metrics. Figure 4a shows the mean reconstruction precision when increasing the uncertainty threshold at which predictions are considered valid. We expect to see a downwards trend, as using more uncertain predictions should decrease the accuracy of the reconstructed mesh. Here, the photometric uncertainty does not show this behaviour, whereas the variational and discrete uncertainties do, with discrete generally having a higher precision everywhere except when using more than 90% of the pixels. Conversely, Fig. 4b shows the mean reconstruction recall, where a rapid increase signifies that larger pieces of the scene geometry are being cut out. For the sake of completeness, in Fig. 4c we also provide similar plots for the mean RMSE as measured on 2D depth images. Figures 5a and 5b show reconstructions from ground-truth and predicted depths for all uncertainty-aware baselines, and Figs. 5c and 5d depict the corresponding cloud-to-mesh distances and uncertainties. Notice how the photometric uncertainty anti-correlates with the precision, while the discrete depth merely increases the uncertainty with the distance from the camera. The output of the self-teaching model is not very interpretable either, as it models the aleatoric noise in the teacher network. More qualitative examples are provided in Appendix B.

Fig. 5.

(a) Meshes constructed using the ground-truth depth maps from ScanNet [7] (scene 0019_00); (b) Coloured meshes using the predicted depths; (c) Meshes from predicted depth, coloured by the cloud-to-mesh distances from the ground-truth; (d) Meshes from predicted depth, coloured by the depth uncertainty; (low → high)

4.3 ScanNet: Prior Ablation Study

To investigate the adverse effects of naively specifying a prior distribution over the disparity, we compare the VampPrior against a truncated normal distribution with fixed location and scale parameters of 0.5 and 2.0 respectively, in two training scenarios: on the full training data and on a subset of only 10 scenes. The latter setup is especially interesting because it exacerbates any undesirable influence the prior might have on the approximate posterior due to the lack of sufficient training data. While both priors are capable of regularising the spread of the variational posterior, the VampPrior shows superior results, as presented in the bottom half of Table 1. In particular, in the low-data regime, it achieves significantly better scores on most metrics.

Fig. 6.

(a) Sample input images from the Eigen test split [12] in KITTI [14]; (b) Predicted disparities; (c) Predicted disparity variance; (d) Estimated depth variance using 100 samples

Table 2. KITTI: 2D depth and uncertainty evaluation results on the Eigen test split [12] with raw LiDAR ground truth (80 m)

4.4 KITTI: 2D Depth Evaluation

In order to benchmark VDN on the KITTI dataset [14] against comparable prior work, we have selected as baselines the original MonoDepth2 [17], referred to as No uncertainty, the MonoDepth2 (Boot+Self) from Poggi et al. [42], which accounts for depth uncertainty through self-teaching and bootstrapped ensemble learning, and the Photometric uncertainty baseline also presented in [42] under the name MonoDepth2-Log. Table 2 shows the depth and uncertainty results for VDN and the baselines. Our model performs slightly worse than the baselines except on the RMSE-AUSE and AURG metrics, which we attribute to the increased amount of noise during training, stemming from the stochastic sampling operations. Figure 6a shows three example inputs from the test set, with the corresponding predicted disparity location and scale parameters in Figs. 6b and 6c. The resulting depth uncertainty is illustrated in Fig. 6d, which highlights the depth ambiguity of the sky and of distant, indistinguishable objects.

5 Conclusions

We have presented a probabilistic extension of MonoDepth2, which learns a parametric posterior distribution over depth. The method yields useful uncertainty estimates, which correlate well with the error in the depth predictions; consequently, we have shown that one can use the uncertainty to mask out unreliable pixels and improve the precision of meshes in a 3D scene reconstruction task. Such masking, however, can come at the cost of decreased recall, resulting in sparser meshes. It is therefore a promising direction for future work to combine our method with a disparity [27] or mesh completion algorithm [8]. Other extensions of our work could combine the photometric and variational depth uncertainties, as the former is complementary to the latter, or apply VDN to multi-view, self-supervised depth estimation [54]. Finally, we note that due to its stochastic nature, our method is moderately demanding on computation and memory resources during training, as an additional forward pass is needed for the VampPrior, and multiple samples are drawn from the approximate posterior to estimate the likelihood and KL-divergence terms of the loss. In addition, the depth uncertainty is computed from samples of the transformed disparity posterior. For the training and evaluation of all models we have used a single NVIDIA RTX A5000 GPU with 24 GB of memory.