1 Introduction

Semantic segmentation of anatomical structures and pathologies is a crucial step in clinical diagnosis and many downstream tasks. The majority of recent automated segmentation methods treat the problem as a one-to-one mapping from image to output mask (e.g. [6]). However, medical segmentation problems are often characterised by ambiguities, and multiple hypotheses may be plausible [10]. This is in part due to inherent uncertainties such as poor contrast or other restrictions imposed by the image acquisition, but also due to variations in annotation “styles” between different experts. To account for such ambiguities, it is crucial that prediction systems provide access to the full distribution of plausible outcomes without sacrificing accuracy. Predicting only the most likely hypothesis may lead to misdiagnoses and may negatively affect downstream tasks.

Recent work proposed to account for the uncertainty in the learned model parameters using approximate Bayesian inference over the network weights [2]. However, it was shown that this method may produce samples that vary pixel by pixel and thus may not capture complex correlation structures in the distribution of segmentations [4]. A different line of work accounts for the possibility of different outcomes by training an ensemble of M networks [5] or by training a single network with M heads [7]. Both approaches, however, can only produce a fixed number of hypotheses. This limitation is overcome by the conditional variational autoencoder (cVAE), an extension of [3] for modelling conditional segmentation masks given an input image [8]. Finally, the recently proposed probabilistic U-NET combines the cVAE framework with a U-NET architecture [4]. The authors showed that, given ground-truth annotations from multiple experts, the method can produce an unlimited number of realistic segmentation samples. Moreover, it was shown to outperform various related methods including network ensembles, M-heads [7] and the Bayesian SegNet [2].

Fig. 1. (Left) Example of hierarchical segmentation generation, with the segmentation output at each level (\(\hat{s}_\ell\)) in the top row, right to left, and the corresponding residual refinements to the prostate peripheral zone class in the bottom row. (Right) The corresponding graphical model (for N independent samples).

However, as we will show, the probabilistic U-NET produces samples with limited diversity. We believe this is because stochasticity is only introduced at the highest resolution level of the U-NET, and because the network can choose to ignore the random draws from the latent space, since they are merely concatenated to the feature channels. In this work, we propose a novel hierarchical probabilistic model which can produce segmentation samples closely matching the ground-truth distribution of a number of annotators. Inspired by Laplacian pyramids, the model generates image-conditional segmentation samples by producing the output at a low resolution and then progressively refining the distribution of segmentations at increasingly higher resolutions. In contrast to prior work, the variations at each resolution level are governed by a separate latent variable, thereby avoiding the problems mentioned above. This process is illustrated in Fig. 1. We show that, compared to recent work, our proposed Probabilistic Hierarchical Segmentation (PHiSeg) produces samples of significantly better quality for two challenging segmentation tasks, both when trained with multiple annotations and with a single annotation per image. Furthermore, the mean prediction of our model performs on par with the standard U-NET in terms of segmentation accuracy.

2 Methods

We start by assuming that the segmentations \(\mathbf{s}\) given an input image \(\mathbf{x}\) are generated from L levels of latent variables \(\mathbf{z}_\ell\) according to the graphical model shown in Fig. 1. Thus, the conditional distribution \(p(\mathbf{s}|\mathbf{x})\) is given by the following expression for the general case of L latent levels:

$$\begin{aligned} p(\mathbf{s}|\mathbf{x}) = \int p(\mathbf{s}|\mathbf{z}_1, \dots, \mathbf{z}_L)\,p(\mathbf{z}_1|\mathbf{z}_2,\mathbf{x})\cdots p(\mathbf{z}_{L-1}|\mathbf{z}_L,\mathbf{x})\,p(\mathbf{z}_L|\mathbf{x})\,d\mathbf{z}_1\cdots d\mathbf{z}_L. \end{aligned}$$
(1)

We further assume that each latent variable \(\mathbf{z}_\ell\) is responsible for modelling the conditional target segmentations at \(2^{-\ell+1}\) of the original image resolution (e.g. \(\mathbf{z}_1\) and \(\mathbf{z}_3\) model the segmentation at the original resolution and at 1/4 of it, respectively). This does not follow from the graphical model itself but is rather enforced by our implementation thereof, as will become clear shortly.
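
As a concrete illustration of this \(\ell\)-to-resolution mapping (the function name and shapes below are ours, for illustration only):

```python
# Spatial extent modelled by latent level l (1-indexed): 2^{-(l-1)} of the
# input resolution, so l = 1 is full resolution and l = 3 is 1/4 resolution.
def latent_resolution(image_shape, level):
    factor = 2 ** (level - 1)
    return tuple(s // factor for s in image_shape)

# Example for a 192x192 input, matching the text above.
assert latent_resolution((192, 192), 1) == (192, 192)
assert latent_resolution((192, 192), 3) == (48, 48)
```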

We aim to approximate the posterior distribution \(p(\mathbf{z}|\mathbf{s},\mathbf{x})\) using a variational approximation \(q(\mathbf{z}|\mathbf{s},\mathbf{x})\), where \(\mathbf{z}\) denotes \(\{\mathbf{z}_1,\dots,\mathbf{z}_L\}\). It can be shown that \(\log p(\mathbf{s}|\mathbf{x}) = \mathcal{L}(\mathbf{s}|\mathbf{x}) + \text{KL}(q(\mathbf{z}|\mathbf{s},\mathbf{x})\,||\,p(\mathbf{z}|\mathbf{s},\mathbf{x}))\), where \(\mathcal{L}\) denotes the evidence lower bound and \(\text{KL}(\cdot,\cdot)\) the Kullback-Leibler divergence [3, 4, 8]. Since \(\text{KL}(\cdot,\cdot)\ge 0\), \(\mathcal{L}\) is a lower bound on the conditional log probability, with equality when the approximation q matches the posterior exactly. Using the decomposition in Eq. 1 we find that for our model

$$\begin{aligned} \mathcal{L} =\,& \mathbb{E}_{q(\mathbf{z}_1,\dots,\mathbf{z}_L|\mathbf{x},\mathbf{s})} \left[ \log p(\mathbf{s}|\mathbf{z}_1,\dots,\mathbf{z}_L)\right] - \alpha_L \,\text{KL}\left[ q(\mathbf{z}_L|\mathbf{s},\mathbf{x})\,||\,p(\mathbf{z}_L|\mathbf{x})\right] \\ & - \sum_{\ell=1}^{L-1} \alpha_\ell \,\mathbb{E}_{q(\mathbf{z}_{\ell+1}|\mathbf{s},\mathbf{x})} \left[ \text{KL}\left[ q(\mathbf{z}_\ell|\mathbf{z}_{\ell+1}, \mathbf{s}, \mathbf{x}) \,||\, p(\mathbf{z}_\ell|\mathbf{z}_{\ell+1}, \mathbf{x}) \right] \right], \end{aligned}$$
(2)

with \(\alpha_\ell=1\). A complete derivation can be found in Appendix A. The \(\alpha_\ell\) are additional heuristic weights which we introduce to account for dimensionality differences between the \(\mathbf{z}_\ell\) (explained below). Following standard practice, we parametrise the prior and posterior distributions as axis-aligned normal distributions \(\mathcal{N}(\mathbf{z}|\mu, \sigma)\). Specifically, we define

$$\begin{aligned} p(\mathbf{z}_\ell|\mathbf{z}_{\ell+1}, \mathbf{x})&= \mathcal{N}\left(\mathbf{z}_\ell\,|\,\phi^{(\mu)}_\ell(\mathbf{z}_{\ell+1},\mathbf{x}),\, \phi^{(\sigma)}_\ell(\mathbf{z}_{\ell+1},\mathbf{x})\right) \end{aligned}$$
(3)
$$\begin{aligned} q(\mathbf{z}_\ell|\mathbf{z}_{\ell+1}, \mathbf{x}, \mathbf{s})&= \mathcal{N}\left(\mathbf{z}_\ell\,|\,\theta^{(\mu)}_\ell(\mathbf{z}_{\ell+1}, \mathbf{s}, \mathbf{x}),\, \theta^{(\sigma)}_\ell(\mathbf{z}_{\ell+1}, \mathbf{s}, \mathbf{x})\right), \end{aligned}$$
(4)

where the \(\phi, \theta\) are functions parametrised by neural networks. Note that, in contrast to the variational autoencoder [3], the priors \(p(\mathbf{z}_\ell|\cdot, \mathbf{x})\) are also parametrised by neural networks, similar to [4, 8]. Lastly, we model \(p(\mathbf{s}|\mathbf{z})\) as the usual categorical distribution with parameters (i.e. softmax probabilities) predicted by another neural network. With all distributions parametrised by neural networks, the model can be seen as a hierarchical conditional variational autoencoder, with the posteriors \(q(\mathbf{z}_\ell|\cdot, \mathbf{s}, \mathbf{x})\) and priors \(p(\mathbf{z}_\ell|\cdot, \mathbf{x})\) encoding \(\mathbf{x}\) and \(\mathbf{s}\) into latent representations \(\mathbf{z}_\ell\), and the likelihood \(p(\mathbf{s}|\mathbf{z})\) acting as the decoder. Our implementation of this model using a neural network for \(L=3\) is shown in Fig. 2. As can be seen there, the total number of resolution levels of the network (i.e. the number of downsampling steps plus one) can be larger than the number of latent levels. The example in Fig. 2 has a total of 4 resolution levels, of which only \(L=3\) are latent levels. We obtained the best results with 7 total resolution levels, of which \(L=5\) are latent levels. The prior and posterior nets have identical structure but do not share any weights. Similar to previous work, all three subnetworks are used during training, but only the prior and likelihood networks are used at test time [3, 4, 8].
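
A minimal PyTorch sketch of one such Gaussian level is shown below. The paper specifies only that the \(\phi, \theta\) are neural networks; the channel widths, the log-\(\sigma\) parametrisation and the reparametrised draw are our assumptions.

```python
import torch
import torch.nn as nn

class GaussianLevel(nn.Module):
    """One level of the prior (or posterior) net: maps the incoming features
    (encoded x, plus s for the posterior, plus the sample from the coarser
    level) to mu and sigma of an axis-aligned Gaussian over z_l."""

    def __init__(self, in_channels, latent_dim=2, hidden=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.mu = nn.Conv2d(hidden, latent_dim, 1)         # phi^(mu)_l / theta^(mu)_l
        self.log_sigma = nn.Conv2d(hidden, latent_dim, 1)  # log-parametrised sigma

    def forward(self, h):
        h = self.features(h)
        mu, log_sigma = self.mu(h), self.log_sigma(h)
        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)  # reparametrised sample
        return z, mu, log_sigma
```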

As Fig. 2 shows, the latent variables \(\mathbf{z}_\ell\) form the skip connections in a U-NET-like architecture. However, unlike in [6] and [4], each skip connection corresponds to a latent variable \(\mathbf{z}_\ell\), such that no information can flow from the image to the segmentation output without passing through a sampling step. We do not map the latent variables to a 1-D vector but rather choose to keep the structured relationship between the variables; we found that this substantially improves segmentation accuracy. As a result, latent variable \(\mathbf{z}_\ell\) has a dimensionality of \(r_x 2^{-\ell+1}\times r_y 2^{-\ell+1} \times D\), where \(r_x, r_y\) are the dimensions of the input images and D is a hyperparameter (\(D=2\) for all experiments). The latent variable \(\mathbf{z}_\ell\) is limited to modelling the data at \(2^{-\ell+1}\) of the original resolution by the downsampling operations preceding it. It passes its learned representation up to the latent space embedding above (\(\mathbf{z}_{\ell-1}\)), which performs a refinement at double the resolution, and this continues until the top level is reached. To further enforce this behaviour, the likelihood network is designed to generate only residual changes to the segmentation masks for all \(\mathbf{z}_\ell\) except the bottom one. This is achieved through the addition layers before the outputs (see Fig. 2). Our model bears some resemblance to the Ladder Network [9], which is also a hierarchical latent variable model whose inference results in an autoencoder with skip connections. Our work differs substantially in how inference is performed, and, to our knowledge, the Ladder Network has never been applied to structured prediction problems.
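
The residual refinement in the likelihood network can be sketched as follows, assuming hypothetical `level_decoders` that map each \(\mathbf{z}_\ell\) to per-class logits at its own resolution (their design is not specified here):

```python
import torch.nn.functional as F

def decode_residual(z_list, level_decoders):
    """Likelihood network sketch: the coarsest level produces an initial
    logit map; every finer level adds a residual correction at twice the
    resolution (the addition layers in Fig. 2). z_list is ordered
    fine-to-coarse, i.e. [z_1, ..., z_L]."""
    logits = level_decoders[-1](z_list[-1])  # coarsest prediction (from z_L)
    for z, dec in zip(reversed(z_list[:-1]), reversed(level_decoders[:-1])):
        logits = F.interpolate(logits, scale_factor=2, mode="nearest")
        logits = logits + dec(z)             # residual refinement at level l
    return logits                            # logits at the resolution of z_1
```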

Fig. 2. Schematic network architecture of the proposed method for \(L=3\) latent levels and 4 resolution levels.

Training and Predictions: We aim to find the neural network parameters which maximise the lower bound \(\mathcal{L}\) in Eq. 2. The analytical form of the individual terms is prescribed by our model assumptions: since both the posterior and the prior are modelled by normal distributions, the KL terms can be calculated analytically [3]. Our choice of likelihood results in a cross-entropy term \(\text{CE}(\hat{\mathbf{s}}_1, \mathbf{s}_{gt})\), with \(\hat{\mathbf{s}}_1\) the predicted segmentation and \(\mathbf{s}_{gt}\) the corresponding ground truth. As in previous work, we found it sufficient to evaluate all of the expectations using a single sample [3]. Two deviations from the above theory were necessary for stable training. First, the magnitude of the KL terms depends on the dimensionality of \(\mathbf{z}_\ell\), which in our model changes by a factor of 4 between adjacent levels; this led to optimisation problems. To counteract this, we heuristically set the weights \(\alpha_\ell=2^{\ell-1}\) in Eq. 2. Secondly, to enforce the desired behaviour that \(\mathbf{z}_\ell\) should only model the data at its corresponding resolution, we added deep supervision to the output of each resolution level (\(\hat{\mathbf{s}}_\ell\) in Fig. 2). The cost function used for this is again the cross-entropy loss, \(\text{CE}(ups(\hat{\mathbf{s}}_\ell), \mathbf{s}_{gt})\) for \(\ell > 1\), where \(ups(\cdot)\) denotes nearest-neighbour upsampling to match the size of \(\mathbf{s}_{gt}\). While \(\mathbf{z}_\ell\) can only model the data at a certain resolution, it may ignore this responsibility and focus solely on matching the prior and posterior; deep supervision effectively prevents this behaviour.
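
Putting these pieces together, a sketch of the resulting training loss is given below: analytic KL terms between diagonal Gaussians, the cross-entropy likelihood term, deep supervision on the upsampled per-level outputs, and the heuristic weights \(\alpha_\ell = 2^{\ell-1}\). Tensor shapes and reductions are our assumptions.

```python
import torch
import torch.nn.functional as F

def kl_diag_gaussians(mu_q, log_sigma_q, mu_p, log_sigma_p):
    # Analytic KL(q || p) for axis-aligned Gaussians, summed over the latent
    # dimensions of z_l and averaged over the batch.
    var_q, var_p = torch.exp(2 * log_sigma_q), torch.exp(2 * log_sigma_p)
    kl = 0.5 * (2 * (log_sigma_p - log_sigma_q)
                + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.flatten(1).sum(dim=1).mean()

def phiseg_loss(logits_per_level, s_gt, q_params, p_params):
    # logits_per_level[l-1]: per-level output s_hat_l, shape (N, C, H_l, W_l);
    # s_gt: (N, H, W) long; q_params / p_params: per-level (mu, log_sigma)
    # tuples from the posterior and prior nets.
    loss = 0.0
    for logits in logits_per_level:  # CE(ups(s_hat_l), s_gt), incl. l = 1
        up = F.interpolate(logits, size=s_gt.shape[-2:], mode="nearest")
        loss = loss + F.cross_entropy(up, s_gt)
    for level, (q, p) in enumerate(zip(q_params, p_params), start=1):
        alpha = 2.0 ** (level - 1)   # heuristic weight alpha_l = 2^(l-1)
        loss = loss + alpha * kl_diag_gaussians(*q, *p)
    return loss
```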

We trained the model using the Adam optimiser with a learning rate of \(10^{-3}\) and a batch size of 12. We used batch normalisation on all non-output layers. All models were trained for 48 h on an NVIDIA Titan Xp GPU, and the model with the lowest total loss on a held-out validation set was selected.
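
In code, the reported optimisation setup amounts to roughly the following (the stand-in module merely makes the snippet self-contained):

```python
import torch
import torch.nn as nn

# Reported setup: Adam, learning rate 1e-3, batch size 12, and batch
# normalisation on all non-output layers.
model = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1),
                      nn.BatchNorm2d(32), nn.ReLU())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
batch_size = 12
```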

After the model is trained, segmentation samples for an input image \(\mathbf{x}\) can be generated by first drawing samples \(\mathbf{z}_\ell\) from the prior network and then decoding them using the likelihood network.
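
A hedged sketch of this ancestral sampling procedure is shown below; the interfaces of `prior_levels` (ordered coarse to fine, each assumed to map the image and the draw from the level below to \(\mu_\ell, \sigma_\ell\)) and `likelihood_net` are ours, not the paper's code.

```python
import torch

@torch.no_grad()
def sample_segmentation(x, prior_levels, likelihood_net):
    """Draw one segmentation sample: z_L ~ p(z_L|x), then
    z_l ~ p(z_l|z_{l+1}, x) down to l = 1, then decode."""
    z_samples, z_below = [], None
    for level in prior_levels:                        # l = L, ..., 1
        mu, sigma = level(x, z_below)
        z_below = mu + sigma * torch.randn_like(mu)   # reparametrised draw
        z_samples.insert(0, z_below)                  # store fine-to-coarse
    logits = likelihood_net(z_samples)
    return logits.argmax(dim=1)                       # hard label map
```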

3 Experiments and Results

We evaluated our method on two datasets. (1) The publicly available LIDC-IDRI dataset, which comprises 1018 thoracic CT images with lesions annotated by 4 radiologists [1]; similar to [4], we extracted square 2D patches of size \(128\times 128\) pixels such that each patch was centred on a lesion. (2) An in-house prostate MR dataset of 68 patients acquired with a transverse T2-weighted sequence (in-plane resolution \(0.1875\times 0.1875\) mm\(^{2}\) and slice thickness 3.3 mm); the transition and peripheral zones were manually annotated by 4 radiologists and 2 non-radiologists. We processed the prostate data slice by slice (approx. 25 slices per volume), resampling each slice to a resolution of \(0.6\times 0.6\) mm\(^{2}\) and taking a central crop of size \(192\times 192\) pixels. We divided both datasets into training, testing and validation sets using a random 60-20-20 split.
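
The per-slice preprocessing of the prostate data can be sketched as follows; the handling of slices smaller than the crop (zero-padding) is our assumption.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_slice(img, in_plane_spacing, target_spacing=0.6, crop=192):
    # Resample a 2-D slice to target_spacing mm (bilinear for images; use
    # order=0 for label masks) and take a central crop of size crop x crop.
    img = zoom(img, in_plane_spacing / target_spacing, order=1)
    pad_y, pad_x = max(crop - img.shape[0], 0), max(crop - img.shape[1], 0)
    img = np.pad(img, ((0, pad_y), (0, pad_x)), mode="constant")
    y0, x0 = (img.shape[0] - crop) // 2, (img.shape[1] - crop) // 2
    return img[y0:y0 + crop, x0:x0 + crop]
```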

For all experiments we compared our method (PHiSeg) with \(L=5\) latent levels and a total of 7 resolution levels to the probabilistic U-NET [4]. In order to exclude network capacity as an explanation for performance differences, we modelled our network components as closely as possible on the probabilistic U-NET. We used batch normalisation layers for both methods, which deviates from [4] but did not negatively affect the results. Furthermore, to demonstrate that modelling the segmentation problem at multiple resolution levels is beneficial, we also compared against a variation of PHiSeg with only \(L=1\) latent level (i.e. no skip connections or latent space hierarchy). Lastly, for some experiments we compared to a deterministic U-NET using the same architecture as the probabilistic U-NET but with no stochastic components.

We evaluated the techniques in two experiments. First, we trained the methods using the masks from all available annotators, where in each batch we randomly sampled one annotation per image. We were interested in assessing how closely the distribution of generated samples matched the distribution of ground-truth annotations. To this end, we used the generalised energy distance \(D^2_\text{GED}(p_{gt}, p_{\mathbf{s}}) = 2\mathbb{E}[d(\mathbf{s},\mathbf{y})] - \mathbb{E}[d(\mathbf{s},\mathbf{s}')] - \mathbb{E}[d(\mathbf{y},\mathbf{y}')]\), where d is 1 minus the intersection over union, i.e. \(d(\cdot, \cdot) = 1-\text{IoU}(\cdot, \cdot)\), and \(\mathbf{s},\mathbf{s}'\) and \(\mathbf{y},\mathbf{y}'\) are samples from the learned distribution \(p_\mathbf{s}\) and the ground-truth distribution \(p_{gt}\), respectively [4]. The GED reduces the sample quality to a single, easy-to-understand number but, as a consequence, cannot be interpreted visually. Therefore, we additionally aimed to produce pixel-wise maps showing the variability among the segmentation samples. We found the expected cross entropy between the mean segmentation mask and the samples to be a good measure, i.e. \(\gamma(s_i) = \mathbb{E}[\text{CE}(\bar{s}_i, s_i)]\), with i the pixel position and \(\bar{s}_i\) the mean prediction. \(\gamma\) is statistically similar to a variance, with the squared L2 distance replaced by the CE; we believe it is more suitable for measuring segmentation variability. Examples of our \(\gamma\)-maps along with sample segmentations are shown in Fig. 3. We quantify how well the \(\gamma\)-maps for each method predict regions of large uncertainty using the average normalised cross correlation (NCC) between the \(\gamma\)-maps and the CE error maps obtained with respect to each annotator (see the sketch after Eq. 5):

$$\begin{aligned} \mathcal{S}_\text{NCC}(p_{gt}, p_{\mathbf{s}}) = \mathbb{E}_{\mathbf{y}\sim p_{gt}} \left[ \text{NCC}\left(\mathbb{E}_{\mathbf{s}\sim p_{\mathbf{s}}}[\text{CE}(\bar{\mathbf{s}},\mathbf{s})],\, \mathbb{E}_{\mathbf{s}\sim p_{\mathbf{s}}}[\text{CE}(\mathbf{y},\mathbf{s})]\right) \right]. \end{aligned}$$
(5)
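
Both measures are straightforward to estimate from drawn samples. The sketch below treats masks as binary foreground/background for the IoU and assumes a particular argument order for the cross entropies; these simplifications, and all function names, are ours.

```python
import numpy as np

def iou_distance(a, b):
    # d(a, b) = 1 - IoU, on binarised (foreground vs. background) masks.
    inter = np.logical_and(a > 0, b > 0).sum()
    union = np.logical_or(a > 0, b > 0).sum()
    return 1.0 - inter / union if union > 0 else 0.0

def generalised_energy_distance(samples, annotations):
    # D^2_GED = 2 E[d(s, y)] - E[d(s, s')] - E[d(y, y')], with expectations
    # estimated over all pairs of samples / annotations.
    d_sy = np.mean([iou_distance(s, y) for s in samples for y in annotations])
    d_ss = np.mean([iou_distance(s, t) for s in samples for t in samples])
    d_yy = np.mean([iou_distance(y, w) for y in annotations for w in annotations])
    return 2 * d_sy - d_ss - d_yy

def gamma_map(prob_samples, eps=1e-8):
    # gamma_i = E[CE(s_bar_i, s_i)]: per-pixel CE between the mean softmax
    # prediction s_bar and each sample; prob_samples has shape (N, C, H, W).
    s_bar = prob_samples.mean(axis=0)
    ce = -(s_bar[None] * np.log(prob_samples + eps)).sum(axis=1)
    return ce.mean(axis=0)

def ncc(a, b, eps=1e-8):
    # Normalised cross correlation between two pixel-wise maps.
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def s_ncc(prob_samples, annotations_onehot, eps=1e-8):
    # Eq. 5: average NCC between the gamma-map and the expected CE error map
    # with respect to each annotator (annotations_onehot: (A, C, H, W)).
    gamma = gamma_map(prob_samples, eps)
    scores = []
    for y in annotations_onehot:
        err = -(y[None] * np.log(prob_samples + eps)).sum(axis=1).mean(axis=0)
        scores.append(ncc(gamma, err))
    return float(np.mean(scores))
```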

Results for both \(D^2_\text{GED}\) and \(\mathcal{S}_\text{NCC}\) are shown in the top part of Table 1. All measures were evaluated with 100 samples drawn from the learned models.

Fig. 3. Ground-truth annotations and samples for two of the evaluated methods trained with masks from 6 or 1 expert(s). Average error maps \(\mathbb{E}_{\mathbf{y},\mathbf{s}}[\text{CE}(\mathbf{y}, \mathbf{s})]\) and \(\gamma\)-maps (\(\mathbb{E}[\text{CE}(\bar{\mathbf{s}}, \mathbf{s})]\)) for each model are shown in the left-most and right-most columns, respectively.

Secondly, we set out to investigate the models' ability to infer the inherent uncertainties in the annotations from just one annotation per training image. To this end, we trained the above models using only the annotations of a single expert. For the evaluation we then computed the \(D^2_\text{GED}\) and \(\mathcal{S}_\text{NCC}\) using all available annotators. Additionally, we evaluated the models in terms of the conventional Dice score, computed with the masks from the single annotator as ground truth. To obtain a single prediction from the probabilistic models we used the mean prediction \(\bar{\mathbf{s}}\), which gave us an indication of conventional segmentation accuracy. The results are shown in the bottom part of Table 1.
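
For this evaluation, the mean prediction can be formed from drawn samples as sketched below; the binary foreground Dice is a simplification of ours (per-class Dice may be what the paper reports).

```python
import numpy as np

def mean_prediction(prob_samples):
    # s_bar: argmax of the mean softmax over the N drawn samples (N, C, H, W).
    return prob_samples.mean(axis=0).argmax(axis=0)

def dice(pred, gt, eps=1e-8):
    # Foreground Dice between two label maps (binary simplification).
    inter = np.logical_and(pred > 0, gt > 0).sum()
    return 2.0 * inter / ((pred > 0).sum() + (gt > 0).sum() + eps)
```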

Table 1. Quantitative results for all metrics. Statistically significant improvements (\(p<0.01\) with a paired Student's t-test) over all other methods are indicated in bold.

We observed that when using all annotators for training, PHiSeg (\(L=5\)) produced significantly better \(D^2_\text{GED}\) and \(\mathcal{S}_\text{NCC}\) scores than all other methods. This can be observed qualitatively in Fig. 3 for a prostate slice with large inter-expert disagreement. Both the prob. U-NET and PHiSeg (\(L=5\)) produced realistic samples, but PHiSeg (\(L=5\)) captured a wider variability. Furthermore, as indicated by the high \(\mathcal{S}_\text{NCC}\) values, PHiSeg's (\(L=5\)) \(\gamma\)-maps were found to be very predictive of where in the image the method's average prediction errors occur. Similar results were obtained when training with only one annotator. We noticed that in this scenario the prob. U-NET may in some cases fail to learn the variation in the data and revert to almost entirely deterministic behaviour (see the fourth row in Fig. 3). We believe this can be explained by the prob. U-NET's architecture, which, in contrast to our method, allows the encoder-decoder structure to bypass the stochasticity. While our method also predicted smaller variations in the samples in this setting, they were still markedly more diverse. The lower performance of PHiSeg (\(L=1\)) indicates that using multiple resolution levels is crucial for our method. More samples for the prostate and LIDC-IDRI datasets can be found in Appendix B. From Table 1 it can be seen that no significant differences in Dice scores were found between any of the methods (except PHiSeg (\(L=1\))), including the det. U-NET. From this we conclude that neither PHiSeg (\(L=5\)) nor the prob. U-NET suffers a loss in segmentation performance due to its stochastic elements.

4 Discussion and Conclusion

We introduced a novel hierarchical probabilistic method for modelling the conditional distribution of segmentation masks given an input image. We have shown that our method substantially outperforms the state of the art on a number of metrics. Furthermore, we demonstrated that PHiSeg was able to predict its own errors significantly better than previous work. We believe that proper modelling of uncertainty is indispensable for the clinical acceptance of deep neural networks and that access to the segmentation's probability distribution will benefit numerous downstream tasks.