1 Introduction

Deep learning has had a tremendous impact on various fields of science. The focus of the current study is on one of the most critical areas of computer vision: medical image analysis (or medical computer vision), particularly deep learning-based approaches for medical image segmentation. Segmentation is an important processing step both for natural images, e.g., for scene understanding, and for medical images, e.g., for image-guided interventions, radiotherapy planning, and improved radiological diagnostics. Image segmentation is formally defined as “the partition of an image into a set of nonoverlapping regions whose union is the entire image” (Haralick and Shapiro 1992). A plethora of deep learning approaches for medical image segmentation have been introduced in the literature for different medical imaging modalities, including X-ray, visible-light imaging (e.g. colour dermoscopic images), magnetic resonance imaging (MRI), positron emission tomography (PET), computerized tomography (CT), and ultrasound (e.g. echocardiographic scans). Architectural improvements to deep models have been the focus of many researchers and serve different purposes, e.g., tackling vanishing and exploding gradients in deep models or compressing models into small yet accurate networks, while other works have tried to improve the performance of deep networks by introducing new optimization (loss) functions.

Guo et al. (2018) provided a review of deep learning based semantic segmentation of images, and divided the literature into three categories: region-based, fully convolutional network (FCN)-based, and weakly supervised segmentation methods. Hu et al. (2018b) summarized the most commonly used RGB-D datasets for semantic segmentation as well as traditional machine learning based methods and deep learning-based network architectures for RGB-D segmentation. Lateef and Ruichek (2019) presented an extensive survey of deep learning architectures, datasets, and evaluation methods for the semantic segmentation of natural images using deep neural networks. Similarly, for medical imaging, Goceri and Goceri (2017) presented a high-level overview of deep learning-based medical image analysis techniques and application areas. Hesamian et al. (2019) presented an overview of the state-of-the-art methods in medical image segmentation using deep learning by covering the literature related to network structures and model training techniques. Karimi et al. (2019) reviewed the literature on techniques to handle label noise in deep learning based medical image analysis and evaluated existing approaches on three medical imaging datasets for segmentation and classification tasks. Zhou et al. (2019b) presented a review of techniques proposed for the fusion of medical images from multiple modalities for medical image segmentation. Goceri (2019a) discussed fully supervised, weakly supervised, and transfer learning techniques for training deep neural networks for the segmentation of medical images, and also discussed existing methods for addressing the problems of lack of data and class imbalance. Zhang et al. (2019) presented a review of approaches to address the problem of small sample sizes in medical image analysis, and divided the literature into five categories, including explanation, weakly supervised, transfer learning, and active learning techniques. Tajbakhsh et al. (2020) presented a review of the literature for addressing the challenges of scarce annotations as well as weak annotations (e.g., noisy annotations, image-level labels, sparse annotations, etc.) in medical image segmentation. Similarly, there are several surveys covering the literature on the task of object detection (Wang et al. 2019c; Zou et al. 2019; Borji et al. 2019; Liu et al. 2019b; Zhao et al. 2019), which can also be used to obtain rough localizations of the object(s) of interest. In contrast to the existing surveys, we make the following contributions in this review:

  • We provide comprehensive coverage of research contributions in the field of semantic segmentation of natural and medical images. In terms of medical imaging modalities, we cover the literature pertaining to both 2D (RGB and grayscale) as well as volumetric medical images.

  • We group the semantic image segmentation literature into six different categories based on the nature of their contributions: architectural improvements, optimization function based improvements, data synthesis based improvements, weakly supervised models, sequenced models, and multi-task models. Figure 1 indicates the categories we cover in this review, along with a timeline of the most influential papers in the respective categories. Moreover, Fig. 2 shows a high-level overview of the deep semantic segmentation pipeline, and where each of the categories mentioned in Fig. 1 belong in the pipeline.

  • We study the behaviour of many popular loss functions used to train segmentation models on handling scenarios with varying levels of false positive and negative predictions.

  • Following the comprehensive review, we identify and suggest important research directions for each of the categories.

Fig. 1

An overview of the deep learning based segmentation methods covered in this review

Fig. 2

A typical deep neural network based semantic segmentation pipeline. Each component in the pipeline indicates the section of this paper that covers the corresponding contributions

In the following sections, we discuss deep semantic image segmentation improvements under different categories visualized in Fig. 1. For each category, we first review the improvements on non-medical datasets, and in a subsequent section, we survey the improvements for medical images.

2 Network architectural improvements

This section discusses the advancements in semantic image segmentation using convolutional neural networks (CNNs), which have been applied to interpretation tasks on both natural and medical images (Garcia-Garcia et al. 2018; Litjens et al. 2017). Although artificial neural network-based image segmentation approaches have been explored in the past using shallow networks (Reddick et al. 1997; Kuntimad and Ranganath 1999) as well as works which relied on superpixel segmentation maps to generate pixelwise predictions (Couprie et al. 2013), in this work, we focus on deep neural network based image segmentation models which are end-to-end trainable. The improvements are mostly attributed to exploring new neural architectures (with varying depths, widths, and connectivity or topology) or designing new types of components or layers.

2.1 Fully convolutional neural networks for semantic segmentation

As one of the first high impact CNN-based segmentation models, Long et al. (2015) proposed fully convolutional networks for pixel-wise labeling. They proposed up-sampling (deconvolving) the output activation maps from which the pixel-wise output can be calculated. The overall architecture of the network is visualized in Fig. 3.

Fig. 3

Fully convolutional networks can efficiently learn to make dense predictions for per-pixel tasks like semantic segmentation (Long et al. 2015)

In order to preserve the contextual spatial information within an image as the filtered input data progresses deeper into the network, Long et al. (2015) proposed to fuse the output with shallower layers’ output. The fusion step is visualized in Fig. 4.

Fig. 4

Upsampling and fusion step of the fully convolution networks (Long et al. 2015)
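
A minimal PyTorch-style sketch of this fusion step is shown below; the layer depths, channel sizes, and number of stages are illustrative rather than the actual VGG-based FCN configuration. Coarse class scores are upsampled with a learned deconvolution and added element-wise to scores computed from a shallower, higher-resolution feature map.

```python
import torch
import torch.nn as nn

class ToyFCN(nn.Module):
    """Toy FCN-style fusion: coarse scores are upsampled (deconvolved) and
    added to scores computed from a shallower, higher-resolution feature map."""
    def __init__(self, num_classes=21):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))   # 1/2 resolution
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))   # 1/4 resolution
        self.score_shallow = nn.Conv2d(64, num_classes, 1)   # 1x1 score heads
        self.score_deep = nn.Conv2d(128, num_classes, 1)
        # learned upsampling (deconvolution), as in Long et al. (2015)
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up_final = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)

    def forward(self, x):
        f1 = self.stage1(x)    # shallow, higher-resolution features
        f2 = self.stage2(f1)   # deep, coarse features
        fused = self.up2(self.score_deep(f2)) + self.score_shallow(f1)  # skip fusion
        return self.up_final(fused)   # back to the input resolution

logits = ToyFCN()(torch.randn(1, 3, 64, 64))   # -> shape (1, 21, 64, 64)
```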

2.2 Encoder-decoder semantic image segmentation networks

Next, encoder-decoder segmentation networks (Noh et al. 2015) such as SegNet, were introduced (Badrinarayanan et al. 2015). The role of the decoder network is to map the low-resolution encoder feature to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies in the manner in which the decoder upsamples the lower resolution input feature maps. Specifically, the decoder uses pooling indices (Fig. 5) computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. The architecture (Fig. 5) consists of a sequence of non-linear processing layers (encoder) and a corresponding set of decoder layers followed by a pixel-wise classifier. Typically, each encoder consists of one or more convolutional layers with batch normalization and a ReLU non-linearity, followed by non-overlapping max-pooling and sub-sampling. The sparse encoding due to the pooling process is upsampled in the decoder using the max-pooling indices in the encoding sequence.
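
A minimal sketch of this index-based upsampling is shown below (not the full SegNet; the channel sizes are illustrative): the encoder's max-pooling layer returns the indices of the maxima, and the decoder re-places values at exactly those locations before convolving with a trainable decoder filter bank.

```python
import torch
import torch.nn as nn

enc_conv = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())
pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # keep the pooling indices
unpool = nn.MaxUnpool2d(2, stride=2)                     # non-learned upsampling
dec_conv = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU())

x = torch.randn(1, 3, 32, 32)
f = enc_conv(x)
pooled, indices = pool(f)            # (1, 16, 16, 16), plus "where" information
upsampled = unpool(pooled, indices)  # (1, 16, 32, 32), zeros except at max locations
out = dec_conv(upsampled)            # trainable decoder filters densify the sparse map
```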

Ronneberger et al. (2015) proposed an architecture (U-Net; Fig. 6) consisting of a contracting path to capture context and a symmetric expanding path that enables precise localization. Similar to skip connections used in image recognition (He et al. 2016) and keypoint detection (Honari et al. 2016), Ronneberger et al. (2015) added skip connections to encoder-decoder image segmentation networks such as SegNet, which improved the model’s accuracy and addressed the problem of vanishing gradients.
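
A minimal sketch of a single U-Net-style skip connection is given below (channel sizes are illustrative): in contrast to the element-wise addition used in FCN, the encoder feature map is concatenated channel-wise with the upsampled decoder feature map before further convolutions.

```python
import torch
import torch.nn as nn

enc_block = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
down = nn.MaxPool2d(2)
bottleneck = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
up = nn.ConvTranspose2d(64, 32, 2, stride=2)
dec_block = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())  # 64 = 32 (skip) + 32 (up)

x = torch.randn(1, 3, 64, 64)
e = enc_block(x)                              # (1, 32, 64, 64)
b = bottleneck(down(e))                       # (1, 64, 32, 32)
d = dec_block(torch.cat([e, up(b)], dim=1))   # concatenate along the channel dimension
```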

Fig. 5

Top An illustration of the SegNet architecture. There are no fully connected layers, and hence it is only convolutional. Bottom An illustration of the SegNet and FCN (Long et al. 2015) decoders. a, b, c, d correspond to values in a feature map. SegNet uses the max-pooling indices to upsample (without learning) the feature map(s) and convolves with a trainable decoder filter bank. FCN upsamples by learning to deconvolve the input feature map and adds the corresponding encoder feature map to produce the decoder output. This feature map is the output of the max-pooling layer (which includes sub-sampling) in the corresponding encoder. Note that there are no trainable decoder filters in FCN (Badrinarayanan et al. 2015)

Fig. 6

An illustration of the U-Net (Ronneberger et al. 2015) architecture

Milletari et al. (2016) proposed a similar architecture (V-Net; Fig. 7) which added residual connections and replaced 2D operations with their 3D counterparts in order to process volumetric images. Milletari et al. also proposed optimizing for a widely used segmentation metric, i.e., Dice, which is discussed in more detail in Sect. 4.

Jégou et al. (2017) developed a segmentation version of the densely connected networks architecture (DenseNet; Huang et al. 2017) by adopting a U-Net-like encoder-decoder skeleton. The detailed architecture of the network is visualized in Fig. 8.

Fig. 7

An illustration of the V-Net (Milletari et al. 2016) architecture

Fig. 8

Diagram of the one-hundred layers Tiramisu network architecture (Jégou et al. 2017). The architecture is built from dense blocks. The architecture is composed of a downsampling path with two transitions down and an upsampling path with two transitions up. A circle represents concatenation, and the arrows represent connectivity patterns in the network. Gray horizontal arrows represent skip connections, where the feature maps from the downsampling path are concatenated with the corresponding feature maps in the upsampling path. Note that the connectivity patterns in the upsampling and the downsampling paths are different: in the downsampling path, the input to a dense block is concatenated with its output, leading to a linear growth of the number of feature maps, whereas in the upsampling path, this is not the case

In Fig. 9, we visualize the simplified architectural modifications applied over time to the first deep image segmentation network, i.e., the FCN.

Fig. 9

Gradual architectural improvements applied to FCN (Long et al. 2015) over time

Several modified versions (e.g. deeper/shallower, adding extra attention blocks) of encoder-decoder networks have been applied to semantic segmentation (Amirul Islam et al. 2017; Fu et al. 2019b; Lin et al. 2017a; Peng et al. 2017; Pohlen et al. 2017; Wojna et al. 2017; Zhang et al. 2018d). In 2018, DeepLabV3+ (Chen et al. 2018b) outperformed many state-of-the-art segmentation networks on the PASCAL VOC 2012 (Everingham et al. 2015) and Cityscapes (Cordts et al. 2016) datasets. Zhao et al. (2017b) modified the feature fusion operation proposed by Long et al. (2015) using a spatial pyramid pooling module (Fig. 10). Both spatial pyramid pooling modules and encoder-decoder structures are used in deep neural networks for semantic segmentation tasks: the former encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter capture sharper object boundaries by gradually recovering the spatial information.

Fig. 10

Overview of the pyramid scene parsing networks. Given an input image (a), feature maps from the last convolution layer are extracted (b), then a pyramid parsing module is applied to harvest different sub-region representations, followed by upsampling and concatenation layers to form the final feature representation, which carries both local and global context information (c). Finally, the representation is fed into a convolution layer to get the final per-pixel prediction (d) (Zhao et al. 2017b)

Chen et al. (2018b) proposed to combine the advantages of both dilated convolutions and feature pyramid pooling. Specifically, DeepLabv3+ extends DeepLabv3 (Chen et al. 2017b) by adding a simple yet effective decoder module (Fig. 11) to refine the segmentation results, especially along object boundaries, using dilated convolutions and pyramid features.

Fig. 11

An illustration of DeepLabV3+: the encoder module encodes multi-scale contextual information by applying atrous (dilated) convolution at multiple scales, while the simple yet effective decoder module refines the segmentation results along object boundaries (Chen et al. 2018b)
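
A minimal sketch of an atrous spatial pyramid pooling style module is shown below; the dilation rates and channel sizes are illustrative, and the image-level pooling branch of the actual DeepLab design is omitted. Parallel dilated convolutions probe the same feature map with different effective fields-of-view, and their outputs are concatenated and projected.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Toy ASPP-like module: parallel 3x3 convolutions with different dilation rates."""
    def __init__(self, in_ch=256, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse multi-scale context

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(multi_scale)

features = torch.randn(1, 256, 32, 32)   # assumed backbone output
context = SimpleASPP()(features)          # -> (1, 64, 32, 32)
```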

2.3 Computational complexity reduction for image segmentation networks

Several works have focused on reducing the time and the computational complexity of deep classification networks (Howard et al. 2017; Leroux et al. 2018). A few other works have attempted to simplify the structure of deep networks, e.g., by tensor factorization (Kim et al. 2015), channel/network pruning (Wen et al. 2016), or applying sparsity to connections (Han et al. 2016). Similarly, Yu et al. (2018b) addressed the high computational cost associated with high-resolution feature maps in U-shaped architectures by proposing spatial and context paths to preserve the rich spatial information and obtain a large receptive field. A few methods have focused on the complexity optimization of deep image segmentation networks. Similar to the work of Saxena and Verbeek (2016), Liu et al. (2019a) proposed a hierarchical neural architecture search for semantic image segmentation by performing both cell- and network-level search and achieved results comparable to the state of the art on the PASCAL VOC 2012 (Everingham et al. 2015) and Cityscapes (Cordts et al. 2016) datasets. In contrast, Chen et al. (2018a) focused on searching the much smaller atrous spatial pyramid pooling module using random search. Depth-wise separable convolutions (Sifre 2014; Chollet 2017) offer computational complexity reductions since they have fewer parameters and have therefore also been used in deep segmentation models (Chen et al. 2018b; Sandler et al. 2018).
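
A minimal sketch of a depthwise separable convolution, the building block behind several of the efficiency gains mentioned above, is given below; the channel sizes in the comment are illustrative.

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, k=3):
    """Depthwise (per-channel spatial) convolution followed by a 1x1 pointwise convolution.
    For in_ch=128, out_ch=256, k=3 this uses 128*3*3 + 128*256 = 33,920 weights,
    versus 128*256*3*3 = 294,912 weights for a standard convolution (biases ignored)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                               # pointwise
    )
```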

Besides network architecture search, Srivastava et al. (2015) proposed highway networks, which use gating units to control the flow of information through a connection. Lin et al. (2017a) adopted one-step fusion without filtering the channels.

2.4 Attention-based semantic image segmentation

Attention can be viewed as using information transferred from several subsequent layers/feature maps to select and localize the most discriminative (or salient) part of the input signal. Wang et al. (2017a) added an attention module to the deep residual network (ResNet) for image classification. Their proposed attention module consists of several encoding-decoding layers. Hu et al. (2018a) proposed a selection mechanism where feature maps are first aggregated using global average pooling and reduced to a single channel descriptor. Then an activation gate is used to highlight the most discriminative features.  Wang et al. (2018b) proposed non-local operation blocks for encoding long range spatio-temporal dependencies with deep neural networks that can be plugged into existing architectures. Fu et al. (2019a) proposed dual attention networks that apply both spatial and channel-based attention operations.
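
A minimal sketch of the channel attention mechanism attributed to Hu et al. (2018a) above is given below; the reduction ratio and tensor sizes are illustrative. Global average pooling squeezes each feature map into a per-channel descriptor, and a small gating network produces per-channel weights that rescale the input.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (B, C, H, W)
        descriptor = x.mean(dim=(2, 3))       # squeeze: (B, C) channel descriptor
        weights = self.gate(descriptor)       # excitation: per-channel weights in [0, 1]
        return x * weights.unsqueeze(-1).unsqueeze(-1)   # rescale each channel

out = ChannelAttention(64)(torch.randn(2, 64, 32, 32))
```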

Li et al. (2018) proposed a pyramid attention based network for semantic segmentation. They combined an attention mechanism and a spatial pyramid to extract precise dense features for pixel labeling, instead of complicated dilated convolutions and artificially designed decoder networks. Chen et al. (2016) applied attention to DeepLab (Chen et al. 2017a), which takes multi-scale inputs.

2.5 Adversarial semantic image segmentation

Goodfellow et al. (2014) proposed an adversarial approach to learn deep generative models. Their generative adversarial networks (GANs) take samples z from a fixed (e.g., standard Gaussian) distribution \(p_{z}(z)\), and transform them using a deterministic differentiable deep network p(.) to approximate the distribution of training samples x. Inspired by adversarial learning, Luc et al. (2016) trained a convolutional semantic segmentation network along with an adversarial network that discriminates segmentation maps coming either from the ground truth or from the segmentation network. Their loss function is defined as

$$\begin{aligned} \begin{aligned} \ell \left( \varvec{\theta }_{s}, \varvec{\theta }_{a}\right)&=\sum _{n=1}^{N} \ell _{\mathrm {mce}}\left( s\left( {\varvec{x}}_{n}\right) , {\varvec{y}}_{n}\right) \\&\quad -\lambda \left[ \ell _{\mathrm {bce}}\left( a\left( {\varvec{x}}_{n}, {\varvec{y}}_{n}\right) , 1\right) \right. \left. + \ell _{\mathrm {bce}}\left( a\left( {\varvec{x}}_{n}, s\left( {\varvec{x}}_{n}\right) \right) , 0\right) \right] , \end{aligned} \end{aligned}$$
(1)

where \(\varvec{\theta }_{s}\) and \(\varvec{\theta }_{a}\) denote the parameters of the segmentation and adversarial model, respectively. \(l_{bce}\) and \(l_{mce}\) are binary and multi-class cross-entropy losses, respectively. In this setup, the segmentor tries to produce segmentation maps that are close to the ground truth, i.e., which look more realistic.
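
A minimal sketch of the combined objective of Eq. 1 for one mini-batch is given below, assuming hypothetical `segmentor` and `adversary` models: the segmentor outputs per-pixel class logits, and the adversary takes an (image, label map) pair and outputs a logit scoring whether the map looks like a ground truth annotation. The value of \(\lambda\) is illustrative. As in standard GAN training, this quantity is minimized with respect to the segmentor parameters and maximized with respect to the adversary parameters.

```python
import torch
import torch.nn.functional as F

def adversarial_segmentation_loss(segmentor, adversary, x, y_onehot, lam=0.1):
    logits = segmentor(x)                                     # per-pixel class logits
    s = torch.softmax(logits, dim=1)                          # predicted label maps s(x)
    l_mce = F.cross_entropy(logits, y_onehot.argmax(dim=1))   # multi-class CE vs. ground truth
    real = adversary(x, y_onehot)                             # a(x, y)
    fake = adversary(x, s)                                     # a(x, s(x))
    l_bce = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) +
             F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    # Minimized w.r.t. the segmentor parameters, maximized w.r.t. the adversary parameters.
    return l_mce - lam * l_bce
```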

The main models used for image segmentation mostly follow encoder-decoder architectures such as U-Net. Recent approaches have shown that dilated convolutions and feature pyramid pooling can improve U-Net-style networks. In Sect. 3, we summarize how these methods and their modified counterparts have been applied to medical images.

3 Architectural improvements applied to medical images

In this section, we review the different architectural based improvements for deep learning-based 2D and volumetric medical image segmentation.

3.1 Model compression based image segmentation

To perform image segmentation in real-time and to be able to process larger images or (sub-)volumes when handling volumetric and high-resolution 2D images such as CT, MRI, and histopathology images, several methods have attempted to compress deep models. Weng et al. (2019a) applied a neural architecture search method to U-Net to obtain a smaller network with better organ/tumor segmentation performance on CT, MR, and ultrasound images. Brügger et al. (2019) redesigned the U-Net architecture by leveraging group normalization (Wu and He 2018) and the leaky ReLU function in order to make the network more memory efficient for 3D medical image segmentation. Perone et al. (2018) and Bonta and Kiran (2019) designed dilated convolutional neural networks with fewer parameters as compared to the original convolution-based ones. Some other works (Xu et al. 2018; Paschali et al. 2019) have focused on weight quantization of deep networks to make segmentation networks smaller.

3.2 Encoder decoder based image segmentation

Drozdzal et al. (2018) proposed to normalize input images before segmentation by applying a simple CNN prior to pushing the images to the main segmentation network. They showed improved results on electron microscopy segmentation, liver segmentation from CT, and prostate segmentation from MRI scans. Gu et al. (2019) proposed using a dilated convolution block close to the network’s bottleneck to preserve contextual information.

Vorontsov et al. (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image based framework to transform an input image with an object of interest (presence domain), such as a tumor, into an image without the tumor (absence domain), i.e., to translate a diseased image into a healthy one; next, their model learns to add the removed tumor back to the new healthy image. This results in capturing detailed structure from the object, which improves the segmentation of the object. Zhou et al. (2018) proposed a rewiring method for the long skip connections used in U-Net and tested their method on nodule segmentation in low-dose chest CT scans, nuclei segmentation in microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos.

3.3 Attention based image segmentation

Nie et al. (2018) designed an attention model to segment the prostate from MRI images with higher accuracy compared to baseline models, e.g., V-Net (Milletari et al. 2016) and FCN (Long et al. 2015). Sinha and Dolz (2019) proposed a multi-level attention based architecture for abdominal organ segmentation from MRI images. Qin et al. (2018) proposed a dilated convolution based block to preserve more detailed attention in 3D medical image segmentation. Similarly, other papers (Lian et al. 2018; Isensee et al. 2019; Li et al. 2019b; Ni et al. 2019; Oktay et al. 2018; Schlemper et al. 2019) have leveraged the attention mechanism for medical image segmentation as well.

3.4 Adversarial training based image segmentation

Khosravan et al. (2019) proposed an adversarial training framework for pancreas segmentation from CT scans. Son et al. (2017) applied GANs for retinal image segmentation. Xue et al. (2018) used a fully convolutional network as a segmenter in the generative adversarial framework to segment brain tumors from MRI images. Other papers (Costa et al. 2017; Dai et al. 2018; Jin et al. 2018; Moeskops et al. 2017; Neff et al. 2017; Rezaei et al. 2017; Yang et al. 2017a; Zhang et al. 2017) have also successfully applied adversarial learning to medical image segmentation.

3.5 Sequenced models

The recurrent neural network (RNN) was designed for handling sequences. The long short-term memory (LSTM) network is a type of RNN that introduces self-loops to enable the gradient flow over long durations (Hochreiter and Schmidhuber 1997). In the medical image analysis domain, RNNs have been used to model the temporal dependency in image sequences. Bai et al. (2018) proposed an image sequence segmentation algorithm by combining a fully convolutional network with a recurrent neural network, which incorporates both spatial and temporal information into the segmentation task. Similarly, Gao et al. (2018) applied LSTMs and CNNs to model the temporal relationships between brain MRI slices to improve segmentation performance in 4D volumes. Li et al. (2019a) applied U-Net to obtain initial segmentation probability maps and further improved them using an LSTM for pancreas segmentation from 3D CT scans. Similarly, other works have also applied RNNs (LSTMs) (Alom et al. 2019; Chakravarty and Sivaswamy 2018; Yang et al. 2017b; Zhao and Hamarneh 2019a, b) to medical image segmentation.

4 Optimization function based improvements

In addition to the improvements in segmentation speed and accuracy achieved through the architectural modifications discussed in Sect. 2, designing new loss functions has also resulted in improved inference-time segmentation accuracy.

4.1 Cross entropy

The most commonly used loss function for the task of image segmentation is the pixel-wise cross entropy loss (Eq. 2). This loss examines each pixel individually, comparing the class prediction vector to the one-hot encoded target (or ground truth) vector. For the case of binary segmentation, let \(P(Y=0)=p\) and \(P(Y=1)=1-p\). The predictions are given by the logistic/sigmoid function \(P({\hat{Y}}=0)=\frac{1}{1+e^{-x}}={\hat{p}}\) and \(P({\hat{Y}}=1)=1-\frac{1}{1+e^{-x}}=1-{\hat{p}}\), where x is the output of the network. The cross entropy (CE) can then be defined as:

$$\begin{aligned} {\text {CE}}(p, {\hat{p}})=-(p \log ({\hat{p}})+(1-p) \log (1-{\hat{p}})). \end{aligned}$$
(2)

The general form of the equation for multi-region (or multi-class) segmentation can be written as:

$$\begin{aligned} {\text {CE}} = -\sum _{classes} p \log {\hat{p}} \end{aligned}$$
(3)

4.2 Weighted cross entropy

The cross-entropy loss evaluates the class predictions for each pixel vector individually and then averages over all pixels, which implies assigning equal importance to each pixel in the image. This can be problematic if the various classes have unbalanced representation in the image, as the most prevalent class can dominate training. Long et al. (2015) discussed weighting the cross-entropy loss (WCE) for each class in order to counteract the class imbalance present in the dataset. WCE was defined as:

$$\begin{aligned} {\text {WCE}}(p, {\hat{p}})=-(\beta p \log ({\hat{p}})+(1-p) \log (1-{\hat{p}})). \end{aligned}$$
(4)

To decrease the number of false negatives, \(\beta\) is set to a value larger than 1, and to decrease the number of false positives \(\beta\) is set to a value smaller than 1. To weight the negative pixels as well, the following balanced cross-entropy (BCE) can be used (Xie and Tu 2015).

$$\begin{aligned} {\text {BCE}}(p, {\hat{p}})=-(\beta p \log ({\hat{p}})+(1-\beta )(1-p) \log (1-{\hat{p}})). \end{aligned}$$
(5)

Ronneberger et al. (2015) added a distance-based penalty term to the cross-entropy function to force the network to learn the separation borders between touching objects, i.e., to enforce better segmentation when objects are very close to each other, as follows:

$$\begin{aligned} {\text {BCE}}(p, {\hat{p}})+w_{0} \cdot \exp \left( -\frac{\left( d_{1}(x)+d_{2}(x)\right) ^{2}}{2 \sigma ^{2}}\right) \end{aligned}$$
(6)

where \(d_{1}(x)\) and \(d_{2}(x)\) are two functions that calculate the distances to the border of the nearest and the second nearest cell, respectively, in their cell segmentation problem.

4.3 Focal loss

To reduce the contribution of easy examples so that the CNN focuses more on the difficult examples, Lin et al. (2017b) added the term \((1-{\hat{p}})^{\gamma }\) to the cross entropy loss as:

$$\begin{aligned} \begin{aligned} {\text {FL}}(p, {\hat{p}})=-\left( \alpha (1-{\hat{p}})^{\gamma } p \log ({\hat{p}})\right. \left. +(1-\alpha ) {\hat{p}}^{\gamma }(1-p) \log (1-{\hat{p}})\right) . \end{aligned} \end{aligned}$$
(7)

Setting \(\gamma = 0\) in this equation yields the BCE loss.
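
A minimal sketch of Eq. 7 for binary segmentation is given below, with p the ground truth labels and \({\hat{p}}\) the predicted probabilities as in Eq. 2; the default parameter values are the ones reported by Lin et al. (2017b) and used later in Fig. 12.

```python
import torch

def focal_loss(p, p_hat, alpha=0.25, gamma=2.0, eps=1e-8):
    """Binary focal loss of Eq. 7; gamma = 0 recovers the balanced CE of Eq. 5."""
    pos = alpha * (1 - p_hat) ** gamma * p * torch.log(p_hat + eps)
    neg = (1 - alpha) * p_hat ** gamma * (1 - p) * torch.log(1 - p_hat + eps)
    return -(pos + neg).mean()

loss = focal_loss(p=torch.tensor([1., 0., 1.]), p_hat=torch.tensor([0.9, 0.2, 0.3]))
```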

4.4 Overlap measure based loss functions

4.4.1 Dice loss/F1 score

Another popular loss function for image segmentation tasks is based on the Dice coefficient, which is essentially a measure of overlap between two samples and is equivalent to the F1 score. This measure ranges from 0 to 1, where a Dice coefficient of 1 denotes perfect and complete overlap. The Dice coefficient (DC) is calculated as:

$$\begin{aligned} \mathrm {DC}=\frac{2 T P}{2 T P+F P+F N}=\frac{2|X \cap Y|}{|X|+|Y|}. \end{aligned}$$
(8)

Similarly, the Jaccard metric (intersection over union: IoU) is computed as:

$$\begin{aligned} \mathrm {IoU}=\frac{T P}{T P+F P+F N}=\frac{|X \cap Y|}{|X|+|Y|-|X \cap Y|} \end{aligned}$$
(9)

where X and Y are the predicted and ground truth segmentation, respectively. TP is the true positives, FP false positives and FN false negatives. We can see that \(\mathrm {DC} \ge \mathrm {IoU}\).

To use this as a loss function the DC can be defined as a Dice loss (DL) function (Milletari et al. 2016):

$$\begin{aligned} \mathrm {DL}(p, {\hat{p}})=\frac{2\langle p, {\hat{p}}\rangle }{\Vert p\Vert _{1}+\Vert {\hat{p}}\Vert _{1}} \end{aligned}$$
(10)

where \(p \in \{0,1\}^{n} \text{ and } 0 \le {\hat{p}} \le 1\). p and \({\hat{p}}\) are the ground truth and predicted segmentation and \({\langle \cdot ,\cdot \rangle }\) denotes dot product.
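
A minimal sketch of Eq. 10 for binary segmentation with flattened tensors is given below; as in V-Net, the quantity actually minimized during training is typically one minus this soft Dice score, and a small \(\epsilon\) is added to avoid division by zero (an implementation detail not present in Eq. 10).

```python
import torch

def soft_dice(p, p_hat, eps=1e-8):
    """Soft Dice score of Eq. 10 for flattened ground truth p and predictions p_hat."""
    intersection = torch.sum(p * p_hat)                         # <p, p_hat>
    return 2 * intersection / (torch.sum(p) + torch.sum(p_hat) + eps)

p = torch.tensor([1., 1., 0., 0.])        # binary ground truth mask (flattened)
p_hat = torch.tensor([0.9, 0.6, 0.3, 0.1])  # predicted foreground probabilities
dice_loss = 1 - soft_dice(p, p_hat)       # value to minimize during training
```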

4.4.2 Tversky loss

Tversky loss (TL) (Salehi et al. 2017) is a generalization of the DL. To control the level of FP and FN, TL weights them as the following:

$$\begin{aligned} \mathrm {TL}(p, {\hat{p}})=\frac{\langle p, {\hat{p}} \rangle }{\langle p, {\hat{p}}\rangle + \beta \langle 1-p, {\hat{p}}\rangle +(1-\beta )\langle p, 1-{\hat{p}}\rangle } \end{aligned}$$
(11)

Setting \(\beta = 0.5\) simplifies the equation to Eq. 10.
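
A minimal sketch of Eq. 11 with flattened tensors is given below; \(\beta\) weights the false positive term \(\langle 1-p, {\hat{p}}\rangle\) and \(1-\beta\) the false negative term \(\langle p, 1-{\hat{p}}\rangle\).

```python
import torch

def tversky_index(p, p_hat, beta=0.5, eps=1e-8):
    """Tversky index of Eq. 11; beta = 0.5 recovers the soft Dice score of Eq. 10."""
    tp = torch.sum(p * p_hat)          # <p, p_hat>
    fp = torch.sum((1 - p) * p_hat)    # <1 - p, p_hat>
    fn = torch.sum(p * (1 - p_hat))    # <p, 1 - p_hat>
    return tp / (tp + beta * fp + (1 - beta) * fn + eps)
```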

4.4.3 Exponential logarithmic loss

Wong et al. (2018) proposed using a weighted sum of the exponential logarithmic Dice loss (\({\mathcal {L}}_{\mathrm {eld}}\)) and the weighted exponential cross-entropy loss (\({\mathcal {L}}_{\mathrm {wece}}\)) in order to improve the segmentation accuracy on small structures for tasks where there is a large variability among the sizes of the objects to be segmented.

$$\begin{aligned} {\mathcal {L}} = w_{\mathrm {eld}}{\mathcal {L}}_{\mathrm {eld}} + w_{\mathrm {wece}}{\mathcal {L}}_{\mathrm {wece}}, \end{aligned}$$
(12)

where

$$\begin{aligned} {\mathcal {L}}_{\mathrm {eld}}= & {} {\mathbf{E }}\left[ \left( -\ln {(D_i)} \right) ^{\gamma _D} \right] , \ \text {and} \end{aligned}$$
(13)
$$\begin{aligned} {\mathcal {L}}_{\mathrm {wece}}= & {} {\mathbf{E }}\left[ \left( -\ln {(p_l(\mathbf{x }))} \right) ^{\gamma _{CE}} \right] . \end{aligned}$$
(14)

x, i, and l denote the pixel position, the predicted label, and the ground truth label, respectively. \(D_i\) denotes the smoothed Dice loss (obtained by adding an \(\epsilon = 1\) term to the numerator and denominator in Eq. 10 in order to handle missing labels while training), and \(\gamma _D\) and \(\gamma _{CE}\) are used to control the non-linearities of the respective loss functions.

4.4.4 Lovász-softmax loss

Since it has been shown that the Jaccard loss (IoU loss) is submodular (Berman et al. 2018a), Berman et al. (2018b) proposed using the Lovász hinge with the Jaccard loss for binary segmentation, and proposed a surrogate of the Jaccard loss, called the Lovász-Softmax loss, which can be applied for the multi-class segmentation task. The Lovász-Softmax loss is, therefore, a smooth extension of the discrete Jaccard loss, and is defined as

$$\begin{aligned} {\mathcal {L}}_{\mathrm {LovaszSoftmax}} = \dfrac{1}{|{\mathcal {C}}|} \sum _{c\in {\mathcal {C}}}\overline{\Delta _{J_c}}\left( \varvec{m}(c)\right) , \end{aligned}$$
(15)

where \({\Delta _{J_c}}\left( \cdot \right)\) denotes the convex closure of the submodular Jaccard loss, \({\overline{\cdot }}\) denotes that it is a tight convex closure and polynomial time computable, \({\mathcal {C}}\) denotes all the classes, and \({J_c}\) and \(\varvec{m}(c)\) denote the Jaccard index and the vector of errors for class c respectively.

4.4.5 Boundary loss

Kervadec et al. (2019a) proposed to calculate boundary loss \({\mathcal {L}}_{B}\) along with the generalized Dice loss \({\mathcal {L}}_{GD}\) function as

$$\begin{aligned} \alpha {\mathcal {L}}_{GD}(\theta )+(1-\alpha ) {\mathcal {L}}_{B}(\theta ), \end{aligned}$$
(16)

where the two terms in the loss function are defined as

$$\begin{aligned} {\mathcal {L}}_{G D}(\theta )= & {} 1 \ - 2\dfrac{ w_{G} \sum _{p \in \Omega } g(p) s_{\theta }(p) + w_{B} \sum _{p \in \Omega }(1-g(p))\left( 1-s_{\theta }(p)\right) }{ w_{G} \sum _{p \in \Omega }\left[ s_{\theta }(p)+g(p)\right] + w_{B} \sum _{p \in \Omega }\left[ 2-s_{\theta }(p)-g(p)\right] }, \ \text {and} \end{aligned}$$
(17)
$$\begin{aligned} {\mathcal {L}}_{B}(\theta )= & {} \sum _{p \in \Omega } \phi _{G}(p) s_{\theta }(p), \end{aligned}$$
(18)

where \(\phi _{G}(p)=-\left\| p-z_{\partial G}(p)\right\|\) if \(p \in G\) and \(\phi _{G}(p)=\left\| p-z_{\partial G}(p)\right\|\) otherwise, with \(z_{\partial G}(p)\) the point on the boundary \(\partial G\) closest to p. The general form of the regional integral is \(\sum _{\Omega } g(p) f\left( s_{\theta }(p)\right)\) for the foreground and \(\sum _{\Omega }(1-g(p)) f\left( 1-s_{\theta }(p)\right)\) for the background, with \(w_{G}=1 /\left( \sum _{p \in \Omega } g(p)\right) ^{2}\) and \(w_{B}=1 /\left( \sum _{\Omega }(1-g(p))\right) ^{2}\). \(\Omega\) denotes the spatial domain.

4.4.6 Conservative loss

Zhu et al. (2018) proposed the Conservative Loss in order to achieve a good generalization ability in domain adaptation tasks by penalizing the extreme cases and encouraging the moderate cases. The Conservative Loss is defined as

$$\begin{aligned} CL(p_t) = \lambda (1 + \log _a(p_t))^2 * \log _a(-\log _a(p_t)), \end{aligned}$$
(19)

where \(p_t\) is the probability of the prediction towards the ground truth and a is the base of the logarithm. a and \(\lambda\) are empirically chosen to be e (Euler’s number) and 5 respectively.

Other works also include approaches to optimize the segmentation metrics (Nowozin 2014), weighting the loss function (Roy et al. 2017), and adding regularizers to loss functions to encode geometrical and topological shape priors (BenTaieb and Hamarneh 2016; Mirikharaji and Hamarneh 2018).

A significant problem in image segmentation (particularly of medical images) is class imbalance, which overlap measure based loss functions have shown reasonably good performance in overcoming. In Sect. 5, we summarize the approaches which use new loss functions, particularly for medical image segmentation, or use the (modified) loss functions mentioned above.

In Fig. 12, we visualize the behavior of different loss functions for segmenting large and small objects. For the parameters of the loss functions, we use the same values as reported by the authors in their respective papers. Therefore, we use \(\beta =0.3\) in Eq. 11, \(\alpha =0.25\) and \(\gamma =2\) in Eq. 7, and \(\gamma _D = \gamma _{CE} = 1\), \(w_{\mathrm {eld}}=0.8\), and \(w_{\mathrm {wece}}=0.2\) in Eq. 12. Moving from left to right in each plot, the overlap between the predictions and the ground truth mask becomes progressively smaller, i.e., producing more false positives and false negatives. Ideally, the loss value should monotonically increase as more false positives and negatives are predicted. For large objects, almost all the functions follow this assumption; however, for small objects (right plot), only the combo loss and the focal loss penalize monotonically more for larger errors. In other words, the overlap-based functions fluctuate highly when segmenting small versus large objects (also see Fig. 13), which results in unstable optimization. The loss functions which use cross-entropy as the base and the overlap measure functions as a weighted regularizer show more stability during training.

Fig. 12

A comparison of seven loss functions for different extents of overlap for a large (left) and a small (right) object

Fig. 13

Comparison of cross entropy and Dice losses for segmenting small and large objects. The red pixels show the ground truth and the predicted foregrounds in the left and right columns respectively. The striped and the pink pixels indicate false negative and false positive, respectively. For the top row (i.e., large foreground), the Dice loss returns 0.96 for one false negative and for the bottom row (i.e., small object) returns 0.66 for one false negative, whereas the cross entropy loss function outputs 0.83 for both the cases. By considering a false negative and false positive, the output value drops even more in case of using Dice but the cross entropy stays smooth (i.e., Dice value of 0.93 and 0.50 for large and small object versus cross entropy loss value of 1.66 for both.)

5 Optimization function based improvements applied to medical images

The standard CE loss function and its weighted versions, as discussed in Sect. 4, have been applied to numerous medical image segmentation problems (Isensee et al. 2019; Li et al. 2019b; Lian et al. 2018; Ni et al. 2019; Nie et al. 2018; Oktay et al. 2018; Schlemper et al. 2019). However, Milletari et al. (2016) found that optimizing CNNs for DL (Eq. 10) in some cases, e.g., in the case of having very small foreground objects in a large background, works better than the original cross-entropy.

Li et al. (2019c) proposed adding the following regularization term to the cross entropy loss function to encourage smooth segmentation outputs.

$$\begin{aligned} R=\sum _{i=1}^{N} {\mathbb {E}}_{\xi ^{\prime }, \xi }\left\| f\left( x_{i} ; \theta , \xi ^{\prime }\right) -f\left( x_{i} ; \theta , \xi \right) \right\| ^{2} \end{aligned}$$
(20)

where \(\xi ^{\prime }\) and \(\xi\) are different perturbations (e.g., Gaussian noise, network dropout, and randomized data transformations) applied to the input image \(x_i\).

Chen et al. (2019) proposed incorporating traditional active contour energy minimization into CNN training via the following loss function.

$$\begin{aligned} {\text {Loss}}= & {} {\text {Length}}+\lambda \cdot {\text {Region}} \end{aligned}$$
(21)
$$\begin{aligned} \text{ Length }= & {} \sum _{\Omega }^{i=1, j=1} \sqrt{\left| \left( \nabla u_{x_{i, j}}\right) ^{2}+\left( \nabla u_{y_{i, j}}\right) ^{2}\right| +\epsilon } \end{aligned}$$
(22)

where the subscripts x and y in \(u_{x_{i, j}}\) and \(u_{y_{i, j}}\) denote the horizontal and vertical directions, respectively.

$$\begin{aligned} \begin{aligned} \text{ Region }=\left| \sum _{\Omega }^{i=1, j=1} u_{i, j}\left( c_{1}-v_{i, j}\right) ^{2}\right| +\left| \sum _{\Omega }^{i=1, j=1}\left( 1-u_{i, j}\right) \left( c_{2}-v_{i, j}\right) ^{2}\right| \end{aligned} \end{aligned}$$
(23)

where u and v represent the prediction and the given image, respectively, \(c_1\) is set to 1, and \(c_2\) to 0. Similar to Li et al. (2019c), Zhou et al. (2019a) proposed adding a contour regression term to the weighted cross entropy loss function.

Karimi and Salcudean (2019) optimized a Hausdorff distance based loss function between the predicted and ground truth segmentations as follows.

$$\begin{aligned} f_{\mathrm {HD}}(p, q)={\text {Loss}}(p, q)+\lambda \left( 1-\frac{2 \sum _{\Omega }(p \circ q)}{\sum _{\Omega }\left( p^{2}+q^{2}\right) }\right) \end{aligned}$$
(24)

where the second term is the Dice loss function, p and q are the ground truth and predicted segmentations, respectively, and the first term can be replaced with one of the following three versions of the Hausdorff distance:

$$\begin{aligned} {\text {Loss}}(q, p)=\frac{1}{|\Omega |} \sum _{\Omega }\left( (p-q)^{2} \circ \left( d_{p}^{\alpha }+d_{q}^{\alpha }\right) \right) \end{aligned}$$
(25)

The parameter \(\alpha\) determines the level of penalty for larger errors. \(d_p\) is the distance map of the ground-truth segmentation, i.e., the unsigned distance to the boundary \(\delta p\). Similarly, \(d_q\) is defined as the distance to \(\delta q\). \(\circ\) denotes the Hadamard (element-wise) product.

$$\begin{aligned} {\text {Loss}}(q, p)=\frac{1}{|\Omega |} \sum _{k=1}^{K} \sum _{\Omega }\left( (p-q)^{2} \ominus _{k} B\right) k^{\alpha } \end{aligned}$$
(26)

where \(\ominus _{k}\) denotes k successive erosions with the kernel

$$\begin{aligned} B= & {} \left( \begin{array}{ccc}{0} &{}\quad {1 / 5} &{}\quad {0} \\ {1 / 5} &{}\quad {1 / 5} &{}\quad {1 / 5} \\ {0} &{}\quad {1 / 5} &{}\quad {0}\end{array}\right) \end{aligned}$$
(27)
$$\begin{aligned} {\text {Loss}}(q, p)= & {} \frac{1}{|\Omega |} \sum _{r \in R} r^{\alpha } \sum _{\Omega }\left[ f_{s}\left( B_{r} * {\overline{p}}^{C}\right) \circ f_{{\overline{q}} \backslash {\overline{p}}}\right. + {f_{s}\left( B_{r} * {\overline{p}}\right) \circ f_{{\overline{p}} \backslash {\overline{q}}}} \nonumber \\&+ {f_{s}\left( B_{r} * {\overline{q}}^{C}\right) \circ f_{{\overline{p}} \backslash {\overline{q}}}} + {f_{s}\left( B_{r} * {\overline{q}}\right) \circ f_{{\overline{q}} \backslash {\overline{p}}} ]} \end{aligned}$$
(28)

where \(f_{{\overline{q}} \backslash {\overline{p}}}=(p-q)^{2} q\). \(f_s\) indicates soft thresholding. \(B_r\) denotes a circular-shaped convolutional kernel of radius r, whose elements are normalized such that they sum to one. \({\overline{p}}^{C}=1-{\overline{p}}\). The ground truth and predicted segmentations are denoted by \({\overline{p}}\) and \({\overline{q}}\), respectively.

Caliva et al. (2019) proposed to measure the distance of each voxel to the object boundaries and use the resulting weight matrices to penalize a model for errors on the boundaries. Kim and Ye (2019) proposed using level-set energy minimization as a regularizer summed with the standard multi-class cross entropy loss function for semi-supervised brain MRI segmentation as:

$$\begin{aligned} \begin{aligned} {\mathcal {L}}_{\text{ level }}(\Theta ; x)= \sum _{n=1}^{N} \int _{\Omega }\left| x(r)-c_{n}^{\Theta }\right| ^{2} y_{n}^{\Theta }(r) d r +\lambda \sum _{n=1}^{N} \int _{\Omega }\left| \nabla y_{n}^{\Theta }(r)\right| d r \end{aligned} \end{aligned}$$
(29)

with

$$\begin{aligned} c_{n}^{\Theta }=\frac{\int _{\Omega } x(r) y_{n}^{\Theta }(r) d r}{\int _{\Omega } y_{n}^{\Theta }(r) d r} \end{aligned}$$
(30)

where x(r) is the input, \(y_{n}^{\Theta }(r)\) is the output of the softmax layer, and \(\Theta\) refers to the learnable parameters.

Taghanaki et al. (2019e) discussed the risks of using solo overlap based loss functions and proposed to use them as regularizers along with a weighted cross entropy loss to explicitly handle input and output imbalance, as follows:

$$\begin{aligned} Combo \ Loss= & {} \alpha \biggl (-\frac{1}{N} \sum _{i=1}^{N} \Bigl [ \beta \left( t_i \ln p_i \right) + \left( 1-\beta \right) \left( 1-t_i\right) \ln \left( 1-p_i \right) \Bigr ] \biggr ) \nonumber \\&- \left( 1-\alpha \right) \sum _{j=1}^{K} \left( \frac{2\sum _{i=1}^{N} p_i t_i + S}{\sum _{i=1}^{N} p_i + \sum _{i=1}^{N} t_i + S} \right) \end{aligned}$$

where \(\alpha\) controls the amount of the Dice term's contribution to the loss function L, and \(\beta \in [0,1]\) controls the level of model penalization for false positives/negatives: when \(\beta\) is set to a value smaller than 0.5, FP are penalized more than FN as the term \((1-t_{i}) \ \ln \ (1-p_{i})\) is weighted more heavily, and vice versa. In their implementation, to prevent division by zero, the authors perform add-one smoothing (a specific instance of additive/Laplace/Lidstone smoothing; Russell and Norvig 2016), i.e., they add a unity constant S to both the numerator and denominator of the Dice term.
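
A minimal sketch of Eq. 31 for binary segmentation with flattened tensors is given below; the values of \(\alpha\) and \(\beta\) are illustrative.

```python
import torch

def combo_loss(t, p, alpha=0.5, beta=0.5, S=1.0, eps=1e-8):
    """Combo loss of Eq. 31: weighted cross entropy plus a (negated) smoothed Dice term."""
    # weighted cross entropy term; beta < 0.5 penalizes FP more, beta > 0.5 penalizes FN more
    wce = -(beta * t * torch.log(p + eps) +
            (1 - beta) * (1 - t) * torch.log(1 - p + eps)).mean()
    # smoothed Dice term (add-one smoothing with constant S)
    dice = (2 * torch.sum(p * t) + S) / (torch.sum(p) + torch.sum(t) + S)
    return alpha * wce - (1 - alpha) * dice
```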

The majority of the methods discussed in Sect. 5 have attempted to handle the class imbalance issue in the input images, i.e., a small foreground versus a large background, by introducing weights/penalty terms in the loss function. Other approaches consist of first identifying the object of interest, cropping around this object, and then performing the task (e.g., segmentation) with better-balanced classes. This type of cascade approach has been applied for the segmentation of multiple sclerosis lesions in the spinal cord (Gros et al. 2019).

6 Image synthesis based methods

Deep CNNs are heavily reliant on big data to avoid overfitting and class imbalance issues, and therefore this section focuses on data augmentation, a data-space solution to the problem of limited data. Apart from standard online image augmentation methods such as geometric transformations (LeCun et al. 1998; Simard et al. 2003; Cireşan et al. 2011, 2012; Krizhevsky et al. 2012), color space augmentations (Galdran et al. 2017; Yuan 2017; Abhishek et al. 2020), etc., in this section, we discuss image synthesis methods, the output of which are novel images rather than modifications to existing images. GANs based augmentation techniques for segmentation tasks have been used for a wide variety of problems - from remote sensing imagery (Mohajerani et al. 2019) to filamentary anatomical structures (Zhao et al. 2017a). For a more detailed review of image augmentation strategies in deep learning, we direct the interested readers to Shorten and Khoshgoftaar (2019).

6.1 Image synthesis based methods applied to natural image segmentation

Neff et al. (2018) trained a Wasserstein GAN with gradient penalty (Gulrajani et al. 2017) to generate labeled image data in the form of image-segmentation mask pairs. They evaluated their approach on a dataset of chest X-ray images and the Cityscapes dataset, and found that the WGAN-GP was able to generate images with sufficient variety and that a segmentation model trained using GAN-based augmentation alone was able to perform better than a model trained with geometric transformation based augmentation. Cherian and Sullivan (2019) proposed to incorporate semantic consistency into the image-to-image translation task by introducing segmentation functions in the GAN architecture and showed that semantic segmentation models trained with synthetic images led to considerable performance improvements. Other works include GAN-based data augmentation for domain adaptation (Huang et al. 2018; Choi et al. 2019) and panoptic data augmentation (Liu et al. 2019c). However, the majority of GAN based data augmentation has been applied to medical images (Shorten and Khoshgoftaar 2019). Next, we discuss GAN based image synthesis for augmentation in the field of medical image analysis.

6.2 Image synthesis based methods applied to medical image segmentation

Chartsias et al. (2017) used a conditional GAN to generate cardiac MR images from CT images. They showed that utilizing the synthetic data increased the segmentation accuracy and that using only the synthetic data led to only a marginal decrease in the segmentation accuracy. Similarly, Zhang et al. (2018c) proposed a GAN based volume-to-volume translation for generating MR volumes from corresponding CT volumes and vice versa. They showed that synthetic data improve segmentation performance on cardiovascular MRI volumes. Huo et al. (2018) proposed an end-to-end synthesis and segmentation network called EssNet to simultaneously synthesize CT images from unpaired MR images and segment CT splenomegaly on unlabeled CT images, and showed that their approach yielded better segmentation performance than even models trained using the manual CT labels. Abhishek and Hamarneh (2019) trained a conditional GAN to generate skin lesion images from and confined to binary masks, and showed that using the synthesized images led to a higher skin lesion segmentation accuracy. Zhang et al. (2018b) trained a GAN for translating between digitally reconstructed radiographs and X-ray images and achieved similar accuracy as supervised training in multi-organ segmentation. Shin et al. (2018) proposed a method to generate synthetic abnormal MRI images with brain tumors by training a GAN using two publicly available datasets of brain MRI. Similarly, other works (Han et al. 2019; Yang et al. 2018; Yu et al. 2018a) have leveraged GANs to synthesize brain MR images.

7 Weakly supervised methods

Collecting large-scale accurate pixel-level annotation is time-consuming and financially expensive. However, unlabeled and weakly-labeled images can be collected in large amounts in a relatively fast and cheap manner. As shown in Fig. 2, varying levels of supervision are possible when training deep segmentation models, from pixel-wise annotations (supervised learning) and image-level and bounding box annotations (semi-supervised learning) to no annotations at all (unsupervised learning), the last two of which comprise weak supervision. Therefore, a promising direction for semantic image segmentation is to develop weakly supervised segmentation models.

7.1 Weakly supervised methods applied to natural images

Kim and Hwang (2016) proposed a weakly supervised semantic segmentation network using unpooling and deconvolution operations, used feature maps from the deconvolution layers to learn scale-invariant features, and evaluated their model on the PASCAL VOC and chest X-ray image datasets. Lee et al. (2019) used dropout (Srivastava et al. 2014) to choose features at random during training and inference and combined the many different localization maps to generate a single localization map, effectively discovering relationships between locations in an image, and evaluated their proposed approach on the PASCAL VOC dataset.

7.2 Weakly supervised methods applied to medical images

The scarcity of richly annotated medical images is limiting supervised deep learning-based solutions to medical image analysis tasks (Perone and Cohen-Adad 2019), such as localizing discriminatory radiomic disease signatures. Therefore, it is desirable to leverage unsupervised and weakly supervised models. Kervadec et al. (2019b) introduced a differentiable term in the loss function for datasets with weakly supervised labels, which reduced the computational demand for training while also achieving performance almost similar to full supervision for the segmentation of cardiac images. Afshari et al. (2019) used a fully convolutional architecture along with a Mumford-Shah functional (Mumford and Shah 1989) inspired loss function to segment lesions from PET scans using only bounding box annotations as supervision. Mirikharaji et al. (2019) proposed to learn spatially adaptive weight maps to account for spatial variations in pixel-level annotations and used noisy annotations to train a segmentation model for skin lesions. Taghanaki et al. (2019d) proposed to learn spatial masks using only image-level labels by minimizing the mutual information between the input and the masks, while at the same time maximizing the mutual information between the masks and the image labels. Peng et al. (2019) proposed an approach to train a CNN with discrete constraints and regularization priors based on the alternating direction method of multipliers (ADMM). Perone and Cohen-Adad (2018) expanded the semi-supervised mean teacher (Tarvainen and Valpola 2017) approach to segmentation tasks on MRI data, and showed that it can bring important improvements in a realistic small data regime. In another work, Perone et al. (2019) extended the method of unsupervised domain adaptation using self-ensembling to the semantic segmentation task. They showed how this approach could improve the generalization of the models even when using a small amount of unlabeled data.

8 Multi-task models

Multi-task learning (Caruana 1997) refers to a machine learning approach where multiple tasks are learned simultaneously, and the learning efficiency and the model performance on each of the tasks are improved because of the existing commonalities across the tasks. For visual recognition tasks, it has been shown that there exist relations between various tasks in the task space (Zamir et al. 2018), and multi-task models can help exploit these relationships to improve performance on the related tasks.

8.1 Multi-task models applied to natural images

Bischke et al. (2019) proposed a cascaded multi-task loss to preserve boundary information from segmentation masks for segmenting building footprints and achieved state-of-the-art performance on an aerial image labeling task. He et al. (2017) extended Faster R-CNN (Ren et al. 2015) by adding a new branch to predict the object mask along with a class label and a bounding box, and the proposed model was called Mask R-CNN. Mask R-CNN has been used extensively for multi-task segmentation models for a wide range of application areas (Abdulla et al. 2017), such as adding sports fields to OpenStreetMap (Remillard 2018), detection and segmentation for surgery robots (SUYEgit 2018), understanding climate change patterns from aerial imagery of the Arctic (Zhang et al. 2018a), converting satellite imagery to maps (Mohanty 2018), detecting image forgeries (Wang et al. 2019d), and segmenting tree canopy (Zhao et al. 2018).

8.2 Multi-task models applied to medical images

Chaichulee et al. (2017) extended the VGG16 architecture (Simonyan and Zisserman 2014) to include a global average pooling layer for patient detection and a fully convolutional network for skin segmentation. The proposed model was evaluated on images from a clinical study conducted at a neonatal intensive care unit, and was robust to changes in lighting, skin tone, and pose. He et al. (2019) trained a U-Net (Ronneberger et al. 2015)-like encoder-decoder architecture to simultaneously segment thoracic organs from CT scans and perform global slice classification. Ke et al. (2019) trained a multi-task U-Net architecture to solve three tasks - separating wrongly connected objects, detecting class instances, and pixelwise labeling for each object, and evaluated it on a food microscopy image dataset. Other multi-task models have also been proposed for segmentation and classification for detecting manipulated faces in images and video (Nguyen et al. 2019) and diagnosis of breast biopsy images (Mehta et al. 2018) and mammograms (Le et al. 2019).

Mask R-CNN has also been used for segmentation tasks in medical image analysis such as automatically segmenting and tracking cell migration in phase-contrast microscopy (Tsai et al. 2019), detecting and segmenting nuclei from histological and microscopic images  (Johnson 2018; Vuola et al. 2019; Wang et al. 2019a, b), detecting and segmenting oral diseases (Anantharaman et al. 2018), segmenting neuropathic ulcers (Gamage et al. 2019), and labeling and segmenting ribs in chest X-rays (Wessel et al. 2019). Mask R-CNN has also been extended to work with 3D volumes and has been evaluated on lung nodule detection and segmentation from CT scans and breast lesion detection and categorization on diffusion MR images (Jaeger et al. 2018; Kopelowitz and Engelhard 2019).

9 Segmentation evaluation metrics and datasets

9.1 Evaluation metrics

The quantitative evaluation of segmentation models can be performed using pixel-wise and overlap based measures. For binary segmentation, pixel-wise measures involve the construction of a confusion matrix to calculate the number of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) pixels, and then calculate various metrics such as precision, recall (also known as sensitivity), specificity, and overall pixel-wise accuracy. They are defined as follows:

$$\begin{aligned} \text {Precision}= & {} \frac{TP}{TP + FP}, \end{aligned}$$
(32)
$$\begin{aligned} \text {Recall or Sensitivity}= & {} \frac{TP}{TP + FN}, \end{aligned}$$
(33)
$$\begin{aligned} \text {Specificity}= & {} \frac{TN}{TN + FP}, \ \ \ \ \text {and,} \end{aligned}$$
(34)
$$\begin{aligned} \text {Accuracy}= & {} \frac{TP+TN}{TP + TN + FP + FN}. \end{aligned}$$
(35)

Two popular overlap-based measures used to evaluate segmentation performance are the Sørensen–Dice coefficient (also known as the Dice coefficient) and the Jaccard index (also known as the intersection over union or IoU). Given two sets \({\mathcal {A}}\) and \({\mathcal {B}}\), these metrics are defined as:

$$\begin{aligned} \text {Dice coefficient}, \text {Dice}({\mathcal {A}},{\mathcal {B}})= & {} 2\ \frac{\left| {\mathcal {A}} \cap {\mathcal {B}}\right| }{\left| {\mathcal {A}}\right| + \left| {\mathcal {B}}\right| }, \ \ \ \ \text {and,} \end{aligned}$$
(36)
$$\begin{aligned} \text {Jaccard index}, \text {Jaccard}({\mathcal {A}},{\mathcal {B}})= & {} \frac{\left| {\mathcal {A}} \cap {\mathcal {B}}\right| }{\left| {\mathcal {A}} \cup {\mathcal {B}}\right| }. \end{aligned}$$
(37)

For binary segmentation masks, these overlap-based measures can also be calculated from the confusion matrix as shown in Eqs. 8 and 9 respectively. The two measures are related by:

$$\begin{aligned} \text {Jaccard} = \frac{\text {Dice}}{2 - \text {Dice}}. \end{aligned}$$
(38)
Fig. 14

A \(5 \times 5\) overlap scenario with a the ground truth, b the predicted binary masks, and c the overlap. In a and b, black and white pixels denote the foreground and the background respectively. In c, green, grey, blue, and red pixels denote TP, TN, FP, and FN pixels respectively

Figure 14 contains a simple overlap scenario, with the ground truth and the predicted binary masks with a spatial resolution \(5 \times 5\). Let black pixels denote the object to be segmented. The confusion matrix for this can be constructed as shown in Table 1. Using the expressions above, we can calculate the metrics as \(\text {precision} = \frac{7}{8}= 0.875\), \(\text {recall} = \frac{7}{10}= 0.7\), \(\text {specificity} = \frac{14}{15}= 0.9333\), \(\text {pixel-wise accuracy} = \frac{21}{25}= 0.84\), \(\text {Dice coefficient} = \frac{7}{9} = 0.7778\), and \(\text {Jaccard index} = \frac{7}{11}= 0.6364\).
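
As a check, these metrics can be computed directly from the confusion matrix counts of Fig. 14:

```python
# Confusion matrix counts for the overlap scenario in Fig. 14.
tp, tn, fp, fn = 7, 14, 1, 3

precision   = tp / (tp + fp)                     # 0.875
recall      = tp / (tp + fn)                     # 0.7
specificity = tn / (tn + fp)                     # 0.9333...
accuracy    = (tp + tn) / (tp + tn + fp + fn)    # 0.84
dice        = 2 * tp / (2 * tp + fp + fn)        # 0.7778...
jaccard     = tp / (tp + fp + fn)                # 0.6364...

assert abs(jaccard - dice / (2 - dice)) < 1e-12  # consistent with Eq. 38
```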

Table 1 Confusion matrix for the overlap scenario shown in Fig. 14

9.2 Semantic segmentation datasets for natural images

Next, we briefly discuss the most popular and widely used datasets for the semantic segmentation of natural images. These datasets cover various categories of scenes, such as indoor and outdoor environments, common objects, urban street view as well as generic scenes. For a comprehensive review of the natural image datasets that segmentation models are usually benchmarked upon, we direct the interested readers to Lateef and Ruichek (2019).

Table 2 A summary of papers for semantic segmentation of natural images applied to PASCAL VOC 2012 dataset
Table 3 A summary of medical image segmentation papers along with their type of proposed improvement
  • Pascal VOC datasets The PASCAL Visual Object Classes (VOC) Challenge (Everingham et al. 2010) was an annual challenge that ran from 2005 through 2012 and had annotations for several tasks such as classification, detection, and segmentation. The segmentation task was first introduced in the 2007 challenge and featured objects belonging to 20 classes. The last offering of the challenge, the PASCAL VOC 2012 challenge, contained segmentation annotations for 2913 images across 20 object classes (Everingham et al. 2015).

  • PASCAL Context The PASCAL Context dataset (Mottaghi et al. 2014) extended the PASCAL VOC 2010 Challenge dataset by providing pixel-wise annotations for the images, resulting in a much larger dataset with 19,740 annotated images and labels belonging to 540 categories.

  • Cityscapes The Cityscapes dataset (Cordts et al. 2016) contains annotated images of urban street scenes. The data was collected during daytime from 50 cities and exhibits variation across seasons and traffic conditions. Semantic, instance-wise, and dense pixel-wise annotations are provided, with ‘fine’ annotations for 5,000 images and ‘coarse’ annotations for 20,000 images.

  • ADE20K The ADE20K dataset (Zhou et al. 2017) contains 25,210 images from other existing datasets, e.g., the LabelMe (Russell et al. 2008), the SUN (Xiao et al. 2010), and the Places (Zhou et al. 2014) datasets. The images are annotated with labels belonging to 150 classes for “scenes, objects, parts of objects, and in some cases even parts of parts”.

  • CamVid The Cambridge-driving Labeled Video Database (CamVid) (Brostow et al. 2008, 2009) contains 10 min of video captured at 30 frames per second from a driving automobile’s perspective, along with pixel-wise semantic segmentation annotations for 701 frames and 32 object classes.

Table 2 lists a summary of selected papers from this review, the nature of their proposed contributions, and the datasets that they were evaluated on. For the papers that evaluated their models on the PASCAL VOC 2012 dataset (Everingham et al. 2012), one of the most popular semantic segmentation datasets for natural images, we also list their reported mean IoU scores. As can be seen in Table 2, the focus has mostly been on architectural improvements. Comparing the first deep learning-based model (FCN, Long et al. 2015) to the state-of-the-art model (DeepLabV3+, Chen et al. 2018b), there is a large improvement of \(\sim 27\%\) in mean IoU (from 62.2 to 89.0%). The latter model leverages a more sophisticated decoder, dilated convolutions, and feature pyramid pooling.

9.3 Semantic segmentation datasets for medical images

Fig. 15: Analyzing the attributes of the medical image segmentation papers discussed in this review. The large number of medical imaging modalities (b) as well as the smaller average dataset sizes for medical image segmentation datasets (c), as compared to natural images (discussed in Sect. 9.2), make it difficult to benchmark the performance of various approaches. In (b), PET (1.1%), OCT (0.6%), and topogram (0.6%) make up the ‘Other’ label

Fig. 16: The number of medical image segmentation challenges every year since 2007 listed on Grand Challenges (Challenge 2020), along with an imaging modality-wise breakdown. Note that for many challenges the data is multi-modal, and the breakdown takes that into account

In contrast to natural images, it is difficult to tabulate and summarize the performance of medical image segmentation methods because of the vast number of (a) medical imaging modalities and (b) medical image segmentation datasets. Figure 15 presents a breakdown of the various attributes of the medical image segmentation papers surveyed in this review, color-coded similarly to Fig. 1. As shown in Fig. 15b, the papers covered in this review use 13 medical imaging modalities. Figure 15c shows the distribution of the number of samples across datasets from multiple modalities. We observe that modalities that are expensive to acquire and annotate (such as electron microscopy (EM), PET, and MRI) have smaller dataset sizes than modalities that are relatively cheaper to acquire, such as RGB images (e.g., skin lesion images), ultrasound (US), and X-ray images. We also present a summary of the popular medical image segmentation papers in Table 3 and include the entire table in the Supplementary Material.

A similar observation can be made by looking at medical image segmentation competitions. Grand Challenges in Biomedical Image Analysis (Challenge 2020) provides an extensive, though not exhaustive, list of publicly available medical image segmentation challenges; since 2007, there have been 94 segmentation challenges for medical images and volumes covering as many as 12 imaging modalities. Figure 16 shows the number of these challenges for every year since 2007, and it can be seen that this number has been on the rise in the past few years.

10 Discussion and future directions

In the following subsections, we discuss in detail potential future research directions for the semantic segmentation of natural and medical images.

10.1 Architectures

Encoder-decoder networks with long and short skip connections are the winning architectures among the state-of-the-art methods. Skip connections in deep networks have improved both segmentation and classification performance by facilitating the training of deeper network architectures and reducing the risk of vanishing gradients. They equip encoder-decoder-like networks with richer feature representations, but at the cost of higher memory usage and computation, and they may transfer non-discriminative feature maps. Similar to Taghanaki et al. (2019c), one future direction is to optimize the amount of data being transferred through skip connections. As for cell-level architectural design, our study shows that atrous convolutions with feature pyramid pooling modules are widely used in recent models. These approaches are, in essence, modifications of classical convolution blocks. Similar to the radial basis function layers in Meyer et al. (2018) and Taghanaki et al. (2019a), a future focus can be designing new layers that capture a new aspect of the data, as opposed to convolutions, or that transform the convolutional features into a new manifold. Another useful research direction is using neural architecture search (Zoph and Le 2016) to search for optimal deep neural network architectures for segmentation (Liu et al. 2019a; Zhu et al. 2019; Shaw et al. 2019; Weng et al. 2019b).
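
As a rough illustration of the two architectural ingredients discussed above, the following PyTorch sketch combines a long skip connection with a dilated (atrous) convolution in a toy encoder-decoder. The layer and channel sizes are illustrative placeholders and do not correspond to any particular published architecture.

```python
import torch
import torch.nn as nn


class TinyEncoderDecoder(nn.Module):
    """A toy encoder-decoder with one long skip connection and one atrous convolution."""

    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        # Dilated (atrous) convolution enlarges the receptive field without further pooling.
        self.bottleneck = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=2, dilation=2), nn.ReLU()
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # The decoder sees 32 upsampled channels + 16 channels carried over by the skip.
        self.dec = nn.Sequential(nn.Conv2d(32 + 16, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                                      # encoder features, kept for the skip
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([self.up(b), e], dim=1))      # long skip connection (concatenation)
        return self.head(d)


logits = TinyEncoderDecoder()(torch.randn(1, 1, 64, 64))     # -> shape (1, 2, 64, 64)
```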

10.2 Sequenced models

For image segmentation, sequenced models can be used to segment temporal data such as videos. These models have also been applied to 3D medical datasets; however, the advantage of processing volumetric data using 3D convolutions versus processing the volume slice by slice using 2D sequenced models remains unclear. Ideally, seeing the whole object of interest in a 3D volume helps capture its geometrical information, which might be missed when the volume is processed slice by slice. Therefore, a future direction in this area can be a thorough analysis of sequenced models versus volumetric convolution-based approaches.
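
To make the comparison concrete, the sketch below contrasts the two options on a toy volume: a single volumetric convolution versus a shared 2D convolution applied slice by slice followed by a recurrent layer over the depth axis. The shapes and the naive flattening before the GRU are illustrative simplifications (practical sequenced models typically use convolutional recurrences), not an implementation from any cited work.

```python
import torch
import torch.nn as nn

vol = torch.randn(1, 1, 16, 64, 64)                          # (batch, channels, depth, H, W)

# (a) Volumetric: one 3D convolution sees the neighbourhood in x, y, and z jointly.
feat3d = nn.Conv3d(1, 8, kernel_size=3, padding=1)(vol)      # -> (1, 8, 16, 64, 64)

# (b) Sequenced: a shared 2D convolution per slice, then an RNN over the depth axis.
conv2d = nn.Conv2d(1, 8, kernel_size=3, padding=1)
slices = [conv2d(vol[:, :, z]) for z in range(vol.shape[2])] # 16 tensors of shape (1, 8, 64, 64)
seq = torch.stack(slices, dim=1).flatten(2)                  # (1, 16, 8*64*64), one step per slice
out, _ = nn.GRU(input_size=8 * 64 * 64, hidden_size=128, batch_first=True)(seq)
# out: (1, 16, 128) -- one feature vector per slice, informed by neighbouring slices
```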

10.3 Optimization functions

In medical image segmentation works, researchers have converged toward using the classical cross-entropy loss function along with a second distance- or overlap-based function. Incorporating domain/prior knowledge (such as explicitly encoding the location of different organs in a deep model) is more feasible for medical datasets. As shown in Taghanaki et al. (2019e), when only a distance-based or overlap-based loss function is used in a network whose final layer applies a sigmoid function, the risk of vanishing gradients increases. Although overlap-based loss functions are used in cases of class imbalance (small foregrounds), in Fig. 13 we show how using (only) an overlap-based loss function as the main term can be problematic for smooth optimization, since it heavily penalizes a model that under- or over-segments a small foreground, whereas the cross-entropy loss returns a reasonable score for the same cases. Besides using integrated cross-entropy based loss functions, future work can explore a single loss function that follows the behavior of cross-entropy and, at the same time, offers more features such as capturing contour distance. This can be achieved by revisiting the current distance- and overlap-based loss functions. Another future path can be exploring automatic loss function (or regularization term) search, similar to the neural architecture search mentioned above. Similarly, gradient-based optimizations built on Sobolev (Adams and Fournier 2003) gradients (Czarnecki et al. 2017), such as the works of Goceri (2019b, 2020), are an interesting research direction.
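
As a minimal sketch of such an integrated objective, the following PyTorch function combines binary cross-entropy with a soft (differentiable) Dice term. The weighting factor `alpha` is a hypothetical hyper-parameter rather than a value recommended by any of the works cited above.

```python
import torch
import torch.nn.functional as F


def bce_soft_dice_loss(logits, target, alpha=0.5, eps=1e-6):
    """Cross-entropy as the main term plus a soft Dice term for small foregrounds.

    logits, target: tensors of shape (N, 1, H, W); target contains values in {0, 1}.
    """
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    soft_dice = (2 * inter + eps) / (union + eps)          # per-sample soft Dice in [0, 1]
    return alpha * bce + (1 - alpha) * (1 - soft_dice.mean())


loss = bce_soft_dice_loss(torch.randn(2, 1, 64, 64),
                          torch.randint(0, 2, (2, 1, 64, 64)).float())
```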

10.4 Other potential directions

  • Going beyond pixel intensity-based scene understanding by incorporating prior knowledge, which has been an active area of research for the past several decades (Nosrati and Hamarneh 2016; Xie et al. 2020). Encoding prior knowledge in medical image analysis models is generally more feasible than for natural images. Currently, deep models receive matrices of intensity values and are usually not forced to learn prior information. Without explicit reinforcement, the models might still learn object relations to some extent, but it is difficult to interpret such a learned strategy.

  • Because of the large number of imaging modalities, the significant signal noise present in modalities such as PET and ultrasound, and the limited amount of medical imaging data (mainly because of high acquisition cost compounded by legal, ethical, and privacy issues), it is difficult to develop universal solutions that yield acceptable performance across various imaging modalities. Therefore, a promising research direction would be to follow the work of Raghu et al. (2019) on image classification models and study the risks of using non-medical pre-trained models for medical image segmentation.

  • Creating large 2D and 3D publicly available medical benchmark datasets for semantic image segmentation such as the Medical Segmentation Decathlon (Simpson et al. 2019). Medical imaging datasets are typically much smaller in size than natural image datasets (Jin et al. 2020), and the curation of larger public datasets for medical imaging modalities will allow researchers to accurately compare proposed approaches and make incremental improvements for specific datasets and problems.

  • A possible solution to address the paucity of sufficient annotated medical data is the development and use of physics-based imaging simulators, the outputs of which can be used to train segmentation models and augment existing segmentation datasets. Several platforms (Marion et al. 2011; Glatard et al. 2013) as well as simulators already exist for various imaging modalities, such as SIMRI (Benoit-Cattin et al. 2005) and POSSUM (Drobnjak et al. 2006, 2010) for magnetic resonance imaging (MRI), PET-SORTEO (Reilhac et al. 2005) and SimSET (Harrison and Lewellen 2012) for emission tomography, SINDBAD (Tabary et al. 2007) for computed tomography (CT), and FIELD-II (Jensen and Svendsen 1992; Jensen 1996) and SIMUS (Shahriari and Garcia 2018) for ultrasound imaging, as well as simulators for specific anatomical regions of interest, such as VascuSynth (Hamarneh and Jassi 2010) for vascular trees.

  • Medical images, both 2D and volumetric, generally have larger file sizes than natural images, which inhibits the ability to load them entirely into memory for processing. As such, they need to be processed either as patches or sub-volumes, making it difficult for segmentation models to capture the spatial relationships needed for accurate segmentation. Therefore, an interesting and potentially very useful research direction would be devising architectures and training methods that can incorporate spatial relationships from large medical images and volumes into the models.

  • Exploring reinforcement learning approaches similar to Song et al. (2018) and Wang et al. (2018c) for semantic (medical) image segmentation to mimic the way humans delineate objects of interest. Deep CNNs are successful in extracting features of different classes of objects, but they lose the local spatial information of where the borders of an object should be. Some researchers resort to traditional computer vision methods such as conditional random fields (CRFs) to overcome this problem, which, however, add more computation time to the models.

  • Studying the causes for some models and datasets being prone to false positive and false negative predictions in the image segmentation context as found by Berman et al. (2018b) and Taghanaki et al. (2019e).

  • Exploring segmentation-free approaches (Zhen and Li 2015; Hussain et al. 2017; Taghanaki et al. 2018; Mukherjee et al. 2019; Proenca and Neves 2019), i.e., bypassing the segmentation step according to the target problem.

  • Weakly supervised segmentation using image-level labels versus a few images with segmentation annotations. Most new weakly supervised localization methods apply attention maps or region proposals in a multiple instance learning formulation. While attention maps can be noisy, leading to erroneously highlighted regions, it is also not simple to decide on an optimal window or bag size for multiple instance learning approaches.

  • While most deep segmentation models for medical image analysis rely on only clinical images for their predictions, there is often multi-modal patient data, in the form of other imaging modalities as well as patient metadata, that can provide valuable information but that most deep segmentation models do not use. Therefore, a valuable research direction for improving segmentation performance of medical images would be to develop models which are able to leverage multi-modal patient data.

  • Modifying the input instead of the model or the loss function, or adding more training data. Drozdzal et al. (2018) showed that attaching a pre-processing module at the beginning of a segmentation network improves the network performance. Taghanaki et al. (2019b) leveraged the gradients of a trained segmentation network with respect to the input to transfer the input to a new space where the segmentation accuracy improves (a minimal sketch of how such input gradients can be obtained is given after this list).

  • Deep neural networks are trained using error backpropagation (Rumelhart et al. 1986) and gradient descent for optimizing the network weights. However, there have been many neural network optimization techniques which do not rely on backpropagation, such as credit assignment (Bengio and Frasconi 1994), neuroevolution (Stanley and Miikkulainen 2002), difference target propagation (Lee et al. 2015), training with local error signals (Nøkland and Eidnes 2019) and several other techniques (Amit 2019; Bellec et al. 2019; Ma et al. 2019). Exploring these and similar other techniques to optimize deep neural networks for semantic segmentation would be another valuable research direction.
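
Regarding the input-modification direction above, the following sketch shows only how the gradient of a segmentation loss with respect to the input image can be obtained for a trained, frozen model; how that gradient is then used to map the input to a better space is specific to Taghanaki et al. (2019b) and is not reproduced here. The `model`, `image`, and `target` arguments are placeholders for any segmentation network and its data.

```python
import torch
import torch.nn.functional as F


def input_gradient(model, image, target):
    """Return d(loss)/d(input) for a trained segmentation model (a minimal sketch)."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)   # track gradients w.r.t. the pixels
    logits = model(image)
    loss = F.binary_cross_entropy_with_logits(logits, target)
    loss.backward()
    return image.grad                                      # same shape as the input image
```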