1 Introduction

The goal of object detection is to predict a set of bounding boxes and category labels for each object of interest. Modern detectors address this set prediction task in an indirect way, by defining surrogate regression and classification problems on a large set of proposals  [5, 36], anchors  [22], or window centers  [45, 52]. Their performance is significantly influenced by postprocessing steps that collapse near-duplicate predictions, by the design of the anchor sets, and by the heuristics that assign target boxes to anchors  [51]. To simplify these pipelines, we propose a direct set prediction approach that bypasses the surrogate tasks. This end-to-end philosophy has led to significant advances in complex structured prediction tasks such as machine translation or speech recognition, but not yet in object detection: previous attempts [4, 15, 38, 42] either add other forms of prior knowledge, or have not proven to be competitive with strong baselines on challenging benchmarks. This paper aims to bridge this gap.

We streamline the training pipeline by viewing object detection as a direct set prediction problem. We adopt an encoder-decoder architecture based on transformers [46], a popular architecture for sequence prediction. The self-attention mechanisms of transformers, which explicitly model all pairwise interactions between elements in a sequence, make these architectures particularly suitable for specific constraints of set prediction such as removing duplicate predictions.

Our DEtection TRansformer (DETR, see Fig. 1) predicts all objects at once, and is trained end-to-end with a set loss function which performs bipartite matching between predicted and ground-truth objects. DETR simplifies the detection pipeline by dropping multiple hand-designed components that encode prior knowledge, such as spatial anchors or non-maximum suppression. Unlike most existing detection methods, DETR does not require any customized layers, and thus can be reproduced easily in any framework that provides standard ResNet  [14] and Transformer  [46] classes.

Compared to most previous work on direct set prediction, the main features of DETR are the conjunction of the bipartite matching loss and transformers with (non-autoregressive) parallel decoding [7, 9, 11, 28]. In contrast, previous work focused on autoregressive decoding with RNNs [29, 35, 40, 41, 42]. Our matching loss function uniquely assigns a prediction to a ground truth object, and is invariant to a permutation of predicted objects, so we can emit them in parallel.

Fig. 1.

DETR directly predicts (in parallel) the final set of detections by combining a common CNN with a transformer architecture. During training, bipartite matching uniquely assigns predictions with ground truth boxes. Predictions with no match should yield a “no object” (\(\varnothing \)) class prediction.

We evaluate DETR on one of the most popular object detection datasets, COCO  [23], against a very competitive Faster R-CNN baseline  [36]. Faster R-CNN has undergone many design iterations and its performance has improved greatly since the original publication. Our experiments show that our new model achieves comparable performance. More precisely, DETR demonstrates significantly better performance on large objects, a result likely enabled by the non-local computations of the transformer. It obtains, however, lower performance on small objects. We expect that future work will improve this aspect in the same way the development of FPN  [21] did for Faster R-CNN.

Training settings for DETR differ from standard object detectors in multiple ways. The new model requires an extra-long training schedule and benefits from auxiliary decoding losses in the transformer. We thoroughly explore which components are crucial for the demonstrated performance.

The design ethos of DETR extends easily to more complex tasks. In our experiments, we show that a simple segmentation head trained on top of a pre-trained DETR outperforms competitive baselines on Panoptic Segmentation  [18], a challenging pixel-level recognition task that has recently gained popularity.

2 Related Work

Our work builds on prior work in several domains: bipartite matching losses for set prediction, encoder-decoder architectures based on the transformer, parallel decoding, and object detection methods.

2.1 Set Prediction

There is no canonical deep learning model to directly predict sets. The basic set prediction task is multilabel classification (see e.g., [32, 39] for references in the context of computer vision) for which the baseline approach, one-vs-rest, does not apply to problems such as detection where there is an underlying structure between elements (i.e., near-identical boxes). The first difficulty in these tasks is to avoid near-duplicates. Most current detectors use postprocessing such as non-maximum suppression to address this issue, but direct set prediction is postprocessing-free. It requires global inference schemes that model interactions between all predicted elements to avoid redundancy. For constant-size set prediction, dense fully connected networks [8] are sufficient but costly. A general approach is to use auto-regressive sequence models such as recurrent neural networks [47]. In all cases, the loss function should be invariant to a permutation of the predictions. The usual solution is to design a loss based on the Hungarian algorithm [19], to find a bipartite matching between ground-truth and prediction. This enforces permutation-invariance, and guarantees that each target element has a unique match. We follow the bipartite matching loss approach. In contrast to most prior work however, we step away from autoregressive models and use transformers with parallel decoding, which we describe below.

2.2 Transformers and Parallel Decoding

Transformers were introduced by Vaswani et al.  [46] as a new attention-based building block for machine translation. Attention mechanisms [2] are neural network layers that aggregate information from the entire input sequence. Transformers introduced self-attention layers, which, similarly to Non-Local Neural Networks [48], scan through each element of a sequence and update it by aggregating information from the whole sequence. One of the main advantages of attention-based models is their global computations and perfect memory, which makes them more suitable than RNNs on long sequences. Transformers are now replacing RNNs in many problems in natural language processing, speech processing and computer vision   [7, 26, 30, 33, 44].

Transformers were first used in auto-regressive models, following early sequence-to-sequence models [43], generating output tokens one by one. However, the prohibitive inference cost (proportional to output length, and hard to batch) led to the development of parallel sequence generation, in the domains of audio [28], machine translation [9, 11], word representation learning [7], and more recently speech recognition [6]. We also combine transformers and parallel decoding for their suitable trade-off between computational cost and the ability to perform the global computations required for set prediction.

2.3 Object Detection

Most modern object detection methods make predictions relative to some initial guesses. Two-stage detectors  [5, 36] predict boxes w.r.t. proposals, whereas single-stage methods make predictions w.r.t. anchors  [22] or a grid of possible object centers  [45, 52]. Recent work  [51] demonstrates that the final performance of these systems heavily depends on the exact way these initial guesses are set. In our model we are able to remove this hand-crafted step and streamline the detection pipeline by directly predicting the set of detections with absolute box prediction w.r.t. the input image rather than an anchor.

Set-Based Loss. Several object detectors  [8, 24, 34] used the bipartite matching loss. However, in these early deep learning models, the relations between different predictions were modeled with convolutional or fully-connected layers only, and a hand-designed NMS post-processing can improve their performance. More recent detectors  [22, 36, 52] use non-unique assignment rules between ground truth and predictions together with an NMS.

Learnable NMS methods  [4, 15] and relation networks  [16] explicitly model relations between different predictions with attention. Using direct set losses, they do not require any post-processing steps. However, these methods employ additional hand-crafted context features like proposal box coordinates to model relations between detections efficiently, while we look for solutions that reduce the prior knowledge encoded in the model.

Recurrent Detectors. Closest to our approach are end-to-end set predictions for object detection [42] and instance segmentation [29, 35, 40, 41]. Similarly to us, they use bipartite-matching losses with encoder-decoder architectures based on CNN activations to directly produce a set of bounding boxes. These approaches, however, were only evaluated on small datasets and not against modern baselines. In particular, they are based on autoregressive models (more precisely RNNs), so they do not leverage the recent transformers with parallel decoding.

3 The DETR Model

Two ingredients are essential for direct set predictions in detection: (1) a set prediction loss that forces unique matching between predicted and ground truth boxes; (2) an architecture that predicts (in a single pass) a set of objects and models their relation. We describe our architecture in detail in Fig. 2.

3.1 Object Detection Set Prediction Loss

DETR infers a fixed-size set of N predictions, in a single pass through the decoder, where N is set to be significantly larger than the typical number of objects in an image. One of the main difficulties of training is to score predicted objects (class, position, size) with respect to the ground truth. Our loss produces an optimal bipartite matching between predicted and ground truth objects, and then optimizes object-specific (bounding box) losses.

Let us denote by y the ground truth set of objects, and \(\hat{y}= \{\hat{y}_i\}_{i=1}^{N}\) the set of N predictions. Assuming N is larger than the number of objects in the image, we consider y also as a set of size N padded with \(\varnothing \) (no object). To find a bipartite matching between these two sets we search for a permutation of N elements \(\sigma \in \mathfrak {S}_N\) with the lowest cost:

\(\hat{\sigma } = \mathop {\mathrm {arg\,min}}_{\sigma \in \mathfrak {S}_N} \sum _{i}^{N} \mathcal{L}_\mathrm{match}(y_i, \hat{y}_{\sigma (i)})\)    (1)

where \(\mathcal{L}_\mathrm{match}(y_i, \hat{y}_{\sigma (i)})\) is a pair-wise matching cost between ground truth \(y_i\) and a prediction with index \(\sigma (i)\). This optimal assignment is computed efficiently with the Hungarian algorithm, following prior work (e.g.   [42]).

The matching cost takes into account both the class prediction and the similarity of predicted and ground truth boxes. Each element i of the ground truth set can be seen as a \(y_i = (c_i, b_i)\) where \(c_i\) is the target class label (which may be \(\varnothing \)) and \(b_i \in [0, 1]^4\) is a vector that defines ground truth box center coordinates and its height and width relative to the image size. For the prediction with index \(\sigma (i)\) we define probability of class \(c_i\) as \(\hat{p}_{\sigma (i)}(c_i)\) and the predicted box as \(\hat{b}_{\sigma (i)}\). With these notations we define \(\mathcal{L}_\mathrm{match}(y_i, \hat{y}_{\sigma (i)})\) as \(-\mathbb {1}_{\{c_i\ne \varnothing \}}\hat{p}_{\sigma (i)}(c_i) + \mathbb {1}_{\{c_i\ne \varnothing \}}\mathcal{L}_\mathrm{box}(b_{i}, \hat{b}_{\sigma (i)})\).
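
For concreteness, here is a minimal sketch of this matching step for a single image, using SciPy's `linear_sum_assignment` (an implementation of the Hungarian algorithm). The function and argument names, as well as the `box_cost_fn` placeholder standing in for the box term defined below, are ours rather than the paper's reference implementation.

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_probs, pred_boxes, tgt_labels, tgt_boxes, box_cost_fn):
    """pred_probs: [N, num_classes + 1], pred_boxes: [N, 4] (normalized cx, cy, w, h),
    tgt_labels: [M], tgt_boxes: [M, 4], with M real objects and M <= N."""
    # -p_hat_{sigma(i)}(c_i): a higher probability for the target class lowers the cost.
    cost_class = -pred_probs[:, tgt_labels]              # [N, M]
    # Box term of the matching cost (L_box, defined below).
    cost_box = box_cost_fn(pred_boxes, tgt_boxes)        # [N, M]
    cost = (cost_class + cost_box).detach().cpu().numpy()
    # Matching only against the M real objects is equivalent to padding the targets
    # with "no object", since the cost of matching to an empty slot is constant.
    pred_idx, tgt_idx = linear_sum_assignment(cost)
    return torch.as_tensor(pred_idx), torch.as_tensor(tgt_idx)
```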

This procedure of finding the matching plays the same role as the heuristic assignment rules used to match proposals  [36] or anchors  [21] to ground truth objects in modern detectors. The main difference is that we need to find one-to-one matching for direct set prediction without duplicates.

The second step is to compute the loss function, the Hungarian loss for all pairs matched in the previous step. We define the loss similarly to the losses of common object detectors, i.e. a linear combination of a negative log-likelihood for class prediction and a box loss \(\mathcal{L}_\mathrm{box}(\cdot , \cdot )\) defined later:

\(\mathcal{L}_\mathrm{Hungarian}(y, \hat{y}) = \sum _{i=1}^{N} \left[ -\log \hat{p}_{\hat{\sigma }(i)}(c_{i}) + \mathbb {1}_{\{c_i\ne \varnothing \}} \mathcal{L}_\mathrm{box}(b_{i}, \hat{b}_{\hat{\sigma }(i)})\right] \)    (2)

where \(\hat{\sigma }\) is the optimal assignment computed in the first step (1). In practice, we down-weight the log-probability term when \(c_i=\varnothing \) by a factor 10 to account for class imbalance. This is analogous to how the Faster R-CNN training procedure balances positive/negative proposals by subsampling [36]. Notice that the matching cost between an object and \(\varnothing \) doesn’t depend on the prediction, which means that in that case the cost is a constant. In the matching cost we use probabilities \(\hat{p}_{\hat{\sigma }(i)}(c_{i})\) instead of log-probabilities. This makes the class prediction term commensurable to \(\mathcal{L}_\mathrm{box}(\cdot , \cdot )\), and we observed better empirical performance.
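
A hedged sketch of this loss for a single image, reusing the indices returned by the matching sketch above; in actual training the box losses are also normalized by the number of objects across the batch, which we omit here.

```python
import torch
import torch.nn.functional as F

def hungarian_loss(pred_logits, pred_boxes, tgt_labels, tgt_boxes,
                   pred_idx, tgt_idx, no_object_class, box_loss_fn):
    """Sketch of Eq. (2): classification NLL over all N slots plus a box loss on matched pairs.
    Assumes `no_object_class` is the last class index."""
    num_slots = pred_logits.shape[0]
    # Every slot defaults to "no object"; matched slots receive their target label.
    target_classes = torch.full((num_slots,), no_object_class, dtype=torch.long)
    target_classes[pred_idx] = tgt_labels[tgt_idx]
    # Down-weight the "no object" log-probability term by a factor 10.
    class_weights = torch.ones(no_object_class + 1)
    class_weights[no_object_class] = 0.1
    loss_ce = F.cross_entropy(pred_logits, target_classes, weight=class_weights)
    # Box loss only on matched (non-empty) pairs.
    loss_box = box_loss_fn(pred_boxes[pred_idx], tgt_boxes[tgt_idx]).sum() / max(len(tgt_idx), 1)
    return loss_ce + loss_box
```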

Bounding Box Loss. The second part of the matching cost and the Hungarian loss is \(\mathcal{L}_\mathrm{box}(\cdot )\) that scores the bounding boxes. Unlike many detectors that predict boxes as a \(\varDelta \) w.r.t. some initial guesses, we make box predictions directly. While such an approach simplifies the implementation, it poses an issue with the relative scaling of the loss. The most commonly-used \(\ell _1\) loss will have different scales for small and large boxes even if their relative errors are similar. To mitigate this issue we use a linear combination of the \(\ell _1\) loss and the generalized IoU loss  [37] \(\mathcal{L}_\mathrm{iou}(\cdot , \cdot )\) that is scale-invariant. Overall, our box loss is \(\mathcal{L}_\mathrm{box}(b_{i}, \hat{b}_{\sigma (i)})\) defined as \(\lambda _\mathrm{iou}\mathcal{L}_\mathrm{iou}(b_{i}, \hat{b}_{\sigma (i)}) + \lambda _\mathrm{L1}||b_{i}- \hat{b}_{\sigma (i)}||_1\) where \(\lambda _\mathrm{iou}, \lambda _\mathrm{L1}\in \mathbb {R}\) are hyperparameters. These two losses are normalized by the number of objects inside the batch.
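
A sketch of this box loss per matched pair; it uses `torchvision.ops.generalized_box_iou` (available in recent torchvision versions), writes \(\mathcal{L}_\mathrm{iou}\) as one minus the generalized IoU following  [37], and converts boxes from the normalized (center, size) format to corner coordinates first. The weights \(\lambda _\mathrm{iou}, \lambda _\mathrm{L1}\) are left as arguments.

```python
import torch
from torchvision.ops import generalized_box_iou   # recent torchvision versions

def box_cxcywh_to_xyxy(b):
    # Convert normalized (cx, cy, w, h) boxes to (x1, y1, x2, y2) corners.
    cx, cy, w, h = b.unbind(-1)
    return torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], dim=-1)

def box_loss(pred_boxes, tgt_boxes, lambda_iou, lambda_l1):
    # Per matched pair: lambda_iou * (1 - GIoU) + lambda_l1 * L1 distance.
    l1 = (pred_boxes - tgt_boxes).abs().sum(-1)
    giou = generalized_box_iou(box_cxcywh_to_xyxy(pred_boxes),
                               box_cxcywh_to_xyxy(tgt_boxes)).diagonal()
    return lambda_iou * (1.0 - giou) + lambda_l1 * l1
```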

Fig. 2.

DETR uses a conventional CNN backbone to learn a 2D representation of an input image. The model flattens it and supplements it with a positional encoding before passing it into a transformer encoder. A transformer decoder then takes as input a small fixed number of learned positional embeddings, which we call object queries, and additionally attends to the encoder output. We pass each output embedding of the decoder to a shared feed forward network (FFN) that predicts either a detection (class and bounding box) or a “no object” class.

3.2 DETR Architecture

The overall DETR architecture is surprisingly simple and depicted in Fig. 2. It contains three main components, which we describe below: a CNN backbone to extract a compact feature representation, an encoder-decoder transformer, and a simple feed forward network (FFN) that makes the final detection prediction.

Unlike many modern detectors, DETR can be implemented in any deep learning framework that provides a common CNN backbone and a transformer architecture implementation with just a few hundred lines. Inference code for DETR can be implemented in less than 50 lines in PyTorch  [31]. We hope that the simplicity of our method will attract new researchers to the detection community.

Backbone. Starting from the initial image \(x_\mathrm{img} \in \mathbb {R}^{3\times H_0\times W_0}\) (with 3 color channels), a conventional CNN backbone generates a lower-resolution activation map \(f \in \mathbb {R}^{C\times H\times W}\). Typical values we use are \(C=2048\) and \(H, W = \frac{H_0}{32}, \frac{W_0}{32}\).
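
As an illustration (not the reference implementation), a torchvision ResNet-50 truncated before its pooling and classification layers produces exactly this kind of activation map; the dummy input size is arbitrary.

```python
import torch
from torch import nn
import torchvision

resnet = torchvision.models.resnet50()                   # in practice, ImageNet-pretrained weights
backbone = nn.Sequential(*list(resnet.children())[:-2])  # drop average pooling and classifier

x_img = torch.randn(1, 3, 800, 1216)                     # dummy 3-channel image
f = backbone(x_img)                                      # [1, 2048, 25, 38], i.e. C x H0/32 x W0/32
```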

Transformer Encoder. First, a \(1\times 1\) convolution reduces the channel dimension of the high-level activation map f from C to a smaller dimension \(d\), creating a new feature map \(z_0 \in \mathbb {R}^{d\times H\times W}\). The encoder expects a sequence as input, hence we collapse the spatial dimensions of \(z_0\) into one dimension, resulting in a \(d\times HW\) feature map. Each encoder layer has a standard architecture and consists of a multi-head self-attention module and a feed forward network (FFN). Since the transformer architecture is permutation-invariant, we supplement it with fixed positional encodings  [3, 30] that are added to the input of each attention layer. We defer to the supplementary material the detailed definition of the architecture, which follows the one described in  [46].
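
A simplified sketch of this step with stock PyTorch modules, reusing `f` from the backbone sketch above. Note one deliberate simplification: `nn.TransformerEncoder` only adds the positional encoding at the input, whereas DETR adds it to the input of each attention layer; the zero tensor also stands in for the fixed sine encoding.

```python
import torch
from torch import nn

d_model = 256
input_proj = nn.Conv2d(2048, d_model, kernel_size=1)     # 1x1 conv reducing C -> d
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8), num_layers=6)

z0 = input_proj(f)                         # [B, d, H, W]
B, _, H, W = z0.shape
src = z0.flatten(2).permute(2, 0, 1)       # [HW, B, d]: one d-dimensional token per location
pos = torch.zeros_like(src)                # stand-in for the fixed sine positional encoding
memory = encoder(src + pos)                # [HW, B, d] encoder output
```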

Transformer Decoder. The decoder follows the standard architecture of the transformer, transforming N embeddings of size d using multi-headed self- and encoder-decoder attention mechanisms. The difference with the original transformer is that our model decodes the N objects in parallel at each decoder layer, while Vaswani et al. [46] use an autoregressive model that predicts the output sequence one element at a time. We refer the reader unfamiliar with the concepts to the supplementary material. Since the decoder is also permutation-invariant, the N input embeddings must be different to produce different results. These input embeddings are learnt positional encodings that we refer to as object queries, and similarly to the encoder, we add them to the input of each attention layer. The N object queries are transformed into an output embedding by the decoder. They are then independently decoded into box coordinates and class labels by a feed forward network (described in the next subsection), resulting in N final predictions. Using self- and encoder-decoder attention over these embeddings, the model globally reasons about all objects together using pair-wise relations between them, while being able to use the whole image as context.
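
Continuing the sketch, the decoder side can be approximated with stock modules as below; again, `nn.TransformerDecoder` adds the object queries only at the input, while the paper adds them at every attention layer, so this is a simplification.

```python
import torch
from torch import nn

num_queries = 100                                    # N, much larger than typical object counts
query_embed = nn.Embedding(num_queries, d_model)     # learned object queries
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d_model, nhead=8), num_layers=6)

# All N queries are decoded in parallel: the decoder input starts at zero and the
# learned query embeddings act as the output positional encodings.
queries = query_embed.weight.unsqueeze(1).repeat(1, B, 1)   # [N, B, d]
tgt = torch.zeros_like(queries)
hs = decoder(tgt + queries, memory)                         # [N, B, d] output embeddings
```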

Prediction Feed-Forward Networks (FFNs). The final prediction is computed by a 3-layer perceptron with ReLU activation function and hidden dimension d, and a linear projection layer. The FFN predicts the normalized center coordinates, height and width of the box w.r.t. the input image, and the linear layer predicts the class label using a softmax function. Since we predict a fixed-size set of N bounding boxes, where N is usually much larger than the actual number of objects of interest in an image, an additional special class label \(\varnothing \) is used to represent that no object is detected within a slot. This class plays a similar role to the “background” class in standard object detection approaches.
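
The prediction heads then read, roughly, as follows; the number of class indices and the use of a sigmoid to obtain normalized box coordinates in \([0,1]\) are our assumptions, consistent with the definition of \(b_i\) above.

```python
from torch import nn

num_classes = 91                                 # assumed label count; the extra index is "no object"
class_head = nn.Linear(d_model, num_classes + 1)
bbox_head = nn.Sequential(                       # 3-layer MLP with ReLU and hidden dimension d
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4))

logits = class_head(hs)                          # [N, B, num_classes + 1], softmax applied in the loss
boxes = bbox_head(hs).sigmoid()                  # [N, B, 4] normalized (cx, cy, w, h)
```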

Auxiliary Decoding Losses. We found it helpful to use auxiliary losses  [1] in the decoder during training, especially to help the model output the correct number of objects of each class. The output of each decoder layer is normalized with a shared layer-norm and then fed to the shared prediction heads (classification and box prediction). We then apply the Hungarian loss as usual for supervision.
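
In pseudo-PyTorch, the auxiliary supervision amounts to the loop below; `intermediate_decoder_outputs`, `shared_norm`, `hungarian_loss` and `targets` are placeholder names for a decoder variant that returns every layer's output and for the loss machinery sketched in Sect. 3.1.

```python
# Apply the shared layer-norm and the shared heads to every decoder layer's output,
# then sum the Hungarian losses over layers (placeholder names, see lead-in).
total_loss = 0.0
for hs_layer in intermediate_decoder_outputs:     # one [N, B, d] tensor per decoder layer
    hs_layer = shared_norm(hs_layer)              # shared layer-norm
    layer_logits = class_head(hs_layer)
    layer_boxes = bbox_head(hs_layer).sigmoid()
    total_loss = total_loss + hungarian_loss(layer_logits, layer_boxes, targets)
```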

4 Experiments

We show that DETR achieves competitive results compared to Faster R-CNN  [36] and RetinaNet  [22] in quantitative evaluation on COCO. Then, we provide a detailed ablation study of the architecture and loss, with insights and qualitative results. Finally, to show that DETR is a versatile model, we present results on panoptic segmentation, training only a small extension on a fixed DETR model.

Dataset. We perform experiments on the COCO 2017 detection and panoptic segmentation datasets  [17, 23], containing 118k training images and 5k validation images. Each image is annotated with bounding boxes and panoptic segmentation. There are 7 instances per image on average, and up to 63 instances in a single image in the training set, ranging from small to large on the same images. If not specified, we report AP as bbox AP, the integral metric over multiple thresholds. For comparison with other models we report validation AP at the last training epoch, and in ablations we report the median over the last 10 epochs.

Technical Details. We train DETR with AdamW  [25], setting the initial transformer’s learning rate to \(10^{-4}\), the backbone’s to \(10^{-5}\), and the weight decay to \(10^{-4}\). All transformer weights are initialized with Xavier init  [10], and the backbone is an ImageNet-pretrained ResNet model  [14] from torchvision with frozen batchnorm layers. We report results with two different backbones: a ResNet-50 and a ResNet-101. The corresponding models are called respectively DETR and DETR-R101. Following   [20], we also increase the feature resolution by adding a dilation to the last stage of the backbone and removing a stride from the first convolution of this stage. The corresponding models are called respectively DETR-DC5 and DETR-DC5-R101 (dilated C5 stage). This modification increases the resolution by a factor of two, thus improving performance for small objects, at the price of a 16x higher cost in the self-attentions of the encoder, leading to an overall 2x increase in computational cost. A full comparison of FLOPs of these models, Faster R-CNN and RetinaNet is given in Table 1.
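
A sketch of this optimizer setup; `model` and the `"backbone"` parameter-name prefix are assumptions about how the modules are named, not the reference code.

```python
import torch

param_groups = [
    {"params": [p for n, p in model.named_parameters()
                if "backbone" in n and p.requires_grad], "lr": 1e-5},      # backbone learning rate
    {"params": [p for n, p in model.named_parameters()
                if "backbone" not in n and p.requires_grad], "lr": 1e-4},  # transformer learning rate
]
optimizer = torch.optim.AdamW(param_groups, weight_decay=1e-4)
# Drop the learning rate by a factor of 10 after 200 epochs (the 300-epoch ablation schedule below).
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.1)
```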

We use scale augmentation, resizing the input images such that the shortest side is at least 480 and at most 800 pixels while the longest side is at most 1333  [49]. To help learning global relationships through the self-attention of the encoder, we also apply random crop augmentations during training, improving the performance by approximately 1 AP. Specifically, a training image is cropped with probability 0.5 to a random rectangular patch which is then resized again to 800–1333. The transformer is trained with a default dropout of 0.1. At inference time, some slots predict the empty class. To optimize for AP, we override the prediction of these slots with the second highest scoring class, using the corresponding confidence; a minimal sketch of this step is given below. This improves AP by 2 points compared to filtering out empty slots. Other training hyperparameters can be found in the Appendix. For our ablation experiments we use a training schedule of 300 epochs with a learning rate drop by a factor of 10 after 200 epochs, where a single epoch is a pass over all training images once. Training the baseline model for 300 epochs on 16 V100 GPUs takes 3 days, with 4 images per GPU (hence a total batch size of 64). For the longer schedule used to compare with Faster R-CNN we train for 500 epochs with a learning rate drop after 400 epochs, which improves AP by 1.5 points.
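
A minimal sketch of the empty-slot override used at inference; taking the maximum over the non-empty classes yields the second highest scoring class for slots whose top prediction is \(\varnothing \) and leaves all other slots unchanged (the empty class is assumed to be the last logit).

```python
import torch

def override_empty_slots(logits):
    # logits: [N, num_classes + 1], with the "no object" class as the last index.
    probs = logits.softmax(-1)
    scores, labels = probs[..., :-1].max(-1)   # best non-empty class and its confidence per slot
    return scores, labels
```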

Table 1. Comparison with RetinaNet and Faster R-CNN with ResNet-50 and ResNet-101 backbones on the COCO validation set. The top section shows results for models in Detectron2  [49], the middle section shows results for models with GIoU  [37], random crops train-time augmentation, and the long 9x training schedule. DETR models achieve comparable results to heavily tuned Faster R-CNN baselines, having lower APS but greatly improved APL. We use torchscript models to measure FLOPs and FPS. Results without R101 in the name correspond to ResNet-50.

4.1 Comparison with Faster R-CNN and RetinaNet

Transformers are typically trained with Adam or Adagrad optimizers with very long training schedules and dropout, and this is true for DETR as well. Faster R-CNN, however, is trained with SGD with minimal data augmentation and we are not aware of successful applications of Adam or dropout. Despite these differences we attempt to make our baselines stronger. To align them with DETR, we add generalized IoU  [37] to the box loss, the same random crop augmentation, and long training, which are known to improve results  [12]. Results are presented in Table 1. In the top section we show results from the Detectron2 Model Zoo  [49] for models trained with the 3x schedule. In the middle section we show results (with a “+”) for the same models but trained with the 9x schedule (109 epochs) and the described enhancements, which in total adds 1–2 AP. In the last section of Table 1 we show the results for multiple DETR models. To be comparable in the number of parameters we choose a model with 6 encoder and 6 decoder layers of width 256 with 8 attention heads. Like Faster R-CNN with FPN this model has 41.3M parameters, out of which 23.5M are in ResNet-50, and 17.8M are in the transformer. Even though both Faster R-CNN and DETR are still likely to further improve with longer training, we can conclude that DETR can be competitive with Faster R-CNN with the same number of parameters, achieving 42 AP on the COCO val subset. The way DETR achieves this is by improving APL (+7.8); however, note that the model is still lagging behind in APS (−5.5). DETR-DC5 with the same number of parameters and a similar FLOP count has higher AP, but is still significantly behind in APS too. Results on the ResNet-101 backbone are comparable as well.

4.2 Ablations

Attention mechanisms in the transformer decoder are the key components which model relations between feature representations of different detections. In our ablation analysis, we explore how other components of our architecture and loss influence the final performance. For the study we choose ResNet-50-based DETR model with 6 encoder, 6 decoder layers and width 256. The model has 41.3M parameters, achieves 40.6 and 42.0 AP on short and long schedules respectively, and runs at 28 FPS, similarly to Faster R-CNN-FPN with the same backbone.

Number of Encoder Layers. We evaluate the importance of global image-level self-attention by changing the number of encoder layers. Without encoder layers, overall AP drops by 3.9 points, with a more significant drop of 6.0 AP on large objects. We hypothesize that, by using global scene reasoning, the encoder is important for disentangling objects. See results in appendix. In Fig. 3, we visualize the attention maps of the last encoder layer of a trained model, focusing on a few points in the image. The encoder seems to separate instances already, which likely simplifies object extraction and localization for the decoder.

Fig. 3.

Encoder self-attention for a set of reference points. The encoder is able to separate individual instances. Predictions are made with a baseline DETR model on a validation image.

Fig. 4.

AP and AP50 performance after each decoder layer in a long schedule baseline model. DETR does not need NMS by design, which is validated by this figure. NMS lowers AP in the final layers, removing TP predictions, but improves it in the first layers, where DETR does not have the capability to remove double predictions.

Fig. 5.

Out of distribution generalization for rare classes. Even though no image in the training set has more than 13 giraffes, DETR has no difficulty generalizing to 24 and more instances.

Number of Decoder Layers. We apply auxiliary losses after each decoding layer (see Sect. 3.2), hence, the prediction FFNs are trained by design to predict objects out of the outputs of every decoder layer. We analyze the importance of each decoder layer by evaluating the objects that would be predicted at each stage of the decoding (Fig. 4). Both AP and AP50 improve after every layer, adding up to a very significant +8.2/9.5 AP improvement between the first and the last layer. With its set-based loss, DETR does not need NMS by design. To verify this we run a standard NMS procedure with default parameters  [49] on the outputs after each decoder layer. NMS improves performance for the predictions from the first decoder layer. This can be explained by the fact that a single decoding layer of the transformer is not able to compute any cross-correlations between the output elements, and thus it is prone to making multiple predictions for the same object. In the second and subsequent layers, the self-attention mechanism over the activations allows the model to inhibit duplicate predictions. We observe that the improvement brought by NMS diminishes as depth increases. It hurts AP in the last layers, as it incorrectly removes true positive predictions.

Similarly to visualizing encoder attention, we visualize decoder attentions in Fig. 6, coloring attention maps for each predicted object in different colors. We observe that decoder attention is fairly local, meaning that it mostly attends to object extremities such as heads or legs. We hypothesise that after the encoder has separated instances via global attention, the decoder only needs to attend to the extremities to extract the class and object boundaries.

Fig. 6.

Visualizing decoder attention for every predicted object (images from COCO val set). Predictions are made with DETR-DC5 model. Decoder typically attends to object extremities, such as legs and heads.

Importance of FFN. The FFN inside the transformer can be seen as \(1\times 1\) convolutional layers, making the encoder similar to attention augmented convolutional networks  [3]. We attempt to remove it completely, leaving only attention in the transformer layers. By reducing the number of network parameters from 41.3M to 28.7M, leaving only 10.8M in the transformer, performance drops by 2.3 AP; we thus conclude that the FFN is important for achieving good results.

Importance of Positional Encodings. There are two kinds of positional encodings in our model: spatial positional encodings and output positional encodings (object queries). We experiment with various combinations of fixed and learned encodings, see results in the appendix. Output positional encodings are required and cannot be removed, so we experiment with either passing them once at the decoder input or adding them to the queries at every decoder attention layer. In the first experiment we completely remove the spatial positional encodings and pass the output positional encodings at the input and, interestingly, the model still achieves more than 32 AP, losing 7.8 AP to the baseline. Then, we pass fixed sine spatial positional encodings and the output encodings at the input once, as in the original transformer  [46], and find that this leads to a 1.4 AP drop compared to passing the positional encodings directly in the attention layers. Learned spatial encodings passed to the attention layers give similar results. Surprisingly, we find that not passing any spatial encodings in the encoder only leads to a minor AP drop of 1.3 AP. When we pass the encodings to the attention layers, they are shared across all layers, and the output encodings (object queries) are always learned.

Given these ablations, we conclude that the transformer components (the global self-attention in the encoder, the FFN, multiple decoder layers, and positional encodings) all contribute significantly to the final object detection performance.

Generalization to Unseen Numbers of Instances. Some classes in COCO are not well represented with many instances of the same class in the same image. For example, there is no image with more than 13 giraffes in the training set. We create a synthetic image to verify the generalization ability of DETR (see Fig. 5). Our model is able to find all 24 giraffes in the image, which is clearly out of distribution. This experiment confirms that there is no strong class-specialization in each object query.

4.3 DETR for Panoptic Segmentation

Panoptic segmentation  [18] has recently attracted a lot of attention from the computer vision community. Similarly to the extension of Faster R-CNN  [36] to Mask R-CNN  [13], DETR can be naturally extended by adding a mask head on top of the decoder outputs. In this section we demonstrate that such a head can be used to produce panoptic segmentation  [18] by treating stuff and thing classes in a unified way. We perform our experiments on the panoptic annotations of the COCO dataset that has 53 stuff categories in addition to 80 things categories.

We train DETR to predict boxes around both stuff and things classes on COCO, using the same recipe. Predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. We also add a mask head which predicts a binary mask for each of the predicted boxes, see Fig. 7. It takes as input the output of the transformer decoder for each object and computes multi-head (with M heads) attention scores of this embedding over the output of the encoder, generating M attention heatmaps per object at a small resolution. To make the final prediction and increase the resolution, an FPN-like architecture is used. We refer to the supplement for more details. The final resolution of the masks has stride 4 and each mask is supervised independently using the DICE/F-1 loss  [27] and the Focal loss  [22].

Fig. 7.

Illustration of the panoptic head. A binary mask is generated in parallel for each detected object, then the masks are merged using pixel-wise argmax.

The mask head can be trained either jointly, or in a two-step process, where we train DETR for boxes only, then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results; we report results using the latter method since it is less computationally intensive.

To predict the final panoptic segmentation we simply use an argmax over the mask scores at each pixel, and assign the corresponding categories to the resulting masks. This procedure guarantees that the final masks have no overlaps and thus DETR does not require a heuristic  [18] to align different masks.

Training Details. We train DETR, DETR-DC5 and DETR-R101 models following the recipe for bounding box detection to predict boxes around stuff and things classes in the COCO dataset. The new mask head is trained for 25 epochs (see supplementary for details). During inference we first filter out the detections with a confidence below 85%, then compute the per-pixel argmax to determine to which mask each pixel belongs. We then collapse different mask predictions of the same stuff category into one, and filter out the empty ones (fewer than 4 pixels).
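
A rough sketch of this inference recipe; the thresholds match the numbers above, but the `is_stuff` predicate, the mask logit layout, and the function itself are assumptions rather than the reference implementation.

```python
import torch

def panoptic_postprocess(mask_logits, scores, labels, is_stuff,
                         conf_thresh=0.85, min_pixels=4):
    """mask_logits: [N, H, W] per-slot mask scores; scores/labels: [N] detection
    confidences and classes; is_stuff(label) tells stuff from thing classes."""
    keep = scores > conf_thresh                      # drop low-confidence detections
    mask_logits, labels = mask_logits[keep], labels[keep]
    if mask_logits.numel() == 0:
        return []
    owner = mask_logits.argmax(0)                    # [H, W]: per-pixel argmax over masks
    segments = {}
    for i, label in enumerate(labels.tolist()):
        m = owner == i
        # Collapse all predictions of the same stuff category into a single mask.
        key = ("stuff", label) if is_stuff(label) else ("thing", label, i)
        segments[key] = segments.get(key, torch.zeros_like(m)) | m
    # Filter out (nearly) empty masks.
    return [(key, m) for key, m in segments.items() if int(m.sum()) >= min_pixels]
```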

Table 2. Comparison with the state-of-the-art methods UPSNet  [50] and Panoptic FPN  [17] on the COCO val dataset. We retrained PanopticFPN with the same data-augmentation as DETR, on an 18x schedule for fair comparison. UPSNet uses the 1x schedule, UPSNet-M is the version with multiscale test-time augmentations.
Fig. 8.

Qualitative results for panoptic segmentation generated by DETR-R101. DETR produces aligned mask predictions in a unified manner for things and stuff.

Main Results. Qualitative results are shown in Fig. 8. In Table 2 we compare our unified panoptic segmentation approach with several established methods that treat things and stuff differently. We report the Panoptic Quality (PQ) and the break-down on things (\(\text {PQ}^\text {th}\)) and stuff (\(\text {PQ}^\text {st}\)). We also report the mask AP (computed on the things classes), before any panoptic post-treatment (in our case, before taking the pixel-wise argmax). We show that DETR outperforms published results on COCO-val 2017, as well as our strong PanopticFPN baseline (trained with the same data-augmentation as DETR, for fair comparison). The result break-down shows that DETR is especially dominant on stuff classes, and we hypothesize that the global reasoning allowed by the encoder attention is the key element to this result. For things classes, despite a severe deficit of up to 8 mAP compared to the baselines on the mask AP computation, DETR obtains competitive \(\text {PQ}^\text {th}\). We also evaluated our method on the test set of the COCO dataset, and obtained 46 PQ. We hope that our approach will inspire the exploration of fully unified models for panoptic segmentation in future work.

5 Conclusion

We presented DETR, a new design for object detection systems based on transformers and bipartite matching loss for direct set prediction. The approach achieves comparable results to an optimized Faster R-CNN baseline on the challenging COCO dataset. DETR is straightforward to implement and has a flexible architecture that is easily extensible to panoptic segmentation, with competitive results. In addition, it achieves significantly better performance on large objects, likely due to the processing of global information performed by the self-attention.

This new design for detectors also comes with new challenges, in particular regarding training, optimization and performance on small objects. Current detectors required several years of improvements to cope with similar issues, and we expect future work to successfully address them for DETR.