1 Introduction

Object detection consists of two sub-problems: finding the object (localization) and naming it (classification). Traditional methods tightly couple these two sub-problems and thus rely on box labels for all classes. Despite many data collection efforts, detection datasets [18, 28, 34, 49] are much smaller than classification datasets [10], both in overall size and in vocabulary. For example, the recent LVIS detection dataset [18] has 1000+ classes with 120K images; OpenImages [28] has 500 classes in 1.8M images. Moreover, not all classes contain sufficient annotations to train a robust detector (see Fig. 1 Top). In classification, even the ten-year-old ImageNet [10] has 21K classes and 14M images (Fig. 1 Bottom).

In this paper, we propose Detector with image classes (Detic), which uses image-level supervision in addition to detection supervision. We observe that the localization and classification sub-problems can be decoupled. Modern region proposal networks already localize many ‘new’ objects using existing detection supervision. Thus, we focus on the classification sub-problem and use image-level labels to train the classifier and broaden the vocabulary of the detector. We propose a simple classification loss that applies the image-level supervision to the proposal with the largest size, and we do not supervise other outputs on image-labeled data. This is easy to implement and massively expands the vocabulary.

Fig. 1. Top: Typical detection results from a strong open-vocabulary LVIS detector. The detector misses objects of “common” classes. Bottom: Number of images per class in LVIS, ImageNet, and Conceptual Captions (smoothed by averaging 100 neighboring classes). Classification datasets have a much larger vocabulary than detection datasets.

Most existing weakly-supervised detection techniques [13, 22, 36, 59, 67] use the weakly labeled data to supervise both the localization and classification sub-problems of detection. Since image-classification data has no box labels, these methods develop various label-to-box assignment techniques based on model predictions to obtain supervision. For example, YOLO9000 [45] and DLWL [44] assign the image label to proposals that have high prediction scores on the labeled class. Unfortunately, this prediction-based assignment requires good initial detections which leads to a chicken-and-egg problem—we need a good detector for good label assignment, but we need many boxes to train a good detector. Our method completely side-steps the prediction-based label assignment process by supervising the classification sub-problem alone when using classification data. This also enables our method to learn detectors for new classes which would have been impossible to predict and assign.

Experiments on the open-vocabulary LVIS [17, 18] and open-vocabulary COCO [2] benchmarks show that our method significantly improves over a strong box-supervised baseline, on both novel and base classes. With image-level supervision from ImageNet-21K [10], our model trained without novel-class detection annotations improves the baseline by 8.3 points and matches the performance of using full class annotations in training. With the standard LVIS annotations, our model reaches 41.7 mAP and 41.7 mAP\(_{\text {rare}}\), closing the gap between rare classes and all classes. On open-vocabulary COCO, our method outperforms the previous state-of-the-art OVR-CNN [72] by 5 points with the same detector and data. Finally, we train a detector using the full ImageNet-21K with more than twenty thousand classes. Our detector generalizes much better to new datasets [28, 49] with disjoint label spaces, reaching 21.5 mAP on Objects365 and 55.2 mAP50 on OpenImages, without seeing any images from the corresponding training sets. Our contributions are summarized below:

  • We identify issues with existing weakly-supervised detection techniques in the open-vocabulary setting and propose a simpler alternative.

  • Our proposed family of losses significantly improves detection performance on novel classes, closely matching the supervised performance upper bound.

  • Our detector transfers to new datasets and vocabularies without finetuning.

  • We release our code (in supplement). It is ready-to-use for open-vocabulary detection in the real world. See examples in supplement (Fig. 2).

Fig. 2. Left: Standard detection requires ground-truth labeled boxes and cannot leverage image-level labels. Center: Existing prediction-based weakly supervised detection methods [3, 44, 45] use image-level labels by assigning them to the detector’s predicted boxes (proposals). Unfortunately, this assignment is error-prone, especially for large-vocabulary detection. Right: Detic simply assigns the image labels to the max-size proposal. We show that this loss is both simpler and performs better than prior work.

2 Related Work

Weakly-Supervised Object Detection (WSOD) trains object detectors using image-level labels. Many works use only image-level labels without any box supervision [30, 51, 52, 63, 70]. WSDDN [3] and OICR [60] use a subnetwork to predict per-proposal weights and sum the proposal scores into a single image-level score. PCL [59] first clusters proposals and then assigns image labels at the cluster level. CASD [22] further introduces feature-level attention and self-distillation. As no bounding box supervision is used in training, these methods rely on low-level region proposal techniques [1, 62], which leads to reduced localization quality.

Another line of WSOD work uses bounding box supervision together with image labels, known as semi-supervised WSOD [12, 13, 31, 35, 61, 68, 75]. YOLO9000 [45] mixes detection data and classification data in the same mini-batch, and assigns classification labels to the anchors with the highest predicted scores. DLWL [44] combines self-training and clustering-based WSOD [59], and again assigns image labels to the max-scored proposals. MosaicOS [73] handles domain differences between detection and image datasets with mosaic augmentation [4] and proposes a three-stage self-training and finetuning framework. In segmentation, Pinheiro et al. [41] use a log-sum-exponential function to aggregate pixel scores into a global classification. Our work belongs to semi-supervised WSOD. Unlike prior work, we use a simple image-supervised loss. Besides image labels, researchers have also studied complementary forms of weak localization supervision, such as points [7] or scribbles [47].

Open-Vocabulary Object Detection, also known as zero-shot object detection, aims to detect objects outside of the training vocabulary. The basic solution [2] is to replace the last classification layer with language embeddings (e.g., GloVe [40]) of the class names. Rahman et al. [43] and Li et al. [33] improve the classifier embedding using external text information. OVR-CNN [72] pretrains the detector on image-text pairs. ViLD [17], OpenSeg [16], and LSeg [29] upgrade the language embedding to CLIP [42]. ViLD further distills region features from CLIP image features. We use the CLIP [42] classifier as well, but do not use distillation. Instead, we use additional image-labeled data for co-training.

Large-Vocabulary Object Detection [18, 45, 53, 69] requires detecting 1000+ classes. Many existing works focus on handling the long-tail problem [6, 14, 32, 39, 65, 74]. Equalization losses [55, 56] and the SeeSaw loss [64] reweight the per-class loss by balancing gradients [55] or the number of samples [64]. Federated Loss [76] subsamples classes per iteration to mimic the federated annotation protocol [18]. Yang et al. [69] detect 11K classes with a label hierarchy. Our method builds on these advances, and we tackle the problem from a different angle: using additional image-labeled data.

Proposal Network Generalization. ViLD [17] reports that region proposal networks generalize to new classes to a certain extent by default. Dave et al. [9] show that segmentation and localization generalize across classes. Kim et al. [25] further improve proposal generalization with a localization quality estimator. In our experiments, we find that proposals generalize well enough (see Appendix A), as also observed in ViLD [17]. Further improvements to RPNs [17, 25, 27, 38] can hopefully lead to better results.

3 Preliminaries

We train object detectors using both object detection and image classification datasets. We propose a simple way to leverage image supervision to learn object detectors, including for classes without box labels. We first describe the object detection problem and then detail our approach.

Problem Setup. Given an image \(\textbf{I}\in \mathbb {R}^{3\times h \times w}\), object detection solves the two subproblems of (1) localization: find all objects with their location, represented as a box \(\textbf{b}_j \in \mathbb {R}^4\) and (2) classification: assign a class label \(c_j \in \mathcal {C}^{\textrm{test}}\) to the j-th object. Here \(\mathcal {C}^{\textrm{test}}\) is the class vocabulary provided by the user at test time. During training, we use a detection dataset \(\mathcal {D}^{\textrm{det}}= \{(\textbf{I}, \{(\textbf{b}, c)_k\})_i\}_{i=1}^{|\mathcal {D}^{\textrm{det}}|}\) with vocabulary \(\mathcal {C}^{\textrm{det}}\) that has both class and box labels. We also use an image classification dataset \(\mathcal {D}^{\textrm{cls}}= \{(\textbf{I}, \{c_k\})_i\}_{i=1}^{|\mathcal {D}^{\textrm{cls}}|}\) with vocabulary \(\mathcal {C}^{\textrm{cls}}\) that only has image-level class labels. The vocabularies \(\mathcal {C}^{\textrm{test}}\), \(\mathcal {C}^{\textrm{det}}\), \(\mathcal {C}^{\textrm{cls}}\) may or may not overlap.

Traditional Object Detection considers \(\mathcal {C}^{\textrm{test}}= \mathcal {C}^{\textrm{det}}\) and \(\mathcal {D}^{\textrm{cls}}= \emptyset \). Predominant object detectors [20, 46] follow a two-stage framework. The first stage, called the region proposal network (RPN), takes the image \(\textbf{I}\) and produces a set of object proposals \(\{(\textbf{b}, \textbf{f}, o)_j\}\), where \(\textbf{f}_j \in \mathbb {R}^D\) is a D-dimensional region feature and \(o \in \mathbb {R}\) is the objectness score. The second stage takes the object feature and outputs a classification score and a refined box location for each object, \(s_j = \textbf{W}\textbf{f}_j\), \(\hat{\textbf{b}}_j = \textbf{B}\textbf{f}_j + \textbf{b}_j\), where \(\textbf{W}\in \mathbb {R}^{|\mathcal {C}^{\textrm{det}}| \times D}\) and \(\textbf{B}\in \mathbb {R}^{4 \times D}\) are the learned weights of the classification layer and the regression layer, respectively. Our work focuses on improving classification in the second stage. In our experiments, the proposal network and the bounding box regressors are not the current performance bottleneck, as modern detectors use an over-sufficient number of proposals at test time (1K proposals for <20 objects per image; see Appendix A for more details).
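To make the second stage concrete, the sketch below implements the two linear prediction layers described above as a minimal PyTorch module; the class and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SecondStageHead(nn.Module):
    """Minimal sketch of the second-stage head: s_j = W f_j, refined box = B f_j + b_j."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.W = nn.Linear(feat_dim, num_classes, bias=False)  # classification layer W
        self.B = nn.Linear(feat_dim, 4, bias=False)             # box regression layer B

    def forward(self, f: torch.Tensor, b: torch.Tensor):
        # f: (N, D) region features from the RPN, b: (N, 4) proposal boxes.
        scores = self.W(f)         # (N, |C_det|) classification scores
        boxes = self.B(f) + b      # refined boxes: regression output added to the proposals
        return scores, boxes
```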

Open-vocabulary Object Detection allows \(\mathcal {C}^{\textrm{test}}\ne \mathcal {C}^{\textrm{det}}\). Simply replacing the classification weights \(\textbf{W}\) with fixed language embeddings of class names converts a traditional detector to an open-vocabulary detector [2]. The region features are trained to match the fixed language embeddings. We follow Gu et al. [17] to use the CLIP embeddings [42] as the classification weights. In theory, this open-vocabulary detector can detect any object class. However, in practice, it yields unsatisfying results as shown in Fig. 1. Our method uses image-level supervision to improve object detection including in the open-vocabulary setting.
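As a hedged sketch of this classifier swap, the snippet below scores region features against fixed, L2-normalized language embeddings of the class names (e.g., precomputed CLIP text embeddings). The temperature value and the assumption that region features are already projected to the embedding dimension are illustrative choices, not the exact recipe of [17, 42].

```python
import torch
import torch.nn.functional as F

def open_vocab_logits(region_feats: torch.Tensor,      # (N, D) projected region features
                      class_embeddings: torch.Tensor,  # (C, D) fixed text embeddings of class names
                      temperature: float = 0.01) -> torch.Tensor:
    # Cosine similarity against the class-name embeddings replaces the learned classifier W.
    f = F.normalize(region_feats, dim=-1)
    w = F.normalize(class_embeddings, dim=-1)
    return f @ w.t() / temperature   # (N, C) classification logits
```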

Fig. 3. Approach overview. We train on a mix of detection data and image-labeled data. On detection data, our model uses the standard detection losses to train the classifier (\(\textbf{W}\)) and the box prediction branch (\(\textbf{B}\)) of a detector. On image-labeled data, we only train the classifier, applying our modified classification loss to the features extracted from the largest-sized proposal.

4 Detic: Detector with Image Classes

As shown in Fig. 3, our method leverages the box labels from detection datasets \(\mathcal {D}^{\textrm{det}}\) and the image-level labels from classification datasets \(\mathcal {D}^{\textrm{cls}}\). During training, we compose a mini-batch using images from both types of datasets. For images with box labels, we follow the standard two-stage detector training [46]. For images with only image-level labels, we train classification alone, using the features from a fixed region proposal. Thus, we only compute the localization losses (RPN loss and bounding box regression loss) on images with ground-truth box labels. Below we describe our modified classification loss for image-level labels.

A sample from the weakly labeled dataset \(\mathcal {D}^{\textrm{cls}}\) contains an image \(\textbf{I}\) and a set of K labels \(\{c_k\}_{k=1}^K\). We use the region proposal network to extract N object features \(\{(\textbf{b}, \textbf{f}, o)_j\}_{j=1}^{N}\). Prediction-based methods try to assign image labels to regions, and aim to train both localization and classification abilities. Instead, we propose simple ways to use the image labels \(\{c_k\}_{k=1}^K\) and only improve classification. Our key idea is to use a fixed scheme to assign image labels to regions, side-stepping complex prediction-based assignment. We allow these fixed assignment schemes to miss certain objects, as long as they miss fewer objects than their prediction-based counterparts, thus leading to better performance.

Non-prediction-Based Losses. We now describe a variety of simple ways to use image labels and evaluate them empirically in Table 1. Our first idea is to use the whole image as a new “proposal” box. We call this loss image-box. We ignore all proposals from the RPN, and instead use an injected box of the whole image \(\textbf{b}' = (0, 0, w, h)\). We then apply the classification loss to its RoI features \(\textbf{f}'\) for all classes \(c \in \{c_k\}_{k=1}^K\):

$$L_{\text {image-box}} = BCE(\textbf{W}\textbf{f}', c)$$

where \(BCE(s, c) = -\log \sigma (s_c) - \sum _{k \ne c} \log (1 - \sigma (s_k))\) is the binary cross-entropy loss, and \(\sigma \) is the sigmoid activation. Thus, our loss uses the features from the same ‘proposal’ for solving the classification problem for all the classes \(\{c_k\}\).
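A minimal sketch of this image-box loss is shown below, assuming the pooled RoI feature of the whole-image box and a multi-hot target built from the image labels; the helper names and the reduction over classes are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def image_box_loss(f_whole_image: torch.Tensor,  # (D,) RoI feature f' of the box (0, 0, w, h)
                   W: torch.Tensor,              # (C, D) classification weights
                   image_labels: list) -> torch.Tensor:
    logits = W @ f_whole_image                         # (C,) scores W f'
    target = torch.zeros(W.shape[0], device=W.device)
    target[image_labels] = 1.0                         # multi-hot target over the labels {c_k}
    # Sigmoid binary cross-entropy over all classes, matching BCE(s, c) above.
    return F.binary_cross_entropy_with_logits(logits, target)
```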

In practice, the image-box can be replaced by smaller boxes. We introduce two alternatives: the proposal with the max object score or the proposal with the max size:

$$L_{\text {max-object-score}} = BCE(\textbf{W}\textbf{f}_j, c), j = \text {argmax}_j o_j$$
$$L_{\text {max-size}} = BCE(\textbf{W}\textbf{f}_j, c), j = \text {argmax}_j (\text {size}(\textbf{b}_j)) $$

We show that all three of these losses can effectively leverage the image-level supervision, with the max-size loss performing the best. We thus use the max-size loss by default for image-supervised data. We also note that the classification parameters \(\textbf{W}\) are shared across both detection and classification data, which greatly improves detection performance. The overall training objective is

$$\begin{aligned} L(\textbf{I})={\left\{ \begin{array}{ll} L_{\text {rpn}} + L_{\text {reg}} + L_{\text {cls}}, &{} \text {if} \ \textbf{I}\in \mathcal {D}^{\textrm{det}}\\ \lambda L_{\text {max-size}}, &{} \text {if} \ \textbf{I}\in \mathcal {D}^{\textrm{cls}}\end{array}\right. } \end{aligned}$$

where \(L_{\text {rpn}}\), \(L_{\text {reg}}\), \(L_{\text {cls}}\) are standard losses in a two-stage detector, and \(\lambda =0.1\) is the weight of our loss.
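Below is a hedged sketch of the max-size loss and the per-image dispatch of the overall objective; the batch fields and helper signatures are assumptions for illustration, not the released Detic code.

```python
import torch
import torch.nn.functional as F

def max_size_loss(proposals: torch.Tensor,   # (N, 4) boxes as (x1, y1, x2, y2)
                  roi_feats: torch.Tensor,   # (N, D) RoI features
                  W: torch.Tensor,           # (C, D) classification weights
                  image_labels: list) -> torch.Tensor:
    areas = (proposals[:, 2] - proposals[:, 0]) * (proposals[:, 3] - proposals[:, 1])
    j = torch.argmax(areas)                   # proposal with the largest size
    logits = roi_feats[j] @ W.t()             # (C,) scores of that single proposal
    target = torch.zeros_like(logits)
    target[image_labels] = 1.0                # multi-hot image labels {c_k}
    return F.binary_cross_entropy_with_logits(logits, target)

def total_loss(batch, detection_losses, lam: float = 0.1):
    # detection_losses = L_rpn + L_reg + L_cls from the standard two-stage detector.
    if batch["source"] == "detection":
        return detection_losses
    return lam * max_size_loss(batch["proposals"], batch["roi_feats"],
                               batch["W"], batch["image_labels"])
```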

Relation to Prediction-Based Assignments. In traditional weakly-supervised detection [3, 44, 45], a popular idea is to assign the image labels to proposals based on the model prediction. Let \(\textbf{F}= (\textbf{f}_1, \dots , \textbf{f}_N)\) be the stacked features of all object proposals and \(\textbf{S}= \textbf{W}\textbf{F}\) be their classification scores. For each \(c \in \{c_k\}_{k=1}^K\), \(L = BCE(\textbf{S}_j, c), j = \mathcal {F}(\textbf{S}, c)\), where \(\mathcal {F}\) is the label-to-box assignment process. In most methods, \(\mathcal {F}\) is a function of the prediction \(\textbf{S}\); for example, \(\mathcal {F}\) selects the proposal with the max score on c. Our key insight is that \(\mathcal {F}\) should not depend on the prediction \(\textbf{S}\). In large-vocabulary detection, the initial recognition ability on rare or novel classes is low, making the label assignment process inaccurate. Our method side-steps this prediction-and-assignment process entirely and relies on a fixed supervision criterion.
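For contrast, a one-line sketch of the prediction-based assignment \(\mathcal {F}\) discussed above (select the proposal with the highest predicted score for class c) is shown below; Detic replaces this score-dependent argmax with the prediction-independent max-size choice.

```python
import torch

def max_score_assignment(S: torch.Tensor, c: int) -> int:
    # S: (N, C) proposal classification scores; returns the proposal index assigned to label c.
    return int(torch.argmax(S[:, c]))
```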

5 Experiments

We evaluate Detic on the large-vocabulary object detection dataset LVIS [18]. We mainly use the open-vocabulary setting proposed by Gu et al. [17], and also report results on the standard LVIS setting. We describe our experiment setup below.

LVIS. The LVIS [18] dataset has object detection and instance segmentation labels for 1203 classes in 100K images. The classes are divided into three groups (frequent, common, and rare) based on the number of training images. We refer to this standard LVIS training set as LVIS-all. Following ViLD [17], we remove the labels of the 337 rare classes from training and treat them as novel classes in testing. We refer to this partial training set with only frequent and common classes as LVIS-base. We report mask mAP, which is the official metric for LVIS. While our model is developed for box detection, we use a standard class-agnostic mask head [20] to produce segmentation masks for boxes. We train the mask head only on detection data.

Image-Supervised Data. We use two sources of image-supervised data: ImageNet-21K [10] and Conceptual Captions [50]. ImageNet-21K (IN-21K) contains 14M images for 21K classes. For ease of training and evaluation, most of our experiments use the 997 classes that overlap with the LVIS vocabulary; we denote this subset as IN-L. Conceptual Captions [50] (CC) is an image captioning dataset containing 3M images. We extract image labels from the captions using exact text matching and keep images whose captions mention at least one LVIS class (a sketch of this matching follows the table below). See Appendix B for results of directly using captions. The resulting dataset contains 1.5M images covering 992 LVIS classes. We summarize the datasets used below.

Notation     Definition                                          #Images   #Classes
LVIS-all     The original LVIS dataset [18]                      100K      1203
LVIS-base    LVIS without rare-class annotations                 100K      866
IN-21K       The original ImageNet-21K dataset [10]              14M       21K
IN-L         The 997 IN-21K classes overlapping with LVIS        1.2M      997
CC           Conceptual Captions [50] filtered to LVIS classes   1.5M      992
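The sketch below illustrates the exact text matching used to turn Conceptual Captions into image labels, as mentioned in the data description above; the matching rule is a simplified assumption and the released preprocessing may differ.

```python
def labels_from_caption(caption: str, lvis_classes: list) -> list:
    # Keep every LVIS class name that appears verbatim in the caption.
    text = " " + caption.lower() + " "
    return [name for name in lvis_classes if " " + name.lower() + " " in text]

# Images whose captions mention no LVIS class are dropped from the weak training set, e.g.:
# labels_from_caption("A dog catches a frisbee", ["dog", "frisbee", "zebra"]) -> ["dog", "frisbee"]
```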

5.1 Implementation Details

Box-Supervised: A Strong LVIS Baseline. We first establish a strong baseline on LVIS to demonstrate that our improvements are orthogonal to recent advances in object detection. The baseline uses only the supervised bounding box labels. We use the CenterNet2 [76] detector with a ResNet50 [21] backbone. We use Federated Loss [76] and repeat factor sampling [18]. We use large-scale jittering [15] with input resolution \(640\!\times \!640\) and train for a \(4\times \) schedule (\(\sim \!\!48\) LVIS epochs). To show our method is compatible with better pretraining, we use ImageNet-21K pretrained backbone weights [48]. As described in Sect. 3, we use the CLIP [42] embeddings as the classifier. Our baseline is 9.1 mAP higher than the detectron2 baseline [66] (31.5 vs. 22.4 mAP\(^{\text {mask}}\)) and trains in a similar time (17 vs. 12 h on 8 V100 GPUs). See Appendix C for more details.

Resolution Change for Image-Labeled Images. ImageNet images are inherently smaller and more object-focused than LVIS images [73]. In practice, we observe that it is important to use a smaller image resolution for ImageNet images. Using a smaller resolution also allows us to increase the batch size at the same computational cost. In our implementation, we use \(320 \times 320\) for ImageNet and CC and ablate this choice in Appendix D.
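A minimal sketch of the dataset-dependent input resolution is given below; it uses a plain resize for readability, whereas the actual recipe applies large-scale jittering to detection images.

```python
import torchvision.transforms as T

def build_transform(is_image_labeled: bool):
    # Image-labeled (ImageNet/CC) images use the smaller 320x320 input,
    # while detection images keep the larger 640x640 training resolution.
    size = (320, 320) if is_image_labeled else (640, 640)
    return T.Compose([T.Resize(size), T.ToTensor()])
```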

Multi-dataset Training. We sample detection and classification mini-batches in a 1:1 ratio, regardless of the original dataset sizes. We group images from the same dataset on the same GPU to improve training efficiency [77].
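The generator below is a hedged sketch of the 1:1 multi-dataset sampling: it alternates mini-batches from the two loaders regardless of their sizes, restarting whichever loader is exhausted, and omits the per-GPU dataset grouping used for efficiency.

```python
def mixed_batches(det_loader, cls_loader):
    # Alternate detection and classification mini-batches in a 1:1 ratio.
    det_iter, cls_iter = iter(det_loader), iter(cls_loader)
    while True:
        try:
            det_batch = next(det_iter)
        except StopIteration:
            det_iter = iter(det_loader)   # restart the detection loader
            det_batch = next(det_iter)
        try:
            cls_batch = next(cls_iter)
        except StopIteration:
            cls_iter = iter(cls_loader)   # restart the classification loader
            cls_batch = next(cls_iter)
        yield {"source": "detection", "data": det_batch}
        yield {"source": "classification", "data": cls_batch}
```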

Training Schedules. To shorten the experimental cycle and provide a good initialization for prediction-based WSOD losses [44, 45], we always first train a converged base-class-only model (\(4\times \) schedule) and then finetune it with additional image-labeled data for another \(4\times \) schedule. We confirm that finetuning the model using only box supervision does not improve performance. The \(4\times \) schedule for our joint training consists of \(\sim \!\!24\) LVIS epochs plus \(\sim \!\!4.8\) ImageNet epochs or \(\sim \!\!3.8\) CC epochs. Training our ResNet50 model takes \(\sim \!22\) hours on 8 V100 GPUs. The large 21K Swin-B model trains in \(\sim \!24\) hours on 32 GPUs.

5.2 Prediction-Based vs Non-prediction-Based Methods

Table 1 shows the results of the box-supervised baseline, existing prediction-based methods, and our proposed non-prediction-based methods. The baseline (Box-Supervised) is trained without access to novel class bounding box labels. It uses the CLIP classifier [17] and has open-vocabulary capabilities with 16.3 mAP\(_{\text {novel}}\). In order to leverage additional image-labeled data like ImageNet or CC, we use prior prediction-based methods or our non-prediction-based method.

We compare a few prediction-based methods that assign image labels to proposals based on predictions. Self-training assigns predictions of Box-Supervised as offline pseudo-labels using a fixed score threshold (0.5). The other prediction-based methods use different losses to assign image labels to predictions online. See Appendix E for implementation details. For DLWL [44], we implement a simplified version that does not include bootstrapping and refer to it as DLWL*.
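For reference, a minimal sketch of the offline self-training baseline described above: keep the Box-Supervised model's predictions for the image's labeled classes whose score exceeds the fixed threshold as pseudo-boxes. Names and data layout are assumptions.

```python
def pseudo_labels(predictions, image_labels, score_thresh=0.5):
    # predictions: list of (box, class_id, score) tuples from the Box-Supervised model.
    return [(box, c, s) for (box, c, s) in predictions
            if c in image_labels and s >= score_thresh]
```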

Table 1. Prediction-based vs. non-prediction-based methods. We show overall and novel-class mAP on open-vocabulary LVIS [17] (with 866 base classes and 337 novel classes) with different image-labeled datasets (IN-L or CC). The models are trained using our strong baseline (Sect. 5.1, top row). This baseline is trained on boxes from the base classes and has non-zero novel-class mAP because it uses the CLIP classifier. All models in the following rows are finetuned from the baseline model and leverage image-labeled data. We repeat each experiment for 3 runs and report mean/std. All variants of our proposed non-prediction-based losses outperform existing prediction-based counterparts.

Table 1 (third block) shows the results of our non-prediction-based methods in Sect. 4. All variants of our proposed simpler method outperform the complex prediction-based counterparts, with both image-supervised datasets. On the novel classes, Detic provides a significant gain of \(\sim 4.2\) points with ImageNet over the best prediction-based methods.

Using Non-object-Centric Images from Conceptual Captions. ImageNet images typically contain a single large object [18]. Thus, our non-prediction-based methods, for example image-box, which considers the entire image as a bounding box, are well suited for ImageNet. To test whether our losses work for image distributions with multiple objects, we evaluate them on the Conceptual Captions (CC) dataset. Even on this challenging dataset with multiple objects/labels per image, Detic provides a gain of \(\sim 2.6\) points on novel-class detection over the best prediction-based methods. This suggests that our simpler Detic method generalizes to different types of image-labeled data. Overall, the results in Table 1 suggest that complex prediction-based methods that rely heavily on model prediction scores do not perform well for open-vocabulary detection. Amongst our non-prediction-based variants, the max-size loss consistently performs the best and is the default for Detic in our following experiments.

Why Does Max-Size Work? Intuitively, our simpler non-prediction-based methods outperform the complex prediction-based methods by side-stepping a hard assignment problem. Prediction-based methods rely on strong initial detections to assign image-level labels to predicted boxes. When the initial predictions are reliable, prediction-based methods are ideal. However, in open-vocabulary scenarios, such strong initial predictions are absent, which explains the limited performance of prediction-based methods. Detic’s simpler assignment does not rely on strong predictions and is more robust under the challenges of the open-vocabulary setting.

We now study two additional advantages of the Detic max-size variant over prediction-based methods that may contribute to its improved performance: 1) the selected max-size proposal safely covers the target object; 2) the selected max-size proposal is consistent across training iterations.

Figure 4 provides typical qualitative examples of the assigned region for the prediction-based method and our max-size variant. On an annotated subset of IN-L, Detic max-size covers \(92.8\%\) of target objects, vs. \(69.0\%\) for the prediction-based method. Overall, unlike prediction-based methods, Detic’s simpler assignment yields boxes that are more likely to contain the object. Indeed, Detic may miss certain objects (especially small objects) or supervise a loose region. However, for Detic to yield a good detector, the selected box need not be perfect; it just needs to 1) provide a meaningful training signal (cover the objects and be consistent during training) and 2) be ‘more correct’ than the box selected by the prediction-based method. We provide details about our metrics, more quantitative evaluation, and more discussion in Appendix E.

Fig. 4. Visualization of the assigned boxes during training. We show all boxes with score \(>0.5\) and highlight the assigned (selected) box. Top: The prediction-based method selects different boxes across training iterations, and the selected box may not cover the objects in the image. Bottom: Our simpler max-size variant selects a box that covers the objects and is more consistent across training.

Table 2. Open-vocabulary LVIS compared to ViLD [17]. We train our model using their training settings and architecture (Mask R-CNN with ResNet50, training from scratch). We report mask mAP and its breakdown into novel (rare), common, and frequent classes. Variants of ViLD use distillation (ViLD) or ensembling (ViLD-ensemble). Detic (with IN-L) uses a single model and improves both mAP and mAP\(_{\text {novel}}\).

5.3 Comparison with a Fully-Supervised Detector

In Table 1, compared with the strong Box-Supervised baseline, Detic improves detection performance by 2.4 mAP and 8.3 mAP\(_\text {novel}\). Thus, Detic with image-level labels leads to strong open-vocabulary detection performance and can provide gains orthogonal to existing open-vocabulary detectors [2]. To further understand the open-vocabulary capabilities of Detic, we also report the top-line results trained with box labels for all classes (Table 1, last row). Despite not using box labels for the novel classes, Detic with ImageNet performs favorably compared to the fully-supervised detector. This result also suggests that bounding box annotations may not be required for new classes. Detic combined with large image classification datasets is a simple and effective alternative for increasing detector vocabulary.

5.4 Comparison with the State-of-the-Art

We compare Detic’s open-vocabulary object detectors with state-of-the-art methods on the open-vocabulary LVIS and open-vocabulary COCO benchmarks. In each case, we strictly follow the architecture and setup of the prior work to ensure fair comparisons.

Open-vocabulary LVIS. We compare to ViLD [17], which first used CLIP embeddings [42] for open-vocabulary detection. We strictly follow their training setup and model architecture (Appendix G) and report results in Table 2. Here ViLD-text is exactly our Box-Supervised baseline. Detic provides a gain of 7.7 points in mAP\(_\text {novel}\). Compared to ViLD-text, ViLD, which uses knowledge distillation from the CLIP visual backbone, improves mAP\(_\text {novel}\) at the cost of hurting the overall mAP. Ensembling the two models, ViLD-ensemble provides improvements on both metrics. In contrast, Detic uses a single model that improves both novel and overall mAP, and outperforms the ViLD ensemble.

Open-vocabulary COCO. Next, we compare with prior work on the popular open-vocabulary COCO benchmark [2] (see benchmark and implementation details in Appendix H). We strictly follow OVR-CNN [72] and use Faster R-CNN with a ResNet50-C4 backbone, without any of the improvements from Sect. 5.1. Following [72], we use COCO captions as the image-supervised data. We extract nouns from the captions and use both the image labels and the captions as supervision.

Table 3. Open-vocabulary COCO [2]. We compare Detic using the same training data and architecture from OVR-CNN [72]. We report box mAP at IoU threshold 0.5 using Faster R-CNN with ResNet50-C4 backbone. Detic builds upon the CLIP baseline (second row) and shows significant improvements over prior work. \(\dagger \): results quoted from OVR-CNN [72] paper or code. \(\ddagger \): results quoted from the original publications.

Table 3 summarizes our results. As the training set contains only 48 base classes, the base-class only model (second row) yields low mAP on novel classes. Detic improves the baseline and outperforms OVR-CNN [72] by a large margin, using exactly the same model, training recipe, and data.

Additionally, similar to Table 1, we compare to prior prediction-based methods on the open-vocabulary COCO benchmark in Appendix H. In this setting too, Detic improves over prior work providing significant gains on novel class detection and overall detection performance.

5.5 Detecting 21K Classes Across Datasets Without Finetuning

Next, we train a detector with the full 21K classes of ImageNet. We use our strong recipe with a Swin-B [37] backbone. In practice, training a classification layer over 21K classes is computationally expensive. We adopt a modified Federated Loss [76] that uniformly samples 50 classes from the vocabulary at every iteration. We only compute classification scores and back-propagate on the sampled classes.
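A hedged sketch of this sampled classification loss is given below: at each iteration we score only a uniformly sampled subset of classes, here augmented with the classes present in the batch targets (our assumption), and back-propagate BCE on those columns alone. It is a sketch of the idea, not the released implementation.

```python
import torch
import torch.nn.functional as F

def sampled_class_bce(roi_feats: torch.Tensor,  # (N, D) region features
                      W: torch.Tensor,          # (C, D) full 21K-class classifier weights
                      targets: torch.Tensor,    # (N, C) multi-hot class targets
                      num_sampled: int = 50) -> torch.Tensor:
    C = W.shape[0]
    present = (targets > 0).any(dim=0).nonzero(as_tuple=True)[0]   # classes in this batch
    rand = torch.randint(0, C, (num_sampled,), device=W.device)    # uniformly sampled classes
    sampled = torch.unique(torch.cat([present, rand]))
    logits = roi_feats @ W[sampled].t()                             # scores for sampled classes only
    return F.binary_cross_entropy_with_logits(logits, targets[:, sampled].float())
```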

As there is no direct benchmark for evaluating detectors with such a large vocabulary, we evaluate our detectors on new datasets without finetuning. We evaluate on two large-scale object detection datasets, Objects365v2 [49] and OpenImages [28], both with around 1.8M training images. We follow LVIS and designate the \(\frac{1}{3}\) of classes with the fewest training images as rare classes. Table 4 shows the results. On both datasets, Detic improves over the Box-Supervised baseline by a large margin, especially on classes with fewer annotations. Using all 21K classes further improves performance owing to the larger vocabulary. Our single model significantly reduces the gap to the dataset-specific oracles and reaches \(70\%\)–\(80\%\) of their performance without using the corresponding 1.8M detection annotations. See Fig. 5 for qualitative results.

Table 4. Detecting 21K classes across datasets. We use Detic to train a detector and evaluate it on multiple datasets without retraining. We report the bounding box mAP on Objects365 and OpenImages. Compared to the Box-Supervised baseline (trained on LVIS-all), Detic leverages image-level supervision to train robust detectors. The performance of Detic is \(70\%\)–\(80\%\) of the dataset-specific oracles (bottom row) that use dataset-specific box labels.
Fig. 5. Qualitative results of our 21K-class detector. We show random samples from images containing novel classes in the OpenImages (top) and Objects365 (bottom) validation sets. We use the CLIP embeddings of the corresponding vocabularies. LVIS classes and novel classes are shown in different colors. We use a score threshold of 0.5 and show the most confident class for each box. Best viewed on screen.

Table 5. Detic with different classifiers. We vary the classifier used with Detic and observe that it works well with different choices. While CLIP embeddings give the best performance (* indicates our default), all classifiers benefit from Detic.
Table 6. Detic with different pretraining data. Top: our method with ImageNet-1K pretraining and ImageNet-21K co-training; Bottom: ImageNet-21K for both pretraining and co-training. Co-training provides gains under both pretraining settings.

5.6 Ablation Studies

We now ablate our key components under the open-vocabulary LVIS setting with IN-L as the image-classification data. We use our strong training recipe as described in Sect. 5.1 for all these experiments.

Classifier Weights. We study the effect of different classifier weights \(\textbf{W}\). While our main open-vocabulary experiments use CLIP [42], we show that the gain of Detic is independent of CLIP. We train Box-Supervised and Detic with different classifiers, including a standard randomly initialized and trained classifier, and other fixed language models [23, 24]. The results are shown in Table 5. By default, a trained classifier cannot recognize novel classes. However, Detic enables novel-class recognition even in this setting (17.4 mAP\(_{\text {novel}}\) for classes without detection labels). Using language models such as FastText [24] or an open-source version of CLIP [23] leads to better novel-class performance. CLIP [42] performs the best among them.

Effect of Pretraining. Many existing methods use additional data only for pretraining [11, 72, 73], while we use image-labeled data for co-training. We present results of Detic with different types of pretraining in Table 6. Detic provides similar gains across different types of pretraining, suggesting that our gains are orthogonal to advances in pretraining. We believe that this is because pretraining improves the overall features, while Detic uses co-training which improves both the features and the classifier.

5.7 The Standard LVIS Benchmark

Finally, we evaluate Detic on the standard LVIS benchmark [18]. In this setting, the baseline (Box-Supervised) is trained with box and mask labels for all classes, while Detic uses additional image-level labels from IN-L. We train Detic with the same recipe as in Sect. 5.1 and use a strong Swin-B [37] backbone with \(896\times 896\) input size. We report the mask mAP across all classes and also split into rare, common, and frequent classes. Notably, Detic achieves 41.7 mAP and 41.7 mAP\(_{\text {r}}\), closing the gap between the overall mAP and the rare-class mAP. This suggests that Detic effectively uses image-level labels to improve the performance of classes with very few box labels. Appendix I provides more comparisons to prior work [73] on LVIS. Appendix J shows that Detic generalizes to DETR-based [79] detectors (Table 7).

Table 7. Standard LVIS. We evaluate our baseline (Box-Supervised) and Detic using different backbones on the LVIS dataset. We report the mask mAP. We also list prior work on LVIS using large backbone networks (single-scale testing) for reference (not an apples-to-apples comparison). \(\dagger \): detectors using additional data. Detic improves over the baseline, with increased gains for the rare classes.

6 Limitations and Conclusions

We present Detic, a simple way to use image supervision in large-vocabulary object detection. While Detic is simpler than prior assignment-based weakly-supervised detection methods, it supervises all image labels on the same region and does not consider overall dataset statistics. We leave incorporating such information to future work. Moreover, open-vocabulary generalization offers no guarantees on extreme domains. Our experiments show that Detic improves large-vocabulary detection with various weak data sources, classifiers, detector architectures, and training recipes.