1 Introduction

Overview The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for 5 years (since 2010) and has become the standard benchmark for large-scale object recognition. ILSVRC follows in the footsteps of the PASCAL VOC challenge (Everingham et al. 2012), established in 2005, which set the precedent for standardized evaluation of recognition algorithms in the form of yearly competitions. As in PASCAL VOC, ILSVRC consists of two components: (1) a publicly available dataset, and (2) an annual competition and corresponding workshop. The dataset allows for the development and comparison of categorical object recognition algorithms, and the competition and workshop provide a way to track the progress and discuss the lessons learned from the most successful and innovative entries each year.

The publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld. Participants train their algorithms using the training images and then automatically annotate the test images. These predicted annotations are submitted to the evaluation server. Results of the evaluation are revealed at the end of the competition period, and authors are invited to share insights at the workshop held at the International Conference on Computer Vision (ICCV) or European Conference on Computer Vision (ECCV) in alternate years.

ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a class label and a tight bounding box around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.
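
As a concrete illustration (the field names below are ours for exposition only and do not correspond to an official ILSVRC annotation or submission format), the two annotation types could be represented as simple records:

```python
# Illustrative only: hypothetical records for the two ILSVRC annotation types.
# Field names are our own and not an official ILSVRC format.

# (1) Image-level annotation: binary presence/absence of an object class.
image_level = {"image_id": "example_0001",
               "class": "car",
               "present": True}

# (2) Object-level annotation: class label plus a tight axis-aligned bounding box,
#     here given by its center, width and height in pixels.
object_level = {"image_id": "example_0002",
                "class": "screwdriver",
                "bbox": {"center_x": 20, "center_y": 25, "width": 50, "height": 30}}
```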

Large-Scale Challenges and Innovations Several challenges had to be addressed in creating the dataset. Scaling up from 19,737 images in PASCAL VOC 2010 to 1,461,406 in ILSVRC 2010, and from 20 object classes to 1000 object classes, brings with it several difficulties. It is no longer feasible for a small group of annotators to annotate the data as is done for other datasets (Fei-Fei et al. 2004; Criminisi 2004; Everingham et al. 2012; Xiao et al. 2010). Instead we turn to designing novel crowdsourcing approaches for collecting large-scale annotations (Su et al. 2012; Deng et al. 2009, 2014).

Some of the 1000 object classes may not be as easy to annotate as the 20 categories of PASCAL VOC: e.g., bananas, which appear in bunches, may not be as easy to delineate as basic-level categories such as aeroplanes or cars. Having more than a million images makes it infeasible to annotate the locations of all objects (much less with object segmentations, human body parts, and other detailed annotations that subsets of PASCAL VOC contain). New evaluation criteria have to be defined to take into account the fact that obtaining perfect manual annotations in this setting may be infeasible.

Once the challenge dataset was collected, its scale allowed for unprecedented opportunities both in evaluation of object recognition algorithms and in developing new techniques. Novel algorithmic innovations emerge with the availability of large-scale training data. The broad spectrum of object categories motivated the need for algorithms able to distinguish even classes which are visually very similar. We highlight the most successful of these algorithms in this paper, and compare their performance with human-level accuracy.

Finally, the large variety of object classes in ILSVRC allows us to perform an analysis of statistical properties of objects and their impact on recognition algorithms. This type of analysis allows for a deeper understanding of object recognition, and for designing the next generation of general object recognition algorithms.

Goals This paper has three key goals:

  1. To discuss the challenges of creating this large-scale object recognition benchmark dataset,

  2. To highlight the developments in object classification and detection that have resulted from this effort, and

  3. To take a closer look at the current state of the field of categorical object recognition.

The paper may be of interest to researchers working on creating large-scale datasets, as well as to anybody interested in better understanding the history and the current state of large-scale object recognition.

The collected dataset and additional information about ILSVRC can be found at:

http://image-net.org/challenges/LSVRC/.

1.1 Related Work

We briefly discuss some prior work in constructing benchmark image datasets.

Image Classification Datasets Caltech 101 (Fei-Fei et al. 2004) was among the first standardized datasets for multi-category image classification, with 101 object classes and commonly 15–30 training images per class. Caltech 256 (Griffin et al. 2007) increased the number of object classes to 256 and added images with greater scale and background variability. The TinyImages dataset (Torralba et al. 2008) contains 80 million \(32\times 32\) low resolution images collected from the internet using synsets in WordNet (Miller 1995) as queries. However, since this data has not been manually verified, there are many errors, making it less suitable for algorithm evaluation. Datasets such as 15 Scenes (Oliva and Torralba 2001; Fei-Fei and Perona 2005; Lazebnik et al. 2006) or the more recent Places dataset (Zhou et al. 2014) provide a single scene category label per image (as opposed to an object category).

The ImageNet dataset (Deng et al. 2009) is the backbone of ILSVRC. ImageNet is an image dataset organized according to the WordNet hierarchy (Miller 1995). Each concept in WordNet, possibly described by multiple words or word phrases, is called a “synonym set” or “synset”. ImageNet populates 21,841 synsets of WordNet with an average of 650 manually verified and full resolution images. As a result, ImageNet contains 14,197,122 annotated images organized by the semantic hierarchy of WordNet (as of August 2014). ImageNet is larger in scale and diversity than the other image classification datasets. ILSVRC uses a subset of ImageNet images for training the algorithms and some of ImageNet’s image collection protocols for annotating additional images for testing the algorithms.

Image Parsing Datasets Many datasets aim to provide richer image annotations beyond image-category labels. LabelMe (Russell et al. 2007) contains general photographs with multiple objects per image. It has bounding polygon annotations around objects, but the object names are not standardized: annotators are free to choose which objects to label and what to name each object. The SUN2012 dataset (Xiao et al. 2010) contains 16,873 manually cleaned up and fully annotated images, more suitable for standard object detection training and evaluation. SIFT Flow (Liu et al. 2011) contains 2,688 images labeled using the LabelMe system. The LotusHill dataset (Yao et al. 2007) contains very detailed annotations of objects in 636,748 images and video frames, but it is not available for free. Several datasets provide pixel-level segmentations: for example, the MSRC dataset (Criminisi 2004) with 591 images and 23 object classes, the Stanford Background Dataset (Gould et al. 2009) with 715 images and 8 classes, and the Berkeley Segmentation dataset (Arbelaez et al. 2011) with 500 images annotated with object boundaries. OpenSurfaces segments surfaces from consumer photographs and annotates them with surface properties, including material, texture, and contextual information (Bell et al. 2013).

The closest to ILSVRC is the PASCAL VOC dataset (Everingham et al. 2010, 2014), which provides a standardized test bed for object detection, image classification, object segmentation, person layout, and action classification. Many of the design choices in ILSVRC have been inspired by PASCAL VOC, and the similarities and differences between the datasets are discussed at length throughout the paper. ILSVRC scales up PASCAL VOC’s goal of standardized training and evaluation of recognition algorithms by more than an order of magnitude in the number of object classes and images: PASCAL VOC 2012 has 20 object classes and 21,738 images compared to ILSVRC2012 with 1000 object classes and 1,431,167 annotated images.

The recently released COCO dataset (Lin et al. 2014b) contains more than 328,000 images with 2.5 million object instances manually segmented. It has fewer object categories than ILSVRC (91 in COCO versus 200 in ILSVRC object detection) but more instances per category (27K on average compared to about 1K in ILSVRC object detection). Further, it contains object segmentation annotations which are not currently available in ILSVRC. COCO is likely to become another important large-scale benchmark.

Large-Scale Annotation ILSVRC makes extensive use of Amazon Mechanical Turk to obtain accurate annotations (Sorokin and Forsyth 2008). Works such as (Welinder et al. 2010; Sheng et al. 2008; Vittayakorn and Hays 2011) describe quality control mechanisms for this marketplace. Vondrick et al. (2012) provide a detailed overview of crowdsourcing video annotation. A related line of work is to obtain annotations through well-designed games, e.g. (von Ahn and Dabbish 2005). Our novel approaches to crowdsourcing accurate image annotations are described in Sects. 3.1.3, 3.2.1 and 3.3.3.

Standardized Challenges There are several datasets with standardized online evaluation similar to ILSVRC: the aforementioned PASCAL VOC (Everingham et al. 2012), Labeled Faces in the Wild (Huang et al. 2007) for unconstrained face recognition, Reconstruction meets Recognition (Urtasun et al. 2014) for 3D reconstruction and KITTI (Geiger et al. 2013) for computer vision in autonomous driving. These datasets along with ILSVRC help benchmark progress in different areas of computer vision. Works such as (Torralba and Efros 2011) emphasize the importance of examining the bias inherent in any standardized dataset.

1.2 Paper Layout

We begin with a brief overview of ILSVRC challenge tasks in Sect. 2. Dataset collection and annotation are described at length in Sect. 3. Section 4 discusses the evaluation criteria of algorithms in the large-scale recognition setting. Section 5 provides an overview of the methods developed by ILSVRC participants.

Section 6 contains an in-depth analysis of ILSVRC results: Sect. 6.1 documents the progress of large-scale recognition over the years, Sect. 6.2 concludes that ILSVRC results are statistically significant, Sect. 6.3 thoroughly analyzes the current state of the field of object recognition, and Sect. 6.4 compares state-of-the-art computer vision accuracy with human accuracy. We conclude and discuss lessons learned from ILSVRC in Sect. 7.

2 Challenge Tasks

The goal of ILSVRC is to estimate the content of photographs for the purpose of retrieval and automatic annotation. Test images are presented with no initial annotation, and algorithms have to produce labelings specifying what objects are present in the images. New test images are collected and labeled especially for this competition and are not part of the previously published ImageNet dataset (Deng et al. 2009).

ILSVRC over the years has consisted of one or more of the following tasks (years in parentheses):

  1. Image classification (2010–2014): Algorithms produce a list of object categories present in the image.

  2. Single-object localization (2011–2014): Algorithms produce a list of object categories present in the image, along with an axis-aligned bounding box indicating the position and scale of one instance of each object category.

  3. Object detection (2013–2014): Algorithms produce a list of object categories present in the image along with an axis-aligned bounding box indicating the position and scale of every instance of each object category.

This section provides an overview and history of each of the three tasks. Table 1 shows summary statistics.

Table 1 Overview of the provided annotations for each of the tasks in ILSVRC

2.1 Image Classification Task

Data for the image classification task consists of photographs collected from Flickr and other search engines, manually labeled with the presence of one of 1000 object categories. Each image contains one ground truth label.

For each image, algorithms produce a list of object categories present in the image. The quality of a labeling is evaluated based on the label that best matches the ground truth label for the image (see Sect. 4.1).

Constructing ImageNet was an effort to scale up an image classification dataset to cover most nouns in English using tens of millions of manually verified photographs (Deng et al. 2009). The image classification task of ILSVRC came as a direct extension of this effort. A subset of categories and images was chosen and fixed to provide a standardized benchmark while the rest of ImageNet continued to grow.

2.2 Single-Object Localization Task

The single-object localization task, introduced in 2011, built on the image classification task to evaluate the ability of algorithms to learn the appearance of the target object itself rather than its image context.

Data for the single-object localization task consists of the same photographs collected for the image classification task, hand labeled with the presence of one of 1000 object categories. Each image contains one ground truth label. Additionally, every instance of this category is annotated with an axis-aligned bounding box.

For each image, algorithms produce a list of object categories present in the image, along with a bounding box indicating the position and scale of one instance of each object category. The quality of a labeling is evaluated based on the object category label that best matches the ground truth label, with the additional requirement that the location of the predicted instance is also accurate (see Sect. 4.2).

2.3 Object Detection Task

The object detection task went a step beyond single-object localization and tackled the problem of localizing multiple object categories in the image. This task has been a part of the PASCAL VOC challenge for many years on the scale of 20 object categories and tens of thousands of images, but scaling it up by an order of magnitude in object categories and in images proved to be very challenging from a dataset collection and annotation point of view (see Sect. 3.3).

Data for the detection task consists of new photographs collected from Flickr using scene-level queries. The images are annotated with axis-aligned bounding boxes indicating the position and scale of every instance of each target object category. The training set is additionally supplemented with (a) data from the single-object localization task, which contains annotations for all instances of just one object category per image, and (b) negative images known not to contain any instance of some object categories.

For each image, algorithms produce bounding boxes indicating the position and scale of all instances of all target object categories. The quality of labeling is evaluated in terms of recall, i.e., the fraction of target object instances detected, and precision, which penalizes spurious detections produced by the algorithm (see Sect. 4.3).

3 Dataset Construction at Large Scale

Our process of constructing large-scale object recognition image datasets consists of three key steps.

The first step is defining the set of target object categories. To do this, we select from among the existing ImageNet (Deng et al. 2009) categories. By using WordNet as a backbone (Miller 1995), ImageNet already takes care of disambiguating word meanings and of combining synonyms into the same object category. Since the selection of object categories needs to be done only once per challenge task, we use a combination of automatic heuristics and manual post-processing to create the list of target categories appropriate for each task. For example, for image classification we may include broader scene categories such as a type of beach, but for single-object localization and object detection we want to focus only on object categories which can be unambiguously localized in images (Sects. 3.1.1, 3.3.1).

The second step is collecting a diverse set of candidate images to represent the selected categories. We use both automatic and manual strategies on multiple search engines to do the image collection. The process is modified for the different ILSVRC tasks. For example, for object detection we focus our efforts on collecting scene-like images using generic queries such as “African safari” to find pictures likely to contain multiple animals in one scene (Sect. 3.3.2).

The third (and most challenging) step is annotating the millions of collected images to obtain a clean dataset. We carefully design crowdsourcing strategies targeted to each individual ILSVRC task. For example, the bounding box annotation system used for localization and detection tasks consists of three distinct parts in order to include automatic crowdsourced quality control (Sect. 3.2.1). Annotating images fully with all target object categories (on a reasonable budget) for object detection requires an additional hierarchical image labeling system (Sect. 3.3.3).

We describe the data collection and annotation procedure for each of the ILSVRC tasks in order: image classification (Sect. 3.1), single-object localization (Sect. 3.2), and object detection (Sect. 3.3), focusing on the three key steps for each dataset.

3.1 Image Classification Dataset Construction

The image classification task tests the ability of an algorithm to name the objects present in the image, without necessarily localizing them.

We describe the choices we made in constructing the ILSVRC image classification dataset: selecting the target object categories from ImageNet (Sect. 3.1.1), collecting a diverse set of candidate images by using multiple search engines and an expanded set of queries in multiple languages (Sect. 3.1.2), and finally filtering the millions of collected images using the carefully designed crowdsourcing strategy of ImageNet (Deng et al. 2009) (Sect. 3.1.3).

3.1.1 Defining Object Categories for the Image Classification Dataset

The 1000 categories used for the image classification task were selected from the ImageNet (Deng et al. 2009) categories. The 1000 synsets are selected such that there is no overlap between synsets: for any synsets \(i\) and \(j\), \(i\) is not an ancestor of \(j\) in the ImageNet hierarchy. These synsets are part of the larger hierarchy and may have children in ImageNet; however, for ILSVRC we do not consider their child subcategories. The synset hierarchy of ILSVRC can be thought of as a “trimmed” version of the complete ImageNet hierarchy. Figure 1 visualizes the diversity of the ILSVRC2012 object categories.

Fig. 1 The diversity of data in the ILSVRC image classification and single-object localization tasks. For each of the eight dimensions, we show example object categories along the range of that property. Object scale, number of instances and image clutter for each object category are computed using the metrics defined in Sect. 3.2.2 and in Appendix 1. The other properties were computed by asking human subjects to annotate each of the 1000 object categories (Russakovsky et al. 2013)

The exact 1000 synsets used for the image classification and single-object localization tasks have changed over the years. There are 639 synsets which have been used in all five ILSVRC challenges so far. In the first year of the challenge, synsets were selected randomly from the available ImageNet synsets at the time, followed by manual filtering to make sure the object categories were not too obscure. With the introduction of the object localization challenge in 2011, 321 synsets changed: categories such as “New Zealand beach” which were inherently difficult to localize were removed, and some new categories from ImageNet containing object localization annotations were added. In ILSVRC2012, 90 synsets were replaced with categories corresponding to dog breeds to allow for evaluation of more fine-grained object classification, as shown in Fig. 2. The synsets have remained consistent since 2012. Appendix 1 provides the complete list of object categories used in ILSVRC2012-2014.

Fig. 2 The ILSVRC dataset contains many more fine-grained classes compared to the standard PASCAL VOC benchmark; for example, instead of the PASCAL “dog” category there are 120 different breeds of dogs in ILSVRC2012-2014 classification and single-object localization tasks

3.1.2 Collecting Candidate Images for the Image Classification Dataset

Image collection for the ILSVRC classification task follows the same strategy employed for constructing ImageNet (Deng et al. 2009). Training images are taken directly from ImageNet. Additional images are collected for ILSVRC using this strategy and randomly partitioned into the validation and test sets.

We briefly summarize the process; (Deng et al. 2009) contains further details. Candidate images are collected from the Internet by querying several image search engines. For each synset, the queries are the set of WordNet synonyms. Search engines typically limit the number of retrievable images (on the order of a few hundred to a thousand). To obtain as many images as possible, we expand the query set by appending the queries with the word from parent synsets, if the same word appears in the gloss of the target synset. For example, when querying “whippet”, according to WordNet’s gloss a “small slender dog of greyhound type developed in England”, we also use “whippet dog” and “whippet greyhound.” To further enlarge and diversify the candidate pool, we translate the queries into other languages, including Chinese, Spanish, Dutch and Italian. We obtain accurate translations using WordNets in those languages.
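
A rough sketch of this query-expansion heuristic, using NLTK's WordNet interface (this is our own illustration, not the original ImageNet collection code; the exact matching rule and the set of ancestors considered are simplifications):

```python
# Sketch of the query-expansion heuristic: append an ancestor-synset word to the
# query when that word also appears in the target synset's gloss.
# Assumes NLTK with the WordNet corpus installed (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def expand_queries(synset_name):
    synset = wn.synset(synset_name)                       # e.g. 'whippet.n.01'
    base = [l.replace('_', ' ') for l in synset.lemma_names()]
    gloss = synset.definition().lower()                   # "small slender dog of greyhound type ..."
    expanded = list(base)
    for ancestor in synset.closure(lambda s: s.hypernyms()):
        for word in (l.replace('_', ' ') for l in ancestor.lemma_names()):
            if word.lower() in gloss:                     # ancestor word appears in the gloss
                expanded += [q + ' ' + word for q in base]
    return expanded

print(expand_queries('whippet.n.01'))   # includes 'whippet greyhound' and 'whippet dog'
```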

3.1.3 Image Classification Dataset Annotation

Annotating images with corresponding object classes follows the strategy employed by ImageNet (Deng et al. 2009). We summarize it briefly here.

To collect a highly accurate dataset, we rely on humans to verify each candidate image collected in the previous step for a given synset. This is achieved by using Amazon Mechanical Turk (AMT), an online platform on which one can put up tasks for users for a monetary reward. With a global user base, AMT is particularly suitable for large scale labeling. In each of our labeling tasks, we present the users with a set of candidate images and the definition of the target synset (including a link to Wikipedia). We then ask the users to verify whether each image contains objects of the synset. We encourage users to select images regardless of occlusions, number of objects and clutter in the scene to ensure diversity.

While users are instructed to make accurate judgments, we need to set up a quality control system to ensure this accuracy. There are two issues to consider. First, human users make mistakes and not all users follow the instructions. Second, users do not always agree with each other, especially for more subtle or confusing synsets, typically at the deeper levels of the tree. The solution to these issues is to have multiple users independently label the same image. An image is considered positive only if it gets a convincing majority of the votes. We observe, however, that different categories require different levels of consensus among users. For example, while five users might be necessary for obtaining a good consensus on Burmese cat images, a much smaller number is needed for cat images. We develop a simple algorithm to dynamically determine the number of agreements needed for different categories of images. For each synset, we first randomly sample an initial subset of images. At least 10 users are asked to vote on each of these images. We then obtain a confidence score table, indicating the probability of an image being a good image given the consensus among user votes. For each of the remaining candidate images in this synset, we proceed with the AMT user labeling until a pre-determined confidence score threshold is reached.
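
A minimal sketch of this dynamic-consensus loop is shown below (our own toy illustration, not the ImageNet implementation): the confidence table here is a hand-made stand-in for the one estimated from the initial batch of at least 10 votes per image, and collect_vote is a placeholder for posting a verification task to an AMT worker.

```python
import random

# Toy sketch of dynamic consensus: keep collecting votes for an image until the
# estimated confidence that it is a true positive (or true negative) crosses a
# threshold. CONFIDENCE maps a vote tally (num_yes, num_no) to P(good image | votes)
# and would be estimated per synset from an initial >=10-vote sample.
CONFIDENCE = {            # hypothetical values for one synset
    (2, 0): 0.97, (3, 0): 0.99, (2, 1): 0.85, (3, 1): 0.93,
    (0, 2): 0.02, (0, 3): 0.01, (1, 2): 0.15, (1, 3): 0.07,
}

def collect_vote(image):
    """Placeholder for one AMT verification task ("does this image contain the synset?")."""
    return random.random() < 0.8          # pretend 80% of workers answer "yes"

def label_image(image, threshold=0.95, max_votes=10):
    yes = no = 0
    while yes + no < max_votes:
        if collect_vote(image):
            yes += 1
        else:
            no += 1
        conf = CONFIDENCE.get((yes, no))
        if conf is not None and (conf >= threshold or conf <= 1 - threshold):
            return conf >= threshold      # confident positive or confident negative
    return yes > no                       # fall back to a simple majority vote

print(label_image("candidate.jpg"))
```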

Empirical Evaluation Evaluation of the accuracy of the large-scale crowdsourced image annotation system was done on the entire ImageNet (Deng et al. 2009). A total of 80 synsets were randomly sampled at every tree depth of the mammal and vehicle subtrees. An independent group of subjects verified the correctness of each of the images. An average of 99.7 % precision is achieved across the synsets. We expect similar accuracy on the ILSVRC image classification dataset since the image annotation pipeline has remained the same. To verify, we manually checked 1500 ILSVRC2012-2014 image classification test set images (the test set has remained unchanged in these 3 years). We found 5 annotation errors, corresponding as expected to 99.7 % precision.

3.1.4 Image Classification Dataset Statistics

Using the image collection and annotation procedure described in previous sections, we collected the large-scale dataset used for the ILSVRC classification task. There are 1000 object classes and approximately 1.2 million training images, 50 thousand validation images and 100 thousand test images. Table 2 documents the size of the dataset over the years of the challenge.

Table 2 Scale of ILSVRC image classification task (minimum per class - maximum per class)

3.2 Single-Object Localization Dataset Construction

The single-object localization task evaluates the ability of an algorithm to localize one instance of an object category. It was introduced as a taster task in ILSVRC 2011, and became an official part of ILSVRC in 2012.

The key challenge was developing a scalable crowdsourcing method for object bounding box annotation. Our three-step self-verifying pipeline is described in Sect. 3.2.1. Having collected the dataset, we perform a detailed analysis in Sect. 3.2.2 to ensure that the dataset is sufficiently varied to be suitable for evaluation of object localization algorithms.

Object Classes and Candidate Images The object classes for the single-object localization task are the same as the object classes for the image classification task described above in Sect. 3.1. The training images for the localization task are a subset of the training images used for the image classification task, and the validation and test images are the same between both tasks.

Bounding Box Annotation Recall that for the image classification task every image was annotated with one object class label, corresponding to one object that is present in an image. For the single-object localization task, every validation and test image and a subset of the training images are annotated with axis-aligned bounding boxes around every instance of this object.

Every bounding box is required to be as small as possible while including all visible parts of the object instance. An alternate annotation procedure could be to annotate the full (estimated) extent of the object: e.g., if a person’s legs are occluded and only the torso is visible, the bounding box could be drawn to include the likely location of the legs. However, this alternative procedure is inherently ambiguous and ill-defined, leading to disagreement among annotators and among researchers (what is the true “most likely” extent of this object?). We follow the standard protocol of only annotating visible object parts (Russell et al. 2007; Everingham et al. 2010).Footnote 5

3.2.1 Bounding Box Object Annotation System

We summarize the crowdsourced bounding box annotation system described in detail in Su et al. (2012). The goal is to build a system that is fully automated, highly accurate, and cost-effective. Given a collection of images where the object of interest has been verified to exist, for each image the system collects a tight bounding box for every instance of the object.

There are two requirements:

  • Quality Each bounding box needs to be tight, i.e. the smallest among all bounding boxes that contains all visible parts of the object. This facilitates the object detection learning algorithms by providing the precise location of each object instance;

  • Coverage Every object instance needs to have a bounding box. This is important for training localization algorithms because it tells the learning algorithms with certainty what is not the object.

The core challenge of building such a system is effectively controlling the data quality with minimal cost. Our key observation is that drawing a bounding box is significantly more difficult and time consuming than giving answers to multiple choice questions. Thus quality control through additional verification tasks is more cost-effective than consensus-based algorithms. This leads to the following workflow with simple basic subtasks:

  1. Drawing A worker draws one bounding box around one instance of an object on the given image.

  2. Quality verification A second worker checks if the bounding box is correctly drawn.

  3. Coverage verification A third worker checks if all object instances have bounding boxes.

The sub-tasks are designed following two principles. First, the tasks are made as simple as possible. For example, instead of asking the worker to draw all bounding boxes on the same image, we ask the worker to draw only one. This reduces the complexity of the task. Second, each task has a fixed and predictable amount of work. For example, assuming that the input images are clean (object presence is correctly verified) and the coverage verification tasks give correct results, the amount of work of the drawing task is always that of providing exactly one bounding box.
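
The resulting workflow can be sketched as the following control loop (a schematic of the pipeline of Su et al. (2012), not released ILSVRC tooling; the three worker_* callables are placeholders for the crowdsourced subtasks):

```python
# Schematic of the drawing / quality-verification / coverage-verification loop.
def annotate_image(image, worker_draw_box, worker_verify_quality,
                   worker_verify_coverage, max_rounds=20):
    boxes = []
    for _ in range(max_rounds):
        # Subtask 1: one worker draws exactly one new bounding box.
        box = worker_draw_box(image, existing=boxes)
        # Subtask 2: a second worker verifies that the box is correctly drawn (tight).
        if worker_verify_quality(image, box):
            boxes.append(box)
        # Subtask 3: a third worker checks whether every object instance is now covered.
        if boxes and worker_verify_coverage(image, boxes):
            break
    return boxes

# Toy demo with stub "workers":
demo = annotate_image(
    "img.jpg",
    worker_draw_box=lambda image, existing: (10, 10, 50, 80),   # x, y, w, h
    worker_verify_quality=lambda image, box: True,
    worker_verify_coverage=lambda image, boxes: len(boxes) >= 1,
)
print(demo)   # [(10, 10, 50, 80)]
```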

Quality control on Tasks 2 and 3 is implemented by embedding “gold standard” images where the correct answer is known. Worker training for each of these subtasks is described in detail in Su et al. (2012).

Empirical Evaluation The system is evaluated on 10 categories from ImageNet (Deng et al. 2009): balloon, bear, bed, bench, beach, bird, bookshelf, basketball hoop, bottle, and people. A subset of 200 images is randomly sampled from each category. On the image level, our evaluation shows that 97.9 % of images are completely covered with bounding boxes. For the remaining 2.1 %, some bounding boxes are missing. However, these are all difficult cases: the size is too small, the boundary is blurry, or there is strong shadow.

On the bounding box level, 99.2 % of all bounding boxes are accurate (the bounding boxes are visibly tight). The remaining 0.8 % are somewhat off. No bounding boxes are found to have less than 50 % intersection over union overlap with ground truth.

Additional evaluation of the overall cost and an analysis of quality control can be found in Su et al. (2012).

3.2.2 Single-Object Localization Dataset Statistics

Using the annotation procedure described above, we collect a large set of bounding box annotations for the ILSVRC single-object localization task. All 50 thousand images in the validation set and 100 thousand images in the test set are annotated with bounding boxes around all instances of the ground truth object class (one object class per image). In addition, in ILSVRC2011 25 % of training images are annotated with bounding boxes in the same way, yielding more than 310 thousand annotated images with more than 340 thousand annotated object instances. In ILSVRC2012 40 % of training images are annotated, yielding more than 520 thousand annotated images with more than 590 thousand annotated object instances. Table 3 documents the size of this dataset.

Table 3 Scale of additional annotations for the ILSVRC single-object localization task (minimum per class - maximum per class)

In addition to the size of the dataset, we also analyze the level of difficulty of object localization in these images compared to the PASCAL VOC benchmark. We compute statistics on the ILSVRC2012 single-object localization validation set images compared to PASCAL VOC 2012 validation images.

Real-world scenes are likely to contain multiple instances of some objects, and nearby object instances are particularly difficult to delineate. The average object category in ILSVRC has \(1.61\) target object instances per positive image, with each instance having on average \(0.47\) neighbors (adjacent instances of the same object category). This is comparable to \(1.69\) instances per positive image and \(0.52\) neighbors per instance for an average object class in PASCAL.

As described in Hoiem et al. (2012), smaller objects tend to be significantly more difficult to localize. In the average object category in PASCAL the object occupies 24.1 % of the image area, and in ILSVRC 35.8 %. However, PASCAL has only 20 object categories while ILSVRC has 1000. The 537 object categories of ILSVRC with the smallest objects on average occupy the same fraction of the image as PASCAL objects: 24.1 %. Thus even though on average the object instances tend to be bigger in ILSVRC images, there are more than 25 times more object categories than in PASCAL VOC with the same average object scale.

Appendix 1 and Russakovsky et al. (2013) have additional comparisons.

3.3 Object Detection Dataset Construction

The ILSVRC task of object detection evaluates the ability of an algorithm to name and localize all instances of all target objects present in an image. It is much more challenging than object localization because some object instances may be small/occluded/difficult to accurately localize, and the algorithm is expected to locate them all, not just the one it finds easiest.

There are three key challenges in collecting the object detection dataset. The first challenge is selecting the set of common objects which tend to appear in cluttered photographs and are well-suited for benchmarking object detection performance. Our approach relies on statistics of the object localization dataset and the tradition of the PASCAL VOC challenge (Sect. 3.3.1).

The second challenge is obtaining a much more varied set of scene images than those used for the image classification and single-object localization datasets. Section 3.3.2 describes the procedure for utilizing as much data from the single-object localization dataset as possible and supplementing it with Flickr images queried using hundreds of manually designed high-level queries.

The third, and biggest, challenge is completely annotating this dataset with all the objects. This is done in two parts. Section 3.3.3 describes the first part: our hierarchical strategy for obtaining the list of all target objects which occur within every image. This is necessary since annotating in a straightforward way by creating a task for every (image, object class) pair is no longer feasible at this scale. Appendix 1 describes the second part: annotating the bounding boxes around these objects, using the single-object localization bounding box annotation pipeline of Sect. 3.2.1 along with extra verification to ensure that every instance of the object is annotated with exactly one bounding box.

3.3.1 Defining Object Categories for the Object Detection Dataset

There are 200 object classes hand-selected for the detection task, each corresponding to a synset within ImageNet. These were chosen to be mostly basic-level object categories that would be easy for people to identify and label. The rationale is that the object detection system developed for this task can later be combined with a fine-grained classification model to further classify the objects if a finer subdivision is desired. As with the 1000 classification classes, the synsets are selected such that there is no overlap: for any synsets \(i\) and \(j\), \(i\) is not an ancestor of \(j\) in the ImageNet hierarchy.

The selection of the 200 object detection classes in 2013 was guided by the ILSVRC 2012 classification and localization dataset. Starting with 1000 object classes and their bounding box annotations we first eliminated all object classes which tended to be too “big” in the image (on average the object area was greater than 50 % of the image area). These were classes such as T-shirt, spiderweb, or manhole cover. We then manually eliminated all classes which we did not feel were well-suited for detection, such as hay, barbershop, or poncho. This left 494 object classes which were merged into basic-level categories: for example, different species of birds were merged into just the “bird” class. The classes remained the same in ILSVRC2014. Appendix 1 contains the complete list of object categories used in ILSVRC2013-2014 (in the context of the hierarchy described in Sect. 3.3.3).

Staying mindful of the tradition of the PASCAL VOC dataset, we also tried to ensure that the set of 200 classes contains as many of the 20 PASCAL VOC classes as possible. Table 4 shows the correspondences. The changes that were made were intended to ensure more accurate and consistent crowdsourced annotations. The object class with the weakest correspondence is “potted plant” in PASCAL VOC, corresponding to “flower pot” in ILSVRC. “Potted plant” was one of the most challenging object classes to annotate consistently among the PASCAL VOC classes, and in order to obtain accurate annotations using crowdsourcing we had to restrict the definition to a more concrete object.

Table 4 Correspondences between the object classes in the PASCAL VOC (Everingham et al. 2010) and the ILSVRC detection task

3.3.2 Collecting Images for the Object Detection Dataset

Many images for the detection task were collected differently than the images in ImageNet and the classification and single-object localization tasks. Figure 3 summarizes the types of images that were collected. Ideally all of these images would be scene images fully annotated with all target categories. However, given budget constraints our goal was to provide as much suitable detection data as possible, even if the images were drawn from a few different sources and distributions.

Fig. 3 Summary of images collected for the detection task. Images in green (bold) boxes have all instances of all 200 detection object classes fully annotated. Table 5 lists the complete statistics

The validation and test detection set images come from two sources (the percent of images from each source is given in parentheses). The first source (77 %) is images from the ILSVRC2012 single-object localization validation and test sets corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). Images where the target object occupied more than 50 % of the image area were discarded, since they were unlikely to contain other objects of interest. The second source (23 %) is images from Flickr collected specifically for the detection task. We queried Flickr using a large set of manually defined queries, such as “kitchenette” or “Australian zoo”, to retrieve images of scenes likely to contain several objects of interest. Appendix 1 contains the full list. We also added pairwise queries, or queries with two target object names such as “tiger lion,” which also often returned cluttered scenes.

Figure 4 shows a random set of both types of validation images. Images were randomly split, with 33 % going into the validation set and 67 % into the test set.

Fig. 4 Random selection of images in ILSVRC detection validation set. The images in the top four rows were taken from ILSVRC2012 single-object localization validation set, and the images in the bottom four rows were collected from Flickr using scene-level queries

The training set for the detection task comes from three sources of images (percent of images from each source in parentheses). The first source (63 %) is all training images from ILSVRC2012 single-object localization task corresponding to the 200 detection classes (or their children in the ImageNet hierarchy). We did not filter by object size, allowing teams to take advantage of all the positive examples available. The second source (24 %) is negative images which were part of the original ImageNet collection process but voted as negative: for example, some of the images were collected from Flickr and search engines for the ImageNet synset “animals” but during the manual verification step did not collect enough votes to be considered as containing an “animal.” These images were manually re-verified for the detection task to ensure that they did not in fact contain the target objects. The third source (13 %) is images collected from Flickr specifically for the detection task. These images were added for ILSVRC2014 following the same protocol as the second type of images in the validation and test set. This was done to bring the training and testing distributions closer together.

3.3.3 Complete Image-Object Annotation for the Object Detection Dataset

The key challenge in annotating images for the object detection task is that all objects in all images need to be labeled. Suppose there are N inputs (images) which need to be annotated with the presence or absence of K labels (objects). A naïve approach would query humans for each combination of input and label, requiring \(NK\) queries. However, N and K can be very large and the cost of this exhaustive approach quickly becomes prohibitive. For example, annotating 60,000 validation and test images with the presence or absence of 200 object classes for the detection task naïvely would take 80 times more effort than annotating 150,000 validation and test images with 1 object each for the classification task—and this is not even counting the additional cost of collecting bounding box annotations around each object instance. This quickly becomes infeasible.
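
For concreteness, the cost comparison above amounts to the following back-of-the-envelope calculation:

```python
# Back-of-the-envelope cost of exhaustive (image, label) queries.
detection_queries = 60_000 * 200        # 12,000,000 binary presence/absence questions
classification_queries = 150_000 * 1    #    150,000 single-label verifications
print(detection_queries / classification_queries)   # 80.0 -- the "80 times more effort" above
```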

In Deng et al. (2014) we study strategies for scalable multilabel annotation, or for efficiently acquiring multiple labels from humans for a collection of items. We exploit three key observations for labels in real world applications (illustrated in Fig. 5):

Fig. 5 Consider the problem of binary multi-label annotation. For each input (e.g., image) and each label (e.g., object), the goal is to determine the presence or absence (plus or minus) of the label (e.g., decide if the object is present in the image). Multi-label annotation becomes much more efficient when considering real-world structure of data: correlation between labels, hierarchical organization of concepts, and sparsity of labels

  1. Correlation Subsets of labels are often highly correlated. Objects such as a computer keyboard, mouse and monitor frequently co-occur in images. Similarly, some labels tend to all be absent at the same time. For example, all objects that require electricity are usually absent in pictures taken outdoors. This suggests that we could potentially fill in the values of multiple labels by grouping them into only one query for humans. Instead of checking whether dog, cat, rabbit etc. are present in the photo, we just ask about the “animal” group. If the answer is no, then this implies a no for all categories in the group.

  2. Hierarchy The above example of grouping dog, cat, rabbit etc. into animal has implicitly assumed that labels can be grouped together and humans can efficiently answer queries about the group as a whole. This brings up our second key observation: humans organize semantic concepts into hierarchies and are able to efficiently categorize at higher semantic levels (Thorpe et al. 1996), e.g. humans can determine the presence of an animal in an image as quickly as the presence of any particular type of animal. This leads to substantial cost savings.

  3. Sparsity The values of labels for each image tend to be sparse, i.e. an image is unlikely to contain more than a dozen types of objects, a small fraction of the hundreds of object categories. This enables rapid elimination of many objects by quickly filling in “no”. With a high degree of sparsity, an efficient algorithm can have a cost which grows logarithmically with the number of objects instead of linearly.

We propose algorithmic strategies that exploit the above intuitions. The key is to select a sequence of queries for humans such that we achieve the same labeling results with only a fraction of the cost of the naïve approach. The main challenges include how to measure cost and utility of queries, how to construct good queries, and how to dynamically order them. A detailed description of the generic algorithm, along with theoretical analysis and empirical evaluation, is presented in Deng et al. (2014).

Application of the Generic Multi-class Labeling Algorithm to Our Setting The generic algorithm automatically selects the most informative queries to ask based on object label statistics learned from the training set. In our case of 200 object classes, since obtaining the training set was by itself challenging we chose to design the queries by hand. We created a hierarchy of queries of the type “is there a... in the image?” For example, one of the high-level questions was “is there an animal in the image?” We ask the crowd workers this question about every image we want to label. The children of the “animal” question would correspond to specific examples of animals: for example, “is there a mammal in the image?” or “is there an animal with no legs?” To annotate images efficiently, these questions are asked only on images determined to contain an animal. The 200 leaf node questions correspond to the 200 target objects, e.g., “is there a cat in the image?”. A few sample iterations of the algorithm are shown in Fig. 6.

Fig. 6 Our algorithm dynamically selects the next query to efficiently determine the presence or absence of every object in every image. Green denotes a positive annotation and red denotes a negative annotation. This toy example illustrates a sample progression of the algorithm for one label (cat) on a set of images

Algorithm 1 formalizes the procedure for labeling an image with the presence or absence of each target object category. With this algorithm in mind, the hierarchy of questions was constructed following the principle that false positives only add extra cost whereas false negatives can significantly affect the quality of the labeling. Thus, it is always better to stick with more general but less ambiguous questions, such as “is there a mammal in the image?”, as opposed to asking overly specific but potentially ambiguous questions, such as “is there an animal that can climb trees?” Constructing this hierarchy was a surprisingly time-consuming process, involving multiple iterations to ensure high accuracy of labeling and avoid question ambiguity. Appendix 1 shows the constructed hierarchy.

Bounding Box Annotation Once all images are labeled with the presence or absence of all object categories we use the bounding box system described in Sect. 3.2.1 along with some additional modifications of Appendix 1 to annotate the location of every instance of every present object category.

Algorithm 1 Labeling an image with the presence or absence of every target object category, using the manually constructed hierarchy of questions
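
A minimal sketch of this hierarchical labeling procedure is given below (our own illustration in the spirit of Algorithm 1 and Deng et al. (2014), not the exact ILSVRC implementation). It assumes the question hierarchy is a tree whose leaves are the target categories and whose internal nodes are group questions, and ask_crowd is a placeholder for a crowdsourced yes/no query.

```python
# Sketch: propagate "no" answers at internal nodes down to all leaves below them,
# and descend only into branches that received a "yes".

class Node:
    def __init__(self, question, label=None, children=()):
        self.question = question
        self.label = label            # set only for leaf nodes (target categories)
        self.children = list(children)

def iter_leaves(node):
    if node.label is not None:
        yield node
    for child in node.children:
        yield from iter_leaves(child)

def label_image(image, root, ask_crowd):
    """Return {category: True/False} for every leaf category under root."""
    labels, queue = {}, [root]
    while queue:
        node = queue.pop()
        if ask_crowd(image, node.question):       # e.g. "is there an animal in the image?"
            if node.label is not None:
                labels[node.label] = True         # positive leaf (target category present)
            queue.extend(node.children)           # descend only into positive branches
        else:
            for leaf in iter_leaves(node):        # "no" here implies "no" for all leaves below
                labels[leaf.label] = False
    return labels

# Toy hierarchy: animal -> {mammal -> {dog, cat}, snake}
tree = Node("is there an animal?", children=[
    Node("is there a mammal?", children=[Node("is there a dog?", label="dog"),
                                         Node("is there a cat?", label="cat")]),
    Node("is there a snake?", label="snake")])
answers = lambda img, q: ("animal" in q) or ("mammal" in q) or ("dog" in q)
print(label_image("img.jpg", tree, answers))      # {'snake': False, 'cat': False, 'dog': True}
```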

3.3.4 Object Detection Dataset Statistics

Using the procedure described above, we collect a large-scale dataset for the ILSVRC object detection task. There are 200 object classes and approximately 450K training images, 20K validation images and 40K test images. Table 5 documents the size of the dataset over the years of the challenge. The major change between ILSVRC2013 and ILSVRC2014 was the addition of 60,658 fully annotated training images.

Table 5 Scale of ILSVRC object detection task

Prior to ILSVRC, the object detection benchmark was the PASCAL VOC challenge (Everingham et al. 2010). ILSVRC has \(10\) times more object classes than PASCAL VOC (200 vs 20), \(10.6\) times more fully annotated training images (60,658 vs 5,717), \(35.2\) times more training objects (478,807 vs 13,609), \(3.5\) times more validation images (20,121 vs 5823) and \(3.5\) times more validation objects (55,501 vs 15,787). ILSVRC has \(2.8\) annotated objects per image on the validation set, compared to \(2.7\) in PASCAL VOC. The average object in ILSVRC takes up 17.0 % of the image area and in PASCAL VOC takes up 20.7 %; Table 4 contains per-class comparisons. Additionally, ILSVRC contains a wide variety of objects, including tiny objects such as sunglasses (1.3 % of image area on average), ping-pong balls (1.5 % of image area on average) and basketballs (2.0 % of image area on average).

4 Evaluation at Large Scale

Once the dataset has been collected, we need to define a standardized evaluation procedure for algorithms. Some measures have already been established by datasets such as the Caltech 101 (Fei-Fei et al. 2004) for image classification and PASCAL VOC (Everingham et al. 2012) for both image classification and object detection. To adapt these procedures to the large-scale setting we had to address three key challenges. First, for the image classification and single-object localization tasks only one object category could be labeled in each image due to the scale of the dataset. This created potential ambiguity during evaluation (addressed in Sect. 4.1). Second, evaluating localization of object instances is inherently difficult in some images which contain a cluster of objects (addressed in Sect. 4.2). Third, evaluating localization of object instances which occupy few pixels in the image is challenging (addressed in Sect. 4.3).

In this section we describe the standardized evaluation criteria for each of the three ILSVRC tasks. We elaborate further on these and other more minor challenges with large-scale evaluation. Appendix 1 describes the submission protocol and other details of running the competition itself.

4.1 Image Classification

The scale of the ILSVRC classification task (1000 categories and more than a million images) makes it very expensive to label every instance of every object in every image. Therefore, on this dataset only one object category is labeled in each image. This creates ambiguity in evaluation. For example, an image might be labeled as a “strawberry” but contain both a strawberry and an apple. Then an algorithm would not know which one of the two objects to name. For the image classification task we allowed an algorithm to identify multiple (up to 5) objects in an image and not be penalized as long as one of the objects indeed corresponded to the ground truth label. Figure 7 (top row) shows some examples.

Fig. 7 Tasks in ILSVRC. The first column shows the ground truth labeling on an example image, and the next three show three sample outputs with the corresponding evaluation score

Concretely, each image \(i\) has a single class label \(C_i\). An algorithm is allowed to return 5 labels \(c_{i1},\dots c_{i5}\), and is considered correct if \(c_{ij} = C_i\) for some \(j\).

Let the error of a prediction \(d_{ij} = d(c_{ij},C_i)\) be \(1\) if \(c_{ij} \ne C_i\) and \(0\) otherwise. The error of an algorithm is the fraction of test images on which the algorithm makes a mistake:

$$\begin{aligned} \text{ error } = \frac{1}{N} \sum _{i=1}^N \min _j d_{ij} \end{aligned}$$
(1)

We used two additional measures of error. First, we evaluated top-1 error. In this case algorithms were penalized if their highest-confidence output label \(c_{i1}\) did not match ground truth class \(C_i\). Second, we evaluated hierarchical error. The intuition is that confusing two nearby classes (such as two different breeds of dogs) is not as harmful as confusing a dog for a container ship. For the hierarchical criteria, the cost of one misclassification, \(d(c_{ij},C_i)\), is defined as the height of the lowest common ancestor of \(c_{ij}\) and \(C_i\) in the ImageNet hierarchy. The height of a node is the length of the longest path to a leaf node (leaf nodes have height zero).

However, in practice we found that all three measures of error (top-5, top-1, and hierarchical) produced the same ordering of results. Thus, since ILSVRC2012 we have been exclusively using the top-5 metric which is the simplest and most suitable to the dataset.
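
A minimal sketch of the flat top-k error of Eq. 1 (our own code, not the official evaluation server):

```python
# Flat classification error (Eq. 1): an image counts as an error only if none of
# the (at most) k predicted labels matches its ground truth label.
def topk_error(predictions, ground_truth, k=5):
    """predictions: one ranked label list per image; ground_truth: one label per image."""
    errors = sum(1 for preds, gt in zip(predictions, ground_truth) if gt not in preds[:k])
    return errors / len(ground_truth)

preds = [["dog", "cat", "fox", "wolf", "hyena"], ["apple", "pear", "lemon", "kiwi", "fig"]]
gt = ["fox", "strawberry"]
print(topk_error(preds, gt, k=5))   # 0.5 -- the second image is misclassified
print(topk_error(preds, gt, k=1))   # 1.0 -- the top-1 error
```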

4.2 Single-Object Localization

The evaluation for single-object localization is similar to object classification, again using a top-5 criterion to allow the algorithm to return unannotated object classes without penalty. However, now the algorithm is considered correct only if it both correctly identifies the target class \(C_i\) and accurately localizes one of its instances. Figure 7 (middle row) shows some examples.

Concretely, an image is associated with object class \(C_i\), with all instances of this object class annotated with bounding boxes \(B_{ik}\). An algorithm returns \(\{(c_{ij},b_{ij})\}_{j=1}^5\) of class labels \(c_{ij}\) and associated locations \(b_{ij}\). The error of a prediction \(j\) is:

$$\begin{aligned} d_{ij} = \max (d(c_{ij},C_i),\min _{k}d(b_{ij},B_{ik})) \end{aligned}$$
(2)

Here \(d(b_{ij},B_{ik})\) is the error of localization, defined as \(0\) if the area of intersection of boxes \(b_{ij}\) and \(B_{ik}\) divided by the area of their union is greater than \(0.5\), and \(1\) otherwise (Everingham et al. 2010). The error of an algorithm is computed as in Eq. 1.
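
A sketch of the per-prediction error of Eq. 2, with the standard intersection-over-union criterion and boxes given as (x1, y1, x2, y2) corners (our own helper code, not the official evaluation toolkit):

```python
# Single-object localization error (Eq. 2) for one prediction: correct iff the label
# matches the ground truth class AND the box overlaps some ground truth instance
# with intersection-over-union (IOU) above 0.5.
def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / float(union)

def localization_error(pred_label, pred_box, gt_label, gt_boxes, thr=0.5):
    class_err = 0 if pred_label == gt_label else 1                        # d(c_ij, C_i)
    box_err = min(0 if iou(pred_box, B) > thr else 1 for B in gt_boxes)   # min_k d(b_ij, B_ik)
    return max(class_err, box_err)

# The image-level error is the min over the 5 predictions, averaged over images (Eq. 1).
print(localization_error("dog", (10, 10, 60, 60), "dog", [(12, 8, 58, 62)]))   # 0
```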

Evaluating localization is inherently difficult in some images. Consider a picture of a bunch of bananas or a carton of apples. It is easy to classify these images as containing bananas or apples, and even possible to localize a few instances of each fruit. However, in order for evaluation to be accurate every instance of banana or apple needs to be annotated, and that may be impossible. To handle the images where localizing individual object instances is inherently ambiguous we manually discarded 3.5 % of images since ILSVRC2012. Some examples of discarded images are shown in Fig. 8.

Fig. 8 Images marked as “difficult” in the ILSVRC2012 single-object localization validation set. Please refer to Sect. 4.2 for details

4.3 Object Detection

The criteria for object detection were adopted from PASCAL VOC (Everingham et al. 2010). They are designed to penalize the algorithm for missing object instances, for duplicate detections of one instance, and for false positive detections. Figure 7 (bottom row) shows examples.

For each object class and each image \(I_i\), an algorithm returns predicted detections \((b_{ij},s_{ij})\) of predicted locations \(b_{ij}\) with confidence scores \(s_{ij}\). These detections are greedily matched to the ground truth boxes \(\{B_{ik}\}\) using Algorithm 2. For every detection \(j\) on image \(i\) the algorithm returns \(z_{ij} = 1\) if the detection is matched to a ground truth box according to the threshold criteria, and \(0\) otherwise. For a given object class, let \(N\) be the total number of ground truth instances across all images. Given a threshold \(t\), define recall as the fraction of the \(N\) objects detected by the algorithm, and precision as the fraction of correct detections out of the total detections returned by the algorithm. Concretely,

$$\begin{aligned}&Recall(t) = \frac{\sum _{ij} 1[s_{ij} \ge t] z_{ij} }{N} \end{aligned}$$
(3)
$$\begin{aligned}&Precision(t) = \frac{\sum _{ij} 1[s_{ij} \ge t] z_{ij} }{\sum _{ij} 1[s_{ij} \ge t]} \end{aligned}$$
(4)
Algorithm 2 Greedily matching the predicted detections, in decreasing order of confidence score, to the ground truth bounding boxes for one object class
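
A sketch of the greedy matching step and of the recall/precision computation of Eqs. 3–4 is given below (our own code, in the spirit of Algorithm 2, not the official evaluation toolkit); iou is the intersection-over-union helper from the previous sketch, repeated here for completeness, and the fixed 0.5 threshold stands in for the size-dependent threshold introduced in Eq. 5 below.

```python
# Greedy matching of detections to ground truth boxes for one object class.
def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / float(union)

def greedy_match(detections, gt_boxes, iou_threshold=0.5):
    """detections: list of (box, score) for one class on one image.
    Returns a list of (score, z) with z = 1 iff the detection matched a ground truth box."""
    matched, results = set(), []
    for box, score in sorted(detections, key=lambda d: -d[1]):        # most confident first
        ious = [(iou(box, B), k) for k, B in enumerate(gt_boxes) if k not in matched]
        best_iou, best_k = max(ious, default=(0.0, None))
        if best_k is not None and best_iou >= iou_threshold:
            matched.add(best_k)               # true positive: consumes this ground truth box
            results.append((score, 1))
        else:
            results.append((score, 0))        # false positive or duplicate detection
    return results

def precision_recall(scored_matches, num_gt, t):
    """Eqs. 3-4 at confidence threshold t; scored_matches pooled over all images."""
    kept = [z for s, z in scored_matches if s >= t]
    tp = sum(kept)
    return (tp / float(len(kept)) if kept else 1.0,   # precision (1.0 if nothing is kept)
            tp / float(num_gt))                       # recall
# Average precision is obtained by sweeping t over the returned confidence scores.
```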

The final metric for evaluating an algorithm on a given object class is the average precision over the different levels of recall achieved by varying the threshold \(t\). The winner for each object class is then the team with the highest average precision, and the winner of the challenge is the team that wins on the most object classes.

Difference with PASCAL VOC Evaluating localization of object instances which occupy very few pixels in the image is challenging. The PASCAL VOC approach was to label such instances as “difficult” and ignore them during evaluation. However, since ILSVRC contains a more diverse set of object classes including, for example, “nail” and “ping pong ball” which have many very small instances, it is important to include even very small object instances in evaluation.

In Algorithm 2, a predicted bounding box \(b\) is considered to have correctly localized a ground truth bounding box \(B\) if \(IOU(b,B) \ge \text{ thr }(B)\). The PASCAL VOC metric uses the threshold \(\text{ thr }(B) = 0.5\). However, for small objects even deviations of a few pixels would be unacceptable according to this threshold. For example, consider an object \(B\) of size \(10 \times 10\) pixels, with a detection window of \(20 \times 20\) pixels which fully contains that object. This would be an error of approximately \(5\) pixels in each dimension, comparable to the average human annotation error. However, the IOU in this case would be \(100/400 = 0.25\), far below the threshold of \(0.5\). Thus for smaller objects we loosen the threshold in ILSVRC to allow for the annotation to extend up to 5 pixels on average in each direction around the object. Concretely, if the ground truth box \(B\) is of dimensions \(w \times h\) then

$$\begin{aligned} \text{ thr }(B) = \min \left( 0.5, \frac{w h}{(w+10)(h+10)} \right) \end{aligned}$$
(5)

In practice, this changes the threshold only on objects which are smaller than approximately \(25\times 25\) pixels, and affects 5.5 % of objects in the detection validation set.
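
As a small illustration, the size-adaptive threshold of Eq. 5 and the \(10 \times 10\) example above can be checked directly (the function name is ours; box dimensions are assumed to be in pixels):

```python
def loc_threshold(w, h):
    """ILSVRC detection IOU threshold for a ground truth box of size w x h (Eq. 5).
    Falls below 0.5 only for objects smaller than roughly 25 x 25 pixels."""
    return min(0.5, (w * h) / ((w + 10.0) * (h + 10.0)))

# A 10 x 10 object yields a threshold of 100 / (20 * 20) = 0.25, so the 20 x 20
# detection window discussed above now counts as a correct localization.
assert abs(loc_threshold(10, 10) - 0.25) < 1e-9
```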

Practical Consideration One additional practical consideration for ILSVRC detection evaluation is subtle and comes directly as a result of the scale of ILSVRC. In PASCAL, algorithms would often return many detections per class on the test set, including ones with low confidence scores. This allowed the algorithms to reach high recall, at least in the realm of very low precision. On the ILSVRC detection test set, if an algorithm returns 10 bounding boxes per class per image, this results in \(10 \times 200 \times 40K = 80\)M detections. Each detection contains an image index, a class index, 4 bounding box coordinates, and a confidence score, so it takes on the order of 28 bytes. The full set of detections would then require \(2.24\) GB to store and submit to the evaluation server, which is impractical. This means that algorithms are implicitly required to limit their predictions to only the most confident locations.
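
The storage estimate can be reproduced with a quick back-of-the-envelope calculation (the 28 bytes per detection is the approximate figure quoted above):

```python
boxes_per_class_per_image = 10
num_classes = 200
num_test_images = 40_000
bytes_per_detection = 28          # image index, class index, 4 box coordinates, confidence score

num_detections = boxes_per_class_per_image * num_classes * num_test_images
print(num_detections)                                   # 80000000, i.e., 80M detections
print(num_detections * bytes_per_detection / 1e9)       # ~2.24 GB to store and submit
```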

5 Methods

The ILSVRC dataset and competition have enabled significant algorithmic advances in large-scale image recognition and retrieval.

5.1 Challenge Entries

This section is organized chronologically, highlighting the particularly innovative and successful methods which participated in the ILSVRC each year. Tables 6, 7 and 8 list all the participating teams. We see a turning point in 2012 with the development of large-scale convolutional neural networks.

Table 6 Teams participating in ILSVRC2010-2012, ordered alphabetically
Table 7 Teams participating in ILSVRC2013, ordered alphabetically
Table 8 Teams participating in ILSVRC2014, ordered alphabetically

ILSVRC2010 In the first year, the challenge consisted of only the image classification task. The winning entry from the NEC team (Lin et al. 2011) used SIFT (Lowe 2004) and LBP (Ahonen et al. 2006) features with two non-linear coding representations (Zhou et al. 2010; Wang et al. 2010) and a stochastic SVM. The honorable mention XRCE team (Perronnin et al. 2010) used an improved Fisher vector representation (Perronnin and Dance 2007) along with PCA dimensionality reduction and data compression, followed by a linear SVM. Fisher vector-based methods have evolved over the 5 years of the challenge and continued to perform strongly in every ILSVRC from 2010 to 2014.

ILSVRC2011 The winning classification entry in 2011 was the 2010 runner-up team XRCE, applying high-dimensional image signatures (Perronnin et al. 2010) with compression using product quantization (Sanchez and Perronnin 2011) and one-vs-all linear SVMs. The single-object localization competition was held for the first time, with two brave entries. The winner was the UvA team using a selective search approach to generate class-independent object hypothesis regions (van de Sande et al. 2011b), followed by dense sampling and vector quantization of several color SIFT features (van de Sande et al. 2010), pooling with spatial pyramid matching (Lazebnik et al. 2006), and classifying with a histogram intersection kernel SVM (Maji and Malik 2009) trained on a GPU (van de Sande et al. 2011a).

ILSVRC2012 This was a turning point for large-scale object recognition, when large-scale deep neural networks entered the scene. The undisputed winner of both the classification and localization tasks in 2012 was the SuperVision team. They trained a large, deep convolutional neural network on RGB values, with 60 million parameters using an efficient GPU implementation and a novel hidden-unit dropout trick (Krizhevsky et al. 2012; Hinton et al. 2012). The second place in image classification went to the ISI team, which used Fisher vectors (Sanchez and Perronnin 2011) and a streamlined version of Graphical Gaussian Vectors (Harada and Kuniyoshi 2012), along with linear classifiers using Passive-Aggressive (PA) algorithm (Crammer et al. 2006). The second place in single-object localization went to the VGG, with an image classification system including dense SIFT features and color statistics (Lowe 2004), a Fisher vector representation (Sanchez and Perronnin 2011), and a linear SVM classifier, plus additional insights from (Arandjelovic and Zisserman 2012; Sanchez et al. 2012). Both ISI and VGG used (Felzenszwalb et al. 2010) for object localization; SuperVision used a regression model trained to predict bounding box locations. Despite the weaker detection model, SuperVision handily won the object localization task. A detailed analysis and comparison of the SuperVision and VGG submissions on the single-object localization task can be found in Russakovsky et al. (2013). The influence of the success of the SuperVision model can be clearly seen in ILSVRC2013 and ILSVRC2014.

ILSVRC2013 There were 24 teams participating in the ILSVRC2013 competition, compared to 21 in the previous 3 years combined. Following the success of the deep learning-based method in 2012, the vast majority of entries in 2013 used deep convolutional neural networks in their submission. The winner of the classification task was Clarifai, with several large deep convolutional networks averaged together. The network architectures were chosen using the visualization technique of (Zeiler and Fergus 2013), and they were trained on the GPU following (Zeiler et al. 2011) using the dropout technique (Krizhevsky et al. 2012).

The winning single-object localization OverFeat submission was based on an integrated framework for using convolutional networks for classification, localization and detection with a multiscale sliding window approach (Sermanet et al. 2013). They were the only team tackling all three tasks.

The winner of the object detection task was the UvA team, which utilized a new way of efficiently encoding (van de Sande et al. 2014) densely sampled color descriptors (van de Sande et al. 2010), pooled using a multi-level spatial pyramid in a selective search framework (Uijlings et al. 2013). The detection results were rescored using a full-image convolutional network classifier.

ILSVRC2014 The 2014 challenge attracted the most submissions to date, with 36 teams submitting 123 entries compared to just 24 teams in 2013—a 1.5\(\times \) increase in participation.Footnote 9 As in 2013, almost all teams used convolutional neural networks as the basis for their submission. Significant progress was made in just 1 year: compared to ILSVRC2013, image classification error was nearly halved and object detection mean average precision nearly doubled. Please refer to Sect. 6.1 for details.

In 2014 teams were allowed to use outside data for training their models in the competition, so there were six tracks: provided and outside data tracks in each of image classification, single-object localization, and object detection tasks.

The winning team in the image classification with provided data track was GoogLeNet, which explored an improved convolutional neural network architecture combining the multi-scale idea with intuitions gained from the Hebbian principle. Additional dimensionality reduction layers allowed them to increase both the depth and the width of the network significantly without incurring significant computational overhead. In the image classification with external data track, CASIAWS won by using weakly supervised object localization from only classification labels to improve image classification. MCG (Arbeláez et al. 2014), pretrained on PASCAL VOC 2012 data, was used to extract region proposals; the regions were represented using convolutional networks, and a multiple instance learning strategy was used to learn weakly supervised object detectors to represent the image.

In the single-object localization with provided data track, the winning team was VGG, which explored the effect of convolutional neural network depth on its accuracy by using three different architectures with up to 19 weight layers with rectified linear unit non-linearity, building off of the implementation of Caffe (Jia 2013). For localization they used per-class bounding box regression similar to OverFeat (Sermanet et al. 2013). In the single-object localization with external data track, Adobe used 2000 additional ImageNet classes to train the classifiers in an integrated convolutional neural network framework for both classification and localization, with bounding box regression. At test time they used k-means to find bounding box clusters and rank the clusters according to the classification scores.

In the object detection with provided data track, the winning team NUS used the RCNN framework (Girshick et al. 2013) with the network-in-network method (Lin et al. 2014a) and improvements of (Howard 2014). Global context information was incorporated following (Chen et al. 2014). In the object detection with external data track, the winning team was GoogLeNet (which also won image classification with provided data). It is truly remarkable that the same team was able to win at both image classification and object detection, indicating that their methods are able to not only classify the image based on scene information but also accurately localize multiple object instances. Just like most teams participating in this track, GoogLeNet used the image classification dataset as extra training data.

5.2 Large Scale Algorithmic Innovations

ILSVRC over the past 5 years has paved the way for several breakthroughs in computer vision.

The field of categorical object recognition has dramatically evolved in the large-scale setting. Section 5.1 documents the progress, starting from coded SIFT features and evolving to large-scale convolutional neural networks dominating at all three tasks of image classification, single-object localization, and object detection. With the availability of so much training data (along with an efficient algorithmic implementation and GPU computing resources) it became possible to learn neural networks directly from the image data, without needing to create multi-stage hand-tuned pipelines of extracted features and discriminative classifiers. The major breakthrough came in 2012 with the win of the SuperVision team on image classification and single-object localization tasks (Krizhevsky et al. 2012), and by 2014 all of the top contestants were relying heavily on convolutional neural networks.

Further, over the past few years there has been a lot of focus on large-scale recognition in the computer vision community. Best paper awards at top vision conferences in 2013 went to large-scale recognition methods: at CVPR 2013 to “Fast, Accurate Detection of 100,000 Object Classes on a Single Machine” (Dean et al. 2013) and at ICCV 2013 to “From Large Scale Image Categorization to Entry-Level Categories” (Ordonez et al. 2013). Additionally, several influential lines of research have emerged, such as the large-scale weakly supervised localization work of Kuettel et al. (2012), which received the best paper award at ECCV 2012, and large-scale zero-shot learning, e.g., Frome et al. (2013).

6 Results and Analysis

6.1 Improvements over the Years

State-of-the-art accuracy has improved significantly from ILSVRC2010 to ILSVRC2014, showcasing the massive progress that has been made in large-scale object recognition over the past 5 years. The performance of the winning ILSVRC entries for each task and each year are shown in Fig. 9. The improvement over the years is clearly visible. In this section we quantify and analyze this improvement.

Fig. 9

Performance of winning entries in the ILSVRC2010-2014 competitions in each of the three tasks (details about the entries and numerical results are in Sect. 5.1). There is a steady reduction of error every year in the object classification and single-object localization tasks, and a 1.9\(\times \) improvement in mean average precision in object detection. There are two considerations in making these comparisons. (1) The object categories used in ILSVRC changed between years 2010 and 2011, and between 2011 and 2012. However, the large scale of the data (1000 object categories, 1.2 million training images) has remained the same, making it possible to compare results. Image classification and single-object localization entries shown here use only the provided training data. (2) The size of the object detection training data increased significantly between years 2013 and 2014 (Sect. 3.3). Section 6.1 discusses the relative effects of the training data increase versus algorithmic improvements

6.1.1 Image Classification and Single-Object Localization Improvement over the Years

There has been a 4.2\(\times \) reduction in image classification error (from 28.2 to 6.7 %) and a 1.7\(\times \) reduction in single-object localization error (from 42.5 to 25.3 %) since the beginning of the challenge. For consistency, here we consider only teams that use the provided training data. Even though the exact object categories have changed (Sect. 3.1.1), the large scale of the dataset has remained the same (Table 3), making the results comparable across the years. The dataset has not changed since 2012, and there has been a 2.4\(\times \) reduction in image classification error (from 16.4 to 6.7 %) and a 1.3\(\times \) reduction in single-object localization error (from 33.5 to 25.3 %) in the past 3 years.

6.1.2 Object Detection Improvement over the Years

Object detection accuracy as measured by the mean average precision (mAP) has increased 1.9\(\times \) since the introduction of this task, from 22.6 % mAP in ILSVRC2013 to 43.9 % mAP in ILSVRC2014. However, these results are not directly comparable for two reasons. First, the size of the object detection training data has increased significantly from 2013 to 2014 (Sect. 3.3). Second, the 43.9 % mAP result was obtained with the addition of the image classification and single-object localization training data. Here we attempt to understand the relative effects of the training set size increase versus algorithmic improvements. All models are evaluated on the same ILSVRC2013-2014 object detection test set.

First, we quantify the effects of increasing detection training data between the two challenges by comparing the same model trained on ILSVRC2013 detection data versus ILSVRC2014 detection data. The UvA team’s framework from 2013 achieved 22.6 % with ILSVRC2013 data (Table 7) and 26.3 % with ILSVRC2014 data and no other modifications.Footnote 10 The absolute increase in mAP was 3.7 %. The RCNN model achieved 31.4 % mAP with ILSVRC2013 detection plus image classification data (Girshick et al. 2013) and 34.5 % mAP with ILSVRC2014 detection plus image classification data (Berkeley team in Table 8). The absolute increase in mAP by expanding ILSVRC2013 detection data to ILSVRC2014 was 3.1 %.

Second, we quantify the effects of adding in the external data for training object detection models. The NEC model in 2013 achieved 19.6 % mAP trained on ILSVRC2013 detection data alone and 20.9 % mAP trained on ILSVRC2013 detection plus classification data (Table 7). The absolute increase in mAP was 1.3 %. The UvA team’s best entry in 2014 achieved 32.0 % mAP trained on ILSVRC2014 detection data and 35.4 % mAP trained on ILSVRC2014 detection plus classification data. The absolute increase in mAP was 3.4 %.

Thus, we conclude based on the evidence so far that expanding the ILSVRC2013 detection set to the ILSVRC2014 set, as well as adding in additional training data from the classification task, each account for approximately 1–4 % of absolute mAP improvement for the models. For comparison, we can also attempt to quantify the effect of algorithmic innovation. The UvA team’s 2013 framework achieved 26.3 % mAP on ILSVRC2014 data as mentioned above, and their improved method in 2014 obtained 32.0 % mAP (Table 8). This is a 5.8 % absolute increase in mAP over just 1 year from algorithmic innovation alone.

In summary, we conclude that the absolute 21.3 % increase in mAP between the winning entries of ILSVRC2013 (22.6 % mAP) and ILSVRC2014 (43.9 % mAP) is the result of impressive algorithmic innovation and not just a consequence of increased training data. However, increasing the ILSVRC2014 object detection training dataset further is likely to produce additional improvements in detection accuracy for current algorithms.

6.2 Statistical Significance

One important question to ask is whether results of different submissions to ILSVRC are statistically significantly different from each other. Given the large scale, it is no surprise that even minor differences in accuracy are statistically significant; we seek to quantify exactly how much of a difference is enough.

Following the strategy employed by PASCAL VOC (Everingham et al. 2014), for each method we obtain a confidence interval of its score using bootstrap sampling. During each bootstrap round, we sample \(N\) images with replacement from all the available \(N\) test images and evaluate the performance of the algorithm on those sampled images. This can be done very efficiently by precomputing the accuracy on each image. Given the results of all the bootstrapping rounds we discard the lower and the upper \(\alpha \) fraction. The range of the remaining results represents the \(1-2\alpha \) confidence interval. We run a large number of bootstrapping rounds (from 20,000 until convergence). Table 9 shows the results of the top entries to each task of ILSVRC2012-2014. The winning methods are statistically significantly different from the other methods, even at the 99.9 % level.
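
As an illustration, here is a minimal sketch of this bootstrap procedure for image classification accuracy, assuming a precomputed binary per-image score (1 if the ground truth label is among the top-5 predictions, 0 otherwise); the function name and defaults are ours:

```python
import numpy as np

def bootstrap_ci(per_image_scores, alpha=0.0005, rounds=20000, seed=0):
    """Confidence interval for a method's accuracy via bootstrap resampling of test images.
    Discards the lower and upper alpha fractions, so with alpha = 0.0005 the returned
    interval is the 99.9 % (1 - 2*alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_image_scores, dtype=float)
    n = len(scores)
    # Resample n images with replacement and recompute accuracy in each bootstrap round.
    idx = rng.integers(0, n, size=(rounds, n))
    accuracies = scores[idx].mean(axis=1)
    lo, hi = np.quantile(accuracies, [alpha, 1.0 - alpha])
    return lo, hi
```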

Table 9 We use bootstrapping to construct 99.9 % confidence intervals for the results of the top entries to each ILSVRC task in 2012–2014

6.3 Current State of Categorical Object Recognition

Besides looking at just the average accuracy across hundreds of object categories and tens of thousands of images, we can also delve deeper to understand where mistakes are being made and where researchers’ efforts should be focused to expedite progress.

To do so, in this section we will be analyzing an “optimistic” measurement of state-of-the-art recognition performance instead of focusing on the differences in individual algorithms. For each task and each object class, we compute the best performance of any entry submitted to any ILSVRC2012-2014, including methods using additional training data. Since the test sets have remained the same, we can directly compare all the entries in the past 3 years to obtain the most “optimistic” measurement of state-of-the-art accuracy on each category.

For consistency with the object detection metric (higher is better), in this section we will be using image classification and single-object localization accuracy instead of error, where \(accuracy = 1-error\).

6.3.1 Range of Accuracy Across Object Classes

Figure 10 shows the distribution of accuracy achieved by the “optimistic” models across the object categories. The image classification model achieves 94.6 % accuracy on average (or 5.4 % error), but there remains a 41.0 % absolute difference in accuracy between the most and least accurate object class. The single-object localization model achieves 81.5 % accuracy on average (or 18.5 % error), with a 77.0 % range in accuracy across the object classes. The object detection model achieves 44.7 % average precision, with an 84.7 % range across the object classes. It is clear that the ILSVRC dataset is far from saturated: performance on many categories has remained poor despite the strong overall performance of the models.

Fig. 10

For each object class, we consider the best performance of any entry submitted to ILSVRC2012-2014, including entries using additional training data. The plots show the distribution of these “optimistic” per-class results. Performance is measured as accuracy for image classification (left) and for single-object localization (middle), and as average precision for object detection (right). While the results are very promising in image classification, the ILSVRC datasets are far from saturated: many object classes continue to be challenging for current algorithms

6.3.2 Qualitative Examples of Easy and Hard Classes

Figures 11 and 12 show the easiest and hardest classes for each task, i.e., classes with the best and worst results obtained with the “optimistic” models.

Fig. 11

For each object category, we take the best performance of any entry submitted to ILSVRC2012-2014 (including entries using additional training data). Given these “optimistic” results we show the easiest and hardest classes for each task. The numbers in parentheses indicate classification and localization accuracy. For image classification the 10 easiest classes are randomly selected from among 121 object classes with 100 % accuracy. Object detection results are shown in Fig. 12

Fig. 12

For each object category, we take the best performance of any entry submitted to ILSVRC2012-2014 (including entries using additional training data). Given these “optimistic” results we show the easiest and hardest classes for the object detection task, i.e., classes with the best and worst results. The numbers in parentheses indicate average precision. Image classification and single-object localization results are shown in Fig. 11

For image classification, 121 out of 1000 object classes have 100 % image classification accuracy according to the optimistic estimate. Figure 11 (top) shows a random set of 10 of them. They contain a variety of classes, such as mammals like “red fox” and animals with distinctive structures like “stingray”. The hardest classes in the image classification task, with accuracy as low as 59.0 %, include metallic and see-through man-made objects, such as “hook” and “water bottle,” the material “velvet” and the highly varied scene class “restaurant.”

For single-object localization, the 10 easiest classes with 99.0–100 % accuracy are all mammals and birds. The hardest classes include metallic man-made objects such as “letter opener” and “ladle”, plus thin structures such as “pole” and “spacebar” and highly varied classes such as “wing”. The most challenging class, “spacebar,” has only 23.0 % localization accuracy.

Object detection results are shown in Fig. 12. The easiest classes are living organisms such as “dog” and “tiger”, plus “basketball” and “volleyball” with distinctive shape and color, and a somewhat surprising “snowplow.” The easiest class “butterfly” is not yet perfectly detected but is very close with \(92.7\,\%\) AP. The hardest classes are as expected small thin objects such as “flute” and “nail”, and the highly varied “lamp” and “backpack” classes, with as low as \(8.0\,\%\) AP.

6.3.3 Per-Class Accuracy as a Function of Image Properties

We now take a closer look at the image properties to try to understand why current algorithms perform well on some object classes but not others. One hypothesis is that variation in accuracy comes from the fact that instances of some classes tend to be much smaller in images than instances of other classes, and smaller objects may be harder for computers to recognize. In this section we argue that while accuracy is correlated with object scale in the image, not all variation in accuracy can be accounted for by scale alone.

For every object class, we compute its average scale, or the average fraction of image area occupied by an instance of the object class on the ILSVRC2012-2014 validation set. Since the images and object classes in the image classification and single-object localization tasks are the same, we use the bounding box annotations of the single-object localization dataset for both tasks. In that dataset the object classes range from “swimming trunks” with scale of \(1.5\,\%\) to “spider web” with scale of \(85.6\,\%\). In the object detection validation dataset the object classes range from “sunglasses” with scale of \(1.3\,\%\) to “sofa” with scale of \(44.4\,\%\).

Figure 13 shows the performance of the “optimistic” method as a function of the average scale of the object in the image. Each dot corresponds to one object class. We observe a very weak positive correlation between object scale and image classification accuracy: \(\rho = 0.14\). For single-object localization and object detection the correlation is stronger, at \(\rho = 0.40\) and \(\rho = 0.41\) respectively. It is clear that not all variation in accuracy can be accounted for by scale alone. Nevertheless, in the next section we will normalize for object scale to ensure that this factor is not affecting our conclusions.
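
As an illustration of this computation, the sketch below computes the correlation from two per-class vectors. The arrays are synthetic placeholders standing in for the actual per-class average scale and “optimistic” accuracy values, and we treat \(\rho \) as the Pearson correlation coefficient, which is an assumption on our part:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder per-class statistics (one entry per object class). In the real analysis these
# would be the average object scale and the "optimistic" per-class accuracy described above.
avg_scale = rng.uniform(0.01, 0.9, size=1000)
per_class_accuracy = np.clip(0.9 + 0.1 * avg_scale + rng.normal(0.0, 0.05, size=1000), 0.0, 1.0)

rho = np.corrcoef(avg_scale, per_class_accuracy)[0, 1]   # Pearson correlation coefficient
print(f"correlation between scale and accuracy: {rho:.2f}")
```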

Fig. 13

Performance of the “optimistic” method as a function of object scale in the image, on each task. Each dot corresponds to one object class. Average scale (x-axis) is computed as the average fraction of the image area occupied by an instance of that object class on the ILSVRC2014 validation set. “Optimistic” performance (y-axis) corresponds to the best performance on the test set of any entry submitted to ILSVRC2012-2014 (including entries with additional training data). The test set has remained the same over these 3 years. We see that accuracy tends to increase as the objects get bigger in the image. However, it is clear that far from all the variation in accuracy on these classes can be accounted for by scale alone

6.3.4 Per-Class Accuracy as a Function of Object Properties

Besides considering image-level properties, we can also observe how accuracy changes as a function of intrinsic object properties. We define three properties inspired by human vision: the real-world size of the object, whether it is deformable within an instance, and how textured it is. For each property, the object classes are assigned to one of a few bins (listed below). These properties are illustrated in Fig. 1.

Human subjects annotated each of the 1000 image classification and single-object localization object classes from ILSVRC2012-2014 with these properties (Russakovsky et al. 2013). By construction (see Sect. 3.3.1), each of the 200 object detection classes is either also one of 1000 object classes or is an ancestor of one or more of the 1000 classes in the ImageNet hierarchy. To compute the values of the properties for each object detection class, we simply average the annotated values of the descendant classes.

In this section we draw the following conclusions about state-of-the-art recognition accuracy as a function of these object properties:

  • Real-world size XS for extra small (e.g., nail), small (e.g., fox), medium (e.g., bookcase), large (e.g., car) or XL for extra large (e.g., church). The image classification and single-object localization “optimistic” models perform better on large and extra large real-world objects than on smaller ones. The “optimistic” object detection model surprisingly performs better on extra small objects than on small or medium ones.

  • Deformability within instance Rigid (e.g., mug) or deformable (e.g., water snake). The “optimistic” model on each of the three tasks performs statistically significantly better on deformable objects than on rigid ones. However, this effect disappears when analyzing natural objects separately from man-made objects.

  • Amount of texture None (e.g., punching bag), low (e.g., horse), medium (e.g., sheep) or high (e.g., honeycomb). The “optimistic” model on each of the three tasks performs significantly better on objects with at least a low level of texture than on untextured objects.

These and other findings are justified and discussed in detail below.

Experimental Setup We observed in Sect. 6.3.3 that objects that occupy a larger area in the image tend to be somewhat easier to recognize. To make sure that differences in object scale are not influencing results in this section, we normalize each bin by object scale. We discard object classes with the largest scales from each bin as needed until the average object scale of the object classes in each bin of a property is the same (or as close as possible). For the real-world size property, for example, the resulting average object scale in each of the five bins is 31.6–31.7 % in the image classification and single-object localization tasks, and 12.9–13.4 % in the object detection task.Footnote 11
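
A minimal sketch of one way to implement this normalization is shown below. It assumes that classes are greedily dropped from the largest scale downwards until every bin's mean scale approximately matches the smallest original bin mean; this is our reading of the procedure, not the authors' exact implementation:

```python
def normalize_bins_by_scale(bins, tol=1e-3):
    """bins: dict mapping bin name -> list of (class_name, avg_scale) pairs.
    Greedily discards the largest-scale classes from each bin until all bin means
    are (approximately) equal to the smallest original bin mean."""
    bins = {name: sorted(classes, key=lambda c: c[1]) for name, classes in bins.items()}

    def mean_scale(classes):
        return sum(s for _, s in classes) / len(classes)

    target = min(mean_scale(classes) for classes in bins.values())
    for classes in bins.values():
        # Dropping the current largest-scale class always lowers the bin mean.
        while len(classes) > 1 and mean_scale(classes) > target + tol:
            classes.pop()
    return bins
```

After such a normalization some bins may be left with very few classes, which is why Fig. 14 omits color bars for bins with fewer than 5 remaining object classes.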

Figure 14 shows the average performance of the “optimistic” model on the object classes that fall into each bin for each property. We analyze the results in detail below. Unless otherwise specified, the reported accuracies below are after the scale normalization step.

To evaluate statistical significance, we compute the 95 % confidence interval for accuracy using bootstrapping: we repeatedly sample the object classes within the bin with replacement, discard some as needed to normalize by scale, and compute the average accuracy of the “optimistic” model on the remaining classes. We report the \(95\,\%\) confidence intervals (CI) in parentheses.

Fig. 14

Performance of the “optimistic” computer vision model as a function of object properties. The x-axis corresponds to object properties annotated by human labelers for each object class (Russakovsky et al. 2013) and illustrated in Fig. 1. The y-axis is the average accuracy of the “optimistic” model. Note that the range of the y-axis is different for each task to make the trends more visible. The black circle is the average accuracy of the model on all object classes that fall into each bin. We control for the effects of object scale by normalizing the object scale within each bin (details in Sect. 6.3.4). The color bars show the model accuracy averaged across the remaining classes. Error bars show the \(95\,\%\) confidence interval obtained with bootstrapping. Some bins are missing color bars because less than 5 object classes remained in the bin after scale normalization. For example, the bar for XL real-world object detection classes is missing because that bin has only 3 object classes (airplane, bus, train) and after normalizing by scale no classes remain

Real-World Size In Fig. 14 (top, left) we observe that in the image classification task the “optimistic” model tends to perform significantly better on objects which are larger in the real-world. The classification accuracy is 93.6–93.9 % on XS, S and M objects compared to \(97.0\,\%\) on L and \(96.4\,\%\) on XL objects. Since this is after normalizing for scale and thus can’t be explained by the objects’ size in the image, we conclude that either (1) larger real-world objects are easier for the model to recognize, or (2) larger real-world objects usually occur in images with very distinctive backgrounds.

To distinguish between the two cases we look at Fig. 14 (top, middle). We see that in the single-object localization task, the L objects are easy to localize, at \(82.4\,\%\) localization accuracy. XL objects, however, tend to be the hardest to localize, with only \(73.4\,\%\) localization accuracy. We conclude that the appearance of L objects must be easier for the model to learn, while XL objects tend to appear in distinctive backgrounds. The image background makes these XL classes easier for the image-level classifier, but the individual instances are difficult to accurately localize. Some examples of L objects are “killer whale,” “schooner,” and “lion,” and some examples of XL objects are “boathouse,” “mosque,” “toyshop” and “steel arch bridge.”

In Fig. 14 (top, right), corresponding to the object detection task, the influence of real-world object size is not as apparent. One of the key reasons is that many of the XL and L object classes of the image classification and single-object localization datasets were removed in constructing the detection dataset (Sect. 3.3.1), since they were not basic categories well-suited for detection. Only 3 XL object classes remained in the dataset (“train,” “airplane” and “bus”), and none after scale normalization; we omit them from the analysis. The average precision of XS, S and M objects (44.5, 39.0, and 38.5 % mAP respectively) is not statistically significantly different from the average precision on L objects: the \(95\,\%\) confidence interval of L objects is 37.5–59.5 %. This may be due to the fact that there are only 6 L object classes remaining after scale normalization; all other real-world size bins have at least 18 object classes.

Finally, it is interesting that performance on XS objects of \(44.5\,\%\) mAP (CI 40.5–47.6 %) is statistically significantly better than performance on S or M objects, with 39.0 and \(38.5\,\%\) mAP respectively. Some examples of XS objects are “strawberry,” “bow tie” and “rugby ball.”

Deformability Within Instance In Fig. 14 (second row) it is clear that the “optimistic” model performs statistically significantly worse on rigid objects than on deformable objects. Image classification accuracy is \(93.2\,\%\) on rigid objects (CI 92.6–93.8 %), much smaller than 95.7 % on deformable ones. Single-object localization accuracy is \(76.2\,\%\) on rigid objects (CI 74.9–77.4 %), much smaller than \(84.7\,\%\) on deformable ones. Object detection mAP is \(40.1\,\%\) on rigid objects (CI 37.2–42.9 %), much smaller than \(44.8\,\%\) on deformable ones.

We can further analyze the effects of deformability after separating object classes into “natural” and “man-made” bins based on the ImageNet hierarchy. Deformability is highly correlated with whether the object is natural or man-made: \(0.72\) correlation for image classification and single-object localization classes, and \(0.61\) for object detection classes. Figure 14 (third row) shows the effect of deformability on performance of the model for man-made and natural objects separately.

Man-made classes are significantly harder than natural classes: classification accuracy \(92.8\,\%\) (CI 92.3–93.3 %) for man-made versus \(97.0\,\%\) for natural, localization accuracy \(75.5\,\%\) (CI 74.3–76.5 %) for man-made versus \(88.5\,\%\) for natural, and detection mAP \(38.7\,\%\) (CI 35.6–41.3 %) for man-made versus \(50.9\,\%\) for natural. However, whether the classes are rigid or deformable within this subdivision is no longer significant in most cases. For example, the image classification accuracy is \(92.3\,\%\) (CI 91.4–93.1 %) on man-made rigid objects and \(91.8\,\%\) on man-made deformable objects—not statistically significantly different.

There are two cases where the differences in performance are statistically significant. First, for single-object localization, natural deformable objects are easier than natural rigid objects: localization accuracy of \(87.9\,\%\) (CI 85.9–90.1 %) on natural deformable objects is higher than \(85.8\,\%\) on natural rigid objects—falling slightly outside the 95 % confidence interval. This difference in performance is likely because deformable natural animals tend to be easier to localize than rigid natural fruit.

Second, for object detection, man-made rigid objects are easier than man-made deformable objects: \(38.5\,\%\) mAP (CI 35.2–41.7 %) on man-made rigid objects is higher than \(33.0\,\%\) mAP on man-made deformable objects. This is because man-made rigid objects include classes like “traffic light” or “car” whereas the man-made deformable objects contain challenging classes like “plastic bag,” “swimming trunks” or “stethoscope.”

Amount of Texture Finally, we analyze the effect that object texture has on the accuracy of the “optimistic” model. Figure 14 (fourth row) demonstrates that the model performs better as the amount of texture on the object increases. The most significant difference is between the performance on untextured objects and the performance on objects with low texture. Image classification accuracy is \(90.5\,\%\) on untextured objects (CI 89.3–91.6 %), lower than \(94.6\,\%\) on low-textured objects. Single-object localization accuracy is \(71.4\,\%\) on untextured objects (CI 69.1–73.3 %), lower than \(80.2\,\%\) on low-textured objects. Object detection mAP is \(33.2\,\%\) on untextured objects (CI 29.5–35.9 %), lower than \(42.9\,\%\) on low-textured objects.

Texture is correlated with whether the object is natural or man-made, at \(0.35\) correlation for image classification and single-object localization, and \(0.46\) correlation for object detection. To determine if this is a contributing factor, in Fig. 14 (bottom row) we break up the object classes into natural and man-made and show the accuracy on objects with no texture versus objects with low texture. We observe that the model is still statistically significantly better on low-textured object classes than on untextured ones, both on man-made and natural object classes independently.Footnote 12

6.4 Human Accuracy on Large-Scale Image Classification

Recent improvements in state-of-the-art accuracy on the ILSVRC dataset are easier to put in perspective when compared to human-level accuracy. In this section we compare the performance of the leading large-scale image classification method with the performance of humans on this task.

To support this comparison, we developed an interface that allowed a human labeler to annotate images with up to five ILSVRC target classes. We compare human errors to those of the winning ILSVRC2014 image classification model, GoogLeNet (Sect. 5.1). For this analysis we use a random sample of 1500 ILSVRC2012-2014 image classification test set images.

Annotation Interface Our web-based annotation interface consists of one test set image and a list of 1000 ILSVRC categories on the side. Each category is described by its title, such as “cowboy boot.” The categories are sorted in the topological order of the ImageNet hierarchy, which places semantically similar concepts nearby in the list. For example, all motor vehicle-related classes are arranged contiguously in the list. Every class category is additionally accompanied by a row of 13 example images from the training set to allow for faster visual scanning. The user of the interface selects 5 categories from the list by clicking on the desired items. Since our interface is web-based, it allows for natural scrolling through the list, and also search by text.

Annotation Protocol We found the task of annotating images with one of 1000 categories to be an extremely challenging task for an untrained annotator. The most common error that an untrained annotator is susceptible to is a failure to consider a relevant class as a possible label because they are unaware of its existence.

Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images.

6.4.1 Quantitative Comparison of Human and Computer Accuracy on Large-Scale Image Classification

We report results based on experiments with two expert annotators. The first annotator (A1) trained on 500 images and annotated 1500 test images. The second annotator (A2) trained on 100 images and then annotated 258 test images. The average pace of labeling was approximately 1 image per minute, but the distribution is strongly bimodal: some images are quickly recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) may require multiple minutes of concentrated effort.

The results are reported in Table 10.

Table 10 Human classification results on the ILSVRC2012-2014 classification test set, for two expert annotators A1 and A2

Annotator 1 Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be \(6.8\,\%\) (recall that the error on full test set of 100,000 images is \(6.7\,\%\), as shown in Table 8). The human error was estimated to be \(\mathbf 5.1\,\% \). Thus, annotator A1 achieves a performance superior to GoogLeNet, by approximately \(1.7\,\%\). We can analyze the statistical significance of this result under the null hypothesis that they are from the same distribution. In particular, comparing the two proportions with a z-test yields a one-sided \(p\)-value of \(p = 0.022\). Thus, we can conclude that this result is statistically significant at the \(95\,\%\) confidence level.
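
A minimal sketch of this significance computation, assuming a pooled one-sided two-proportion z-test on equal-sized samples (the exact variant of the z-test used is not specified beyond what is stated above):

```python
from math import sqrt, erf

def one_sided_two_proportion_z_test(err_a, err_b, n_a, n_b):
    """One-sided z-test for H0: equal error proportions, against H1: err_a < err_b."""
    p_pool = (err_a * n_a + err_b * n_b) / (n_a + n_b)
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    z = (err_b - err_a) / se
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # P(Z >= z) under the standard normal

# Human error ~5.1 % vs. GoogLeNet error ~6.8 % on the same sample of 1500 images.
p_value = one_sided_two_proportion_z_test(0.051, 0.068, 1500, 1500)
print(round(p_value, 3))   # ~0.025 with these rounded inputs; the paper reports p = 0.022
```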

Annotator 2 Our second annotator (A2) trained on a smaller sample of only 100 images and then labeled 258 test set images. As seen in Table 10, the final classification error is significantly worse, at approximately \(12.0\,\%\) Top-5 error. Nearly half of these errors (\(48.8\,\%\)) can be attributed to the annotator failing to spot and consider the ground truth label as an option.

Thus, we conclude that a significant amount of training time is necessary for a human to achieve competitive performance on ILSVRC. However, with a sufficient amount of training, a human annotator is still able to outperform the GoogLeNet result (\(p = 0.022\)) by approximately \(1.7\,\%\).

Annotator Comparison We also compare the prediction accuracy of the two annotators. Of a total of 204 images that both A1 and A2 labeled, 174 (\(85\,\%\)) were correctly labeled by both A1 and A2, 19 (\(9\,\%\)) were correctly labeled by A1 but not A2, 6 (\(3\,\%\)) were correctly labeled by A2 but not A1, and 5 (\(2\,\%\)) were incorrectly labeled by both. These include 2 images that we consider to be incorrectly labeled in the ground truth.

In particular, our results suggest that the errors of the two annotators do not strongly overlap. We can approximate the performance of an “optimistic” human classifier by counting an image as correct if at least one of A1 or A2 correctly labeled it. On this sample of 204 images, we approximate the error rate of the “optimistic” human annotator at \(2.4\,\%\), compared to the GoogLeNet error rate of \(4.9\,\%\).

6.4.2 Analysis of Human and Computer Errors on Large-Scale Image Classification

We manually inspected both human and GoogLeNet errors to gain an understanding of common error types and how they compare. For purposes of this section, we only discuss results based on the larger sample of 1500 images that were labeled by annotator A1. Examples of representative mistakes are in Fig. 15. The analysis and insights below were derived specifically from GoogLeNet predictions, but we suspect that many of the same errors may be present in other methods.

Fig. 15

Representative validation images that highlight common sources of error. For each image, we display the ground truth label in blue; the top 5 predictions from GoogLeNet follow (red: wrong, green: right). GoogLeNet predictions on the validation set images were graciously provided by members of the GoogLeNet team. From left to right: images that contain multiple objects, images of extreme closeups and uncharacteristic views, images with filters, images that significantly benefit from the ability to read text, images that contain very small and thin objects, images with abstract representations, and an example of a fine-grained image that GoogLeNet correctly identifies but a human would have significant difficulty with

Types of Errors in Both Computer and Human Annotations

  1. Multiple objects Both GoogLeNet and humans struggle with images that contain multiple ILSVRC classes (usually many more than five), with little indication of which object is the focus of the image. This error is only present in the classification setting, since every image is constrained to have exactly one correct label. In total, we attribute 24 (\(24\,\%\)) of GoogLeNet errors and 12 (\(16\,\%\)) of human errors to this category. It is worth noting that humans can have a slight advantage in this error type, since it can sometimes be easy to identify the most salient object in the image.

  2. Incorrect annotations We found that approximately 5 out of 1500 images (\(0.3\,\%\)) were incorrectly annotated in the ground truth. This introduces an approximately equal number of errors for both humans and GoogLeNet.

Types of Errors that the Computer is More Susceptible to than the Human

  1. Object small or thin GoogLeNet struggles with recognizing objects that are very small or thin in the image, even if that object is the only object present. Examples include an image of a standing person wearing sunglasses, a person holding a quill in their hand, or a small ant on the stem of a flower. We estimate that approximately 22 (\(21\,\%\)) of GoogLeNet errors fall into this category, while none of the human errors do. In other words, in our sample of images, no image was mislabeled by a human because they were unable to identify a very small or thin object. This discrepancy can be attributed to the fact that a human can very effectively leverage context and affordances to accurately infer the identity of small objects (for example, inferring that a few barely visible feathers near a person's hand very likely belong to a mostly occluded quill).

  2. Image filters Many people enhance their photos with filters that distort the contrast and color distributions of the image. We found that 13 (\(13\,\%\)) of the images that GoogLeNet incorrectly classified contained a filter. Thus, we posit that GoogLeNet is not very robust to these distortions. In comparison, only one image among the human errors contained a filter, but we do not attribute the source of the error to the filter.

  3. Abstract representations GoogLeNet struggles with images that depict objects of interest in an abstract form, such as 3D-rendered images, paintings, sketches, plush toys, or statues. Examples include the abstract shape of a bow drawn with a light source in night photography, a 3D-rendered robotic scorpion, or the shadow on the ground of a child on a swing. We attribute approximately 6 (\(6\,\%\)) of GoogLeNet errors to this type and believe that humans are significantly more robust, with no such errors seen in our sample.

  4. Miscellaneous sources Additional sources of error that occur relatively infrequently include extreme closeups of parts of an object, unconventional viewpoints such as a rotated image, images that can significantly benefit from the ability to read text (e.g., a featureless container identifying itself as “face powder”), objects with heavy occlusions, and images that depict a collage of multiple images. In general, we found that humans are more robust to all of these types of error.

Types of Errors that the Human is More Susceptible to than the Computer

  1. Fine-grained recognition We found that humans are noticeably worse at fine-grained recognition (e.g., dogs, monkeys, snakes, birds), even when the objects are in clear view. To understand the difficulty, consider that the dataset contains more than 120 breeds of dogs. We estimate that 28 (\(37\,\%\)) of the human errors fall into this category, while only 7 (\(7\,\%\)) of GoogLeNet errors do.

  2. Class unawareness The annotator may sometimes be unaware that the ground truth class is present as a label option. When pointed out as an ILSVRC class, it is usually clear that the label applies to the image. These errors become progressively less frequent as the annotator becomes more familiar with the ILSVRC classes. Approximately 18 (\(24\,\%\)) of the human errors fall into this category.

  3. Insufficient training data Recall that the annotator is only presented with 13 examples of a class under every category name. However, 13 images are not always enough to adequately convey the allowed class variations. For example, a brown dog can be incorrectly dismissed as not being a “Kelpie” if all examples of a “Kelpie” feature a dog with a black coat. However, if more than 13 images were listed, it would have become clear that a “Kelpie” may have a brown coat. Approximately 4 (\(5\,\%\)) of human errors fall into this category.

6.4.3 Conclusions from Human Image Classification Experiments

We investigated the performance of trained human annotators on a sample of 1500 ILSVRC test set images. Our results indicate that a trained human annotator is capable of outperforming the best model (GoogLeNet) by approximately \(1.7\,\%\) (\(p = 0.022\)).

We expect that some sources of error may be relatively easily eliminated (e.g. robustness to filters, rotations, collages, effectively reasoning over multiple scales), while others may prove more elusive (e.g. identifying abstract representations of objects). On the other hand, a large majority of human errors come from fine-grained categories and class unawareness. We expect that the former can be significantly reduced with fine-grained expert annotators, while the latter could be reduced with more practice and greater familiarity with ILSVRC classes. Our results also hint that human errors are not strongly correlated and that human ensembles may further reduce human error rate.

It is clear that humans will soon be able to outperform state-of-the-art ILSVRC image classification models only with significant effort, expertise, and time. One interesting follow-up question for future investigation is how computer-level accuracy compares with human-level accuracy on more complex image understanding tasks.

7 Conclusions

In this paper we described the large-scale data collection process of ILSVRC, provided a summary of the most successful algorithms on this data, and analyzed the success and failure modes of these algorithms. In this section we discuss some of the key lessons we learned over the years of ILSVRC, strive to address the key criticisms of the datasets and the challenges we encountered over the years, and conclude by looking forward into the future.

7.1 Lessons Learned

The key lesson of collecting the datasets and running the challenges for 5 years is this: All human intelligence tasks need to be exceptionally well-designed. We learned this lesson both when annotating the dataset using Amazon Mechanical Turk workers (Sect. 3) and even when trying to evaluate human-level image classification accuracy using expert labelers (Sect. 6.4). The first iteration of the labeling interface was always bad—generally meaning completely unusable. If there was any inherent ambiguity in the questions posed (and there almost always was), workers found it and accuracy suffered. If there is one piece of advice we can offer to future research, it is to very carefully design, continuously monitor, and extensively sanity-check all crowdsourcing tasks.

The other lesson, already well-known to large-scale researchers, is this: Scaling up the dataset always reveals unexpected challenges. From designing complicated multi-step annotation strategies (Sect. 3.2.1) to having to modify the evaluation procedure (Sect. 4), we had to continuously adjust to the large-scale setting. On the plus side, of course, the major breakthroughs in object recognition accuracy (Sect. 5) and the analysis of the strength and weaknesses of current algorithms as a function of object class properties (Sect. 6.3) would never have been possible on a smaller scale.

7.2 Criticism

In the past 5 years, we encountered three major criticisms of the ILSVRC dataset and the corresponding challenge: (1) the ILSVRC dataset is insufficiently challenging, (2) the ILSVRC dataset contains annotation errors, and (3) the rules of ILSVRC competition are too restrictive. We discuss these in order.

The first criticism is that the objects in the dataset tend to be large and centered in the images, making the dataset insufficiently challenging. In Sect. 3.2.2 and 3.3.4 we tried to put those concerns to rest by analyzing the statistics of the ILSVRC dataset and concluding that it is comparable with, and in many cases much more challenging than, the long-standing PASCAL VOC benchmark (Everingham et al. 2010).

The second is regarding errors in the ground truth labeling. We went through several rounds of in-house post-processing of the annotations obtained using crowdsourcing, and corrected many common sources of errors (e.g., Appendix 1). The major remaining source of annotation errors stems from fine-grained object classes, e.g., labelers failing to distinguish different species of birds. This is a tradeoff that had to be made: in order to annotate data at this scale on a reasonable budget, we had to rely on non-expert crowd labelers. However, overall the dataset is encouragingly clean. By our estimates, \(99.7\,\%\) precision is achieved in the image classification dataset (Sects. 3.1.3, 6.4), and \(97.9\,\%\) of images that went through the bounding box annotation system have all instances of the target object class labeled with bounding boxes (Sect. 3.2.1).

The third criticism we encountered is over the rules of the competition regarding using external training data. In ILSVRC2010-2013, algorithms had to only use the provided training and validation set images and annotations for training their models. With the growth of the field of large-scale unsupervised feature learning, however, questions began to arise about what exactly constitutes “outside” data: for example, are image features trained on a large pool of “outside” images in an unsupervised fashion allowed in the competition? After much discussion, in ILSVRC2014 we took the first step towards addressing this problem. We followed the PASCAL VOC strategy and created two tracks in the competition: entries using only “provided” data and entries using “outside” data, meaning any images or annotations not provided as part of ILSVRC training or validation sets. However, in the future this strategy will likely need to be further revised as the computer vision field evolves. For example, competitions can consider allowing the use of any image features which are publically available, even if these features were learned on an external source of data.

7.3 The Future

Given the massive algorithmic breakthroughs over the past 5 years, we are very eager to see what will happen in the next 5 years. There are many potential directions of improvement and growth for ILSVRC and other large-scale image datasets.

First, continuing the trend of moving towards richer image understanding (from image classification to single-object localization to object detection), the next challenge would be to tackle pixel-level object segmentation. The recently released large-scale COCO dataset (Lin et al. 2014b) is already taking a step in that direction.

Second, as datasets grow even larger in scale, it may become impossible to fully annotate them manually. The scale of ILSVRC is already imposing limits on the manual annotations that are feasible to obtain: for example, we had to restrict the number of objects labeled per image in the image classification and single-object localization datasets. In the future, with billions of images, it will become impossible to obtain even one clean label for every image. Datasets such as Yahoo’s Flickr Creative Commons 100M,Footnote 13 released with weak human tags but no centralized annotation, will become more common.

The growth of unlabeled or only partially labeled large-scale datasets implies two things. First, algorithms will have to rely more on weakly supervised training data. Second, even evaluation might have to be done after the algorithms make predictions, not before. This means that rather than evaluating accuracy (how many of the test images or objects did the algorithm get right) or recall (how many of the desired images or objects did the algorithm manage to find), both of which require a fully annotated test set, we will be focusing more on precision: of the predictions that the algorithm made, how many were deemed correct by humans.

We are eagerly awaiting the future development of object recognition datasets and algorithms, and are grateful that ILSVRC served as a stepping stone along this path.