1 Introduction

Along with the growth in large-scale datasets and computation resources, deep neural networks have become the dominant models for various tasks [14, 52]. However, the increasing depth of neural networks also introduces challenges in their training. The traditional supervised training method applies supervision only to the last layer and then propagates the error from the last layer back to the shallow layers (Fig. 1(a)), which makes the intermediate layers hard to optimize and causes problems such as vanishing gradients [29].

Recently, deep supervision (a.k.a. the deeply-supervised net) has been proposed to address this issue by optimizing the intermediate layers directly [38]. As shown in Fig. 1(b), deep supervision adds several auxiliary classifiers to intermediate layers at different depths. During training, these classifiers are optimized together with the original final classifier using the same training loss (e.g., cross entropy for classification tasks). Both experimental and theoretical analyses have demonstrated its effectiveness in facilitating model convergence [62].

However, this success comes with remaining obstacles. In general, different layers in convolutional neural networks tend to learn features at different levels. Usually, the shallow layers learn low-level features such as colors and edges, while the last several layers learn high-level, task-related semantic features such as categorical knowledge for classification [82]. Deep supervision, however, forces the shallow layers to learn task-related knowledge, which conflicts with the natural feature extraction process of neural networks. As pointed out in MSDNet [28], this conflict sometimes degrades the accuracy of the final classifier. This observation indicates that the supervised task loss is probably not the best supervision for optimizing the intermediate layers.

Fig. 1.

Overview of the four methods. “\(\rightarrow \)” and the dashed arrow indicate the paths of forward computation and backward gradient computation, respectively. “proj” and “fc” indicate the projection heads and the fully connected classifiers, respectively. The gray dashed line indicates whether the feature is task-irrelevant or task-biased. (a) Traditional supervised learning applies supervision only to the last layer and propagates it to the previous layers, leading to gradient vanishing. (c) Deep supervision trains both the last layer and the intermediate layers directly, which addresses gradient vanishing but biases all the layers toward the task. (d) Our method introduces contrastive learning to supervise the intermediate layers and thus avoids these problems.

In this paper, we argue that contrastive learning can provide better supervision for the intermediate layers than the supervised task loss. Contrastive learning is one of the most popular and effective techniques in representation learning [7, 8, 34]. Usually, it regards two augmentations of the same image as a positive pair and augmentations of different images as negative pairs. During training, the neural network is trained to minimize the distance within a positive pair while maximizing the distance within a negative pair. As a result, the network learns invariance to various data augmentations, such as color jitter and random grayscale. Considering that these data augmentation invariances are usually low-level, task-irrelevant and transferable to various vision tasks [3, 64], we argue that they are more beneficial knowledge for the intermediate layers to learn.

Motivated by these observations, we propose a novel training framework named Contrastive Deep Supervision. It optimizes the intermediate layers with contrastive learning instead of traditional supervised learning. As shown in Fig. 1(d), several projection heads are attached to the intermediate layers of the neural network and trained to perform contrastive learning. These projection heads can be discarded in the inference period to avoid additional computation and storage. Different from deep supervision, which trains the intermediate layers to learn the knowledge for a specific task, the intermediate layers in our method are trained to learn invariance to data augmentation, which makes the neural network generalize better. Besides, since contrastive learning can be performed on unlabeled data, the proposed contrastive deep supervision can also be easily extended to the semi-supervised learning paradigm.

Moreover, contrastive deep supervision can be further utilized to boost another deep learning technique: knowledge distillation. Knowledge distillation (KD) is a popular model compression approach that aims to transfer the knowledge from a cumbersome teacher model to a lightweight student model [2, 15, 23]. Recently, abundant research has found that distilling the “crucial knowledge” inside the backbone features, such as attention and relation [49, 58, 72], leads to better performance than directly distilling all the backbone features. In this paper, we show that the data augmentation invariances learned by the intermediate layers in contrastive deep supervision are more beneficial knowledge to distill. By combining contrastive deep supervision with naïve feature distillation, the distilled ResNet18 achieves 73.23% accuracy on ImageNet, outperforming the baseline and the second-best KD method by 4.02% and 2.16%, respectively.

Extensive experiments on nine datasets with eleven neural networks have been conducted to evaluate its effectiveness on general image classification, fine-grained image classification and object detection in supervised learning, semi-supervised learning and knowledge distillation, demonstrating that contrastive deep supervision enables neural networks to learn better visual representations. In the discussion section, we further explain the effectiveness of our method from the perspective of regularization: it prevents models from overfitting and leads to better uncertainty estimation. To sum up, the main contributions of our paper can be summarized as follows.

  • We propose contrastive deep supervision, a neural network training method in which the intermediate layers are directly optimized with contrastive learning. It enables neural networks to learn better visual representations without additional parameters or computation during inference.

  • From the perspective of deep supervision, this paper is the first to show that the intermediate layers can be trained with supervision other than the task loss.

  • From the perspective of representation learning, we are the first to show that contrastive learning and supervised learning can be combined in a one-stage deep-supervision manner instead of the two-stage “pretrain-finetune” scheme.

  • Extensive experiments on nine datasets and eleven neural networks, against eleven comparison methods, demonstrate the effectiveness of our method on general classification, fine-grained classification and object detection in supervised learning, semi-supervised learning and knowledge distillation.

2 Related Work

2.1 Deep Supervision

Deep neural networks usually contain a large number of layers, which increases the difficulty of optimization. To address this issue, the deeply-supervised net (a.k.a. deep supervision) was proposed to directly supervise the intermediate layers of deep neural networks [38]. Wang et al. show that deep supervision can alleviate the vanishing gradient problem and thus lead to significant performance improvements [62]. Usually, deep supervision attaches several auxiliary classifiers to the intermediate layers and supervises these auxiliary classifiers with the task loss (e.g., the cross-entropy loss in classification). Recently, several methods have been proposed to improve deep supervision with knowledge distillation, which minimize the difference between the predictions of the deepest classifier and the auxiliary classifiers in the intermediate layers [40, 55]. Besides classification, abundant research has also demonstrated the effectiveness of deep supervision in dynamic neural networks [78], semantic segmentation [51, 73, 81], object detection [39], knowledge distillation [76] and so on.

2.2 Contrastive Learning

In the last several years, contrastive learning has become the most popular method in representation learning [5, 18, 24, 27, 32, 60, 61, 63, 68, 74]. Oord et al. propose contrastive predictive coding, which predicts the low-dimensional embeddings of future signals with an auto-regressive model [47]. He et al. propose MoCo, which introduces a dynamic memory bank to record the embeddings of negative samples [9, 11, 19]. SimCLR then demonstrates the importance of a large batch size and long training time in contrastive learning [7, 8]. Recently, abundant research has further studied the influence of negative samples. BYOL demonstrates that contrastive learning is effective even without negative samples [16]. SimSiam gives a detailed study of the importance of batch normalization, negative samples, the memory bank, and the stop-gradient operation [10]. Besides self-supervised learning, contrastive learning has also shown its power in the traditional supervised learning paradigm. Khosla et al. show that state-of-the-art performance can be achieved on ImageNet with the basic contrastive learning of SimCLR by building the positive pairs with label supervision [6, 34]. Park et al. apply contrastive learning to unpaired image-to-image translation, which breaks the limitation of cycle reconstruction [48].

2.3 Knowledge Distillation

Knowledge distillation, which aims to facilitate the training of a lightweight student model under the supervision of an over-parameterized teacher model, has become one of the most popular methods in model compression. Knowledge distillation was first proposed by Bucilua et al. [2] and then expanded by Hinton et al. [23], who introduce a temperature-scaled softmax to soften the distribution of the teacher logits. Instead of distilling the knowledge in the logits, more and more techniques have been proposed to distill the information in teacher features or their variants, such as attention maps [42, 72], negative values [22], task-oriented information [76], relational information [43, 49, 58], the Gram matrix [69], mutual information [1], context information [75] and so on. Besides model compression, knowledge distillation has also achieved significant success in self-supervised learning [30, 46], semi-supervised learning [37, 56], multi-exit neural networks [70, 77, 78], incremental learning [83] and model robustness [65, 79].

3 Methodology

3.1 Deep Supervision

In this subsection, we revisit the formulation of deep supervision. Let c be a given backbone classifier; deep supervision introduces several shallow classifiers based on the intermediate features of c. More specifically, assume \(c= g \circ f\), where g is the final classifier, f is the feature extractor and \(f=f_K \circ f_{K-1}\circ \cdots \circ f_1\), with K denoting the number of convolutional stages in f. At each feature extraction stage i, deep supervision attaches an auxiliary classifier \(g_i\) to provide intermediate supervision. Thus, there are K classifiers in total, which have the following form:

$$\begin{aligned} \begin{aligned} c_1(x)&=g_1 \circ f_1(x) \\ c_2(x)&=g_2 \circ f_2 \circ f_1(x)\\ &\;\;\vdots \\ c_K(x)&=g_K \circ f_K \circ f_{K-1} \circ \cdots \circ f_1(x). \end{aligned} \end{aligned}$$
(1)

Given a set of training samples \(\mathcal {X}=\{x_i\}^n_{i=1}\) and its corresponding labels \(\mathcal {Y}=\{y_i\}^n_{i=1}\), the training loss of deep supervision \(\mathcal {L}_{\text {DS}}\) can be formulated as

$$\begin{aligned} \mathcal {L}_{\text {DS}}= \underbrace{\mathcal {L}_{\text {CE}}(c_K(\mathcal {X}), \mathcal {Y})}_{\text {from standard training}} + \alpha \cdot \mathop {\sum }_{i=1}^{K-1} \underbrace{\mathcal {L}_{\text {CE}}(c_i(\mathcal {X}), \mathcal {Y})}_{\text {from deep supervision}}, \end{aligned}$$
(2)

where \(\mathcal {L}_{\text {CE}}\) denotes the cross entropy loss. The first and the second term in the loss function correspond to the standard training loss and the additional deep-supervision loss for the intermediate layers, respectively, and \(\alpha \) is a hyper-parameter that balances the two terms. Recently, some works apply layer-wise consistency to deep supervision by additionally minimizing the KL divergence between the predictions of the auxiliary classifiers and the final classifier [40, 55]. These methods can also be viewed as knowledge distillation that regards the final classifier as the teacher and the auxiliary classifiers as the students. Their training loss can be formulated as

$$\begin{aligned} \mathcal {L}_{\text {DS}} + \beta \cdot \mathop {\sum }_{i=1}^{K-1} \mathcal {L}_{\text {KL}}(c_i(\mathcal {X}), c_K(\mathcal {X})), \end{aligned}$$
(3)

where \(\beta \) is a hyper-parameter that balances the two loss terms.
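For concreteness, the PyTorch-style sketch below shows how the losses in Eq. 2 and Eq. 3 could be computed. The stage-feature interface, the `aux_heads` module list, the KL direction and the default weights are illustrative assumptions rather than the exact implementations of [38, 40, 55].

```python
import torch.nn.functional as F

def deep_supervision_loss(stage_feats, final_logits, labels, aux_heads,
                          alpha=0.3, beta=0.0):
    """Eq. 2 (plus the optional consistency term of Eq. 3).

    stage_feats:  list of K-1 intermediate feature maps [f_1(x), ..., f_{K-1}(x)]
    final_logits: output of the final classifier c_K(x)
    aux_heads:    hypothetical nn.ModuleList of auxiliary classifiers g_1 .. g_{K-1}
    """
    loss = F.cross_entropy(final_logits, labels)                    # standard training term
    for feat, head in zip(stage_feats, aux_heads):
        aux_logits = head(feat)
        loss = loss + alpha * F.cross_entropy(aux_logits, labels)   # deep supervision term
        if beta > 0:  # layer-wise consistency: final classifier acts as the teacher
            loss = loss + beta * F.kl_div(
                F.log_softmax(aux_logits, dim=1),
                F.softmax(final_logits.detach(), dim=1),
                reduction='batchmean')
    return loss
```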

3.2 Contrastive Deep Supervision

In this subsection, we first introduce the formulation of contrastive learning. For a minibatch of N images \(\{x_1, x_2, ..., x_N\}\), we apply stochastic data augmentation to each image twice, resulting in a batch of 2N images. For convenience, we denote \(x_i\) and \(x_{N+i}\) as the two augmentations of the same image, which are regarded as a positive pair. Denoting \(z= c(x)\) as the normalized output of the projection head, the contrastive learning loss (a.k.a. NT-Xent [7]) can be formulated as

$$\begin{aligned} \mathcal {L}_{\text {Contra}}= - \sum _{i=1}^{N}\log \frac{\exp (z_i\cdot z_{i+N}/\tau )}{\sum _{k=1}^{2N}\mathbb {1}_{[k\ne i]}\exp (z_i\cdot z_k/\tau )}, \end{aligned}$$
(4)

where \(\mathbb {1}_{[k\ne i]}\in \{0, 1\}\) is an indicator function evaluating to 1 iff \(k\ne i\), and \(\tau \) is a temperature hyper-parameter. Intuitively, \(\mathcal {L}_{\text {Contra}}\) encourages the encoder network to learn similar representations for different augmentations of the same image while increasing the difference between the representations of augmentations of different images.
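As a reference, Eq. 4 can be implemented in a few lines of PyTorch. This is a generic NT-Xent sketch that loops over the first N anchors exactly as in the equation; it is not the authors' released code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z, tau=0.5):
    """Eq. 4: z has shape (2N, d); rows i and i+N are the two augmented
    views of the same image. Returns the loss summed over the N anchors."""
    z = F.normalize(z, dim=1)                           # z_i are L2-normalized embeddings
    n = z.shape[0] // 2
    sim = (z @ z.t()) / tau                             # pairwise similarities z_i . z_k / tau
    sim.fill_diagonal_(float('-inf'))                   # implements the 1[k != i] indicator
    targets = torch.arange(n, 2 * n, device=z.device)   # the positive of anchor i is i + N
    # per-anchor term: -log softmax(sim)[i, i+N], which is exactly Eq. 4
    return F.cross_entropy(sim[:n], targets, reduction='sum')
```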

The main difference between deep supervision and our method is that deep supervision trains the auxiliary branches with the cross entropy loss, while our method trains them with the contrastive loss \(\mathcal {L}_{\text {Contra}}\). Denoting the contrastive loss at \(c_i\) as \(\mathcal {L}_{\text {Contra}}(\mathcal {X};c_i)\), the training loss of our contrastive deep supervision \(\mathcal {L}_{\text {CDS}}\) can be formulated as

$$\begin{aligned} \mathcal {L}_{\text {CDS}}= \underbrace{\mathcal {L}_{\text {CE}}(c_K(\mathcal {X}), \mathcal {Y})}_{\text {from standard training}} + \lambda _1 \mathop {\sum }_{i=1}^{K-1} \underbrace{\mathcal {L}_{\text {Contra}}(\mathcal {X};c_i)}_{\text {from our method}} , \end{aligned}$$
(5)

where the first and the second term correspond to the standard training loss and the additional loss that our method applies to the intermediate layers, respectively, and \(\lambda _1\) is a hyper-parameter that balances the two terms.
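Putting Eq. 5 together with the `nt_xent` sketch above, the training loss could be computed as follows. The interface (a backbone that exposes its stage features plus a hypothetical list of projection heads) is assumed for illustration only.

```python
import torch.nn.functional as F

def cds_loss(stage_feats, final_logits, labels, proj_heads,
             lambda1=0.1, tau=0.5):
    """Eq. 5: cross entropy on the final classifier plus NT-Xent on every
    intermediate projection head. Each element of stage_feats holds the
    features of both augmented views stacked along the batch dimension
    (2N samples), and `labels` are duplicated to match."""
    loss = F.cross_entropy(final_logits, labels)          # standard training term
    for feat, head in zip(stage_feats, proj_heads):
        z = head(feat)                                    # embedding of shape (2N, d)
        loss = loss + lambda1 * nt_xent(z, tau)           # contrastive term (our method)
    return loss
```

At inference time only the backbone and the final classifier are kept, so `proj_heads` adds no cost at deployment.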

Based on the above formulation on supervised learning, we can extend contrastive deep supervision in semi-supervised learning and knowledge distillation.

Semi-supervised Learning. In semi-supervised learning, we assume that there is a labeled dataset \(\mathcal {X}_1\) with its labels \(\mathcal {Y}_1\) and an unlabeled dataset \(\mathcal {X}_2\). On the labeled data, contrastive deep supervision can be applied directly with \(\mathcal {L}_{\text {CDS}}\). On the unlabeled data, due to the lack of labels, contrastive deep supervision only optimizes the contrastive learning loss \(\mathcal {L}_{\text {Contra}}\), which can be formulated as

$$\begin{aligned} \mathcal {L}_{\text {CDS}}(\mathcal {X}_1,\mathcal {Y}_1) + \mathcal {L}_{\text {Contra}}(\mathcal {X}_2). \end{aligned}$$
(6)
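A sketch of Eq. 6 for one mixed batch, reusing the helpers above; the `model` interface returning (stage features, final logits) is again an assumption.

```python
def semi_supervised_loss(model, proj_heads, x_labeled, y_labeled, x_unlabeled,
                         lambda1=0.1, tau=0.5):
    """Eq. 6: full CDS loss on the labeled batch, contrastive term only on
    the unlabeled batch. Both batches already contain the two augmented
    views of every image."""
    feats_l, logits_l = model(x_labeled)
    loss = cds_loss(feats_l, logits_l, y_labeled, proj_heads, lambda1, tau)
    feats_u, _ = model(x_unlabeled)
    for feat, head in zip(feats_u, proj_heads):
        loss = loss + nt_xent(head(feat), tau)     # no labels needed on this batch
    return loss
```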

Knowledge Distillation. The intermediate layers in contrastive deep supervision are supervised with contrastive learning, so they learn invariance to different data augmentations. As shown in previous research, this data augmentation invariance is beneficial to various downstream tasks [31]. In this paper, we further propose to improve knowledge distillation with contrastive deep supervision by transferring the data augmentation invariance learned by the teacher to the student. Denoting the student and the teacher model in knowledge distillation as \(f^\mathcal {S}\) and \(f^\mathcal {T}\) respectively, naïve feature-based knowledge distillation directly minimizes the distance between the backbone features of the student and the teacher, which can be formulated as

$$\begin{aligned} \mathop {\sum }_{i=1}^{K} \Vert f_i^\mathcal {T}(\mathcal {X}) - f_i^\mathcal {S}(\mathcal {X})\Vert _2. \end{aligned}$$
(7)

In contrast, knowledge distillation with contrastive deep supervision minimizes the distance between the embedding vectors (the output of the projection heads) of the student and the teacher, which can be formulated as

$$\begin{aligned} \begin{aligned} \mathcal {L}_{\text {CDS for KD}}=\mathop {\sum }_{i=1}^{K-1} \Vert c^{\mathcal {T}}_i(\mathcal {X})-c^{\mathcal {S}}_i(\mathcal {X})\Vert _2. \end{aligned} \end{aligned}$$
(8)

Now we can formulate the overall training loss of the student as

$$\begin{aligned} \begin{aligned} \mathcal {L}_{\text {DCDS}}&= \mathcal {L}_{\text {CDS}} + \lambda _2 \cdot \mathcal {L}_{\text {CDS for KD}} + \lambda _3 \cdot \mathcal {L}_{\text {KL}}\left( c^{\mathcal {T}}_K\left( \mathcal {X}\right) , c^{\mathcal {S}}_K\left( \mathcal {X}\right) \right) , \end{aligned} \end{aligned}$$
(9)

where \(\lambda _2\) and \(\lambda _3\) are hyper-parameters that balance the different loss terms. Following previous works in deep supervision, we do not set an individual hyper-parameter for each projection head, for convenience in hyper-parameter tuning.
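The sketch below assembles Eqs. 8 and 9 on top of the `cds_loss` helper. The (stage features, logits) interface, the temperature in the KL term and the default weights are assumptions for illustration rather than the paper's exact recipe.

```python
import torch.nn.functional as F

def cds_kd_loss(student_out, teacher_out, labels, heads_s, heads_t,
                lambda1=0.1, lambda2=1.0, lambda3=1.0, tau=0.5, kd_T=4.0):
    """Eq. 9: student CDS loss (Eq. 5) + L2 between student and teacher
    projection-head embeddings (Eq. 8) + KL between the final logits.
    The teacher is frozen, hence the .detach() calls."""
    feats_s, logits_s = student_out
    feats_t, logits_t = teacher_out
    loss = cds_loss(feats_s, logits_s, labels, heads_s, lambda1, tau)
    for fs, ft, hs, ht in zip(feats_s, feats_t, heads_s, heads_t):
        loss = loss + lambda2 * (hs(fs) - ht(ft).detach()).norm(p=2)   # Eq. 8, per head
    kl = F.kl_div(F.log_softmax(logits_s / kd_T, dim=1),
                  F.softmax(logits_t.detach() / kd_T, dim=1),
                  reduction='batchmean')
    return loss + lambda3 * kl
```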

3.3 Other Details and Tricks

Design of Projection Heads. In contrastive deep supervision, several projection heads are added to the intermediate layers of the neural network during the training period. These projection heads map the backbone features into a normalized embedding space, where the contrastive learning loss is applied. As discussed in related work, the architecture of the projection head is crucial to model performance [8]. Usually, the projection head is a non-linear projection composed of two fully connected layers with a ReLU in between. However, in contrastive deep supervision, the input features come from the intermediate layers instead of the final layer, and thus it is more challenging to project them properly [8]. Hence, we increase the capacity of these projection heads by adding convolutional layers before the non-linear projection.
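One plausible instantiation of such a head is sketched below; the number of convolutional layers and the embedding widths are illustrative choices, not the paper's exact configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Projection head for an intermediate stage: a few conv layers to
    refine the relatively shallow feature map, global average pooling,
    then the usual two-layer MLP with a ReLU in between."""
    def __init__(self, in_channels, hidden_dim=512, out_dim=128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
        )
        self.mlp = nn.Sequential(
            nn.Linear(in_channels, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        x = self.convs(x)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)   # (batch, in_channels)
        return F.normalize(self.mlp(x), dim=1)       # normalized embedding z
```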

Table 1. Comparison experiments (top-1 accuracy / %) with the other deep supervision methods on CIFAR100.
Table 2. Comparison experiments (top-1 accuracy / %) with the other deep supervision methods on CIFAR10.

Contrastive Learning. The proposed contrastive deep supervision is a general training framework and does not depend on a specific contrastive learning method. In this paper, we adopt SimCLR [7] and SupCon [34] as the contrastive learning methods in most experiments. We argue that the performance of our method can be further improved by using a better contrastive learning method.
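When labels are available, a SupCon-style loss treats every sample sharing the anchor's label as a positive rather than only the other augmented view. A generic sketch in that spirit (not the authors' code) is given below and can replace `nt_xent` in the losses above.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss in the spirit of SupCon [34].
    z: (2N, d) embeddings of the two augmented views; labels: (2N,)."""
    z = F.normalize(z, dim=1)
    self_mask = torch.eye(z.shape[0], dtype=torch.bool, device=z.device)
    sim = (z @ z.t()) / tau
    sim = sim.masked_fill(self_mask, -1e9)                 # exclude self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)    # log softmax over the other samples
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask   # same label, not self
    pos = pos.float()
    # mean log-probability over each anchor's positives, averaged over anchors
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```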

Table 3. Comparison with the other deep supervision methods on ImageNet.

Negative Samples. Previous studies show that the number of negative samples has a vital influence on the performance of contrastive learning. Accordingly, a large batch size, a momentum encoder or a memory bank is usually required [7, 16, 19]. In contrastive deep supervision, we do not use any of these solutions because the supervised loss (\(\mathcal {L}_{\text {CE}}\) in Eq. 5) is enough to prevent contrastive learning from converging to collapsed solutions.

Table 4. Experiments on different object detection models on COCO2017. ResNet50 models are pre-trained on ImageNet with different deep supervision methods and then utilized as the backbones of these detectors.
Table 5. Comparison (top-1 acc. / %) with deep supervision methods with ResNet50 for fine-grained classification. Models are trained from scratch.
Table 6. Comparison (top-1 acc. / %) with deep supervision methods with ResNet50 for fine-grained classification. Models are finetuned from ImageNet pre-trained weights.

4 Experiment

4.1 Experiment Setting

Common Image Classification. For common image classification, our method has been evaluated on three datasets, CIFAR10, CIFAR100 and ImageNet [13, 36], with various neural networks, including ResNet (RNT), ResNeXt (RXT), Wide ResNet (WRN), SENet (SET), PreAct ResNet (PAT), MobileNetv1, MobileNetv2, ShuffleNetv1 and ShuffleNetv2 [20, 21, 25, 26, 54, 66, 71, 80].

Fine-Grained Image Classification. For fine-grained image classification, our method has been evaluated on five popular datasets, including CUB200-2011 [59], Stanford Cars [35], Oxford Flowers [45], Stanford Dogs [33] and FGVC Aircraft [44]. ResNet50 is utilized as the classifier for all the experiments.

Object Detection. For object detection, our method has been evaluated on MS COCO2017 [41] with Faster RCNN and RetinaNet, implemented with MMDetection [4].

Semi-supervised Learning. Semi-supervised learning experiments have been conducted on CIFAR100 and CIFAR10 with ResNet18. For each dataset, we have evaluated our method with 10%, 20%, 30% and 40% of the labels.

Table 7. Comparison experiments (top-1 and top-5 accuracy / %) with the other eight knowledge distillation methods on ImageNet with ResNet. Numbers in bold indicate the highest. Results marked with \(^\dag \) come from the paper of SSKD [67].
Fig. 2.

Experimental results of semi-supervised training on CIFAR100 and CIFAR10 with ResNet18.

Fig. 3.

Influence of the number of projection heads.

Comparison Methods. Three previous deep supervision methods are used for comparison: DSN [38], DKS [55] and DHM [40]. In the knowledge distillation experiments, we evaluate our method against nine knowledge distillation methods, including KD [23], FitNet [53], AT [72], RKD [49], SP [58] and CRD [57]. Besides, we also cite the ImageNet results of CC [50], OKD [84] and SSKD [67] from the paper of SSKD.

Table 8. Comparison with the other knowledge distillation methods on CIFAR.

4.2 Experimental Results

Image Classification. Experimental results on CIFAR100, CIFAR10 and ImageNet are shown in Table 1, Table 2 and Table 3, respectively. It is observed that: (a) Our method achieves 3.44% and 1.70% top-1 accuracy improvements on CIFAR100 and CIFAR10 on average, respectively. It consistently outperforms the second-best deep supervision method by 1.05% and 0.90% on the two datasets, respectively. (b) On ImageNet, contrastive deep supervision leads to 3.64%, 3.02% and 2.95% top-1 accuracy improvements on ResNet18, ResNet34 and ResNet50, respectively. On average, it outperforms the baseline and the second-best method by 3.20% and 1.83% top-1 accuracy, respectively.

Object Detection. Table 4 shows the performance of our method on object detection. In these experiments, we first pretrain the ResNet50 backbones on ImageNet with standard training (Baseline), the three deep supervision methods, and our method, and then finetune them as the backbones of object detection models, including RetinaNet and Faster RCNN, on the COCO2017 dataset. With backbones pre-trained by our method, there are 0.9 and 0.8 AP improvements on Faster RCNN and RetinaNet respectively, outperforming the second-best method by 0.6 AP and indicating that the representations learned with our method are more beneficial to downstream tasks.

Fine-Grained Image Classification. Experiments on fine-grained image classification are shown in Table 5 and Table 6. It is observed that: (a) Contrastive deep supervision leads to consistent and significant accuracy improvements on the five datasets. On average, it leads to 3.80%, 2.43%, 1.73%, 4.77% and 2.25% accuracy improvements on the five datasets, respectively. (b) Besides, the benefits of our method when “finetuning from ImageNet” and when “training from scratch” are very similar (except on Aircraft), which indicates that the effectiveness of our method is consistent across training settings.

Semi-supervised Learning. Experiments on semi-supervised learning with ResNet18 on CIFAR10 and CIFAR100 are shown in Fig. 2. It is observed that: (a) Our method leads to consistent accuracy improvements at all the ratios of labeled data. (b) The benefits of our method become larger when there is less labeled data, which indicates that our method is effective in using the unlabeled data to optimize the intermediate layers.

Knowledge Distillation. Knowledge distillation experiments on ImageNet and CIFAR are shown in Table 7 and Table 8, respectively. It is observed that: (a) Our method achieves 5.07% and 2.20% top-1 accuracy improvements on CIFAR100 and CIFAR10 on average, outperforming the second-best KD method by 1.40% and 0.87% on the two datasets, respectively. (b) Similar results can also be observed in the ImageNet experiments. Our method leads to 4.02%/2.55%, 3.48%/2.14% and 3.38%/2.22% top-1/top-5 accuracy improvements on ResNet18, ResNet34 and ResNet50, respectively. On average, it outperforms the baseline and the second-best method by 3.62% and 1.76% top-1 accuracy, respectively.

5 Discussion

5.1 Contrastive Deep Supervision as a Regularizer

Loss Curves. Regularization methods in deep learning are usually utilized to avoid overfitting by introducing additional penalties or losses. In this subsection, we show that the contrastive learning loss that our method introduces at the intermediate layers works as a regularizer. Figure 4 shows the cross entropy loss between predictions and labels during training for two ResNet18 models trained with the standard method and with our method, respectively. It is observed that for most epochs, the baseline model has a lower cross entropy loss than our model. When both models have converged (epochs 280–300), the baseline model has only 0.005 loss while our model still has 0.025 loss. These observations indicate severe overfitting in the baseline model, while contrastive deep supervision alleviates overfitting and thus improves accuracy.

Uncertainty Estimation. Besides, a comparison of the expected calibration error (ECE) of models trained with the standard method and with our method is shown in Fig. 5. A lower ECE indicates that the predicted probabilities of a neural network better represent the true correctness likelihood [17]. It is observed that, compared with the baseline model, our method leads to a lower ECE, indicating better uncertainty estimation and interpretability.
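For reference, ECE can be computed from a model's confidences (the maximum softmax probabilities) and its argmax predictions as in the standard binning definition of [17]; the sketch below is a generic implementation, not the authors' evaluation script.

```python
import torch

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE [17]: partition samples into confidence bins and accumulate the
    |accuracy - confidence| gap of each bin, weighted by the bin's size."""
    correct = predictions.eq(labels).float()
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        weight = in_bin.float().mean()
        if weight > 0:
            gap = (correct[in_bin].mean() - confidences[in_bin].mean()).abs()
            ece += (weight * gap).item()
    return ece
```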

Table 9. Comparison between our method and contrastive learning methods with ResNet50 on ImageNet. Baseline\(^{1-2}\): two baselines trained with and without AutoAugment [12]. SupCon\(^{1-3}\): three models trained by supervised contrastive learning with different hyper-parameters. BYOL: ResNet50 pre-trained without supervision for 1000 epochs and then finetuned with supervision. BYOL+DSN: ResNet50 pre-trained with BYOL and then finetuned with deep supervision. Ours\(^{1,3}\): ResNet50 trained with contrastive deep supervision in different settings. Ours\(^2\): ResNet50 trained with contrastive deep supervision + knowledge distillation.

5.2 Comparison with Contrastive Learning

A comparison between our method and two “pretrain & finetune” contrastive learning methods is shown in Table 9. It is observed that, without a large batch size or the advanced data augmentation policy (AutoAugment), contrastive deep supervision (Ours\(^1\)) achieves only 0.4% lower accuracy than SupCon\(^3\) with only 25% of the training time. Besides, with the same training time and data augmentation, contrastive deep supervision (Ours\(^3\)) achieves 1.1% and 1.6% higher accuracy than SupCon\(^3\) and BYOL+DSN, respectively, which demonstrates the advantage of our method over traditional contrastive learning methods.

5.3 Ablation Study on Knowledge Distillation

The main difference between naïve feature distillation and feature distillation with contrastive deep supervision is what to distill: naïve feature distillation distills the backbone features, while our method distills the embeddings learned by contrastive deep supervision. To further demonstrate the effectiveness of the latter, we trained a ResNet50 model on CIFAR100 with both contrastive deep supervision and distillation on the backbone features. This model achieves 82.26% accuracy, which is 1.27% lower than distilling the embeddings. These results demonstrate that distilling the embeddings learned by contrastive deep supervision is more beneficial.

Fig. 4.

Comparison of the cross entropy loss between predictions and labels during training. Note that our method also leads to better accuracy (80.84% vs. 77.45%).

Fig. 5.

Comparison of reliability diagrams. “GAP” indicates the difference between confidence and accuracy. “Output” indicates accuracy. ECE: Expected Calibration Error (lower is better).

5.4 Sensitivity Study

Where to Apply Projection Heads. We study the influence of the position of the projection heads with the following four schemes: (1) uniform scheme - applying projection heads at different depths uniformly; (2) downsampling scheme - applying projection heads to the layers before downsampling; (3) shallow scheme - applying projection heads to only the shallower layers; (4) deep scheme - applying projection heads to only the deeper layers. Experimental results on CIFAR100 with ResNet50 show that the four schemes achieve 81.23%, 81.31%, 81.07% and 80.99% accuracy, respectively. Both the uniform and downsampling schemes lead to excellent performance, indicating that our method is not sensitive to where the projection heads are applied.

The Number of Projection Heads. We study the influence of the number of projection heads in Fig. 3. It is observed that with fewer than five projection heads, adding more projection heads tends to achieve better performance. The fifth projection head does not lead to further accuracy improvement.

6 Conclusion

This paper proposes contrastive deep supervision, a novel training methodology that directly optimizes the intermediate layers of deep neural networks with contrastive learning. It enables neural networks to learn better visual representations without additional computation or storage at inference. Experiments on nine datasets with eleven neural networks have demonstrated its effectiveness in general image classification, fine-grained image classification and object detection under traditional supervised learning, semi-supervised learning and knowledge distillation. It outperforms previous deep supervision methods, knowledge distillation methods and contrastive learning methods by a clear margin. Besides, we show that contrastive deep supervision works as a regularizer that prevents models from overfitting and thus leads to better uncertainty estimation.