
1 Introduction

Deep learning methods, especially convolutional neural networks (CNNs), have achieved remarkable performance in many fields, such as computer vision, natural language processing and speech recognition. However, this extraordinary performance comes at the expense of high computational and storage demands. Although the power of modern GPUs has skyrocketed in recent years, these high costs are still prohibitive for deploying CNNs in latency-critical applications such as self-driving cars and augmented reality.

Recently, a significant amount of work on accelerating CNNs at inference time has been proposed. Methods that focus on accelerating pre-trained models include direct pruning [9, 13, 24, 27, 29], low-rank decomposition [7, 20, 49], and quantization [6, 31, 44]. Another stream of research trains small and efficient networks directly, using techniques such as knowledge distillation [14, 33, 35], novel architecture design [15, 18] and sparse learning [1, 25, 43, 50]. In sparse learning, prior work [25] pursued the sparsity of weights. However, non-structured sparsity only produces random connectivity and can hardly utilize current off-the-shelf hardware such as GPUs to accelerate model inference in wall clock time. To address this problem, recent methods [1, 43, 50] proposed to apply group sparsity to retain a hardware-friendly CNN structure.

In this paper, we take another view to jointly learn and prune a CNN. First, we introduce a new type of parameter – scaling factors – which scale the outputs of some specific structures (e.g., neurons, groups or blocks) in CNNs. These scaling factors endow the CNN with more flexibility at the cost of very few additional parameters. Then, we add sparsity regularizations on these scaling factors to push them to zero during training. Finally, we can safely remove the structures corresponding to zero scaling factors and obtain a pruned model. Compared with direct pruning methods, this method is data-driven and fully end-to-end. In other words, the network can select its unique configuration based on the difficulty and needs of each task. Moreover, the model selection is accomplished jointly with the normal training of CNNs. We do not require extra fine-tuning or multi-stage optimization, and the method introduces only minor cost during training.

To summarize, our contributions are threefold:

  • We propose a unified framework for model training and pruning in CNNs. Particularly, we formulate it as a joint sparse regularized optimization problem by introducing scaling factors and corresponding sparse regularizations on certain structures of CNNs.

  • We utilize a modified stochastic Accelerated Proximal Gradient (APG) method to jointly optimize the weights of CNNs and the scaling factors with sparsity regularizations. Compared with previous methods that force sparsity in heuristic ways, our method enjoys more stable convergence and better results without fine-tuning and multi-stage optimization.

  • We test our proposed method on several state-of-the-art networks – PeleeNet, VGG, ResNet and ResNeXt – to prune neurons, residual blocks and groups, and can thereby adaptively adjust network depth and width. We show very promising acceleration performance on CIFAR and the large-scale ILSVRC 2012 image classification dataset.

2 Related Works

Network pruning was pioneered in the early development of neural networks. In Optimal Brain Damage [23] and Optimal Brain Surgeon [10], unimportant connections are removed based on the Hessian matrix derived from the loss function. Recently, Han et al. [9] brought back this idea by pruning the weights whose absolute values are smaller than a given threshold. This approach requires iterative pruning and fine-tuning, which is very time-consuming. To tackle this problem, Guo et al. [8] proposed dynamic network surgery to prune parameters during training. However, the irregular nature of sparse weights means these methods only yield effective compression but not faster inference in terms of wall clock time. To tackle this issue, several works pruned neurons directly [16, 24, 29] by evaluating neuron importance with specific criteria. These methods all focus on removing the neurons whose removal affects the final prediction least. On the other hand, the diversity of the neurons to be kept is also an important factor to consider [28]. More recently, [27] and [13] formulated pruning as an optimization problem. They first select the most representative neurons and then minimize the reconstruction error to recover the accuracy of the pruned networks. While neuron-level pruning can achieve practical acceleration with moderate accuracy loss, it is still hard to implement in an end-to-end manner without iterative pruning and retraining. Very recently, Liu et al. [26] used a technique similar to ours to prune neurons. They sparsify the scaling parameters of batch normalization (BN) [19] to select channels. Ye et al. [48] also adopted this idea for neuron pruning. As discussed later, both of their works can be seen as special cases of our framework.

Model structure learning for deep learning models has attracted increasing attention recently. Several methods have been explored to learn CNN architectures without handcrafted design [2, 32, 51]. One stream explores the design space by reinforcement learning [2, 51] or genetic algorithms [32, 46]. Another stream utilizes sparse learning or binary optimization. [1, 50] added group sparsity regularizations on the weights of neurons and sparsified them during training. Lately, Wen et al. [43] proposed a more general approach, which applies group sparsity to multiple structures of networks, including filter shapes, channels and layers in skip connections. Srinivas et al. [38] introduced a new trainable activation function, tri-state ReLU, into deep networks. They pruned neurons by forcing the parameters of tri-state ReLU to be binary.

CNNs with skip connections have become the mainstream of modern network design, since skip connections mitigate the gradient vanishing/exploding issue in ultra-deep networks [11, 39]. Among these works, ResNet and its variants [12, 47] have attracted the most attention because of their simple design principle and state-of-the-art performance. Recently, Veit et al. [41] interpreted ResNet as an exponential ensemble of many shallow networks. They found that removing a single residual block has only a minor impact on performance; however, deleting more and more residual blocks impairs accuracy significantly. Therefore, accelerating this state-of-the-art network architecture is still a challenging problem. In this paper, we propose a data-driven method to learn the architecture of this kind of network. By scaling and pruning residual blocks during training, our method can produce a more compact ResNet with faster inference speed and even better performance.

3 Proposed Method

Notations. Consider the weights of a convolutional layer l in an L-layer CNN as a 4-dimensional tensor \(\mathbf{W}^{l} \in {\mathbb R}^{N_l\times M_l\times H_l\times W_l}\), where \(N_l\) is the number of output channels, \(M_l\) represents the number of input channels, and \(H_l\) and \(W_l\) are the height and width of a 2-dimensional kernel. We use \(\mathbf{W}^{l}_{k}\) to denote the weights of the k-th neuron in layer l. The scaling factors are represented as a 1-dimensional vector \(\varvec{\lambda }\in {\mathbb R}^S\), where S is the number of structures we consider pruning. \(\lambda ^i\) refers to the i-th value of \(\varvec{\lambda }\). We denote the soft-threshold operator as \({\mathcal S}_\alpha (\mathbf{z})_i=\text {sign}(z_i)(|z_i|-\alpha )_{+}\).
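For concreteness, the following is a minimal NumPy sketch of this soft-threshold operator, the proximal operator of the \(l_1\)-norm used throughout this section; it is illustrative only and not part of the original implementation.

```python
import numpy as np

# Sketch of the soft-threshold operator S_alpha(z)_i = sign(z_i) * (|z_i| - alpha)_+,
# i.e. the proximal operator of the l1-norm with threshold alpha.
def soft_threshold(z, alpha):
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

print(soft_threshold(np.array([-0.3, 0.05, 0.8]), alpha=0.1))
# [-0.2  0.   0.7]  -- entries with magnitude below alpha become exactly zero
```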

Fig. 1. The network architecture of our method. \({\mathcal F}\) represents a residual function. Gray blocks, groups and neurons are inactive and can be pruned since their corresponding scaling factors are 0.

3.1 Sparse Structure Selection

Given a training set consisting of N sample-label pairs \(\{\mathbf{x}_i,\mathbf{y}_i\}_{1\le i\le N}\), an L-layer CNN can be represented as a function \({\mathcal C}(\mathbf{x}_i,\mathbf{W})\), where \(\mathbf{W}=\{\mathbf{W}^{l}\}_{1\le l\le L}\) denotes the collection of all weights in the CNN. \(\mathbf{W}\) is learned by solving an optimization problem of the form:

$$\begin{aligned} \min _\mathbf{W}{\frac{1}{N}\sum _{i=1}^{N}{{\mathcal L}(\mathbf{y}_i,{\mathcal C}(\mathbf{x}_i,\mathbf{W}))}+{\mathcal R}(\mathbf{W})}, \end{aligned}$$
(1)

where \({\mathcal L}(\mathbf{y}_i,{\mathcal C}(\mathbf{x}_i,\mathbf{W}))\) is the loss on the sample \(\mathbf{x}_i\), and \({\mathcal R}(\cdot )\) is a non-structured regularization applied to every weight, e.g. the \(l_2\)-norm used as weight decay.

Prior sparsity-based model structure learning works [1, 50] tried to learn the number of neurons in a CNN. To achieve this goal, they added a group sparsity regularization \({\mathcal R}_g(\cdot )\) on \(\mathbf{W}^{l}_k\) to Eq. 1, and forced entire \(\mathbf{W}^{l}_k\) to zero during training. Another concurrent work by Wen et al. [43] adopted a similar method but on multiple different structures. These ideas are straightforward, but the implementations are nontrivial. First, the optimization is difficult since there are several constraints on the weights simultaneously, including weight decay and group sparsity; an improper optimization technique may result in slow convergence and inferior results. Consequently, there has been no successful attempt to directly apply these methods to large-scale applications with complicated modern network architectures.
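As a rough illustration of such a regularizer, the following is a minimal sketch of a group-lasso style penalty over per-neuron weight groups; it is our own simplified rendering, not the exact formulation used in [1, 43, 50].

```python
import numpy as np

# Group-lasso style sketch: sum of l2-norms over per-neuron weight groups.
# Minimizing this pushes entire groups W^l_k toward zero together, rather
# than individual weights (simplified form, for illustration only).
def group_sparsity(W, gamma_g=1e-4):
    # W: (N_l, M_l, H_l, W_l) weights of one convolutional layer,
    # with one group per output neuron (first axis).
    return gamma_g * sum(np.linalg.norm(W[k]) for k in range(W.shape[0]))

W = np.random.randn(16, 8, 3, 3)
print(group_sparsity(W))
```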

In this paper, we address the structure learning problem in a simpler and more effective way. Instead of directly pushing weights in the same group to zero, we enforce the output of the group to be zero. To achieve this goal, we introduce a new type of parameter – scaling factors \(\varvec{\lambda }\) – to scale the outputs of some specific structures (neurons, groups or blocks), and add a sparsity constraint on \(\varvec{\lambda }\) during training. Our goal is to obtain a sparse \(\varvec{\lambda }\): if \(\lambda ^i=0\), we can safely remove the corresponding structure since its outputs make no contribution to subsequent computation. Figure 1 illustrates our framework.

Formally, the objective function of our proposed method can be formulated as:

$$\begin{aligned} \min _{\mathbf{W},\varvec{\lambda }}{\frac{1}{N}\sum _{i=1}^{N}{{\mathcal L}(\mathbf{y}_i,{\mathcal C}(\mathbf{x}_i,\mathbf{W},\varvec{\lambda }))}+{\mathcal R}(\mathbf{W})+{\mathcal R}_s(\varvec{\lambda })}, \end{aligned}$$
(2)

where \({\mathcal R}_s(\cdot )\) is a sparsity regularization on \(\varvec{\lambda }\) with weight \(\gamma \). In this work, we consider its most commonly used convex relaxation, the \(l_1\)-norm, defined as \(\gamma \Vert \varvec{\lambda }\Vert _1\).

We update \(\mathbf{W}\) by Stochastic Gradient Descent (SGD) with momentum or its variants. For \(\varvec{\lambda }\), we adopt the Accelerated Proximal Gradient (APG) method [30]. For better illustration, we shorten \(\frac{1}{N}\sum _{i=1}^{N}{{\mathcal L}(\mathbf{y}_i,{\mathcal C}(\mathbf{x}_i,\varvec{\lambda }))}\) to \({\mathcal G}(\varvec{\lambda })\), and reformulate the optimization of \(\varvec{\lambda }\) as:

$$\begin{aligned} \min _{\varvec{\lambda }}{{\mathcal G}(\varvec{\lambda })+{\mathcal R}_s(\varvec{\lambda })}. \end{aligned}$$
(3)

Then we can update \(\varvec{\lambda }\) by APG:

$$\begin{aligned} \mathbf{d}_{(t)}&=\varvec{\lambda }_{(t-1)}+\frac{t-2}{t+1}(\varvec{\lambda }_{(t-1)}-\varvec{\lambda }_{(t-2)})\end{aligned}$$
(4)
$$\begin{aligned} \mathbf{z}_{(t)}&=\mathbf{d}_{(t)}-\eta _{(t)}\nabla {\mathcal G}(\mathbf{d}_{(t)}) \end{aligned}$$
(5)
$$\begin{aligned} \varvec{\lambda }_{(t)}&=\mathbf {prox}_{\eta _{(t)} {\mathcal R}_s}(\mathbf{z}_{(t)}), \end{aligned}$$
(6)

where \(\eta _{(t)}\) is the gradient step size at iteration t and \(\mathbf {prox}_{\eta {\mathcal R}_s}(\cdot )={\mathcal S}_{\eta \gamma }(\cdot )\) since \({\mathcal R}_s(\varvec{\lambda })=\gamma \Vert \varvec{\lambda }\Vert _1\). However, this formulation is not friendly for deep learning: in addition to the pass for updating \(\mathbf{W}\), we would need to obtain \(\nabla {\mathcal G}(\mathbf{d}_{(t)})\) by an extra forward-backward computation, which is computationally expensive for deep neural networks. Thus, following the derivation in [40], we reformulate APG as a momentum-based method:

$$\begin{aligned} \mathbf{z}_{(t)}&=\varvec{\lambda }_{(t-1)}+\mu _{(t-1)}\mathbf{v}_{(t-1)}\nonumber \\&~~~~-\eta _{(t)}\nabla {\mathcal G}(\varvec{\lambda }_{(t-1)}+\mu _{(t-1)}\mathbf{v}_{(t-1)}) \end{aligned}$$
(7)
$$\begin{aligned} \mathbf{v}_{(t)}&={\mathcal S}_{\eta _{(t)} \gamma }(\mathbf{z}_{(t)})-\varvec{\lambda }_{(t-1)} \end{aligned}$$
(8)
$$\begin{aligned} \varvec{\lambda }_{(t)}&=\varvec{\lambda }_{(t-1)}+\mathbf{v}_{(t)}, \end{aligned}$$
(9)

where we define \(\mathbf{v}_{(t-1)} = \varvec{\lambda }_{(t-1)} - \varvec{\lambda }_{(t-2)}\) and \(\mu _{(t-1)} = \frac{t-2}{t+1}\). This formulation is similar to the modified Nesterov Accelerated Gradient (NAG) in [40] except for the update of \(\mathbf{v}_{(t)}\). Furthermore, we simplify the update of \(\varvec{\lambda }\) by replacing \(\varvec{\lambda }_{(t-1)}\) with \(\varvec{\lambda }'_{(t-1)}=\varvec{\lambda }_{(t-1)}+\mu _{(t-1)}\mathbf{v}_{(t-1)}\), following the modification of NAG in [4] which has been widely used in practical deep learning frameworks [5]. The updates of our new parameters \(\varvec{\lambda }'_{(t)}\) become:

$$\begin{aligned} \mathbf{z}_{(t)}&=\varvec{\lambda }'_{(t-1)}-\eta _{(t)}\nabla {\mathcal G}(\varvec{\lambda }'_{(t-1)}) \end{aligned}$$
(10)
$$\begin{aligned} \mathbf{v}_{(t)}&={\mathcal S}_{\eta _{(t)} \gamma }(\mathbf{z}_{(t)})-\varvec{\lambda }'_{(t-1)}+\mu _{(t-1)}\mathbf{v}_{(t-1)} \end{aligned}$$
(11)
$$\begin{aligned} \varvec{\lambda }'_{(t)}&={\mathcal S}_{\eta _{(t)} \gamma }(\mathbf{z}_{(t)})+\mu _{(t)}\mathbf{v}_{(t)} \end{aligned}$$
(12)

In practice, we follow a stochastic approach with mini-batches and fix the momentum \(\mu \) to a constant value. Both \(\mathbf{W}\) and \(\varvec{\lambda }\) are updated in each iteration.

After our modification, the implementation of APG is very simple and effective. In the following, we show that it can be implemented in only ten lines of MXNet [5] code.

MXNet implementation of APG
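The original ten-line MXNet listing is not reproduced here; the following is a minimal sketch of the same update rule, Eqs. (10)-(12), written with MXNet NDArray operations. The variable names are our own, and the stochastic gradient `grad` of \({\mathcal G}\) with respect to \(\varvec{\lambda }\) is assumed to come from the normal backward pass.

```python
import mxnet as mx

def soft_threshold(z, alpha):
    # S_alpha(z) = sign(z) * max(|z| - alpha, 0)
    return mx.nd.sign(z) * mx.nd.relu(mx.nd.abs(z) - alpha)

def apg_update(lam, v, grad, eta, gamma, mu):
    # lam: current lambda' values, v: momentum buffer, grad: stochastic gradient of G
    z = soft_threshold(lam - eta * grad, eta * gamma)  # Eq. (10) + proximal step
    v_new = z - lam + mu * v                           # Eq. (11)
    lam_new = z + mu * v_new                           # Eq. (12)
    return lam_new, v_new

# Toy usage: one update step for four scaling factors.
lam, v = mx.nd.ones((4,)), mx.nd.zeros((4,))
grad = mx.nd.array([0.5, -0.2, 0.0, 5.0])              # hypothetical gradient values
lam, v = apg_update(lam, v, grad, eta=0.1, gamma=0.1, mu=0.9)
```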

In our framework, we add scaling factors to three different CNN micro-structures – neurons, groups and blocks – to yield flexible structure selection. We introduce these three cases in the following. Note that for networks with BN, we add the scaling factors after BN to prevent the influence of the bias parameters.

3.2 Neuron Selection

We introduce scaling factors for the outputs of channels to prune neurons. After training, removing the filters with zero scaling factors results in a more compact network. A recent work by Liu et al. [26] adopted a similar idea for network slimming. They absorb the scaling parameters into the parameters of batch normalization and solve the optimization by subgradient descent. During training, scaling parameters whose absolute values are lower than a threshold are set to 0. Compared with [26], our method is more general and effective. Firstly, introducing scaling factors is more universal than reusing BN parameters. On one hand, some networks have no batch normalization layers, such as AlexNet [22] and VGG [37]; on the other hand, when we fine-tune pre-trained models on object detection or semantic segmentation tasks, the parameters of batch normalization are usually fixed due to the small batch size. Secondly, the optimization of [26] is heuristic and needs iterative pruning and retraining. In contrast, our optimization is more stable and end-to-end. Above all, [26] can be seen as a special case of our method, and similarly, so can [48]. The difference between Ye et al. [48] and Liu et al. [26] is that Ye et al. adopted ISTA [3] to optimize the scaling factors. We compare these different optimization methods in our experiments.
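As a simple illustration (a sketch with hypothetical shapes, not our actual MXNet layer), the channel selection amounts to multiplying each output channel of a layer by its scaling factor:

```python
import numpy as np

# Sketch: each output channel k of a (post-BN) feature map is multiplied by
# its scaling factor lam[k]; channels whose factor reaches exactly zero
# contribute nothing and can be removed after training.
def scale_channels(feat, lam):
    # feat: (batch, C, H, W) feature map, lam: (C,) scaling factors
    return feat * lam.reshape(1, -1, 1, 1)

feat = np.random.randn(2, 4, 8, 8)
lam = np.array([1.0, 0.0, 0.7, 0.0])    # channels 1 and 3 are inactive
pruned = scale_channels(feat, lam)
assert np.allclose(pruned[:, 1], 0.0) and np.allclose(pruned[:, 3], 0.0)
```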

3.3 Block Selection

The structure of CNNs with skip connections allows us to skip the computation of specific layers without cutting off the information flow in the network. By stacking residual blocks, ResNet [11, 12] can easily exploit the advantage of very deep networks. Formally, a residual block with identity mapping can be formulated as:

$$\begin{aligned} \mathbf{r}^{i+1}=\mathbf{r}^{i}+{\mathcal F}^{i}(\mathbf{r}^i, \mathbf{W}^i), \end{aligned}$$
(13)

where \(\mathbf{r}^{i}\) and \(\mathbf{r}^{i+1}\) are the input and output of the i-th block, \({\mathcal F}^i\) is a residual function and \(\mathbf{W}^i\) are the parameters of the block.

To prune blocks, we add a scaling factor after each residual block. In our framework, Eq. 13 then becomes:

$$\begin{aligned} \mathbf{r}^{i+1}=\mathbf{r}^{i}+\lambda ^i{\mathcal F}^{i}(\mathbf{r}^i, \mathbf{W}^i). \end{aligned}$$
(14)

As shown in Fig. 1, after optimization we obtain a sparse \(\varvec{\lambda }\). A residual block whose scaling factor is 0 is pruned entirely, so we can learn a much shallower ResNet. A prior work that also adds scaling factors to the residuals in ResNet is Weighted Residual Networks [36]. Though sharing many similarities, the motivations behind these two works are different. Their work focuses on how to train ultra-deep ResNets to get better results with the help of scaling factors; in particular, they increase the depth from 100+ to 1000+ layers. Our method, in contrast, aims to decrease the depth of ResNet: we use the scaling factors and sparse regularizations to sparsify the outputs of residual blocks.
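A minimal sketch of Eq. (14), with a hypothetical stand-in for the residual function (the real \({\mathcal F}^i\) consists of convolutional layers), makes the effect of a zero scaling factor explicit:

```python
import numpy as np

# Sketch of Eq. (14): the residual branch is scaled by a single factor lam_i
# before being added to the identity path. With lam_i = 0 the block reduces
# to an identity mapping and can be pruned entirely.
def scaled_residual_block(r, residual_fn, lam_i):
    return r + lam_i * residual_fn(r)

r = np.random.randn(2, 16, 8, 8)
residual_fn = lambda x: 0.1 * x          # hypothetical stand-in for F^i
out = scaled_residual_block(r, residual_fn, lam_i=0.0)
assert np.allclose(out, r)               # zero factor: block output equals its input
```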

3.4 Group Selection

Recently, Xie et al. introduced a new dimension – cardinality – into ResNets and proposed ResNeXt [47]. Formally, they presented aggregated transformations as:

$$\begin{aligned} {\mathcal A}(\mathbf{x})=\sum _{i=1}^{C}{\mathcal T}^i(\mathbf{x},\mathbf{W}^i), \end{aligned}$$
(15)

where \({\mathcal T}^i(\mathbf{x})\) represents a transformation with parameters \(\mathbf{W}^i\), and C is the cardinality of the set of \({\mathcal T}^i(\mathbf{x})\) to be aggregated. In practice, grouped convolution is used to ease the implementation of aggregated transformations. In our framework, we therefore refer to C as the number of groups, and formulate a weighted \({\mathcal A}(\mathbf{x})\) as:

$$\begin{aligned} {\mathcal A}(\mathbf{x})=\sum _{i=1}^{C}\lambda ^i{\mathcal T}^i(\mathbf{x}, \mathbf{W}^i) \end{aligned}$$
(16)

After training, several basic cardinalities are chosen by the sparse \(\varvec{\lambda }\) to form the final transformations. The inactive groups with zero scaling factors can then be safely removed, as shown in Fig. 1. Note that neuron pruning can also be seen as a special case of group pruning in which each group contains only one neuron. Furthermore, we can combine block pruning and group pruning to learn even more flexible network structures.
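For illustration, here is a minimal sketch of the weighted aggregation in Eq. (16) with toy stand-in transformations (the real \({\mathcal T}^i\) are grouped convolutions):

```python
import numpy as np

# Sketch of Eq. (16): each of the C branches (groups) is weighted by lam[i];
# a group with lam[i] = 0 drops out of the aggregation and can be removed.
def weighted_aggregation(x, transforms, lam):
    # transforms: list of C branch functions T^i, lam: (C,) scaling factors
    return sum(l * t(x) for l, t in zip(lam, transforms))

x = np.random.randn(2, 8)
transforms = [lambda x, s=s: s * x for s in (0.5, 1.0, 2.0)]  # 3 toy branches
lam = np.array([1.0, 0.0, 0.3])          # the second group is pruned
y = weighted_aggregation(x, transforms, lam)
```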

4 Experiments

In this section, we evaluate the effectiveness of our method on three standard datasets: CIFAR-10, CIFAR-100 [21] and ImageNet LSVRC 2012 [34]. For neuron pruning, we adopt VGG16 [37], a classical plain network, to validate our method. For blocks and groups, we use two state-of-the-art networks, ResNet [12] and ResNeXt [47], respectively. To prove the practicability of our method, we further experiment on a very lightweight network, PeleeNet [42].

Fig. 2. Error vs. number of parameters and FLOPs after SSS training for VGG on CIFAR-10 and CIFAR-100 datasets.

Fig. 3. Error vs. number of parameters and FLOPs after SSS training for ResNet-20 and ResNet-164 on CIFAR-10 and CIFAR-100 datasets.

Fig. 4. Error vs. number of parameters and FLOPs with SSS training for ResNeXt-20 and ResNeXt-164 on CIFAR-10 and CIFAR-100 datasets.

Fig. 5. Top-1 error vs. number of parameters and FLOPs for our SSS models and original ResNets on the ImageNet validation set.

For optimization, we adopt NAG [4, 40] and our modified APG to update the weights \(\mathbf{W}\) and the scaling factors \(\varvec{\lambda }\), respectively. We set the weight decay of \(\mathbf{W}\) to 0.0001 and fix the momentum to 0.9 for both \(\mathbf{W}\) and \(\varvec{\lambda }\). The weights are initialized as in [11] and all scaling factors are initialized to 1. All experiments are conducted in MXNet [5].

4.1 CIFAR

We start with the CIFAR datasets to evaluate our method. The CIFAR-10 dataset consists of 50K training and 10K testing RGB images in 10 classes. CIFAR-100 is similar to CIFAR-10, except that it has 100 classes. As suggested in [11], the input image is a \(32\times 32\) crop randomly sampled from a zero-padded \(40\times 40\) image or its horizontal flip. The models in our experiments are trained with a mini-batch size of 64 on a single GPU. We start from a learning rate of 0.1 and train the models for 240 epochs. The learning rate is divided by 10 at the 120-th, 160-th and 200-th epochs.

VGG: The baseline network is a modified VGG16 with BN [19]. We remove fc6 and fc7 and use only one fully-connected layer for classification. We add scaling factors after every batch normalization layer. Figure 2 shows the results of our method. Both parameters and floating-point operations (FLOPs) are reported. Our method can save about 30% of parameters and 30% - 50% of the computational cost with minor loss of performance.

ResNet: To learn the number of residual blocks, we use ResNet-20 and ResNet-164 [12] as our baseline networks. ResNet-20 consists of 9 residual blocks, each with 2 convolutional layers, while ResNet-164 has 54 blocks, each with a bottleneck structure. Figure 3 summarizes our results. Our SSS achieves better performance than the baseline models with similar parameters and FLOPs. For ResNet-164, SSS yields a 2.5x speedup with about \(2\%\) performance loss on both CIFAR-10 and CIFAR-100. After optimization, we found that the blocks in early stages are pruned first. This observation coincides with the common design principle that a network should spend more of its budget in later stages, since more diverse and complicated patterns may emerge as the receptive field increases.

Table 1. Network architectures of ResNet-50 and our pruned ResNets for ImageNet. \(\surd \) represents that the corresponding block is kept while \(\times \) denotes that the block is pruned.

ResNeXt: We also test our method on ResNeXt [47]. We choose ResNeXt-20 and ResNeXt-164 as our base networks. Both networks have bottleneck structures with 32 groups in their residual blocks. For ResNeXt-20, we focus on group pruning since it has only 6 residual blocks. For ResNeXt-164, we add sparsity on both groups and blocks. Figure 4 shows our experimental results. Both group pruning and block pruning show a good trade-off between parameters and performance, especially on ResNeXt-164. The combination of group and block pruning is extremely effective on CIFAR-10: our SSS saves about \(60\%\) of FLOPs while achieving \(1\%\) higher accuracy. In ResNeXt-20, groups in the first and second blocks are pruned first. Similarly, in ResNeXt-164, groups in the shallow residual blocks are pruned the most.

Table 2. Results on the ImageNet dataset. Both top-1 and top-5 validation errors (single crop) are reported. The number of parameters and FLOPs for inference of the different models are also shown. Here, M/B means million/billion (\(10^6/10^9\)), respectively.

4.2 ImageNet LSVRC 2012

To further demonstrate the effectiveness of our method on large-scale CNNs, we conduct more experiments on the ImageNet LSVRC 2012 classification task with VGG16 [37], ResNet-50 [12] and ResNeXt-50 (32 \(\times \) 4d) [47]. We perform data augmentation based on the publicly available implementation of “fb.resnet”. The mini-batch size is 128 on 4 GPUs for VGG16 and ResNet-50, and 256 on 8 GPUs for ResNeXt-50. The optimization and initialization are similar to those in the CIFAR experiments. We train the models for 100 epochs. The learning rate is set to an initial value of 0.1 and then divided by 10 at the 30-th, 60-th and 90-th epochs. All results on the ImageNet dataset are summarized in Table 2.

VGG16: In our VGG16 pruning experiments, we find that pruning all convolutional layers does not yield promising results. This is because in VGG16 the computational cost in terms of FLOPs is not evenly distributed across layers: the conv5 layers account for 2.77 billion FLOPs in total, which is only 9% of the whole network (30.97 billion). Thus, we consider that the sparse penalty should be adjusted according to the computational cost of different layers. A similar idea has been adopted in [29] and [13]: [29] introduces a FLOPs regularization into the pruning criteria, and He et al. [13] do not prune the conv5 layers in their VGG16 experiments. Following [13], we set the sparse penalty of conv5 to 0 and only prune conv1 to conv4. The results can be found in Table 2. The pruned model saves about 75% of FLOPs, while the parameter saving is negligible. This is because the fully-connected layers contain a large number of parameters (123 million in the original VGG16), and we do not prune them for a fair comparison with other methods.

ResNet-50: For ResNet-50, we experiment with three different settings of \(\gamma \) to explore the performance of our method in block pruning. For simplicity, we denote the trained models as ResNet-26, ResNet-32 and ResNet-41 according to their depths. Their structures are shown in Table 1. All the pruned models incur some accuracy loss. Compared with the original ResNet-50, ResNet-41 provides a \(15\%\) FLOPs reduction with \(0.7\%\) top-1 accuracy loss, while ResNet-32 saves \(31\%\) of FLOPs with about \(2\%\) top-1 loss. Figure 5 shows the top-1 validation errors of our SSS models and ResNets as a function of the number of parameters and FLOPs. The results reveal that our pruned models perform on par with the original hand-crafted ResNets while requiring fewer parameters and less computational cost. For example, compared with ResNet-34 [12], both our ResNet-41 and ResNet-32 yield better performance with fewer FLOPs.

ResNeXt-50: For ResNeXt-50, we add sparsity constraints on both residual blocks and groups, which results in several pruned models. Table 2 summarizes their performance. The learned ResNeXt-41 yields \(24\%\) top-1 error on the ILSVRC validation set. It achieves results similar to the original ResNet-50, but with half the parameters and more than 20% fewer FLOPs. In ResNeXt-41, three residual blocks in the “conv5” stage are pruned entirely. This pruning result somewhat contradicts the common design of CNNs and is worth studying in depth in the future.

4.3 Pruning Lightweight Network

Adopting lightweight networks, such as MobileNet [15] and ShuffleNet [45], for fast inference is a more effective strategy in practice. To further prove the effectiveness of our method, we apply neuron pruning to PeleeNet [42], a state-of-the-art efficient architecture without separable convolutions. We follow the training settings and hyper-parameters used in [45]. The mini-batch size is 1024 on 8 GPUs and we train for 240 epochs. Table 3 shows the pruning results for PeleeNet. We adopt different settings of \(\gamma \) and obtain three pruned networks. Compared to the baseline, our pruned PeleeNet-A saves about 14% of parameters and FLOPs with only 0.4% top-1 accuracy degradation.

Table 3. Results of PeleeNet on ImageNet dataset

4.4 Comparison with Other Methods

We compare our SSS with other pruning methods, including SSL [43], filter pruning [24], channel pruning [13], ThiNet [27, 29] and [48]. We compare SSL with our method on CIFAR-10 and CIFAR-100; all models are trained from scratch. As shown in Fig. 6, our SSS achieves much better performance than SSL, even SSL with fine-tuning. Table 4 shows the pruning results on the ImageNet LSVRC 2012 dataset. To the best of our knowledge, only a few works have reported ResNet pruning results with FLOPs. Compared with the filter pruning results, our ResNet-32 performs best with the fewest FLOPs. As for channel pruning, with similar FLOPs, our ResNet-32 yields 1.88% lower top-1 error and 1.11% lower top-5 error than the pruned ResNet-50 provided by [13]. Compared with [48], our ResNet-41 achieves about 1% lower top-1 error with a smaller computation budget. We also show a comparison on VGG16. All the methods, including channel pruning, ThiNet and our SSS, achieve significant improvements over [29]. Our VGG16 pruning result is competitive with other state-of-the-art methods.

We further compare our pruned ResNeXt with DenseNet [17] in Table 5. With 14% fewer FLOPs, our ResNeXt-38 achieves 0.2% lower top-5 error than DenseNet-121.

Fig. 6. Error vs. FLOPs for our SSS models and SSL models.

Table 4. Comparison among several state-of-the-art pruning methods on the ResNet and VGG16 networks
Table 5. Comparison between pruned ResNeXt-38 and DenseNet-121

4.5 Choice of Different Optimization Methods

We compare our APG with other optimization methods for optimizing \(\lambda \) in our ImageNet experiments, including the SGD adopted in [26] and the ISTA [3] used in [48]. We adopt ResNet-50 for block pruning and train it from scratch. The sparse penalty \(\gamma \) is set to 0.005 for all optimization methods.

Table 6. Comparison between different optimization methods.

For SGD, since we cannot obtain exactly zero scaling factors during training, an extra hyper-parameter – a hard threshold – is needed for the optimization of \(\lambda \). In our experiment, we set it to 0.0001. After training, we obtain a network denoted ResNet-32-SGD. As shown in Table 6, the performance of our ResNet-32-APG is better than that of ResNet-32-SGD.

For ISTA, we found that the optimization of the network could not converge, because the convergence speed of ISTA for the \(\lambda \) optimization is too slow when training from scratch. Adopting ISTA can achieve reasonable results on the CIFAR datasets. However, on ImageNet, it is hard to make \(\lambda \) sparse with a small \(\gamma \), while a larger \(\gamma \) leads to too many zeros in our experiments. [48] alleviated this problem by fine-tuning from a pretrained model. They also adopted a \(\lambda \)-\(\mathbf{W}\) rescaling trick to obtain a small initialization of \(\lambda \).

Compared to ISTA, our APG can be seen as a modified version of an improved ISTA, namely FISTA [3], which has been proven to converge significantly faster than ISTA. Thus the optimization of our method is effective and stable in both the CIFAR and ImageNet experiments. The results in Table 4 also show the advantage of our APG method over ISTA: the performance of our trained ResNet-41 is better than that of the ResNet-101-pruned model provided by [48].

5 Conclusions

In this paper, we have proposed a data-driven method, Sparse Structure Selection (SSS), to adaptively learn the structure of CNNs. In our framework, the training and pruning of CNNs are formulated as a joint sparse regularized optimization problem. By pushing the scaling factors, which are introduced to scale the outputs of specific structures, toward zero, our method can remove the structures corresponding to zero scaling factors. To solve this challenging optimization problem and adapt it to deep learning models, we modified the Accelerated Proximal Gradient method. In our experiments, we demonstrate very promising pruning results on PeleeNet, VGG, ResNet and ResNeXt. We can adaptively adjust the depth and width of these CNNs based on the budget at hand and the difficulty of each task. We believe these pruning results can further inspire the design of more compact CNNs.

In future work, we plan to apply our method to more applications such as object detection. It would also be interesting to investigate more advanced sparse regularizers such as non-convex relaxations, and to adaptively adjust the penalty based on the complexity of different structures.