Abstract
Intelligent transportation and autonomous driving systems place urgent demands on high-performance object detection techniques for traffic scenes. This paper proposes YOLO-VSF, an improved object detection model built on YOLOv4, a representative high-performing member of the YOLO series of detectors. The main improvements are as follows: the CSPDarknet53 backbone feature extraction network of YOLOv4 is replaced with VGG16 to strengthen feature extraction; the SENet attention mechanism is incorporated to improve the representation of salient and correlated features; and Focal Loss is integrated into the loss function to overcome the sample imbalance problem. In addition, the detection of small targets is improved by increasing the resolution of the input images. Experimental results show that, on the VanJee traffic image dataset provided by Beijing VanJee Technology Co., Ltd., the proposed YOLO-VSF model achieves a mean average precision (mAP) of 92.21%, an improvement of 3.04 percentage points over YOLOv4, while maintaining the detection speed of the original model. On the UA-DETRAC dataset, the average precision of YOLO-VSF is close to that of the more recent YOLOv7 model, with 1.329 × 10⁷ fewer parameters. The proposed method can support object detection in traffic scenes.
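As a concrete illustration of the attention component named above, the following is a minimal PyTorch sketch of a standard squeeze-and-excitation (SE) block in the style of Hu et al.; the class name, the reduction ratio of 16, and the usage lines are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al.); details assumed."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Bottleneck MLP that learns one importance weight per channel.
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: global average pool -> (b, c)
        s = F.relu(self.fc1(s))           # excitation: bottleneck, step 1
        s = torch.sigmoid(self.fc2(s))    # channel weights in (0, 1)
        return x * s.view(b, c, 1, 1)     # reweight each feature map

# The block preserves the feature-map shape, so it can be dropped in
# after any convolutional stage of the backbone or neck:
se = SEBlock(channels=256)
out = se(torch.randn(2, 256, 52, 52))     # out.shape == (2, 256, 52, 52)
```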
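Likewise, the Focal Loss term mentioned in the abstract, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), can be sketched as follows; this is the binary form from Lin et al., with the α = 0.25 and γ = 2 defaults taken from that paper rather than from the authors' reported settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Focal loss (Lin et al.): down-weights easy, well-classified examples."""

    def __init__(self, alpha: float = 0.25, gamma: float = 2.0):
        super().__init__()
        self.alpha = alpha
        self.gamma = gamma

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Per-element binary cross-entropy, i.e. -log(p_t), left unreduced.
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        p = torch.sigmoid(logits)
        p_t = p * targets + (1 - p) * (1 - targets)   # probability of the true class
        alpha_t = self.alpha * targets + (1 - self.alpha) * (1 - targets)
        # (1 - p_t)^gamma -> 0 for easy examples, so hard ones dominate the loss.
        return (alpha_t * (1.0 - p_t) ** self.gamma * bce).mean()

loss_fn = FocalLoss()
logits = torch.randn(8, 3)                        # e.g., 3 object classes
targets = torch.randint(0, 2, (8, 3)).float()
print(loss_fn(logits, targets))
```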
Ethics declarations
Conflict of Interest: The authors declare that they have no conflict of interest.
Additional information
Foundation item: the National Natural Science Foundation of China (No. 62271466), the Beijing Natural Science Foundation (No. 4202025), the Beijing VanJee Technology Co., Ltd. - Beijing Municipal Science and Technology Project (No. Z201100003920003), the Tianjin IoT Technology Enterprise Key Laboratory Research Project (No. VTJ-OT20230209-2), and the Guizhou Provincial Sci-Tech Project (No. zk[2022]-012)
Cite this article
Miao, J., Gong, S., Deng, Y. et al. YOLO-VSF: An Improved YOLO Model by Incorporating Attention Mechanism for Object Detection in Traffic Scenes. J. Shanghai Jiaotong Univ. (Sci.) (2024). https://doi.org/10.1007/s12204-024-2751-y