Abstract
Environmental perception is a key technology for autonomous driving. Because any single sensor has inherent limitations, practical systems typically combine multiple sensors. Multi-sensor fusion, however, raises problems of its own, such as the choice of sensors and of the fusion method. To address these issues, we propose a machine-learning-based fusion sensing system for intelligent vehicles that uses a camera and a radar. First, an object detection algorithm processes the images captured by the camera; the radar data are then preprocessed and transformed into the camera coordinate system, and a multi-layer perceptron (MLP) model is proposed to associate the camera detection results with the radar data. The proposed fusion sensing system was verified by comparative experiments in a real-world environment. The experimental results show that the system effectively fuses camera and radar measurements and obtains accurate, comprehensive information about objects in front of the intelligent vehicle.
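The two steps the abstract names, projecting radar measurements into the image plane and scoring radar-detection pairs with an MLP, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the flat-ground assumption, the 3x4 projection matrix `P`, the feature choice, and the network sizes are all assumptions made for the example.

```python
import numpy as np

def radar_to_image(r, azimuth, P):
    """Project a radar measurement (range r in metres, azimuth in radians)
    into image pixel coordinates via a 3x4 projection matrix P mapping
    homogeneous radar-frame points to the image plane (assumed calibration)."""
    # Radar measures in its own plane; assume the target sits at height 0.
    x, y = r * np.cos(azimuth), r * np.sin(azimuth)
    p = P @ np.array([x, y, 0.0, 1.0])
    return p[:2] / p[2]  # normalise homogeneous coordinates

def mlp_associate(features, W1, b1, W2, b2):
    """Tiny MLP: features -> hidden (ReLU) -> match probability (sigmoid).
    Features could be, e.g., the pixel offset between the projected radar
    point and a camera bounding-box centre, plus radar range and velocity."""
    h = np.maximum(0.0, features @ W1 + b1)   # hidden layer
    z = h @ W2 + b2                           # scalar logit
    return 1.0 / (1.0 + np.exp(-z))           # match probability in (0, 1)

# Demo with an identity-like projection and randomly initialised weights.
P = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]])
u, v = radar_to_image(12.0, 0.1, P)
```

In a trained system the weights would of course be learned from labelled camera-radar pairs; here they only demonstrate the forward pass.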
Foundation item
the National Natural Science Foundation of China (No. U1764264/61873165), and the Shanghai Automotive Industry Science and Technology Development Foundation (No. 1733/1807)
Cite this article
Yao, T., Wang, C. & Qian, Y. Camera-Radar Fusion Sensing System Based on Multi-Layer Perceptron. J. Shanghai Jiaotong Univ. (Sci.) 26, 561–568 (2021). https://doi.org/10.1007/s12204-021-2345-x