Environmental Perception Algorithm for Multi-task Autonomous Driving Based on YOLOv5


Abstract:

Autonomous driving is a popular field of deep learning research. Environmental perception, one of its most important modules, covers object detection, lane detection, and drivable area segmentation; it is an extremely challenging task of far-reaching significance. Traditional deep learning algorithms usually solve only one detection task in environmental perception and therefore cannot meet the need of autonomous driving to perceive multiple environmental factors simultaneously. In this study, YOLOv5 serves as the backbone network and the object detection branch, and it is combined with the real-time semantic segmentation network ENet for lane detection and drivable area segmentation, yielding a multi-task environmental perception algorithm for autonomous driving. During loss calculation, α-IoU is employed to improve regression accuracy and provide better robustness to noise. Experiments show that, on the BDD100K dataset, the proposed architecture outperforms existing multi-task deep learning networks and reaches a speed of 76.3 FPS on a GTX 1080Ti.


##### History
• Received: 2021-12-24
• Revised: 2022-01-24
• Accepted:
• Published online: 2022-05-31
• Publication date: