Abstract: Autonomous driving is a popular field of deep learning research. Environmental perception, one of the most important modules in autonomous driving, includes object detection, lane detection, and drivable area segmentation; it is extremely challenging and of far-reaching significance. Traditional deep learning algorithms usually solve only one perception task and cannot meet the need of autonomous driving to perceive multiple environmental factors simultaneously. In this study, YOLOv5 serves as the backbone network and object detection branch, and is combined with the real-time semantic segmentation network ENet to perform lane detection and drivable area segmentation. This yields a multi-task environmental perception algorithm for autonomous driving. In addition, α-IoU is employed for the loss calculation to improve regression accuracy and provide strong robustness against noise. Experiments on the BDD100K dataset show that the proposed architecture outperforms existing multi-task deep learning networks and reaches 76.3 FPS on a GTX 1080Ti.
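To make the α-IoU loss mentioned above concrete, here is a minimal sketch (not the authors' code) of the power-IoU form L = 1 − IoU^α from the α-IoU literature, for axis-aligned boxes given as (x1, y1, x2, y2); the function names and the default α = 3 are assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def alpha_iou_loss(box_pred, box_gt, alpha=3.0):
    """Basic alpha-IoU loss: 1 - IoU^alpha (alpha = 3 is a common choice).

    Raising IoU to the power alpha > 1 up-weights high-IoU examples,
    which is what sharpens regression accuracy relative to plain IoU loss.
    """
    return 1.0 - iou(box_pred, box_gt) ** alpha
```

For example, a perfect prediction gives a loss of 0, while a box (0, 0, 2, 2) against a ground truth (1, 1, 3, 3) has IoU = 1/7 and an α-IoU loss close to 1, penalizing the poor overlap more sharply than the plain IoU loss would.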