Computer Systems & Applications, 2020, 29(10): 158-166
Vehicle Recognition Algorithm Based on Dense-YOLOv3 (基于Dense-YOLOv3的车型检测模型)
(1. School of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan 030024, China; 2. Department of Computer Science, Xinzhou Teachers University, Xinzhou 034000, China)
Received: March 03, 2020    Revised: March 27, 2020
Abstract: The traditional YOLOv3 network structure shows poor robustness when extracting features from abnormal images, such as over-exposed or dim ones, which leads to a low vehicle-type recognition rate. To address this, a Dense-YOLOv3 model for traffic vehicle detection is proposed. The model integrates the characteristics of the densely connected convolutional network DenseNet with the YOLOv3 network, strengthening the propagation and reuse of vehicle features between convolution layers and improving the network's resistance to overfitting. At the same time, target vehicles are detected at multiple scales and a cross loss function is constructed, realizing multi-target detection of vehicle types. The model was trained and tested on the BIT-Vehicle standard dataset; the experimental results show that the Dense-YOLOv3 vehicle detection model reaches an average precision of 96.57% and a recall of 93.30%, which indicates the effectiveness and practicability of the model for vehicle detection.
Keywords: vehicle type detection; YOLOv3; DenseNet; robustness; average precision; recall
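The abstract describes replacing plain convolution stages in the YOLOv3 backbone with densely connected blocks, so that each layer reuses the feature maps of all preceding layers. As a rough illustration only, the following PyTorch sketch shows what such a DenseNet-style block can look like; the class names, growth rate, and layer count are assumptions made here for illustration, not the authors' released code or exact configuration.

    # Hypothetical sketch of a DenseNet-style block of the kind described in the
    # abstract; hyperparameters are illustrative, not taken from the paper.
    import torch
    import torch.nn as nn

    class DenseLayer(nn.Module):
        # BN -> LeakyReLU -> 3x3 conv producing `growth_rate` new feature maps.
        def __init__(self, in_channels, growth_rate):
            super().__init__()
            self.bn = nn.BatchNorm2d(in_channels)
            self.act = nn.LeakyReLU(0.1)
            self.conv = nn.Conv2d(in_channels, growth_rate,
                                  kernel_size=3, padding=1, bias=False)

        def forward(self, x):
            return self.conv(self.act(self.bn(x)))

    class DenseBlock(nn.Module):
        # Each layer receives the concatenation of the block input and all
        # earlier outputs; this dense connectivity is what strengthens feature
        # propagation and reuse between convolution layers.
        def __init__(self, in_channels, growth_rate=32, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList(
                [DenseLayer(in_channels + i * growth_rate, growth_rate)
                 for i in range(num_layers)]
            )

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                features.append(layer(torch.cat(features, dim=1)))
            # Output has in_channels + num_layers * growth_rate channels.
            return torch.cat(features, dim=1)

    # Example: a 26x26 feature map with 256 channels grows to 256 + 4*32 = 384 channels.
    x = torch.randn(1, 256, 26, 26)
    print(DenseBlock(256)(x).shape)  # torch.Size([1, 384, 26, 26])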
Funding: Natural Science Foundation of Shanxi Province (201801D221179, 201701D121059)
Citation:
CHEN Li-Chao, WANG Yan-Su, CAO Jian-Fang. Vehicle type detection model based on Dense-YOLOv3. Computer Systems & Applications, 2020, 29(10): 158-166 (in Chinese)
CHEN Li-Chao, WANG Yan-Su, CAO Jian-Fang. Vehicle Recognition Algorithm Based on Dense-YOLOv3. COMPUTER SYSTEMS APPLICATIONS, 2020, 29(10): 158-166