YOLOv3 Network Based on Improved Loss Function
Author: 吕铄, 蔡烜, 冯瑞
Funding: National Key Research and Development Program of China (2017YFC0803700); Science and Technology Commission of Shanghai Municipality Project (17511101702); Academy for Engineering and Technology of Fudan University Pilot Project (gyy2917-003)


Abstract:

To improve the object detection precision of convolutional neural networks (CNN), this paper presents a YOLOv3 network based on an improved loss function. The model adopts a new loss function, Tan-Squared Error (TSE), obtained by transforming the original Sum-Squared Error (SSE) loss so that the loss of continuous variables is computed more effectively. TSE also reduces the impact of the vanishing gradient of the sigmoid function, allowing the model to converge faster. Experimental results on the Pascal VOC dataset show that, compared with the original network model, TSE effectively improves detection precision and accelerates convergence.
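The page does not give the exact form of TSE, but the idea in the abstract can be illustrated with a minimal sketch in which the squared error of SSE is replaced by a tan-transformed term whose slope grows with the size of the error. The function names sse_loss and tse_loss and the scale factor below are illustrative assumptions, not the paper's definition.

    import numpy as np

    def sse_loss(y_pred, y_true):
        # Sum-Squared Error, as used in the original YOLO loss terms.
        return np.sum((y_pred - y_true) ** 2)

    def tse_loss(y_pred, y_true, scale=np.pi / 4):
        # Assumed Tan-Squared Error: pass the error through tan() before squaring.
        # tan is convex on [0, pi/2), so its slope increases with |error|; for large
        # errors the loss gradient is steeper than SSE's quadratic, which partly
        # compensates for the small derivative of a saturated sigmoid output.
        # Both this form and the scale factor are assumptions for illustration only.
        err = y_pred - y_true                 # for sigmoid outputs, err lies in (-1, 1)
        return np.sum(np.tan(scale * err) ** 2)

    # Toy comparison on a confidently wrong prediction from saturated sigmoid outputs.
    y_true = np.array([1.0, 0.0])
    y_pred = np.array([0.05, 0.95])
    print("SSE:", sse_loss(y_pred, y_true))   # 2 * 0.95^2 = 1.805
    print("TSE:", tse_loss(y_pred, y_true))

With scale = π/4 the two losses agree at |error| = 1, but the slope of the TSE term there is about π versus 2 for SSE, so the gradient pushed back through a nearly saturated sigmoid is larger; the scaling actually used in the paper may differ.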

Cite this article:

吕铄, 蔡烜, 冯瑞. YOLOv3 Network Based on Improved Loss Function. 计算机系统应用 (Computer Systems & Applications), 2019, 28(2): 1-7.

History
  • Received: 2018-08-12
  • Last revised: 2018-09-05
  • Published online: 2019-01-28
  • Publication date: 2019-02-15