Faster RCNN-Based Detection Method for Violations of Crossing Fences
Authors: WANG Zhi-Peng, WANG Tao
Foundation items: National Natural Science Foundation of China (62072469); National Key Research and Development Program of China (2018YFE0116700); Natural Science Foundation of Shandong Province (ZR2019MF049); Fundamental Research Funds for the Central Universities (2015020031); Special Project for the Construction of the West Coast Artificial Intelligence Technology Innovation Center (2019-1-5, 2019-1-6); Open Project of the Shanghai Trustworthy Industrial Control Platform (TICPSH202003015-ZC).
    Abstract:

    Safety fences play an important role at power construction sites. However, fence-crossing violations are widespread and pose great safety hazards. To enable intelligent supervision, this study proposes a Faster RCNN-based method for detecting fence-crossing violations that combines object detection with the idea of the frame difference method. The method first obtains the fence location and human keypoints by applying object detection to frames captured from surveillance video, and then recognizes violations at the construction site with a frame-difference judgment. Experimental results show that the method can effectively detect fence-crossing violations at construction sites and meets real-time requirements.
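The pipeline described above (a detector that yields the fence location and human keypoints per frame, followed by a frame-difference judgment) might be sketched as follows. Note this is an illustrative sketch only: the `(x1, y1, x2, y2)` fence-box format, the use of a single ankle keypoint, and the side-of-fence test are assumptions for demonstration, not the authors' exact implementation.

```python
# Illustrative frame-difference crossing check.
# Assumes an upstream detector (e.g., Faster RCNN plus a keypoint head)
# yields, per frame, a fence bounding box (x1, y1, x2, y2) and a
# person's ankle keypoint (x, y). A crossing is flagged when the ankle
# point switches from one side of the fence box to the other between
# frames.

def side_of_fence(ankle, fence_box):
    """Return -1 (left of fence), +1 (right of fence), or 0 (inside the box)."""
    x, _ = ankle
    x1, _, x2, _ = fence_box
    if x < x1:
        return -1
    if x > x2:
        return 1
    return 0

def detect_crossing(frames):
    """frames: list of (ankle_point, fence_box) tuples, one per video frame.
    Returns indices of frames where the person has crossed to the other side."""
    violations = []
    prev_side = None
    for i, (ankle, box) in enumerate(frames):
        side = side_of_fence(ankle, box)
        # A violation is a flip from one definite side to the other;
        # frames where the point lies inside the fence box are skipped.
        if prev_side not in (None, 0) and side not in (0, prev_side):
            violations.append(i)
        if side != 0:
            prev_side = side
    return violations
```

A real system would apply this per tracked person and smooth the keypoint trajectory over several frames to suppress detector jitter.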

Cite this article:

WANG Zhi-Peng, WANG Tao. Faster RCNN-based detection method for violations of crossing fences. Computer Systems & Applications, 2022, 31(4): 346-351.
History
  • Received: 2021-06-28
  • Revised: 2021-07-30
  • Published online: 2022-03-22