Lightweight Bearing Defect Detection Based on Efficient-YOLO
Author: Lou Yaodi, Yue Junfeng, Zhou Dibin, Liu Wenhao
Abstract:

    Existing deep models for industrial bearing appearance defect detection suffer from a large number of parameters, insufficient feature fusion, and low detection accuracy on small targets, so a lightweight adaptive feature fusion detection network (Efficient-YOLO) is proposed. First, the network uses an EfficientNetV2 backbone with an embedded CBAM attention mechanism for basic feature extraction, which preserves accuracy while substantially reducing the number of parameters. Second, an adaptive feature fusion network (CBAM-BiFPN) is designed to strengthen the extraction of effective feature information. Then, the Swin Transformer mechanism is introduced into the downstream feature fusion network and Ghost convolution into the upstream network, which greatly improves the model's global perception of bearing appearance defects. Finally, an improved non-maximum suppression method (Soft-CIoU-NMS) with an added distance-related weighting factor is applied in the inference phase to reduce missed detections of overlapping boxes. Experimental results show that, compared with mainstream detection models, the proposed method achieves a mAP of 90.1% on the bearing surface defect dataset with only 1.99M parameters and 7 GFLOPs of computation, and it significantly improves the recognition rate of small bearing defect targets, meeting the needs of industrial bearing appearance defect detection.
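    The inference-stage modification described above, Soft-CIoU-NMS, combines Soft-NMS score decay with a CIoU-style distance-aware overlap measure. The sketch below is a minimal illustration of that idea, assuming Gaussian score decay driven by CIoU between boxes; the function names (ciou, soft_ciou_nms) and the default parameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: Gaussian Soft-NMS whose decay weight is computed from CIoU
# rather than plain IoU (an assumption based on the abstract, not the paper's code).
import numpy as np

def ciou(box, boxes):
    """CIoU between one box [x1, y1, x2, y2] and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    iou = inter / (area_a + area_b - inter + 1e-9)

    # Center-distance penalty: squared center distance over the squared diagonal
    # of the smallest enclosing box (the "distance-related" factor).
    cxa, cya = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    cxb, cyb = (boxes[:, 0] + boxes[:, 2]) / 2, (boxes[:, 1] + boxes[:, 3]) / 2
    rho2 = (cxa - cxb) ** 2 + (cya - cyb) ** 2
    ex1 = np.minimum(box[0], boxes[:, 0]); ey1 = np.minimum(box[1], boxes[:, 1])
    ex2 = np.maximum(box[2], boxes[:, 2]); ey2 = np.maximum(box[3], boxes[:, 3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    # Aspect-ratio consistency term of CIoU.
    wa, ha = box[2] - box[0], box[3] - box[1]
    wb, hb = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]
    v = (4 / np.pi ** 2) * (np.arctan(wb / (hb + 1e-9)) - np.arctan(wa / (ha + 1e-9))) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return iou - rho2 / c2 - alpha * v

def soft_ciou_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: decay scores of overlapping boxes instead of removing them."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    keep, idxs = [], np.arange(len(scores))
    while idxs.size > 0:
        best = idxs[np.argmax(scores[idxs])]
        keep.append(int(best))
        rest = idxs[idxs != best]
        if rest.size == 0:
            break
        overlap = ciou(boxes[best], boxes[rest])
        # Suppress softly: nearby boxes are down-weighted rather than discarded,
        # so closely spaced defects are less likely to be missed.
        scores[rest] *= np.exp(-np.clip(overlap, 0, None) ** 2 / sigma)
        idxs = rest[scores[rest] > score_thresh]
    return keep
```

    In use, such a routine would be applied per class to the raw detections before the final score threshold; values such as sigma and score_thresh are assumptions that would need tuning on the bearing defect dataset.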

Get Citation

Lou YD, Yue JF, Zhou DB, Liu WH. Lightweight bearing defect detection based on Efficient-YOLO. Computer Systems and Applications, 2024, 33(2): 265–275.

History
  • Received: August 17, 2023
  • Revised: September 26, 2023
  • Online: December 26, 2023
  • Published: February 05, 2024