Computer Systems & Applications, 2022, 31(7): 203-209
Clothes Detection Using Ghost Convolution and YOLOv5s Network
(School of Computer Science, Xi’an Polytechnic University, Xi’an 710048, China)
Received: October 17, 2021    Revised: November 17, 2021
Abstract: To reduce the number of parameters and floating-point operations of a clothes object detection model, we propose an improved lightweight clothes detection model, G-YOLOv5s. First, Ghost convolution is used to reconstruct the backbone network of YOLOv5s; then, a subset of the DeepFashion2 dataset is used for model training and validation; finally, the trained model is applied to object detection in clothes images. Experimental results show that G-YOLOv5s achieves a mean average precision (mAP) of 71.7%, with a model size of 9.09 MB and 9.8 GFLOPs of computation. Compared with the original YOLOv5s, the model size is compressed by 34.8% and the floating-point operations are reduced by 41.3%, while the mAP drops by only 1.3%, which makes the model easy to deploy on resource-constrained devices.
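The Ghost convolution used to rebuild the backbone splits an ordinary convolution into a small primary convolution that produces a few "intrinsic" feature maps and a cheap depthwise convolution that generates the remaining "ghost" maps. Below is a minimal PyTorch sketch of such a module, assuming the standard GhostNet formulation with a ratio of 2 and a SiLU activation (as in YOLOv5); the class name GhostConv and the 5x5 cheap-operation kernel are illustrative choices, not necessarily the paper's exact configuration.

import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost convolution sketch: a costly primary conv produces half of the
    output channels ("intrinsic" maps); a cheap depthwise conv derives the
    other half ("ghost" maps); the two halves are concatenated."""
    def __init__(self, c_in, c_out, k=1, s=1, ratio=2):
        super().__init__()
        c_init = c_out // ratio       # channels from the primary convolution
        c_ghost = c_out - c_init      # channels from the cheap operation (ratio=2 assumed)
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_init, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_init),
            nn.SiLU(inplace=True),
        )
        # Depthwise 5x5 convolution as the "cheap operation" (illustrative choice)
        self.cheap = nn.Sequential(
            nn.Conv2d(c_init, c_ghost, 5, 1, 2, groups=c_init, bias=False),
            nn.BatchNorm2d(c_ghost),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Shape check: a drop-in replacement for a stride-2 3x3 backbone convolution
x = torch.randn(1, 32, 80, 80)
print(GhostConv(32, 64, k=3, s=2)(x).shape)   # torch.Size([1, 64, 40, 40])

Because only half of the output channels pass through the full k x k convolution and the rest come from a lightweight depthwise convolution, the layer's parameters and FLOPs are roughly halved, which is the mechanism behind the model-size and computation reductions reported above.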
Funding: Scientific Research Program of Shaanxi Provincial Education Department (21JP049); Graduate Innovation Fund of Xi'an Polytechnic University (chx2021026); College Students' Innovation and Entrepreneurship Training Program (S202110709112)
Citation:
LI Xue, WU Sheng-Ming, MA Li-Li, CHEN Jin-Guang. Clothes Detection Using Ghost Convolution and YOLOv5s Network. Computer Systems & Applications, 2022, 31(7): 203-209.