Abstract: To reduce the number of parameters and floating-point operations of object detection models for clothes, we propose an improved lightweight object detection model, G-YOLOv5s. First, Ghost convolution is used to reconstruct the backbone network of YOLOv5s; the model is then trained and validated on the DeepFashion2 dataset. Finally, the trained model is applied to the detection of clothes images. Experimental results show that G-YOLOv5s achieves a mean average precision (mAP) of 71.7% with a model size of 9.09 MB and 9.8 GFLOPs. Compared with YOLOv5s, the model size of G-YOLOv5s is compressed by 34.8% and the floating-point operations are reduced by 41.3%, with an mAP drop of only 1.3%, making it convenient to deploy on devices with limited resources.
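The abstract does not show the authors' implementation; as an illustration of the backbone substitution it describes, the following is a minimal PyTorch sketch of a Ghost convolution module following the standard GhostNet formulation (a primary convolution produces a few intrinsic feature maps, and cheap depthwise convolutions generate the remaining "ghost" maps). The class name, hyperparameters, and SiLU activation are illustrative assumptions, not the paper's exact module.

```python
import math
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Illustrative Ghost convolution: a primary conv yields intrinsic
    feature maps; cheap depthwise convs generate the remaining "ghost"
    maps, and the two groups are concatenated."""

    def __init__(self, in_ch, out_ch, kernel=1, ratio=2, dw_kernel=3, stride=1):
        super().__init__()
        self.out_ch = out_ch
        init_ch = math.ceil(out_ch / ratio)       # intrinsic feature maps
        cheap_ch = init_ch * (ratio - 1)          # ghost feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel, stride, kernel // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.SiLU(),  # activation choice is an assumption
        )
        self.cheap = nn.Sequential(
            # depthwise conv (groups=init_ch): far fewer parameters/FLOPs
            nn.Conv2d(init_ch, cheap_ch, dw_kernel, 1, dw_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.SiLU(),
        )

    def forward(self, x):
        y = self.primary(x)
        out = torch.cat([y, self.cheap(y)], dim=1)
        return out[:, :self.out_ch]               # trim to the exact width

x = torch.randn(1, 64, 32, 32)
m = GhostConv(64, 128)
print(m(x).shape)  # torch.Size([1, 128, 32, 32])
```

Replacing the standard convolutions in the YOLOv5s backbone with modules of this kind is what yields the parameter and FLOP reductions reported above, since only a fraction of the output channels are produced by a full convolution.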