Abstract: Image segmentation is the basis of computer-aided image interpretation, and the accuracy of wound image segmentation directly affects the results of wound analysis. Traditional wound image segmentation methods, however, involve cumbersome steps and achieve low accuracy. A few studies have applied deep learning to wound image segmentation, but they all rely on small data sets and therefore can hardly exploit the full capacity of deep neural networks or further improve accuracy. Maximizing the advantages of deep learning in image segmentation requires large data sets, yet no large public wound image data set exists, because building one would require manual labeling that consumes considerable time and effort. In this study, a wound image segmentation method based on transfer learning is proposed. Specifically, a ResNet50 network is first trained on a large public data set to serve as a feature extractor; the feature extractor is then connected to two parallel attention mechanisms and retrained on a small wound image data set. Experiments show that this method substantially improves the mean intersection over union (IoU) of the segmentation results and, to some extent, alleviates the low accuracy of wound image segmentation caused by the lack of large wound image data sets.
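The evaluation metric named in the abstract, mean intersection over union (IoU), can be sketched as follows. This is a minimal NumPy illustration of the standard metric, not the authors' code; the label maps and class count are hypothetical toy values.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union across classes.

    pred, target: integer label maps of identical shape,
    where each entry is a class index in [0, num_classes).
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps; skip it
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 label maps: 0 = background, 1 = wound (hypothetical values)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0]])
target = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
print(mean_iou(pred, target, num_classes=2))  # → 0.775
```

Per-class IoU divides the overlap of predicted and ground-truth masks by their union, so the mean over classes penalizes both missed wound pixels and false positives, which is why the abstract reports it as the primary measure of segmentation quality.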