Wound Image Segmentation Based on Transfer Learning
Abstract:

Image segmentation is the basis of computer-aided reading of medical images, and the accuracy of wound image segmentation directly affects the results of wound analysis. However, traditional wound image segmentation methods involve cumbersome steps and achieve low accuracy. A few studies have applied deep learning to wound image segmentation, but they are all based on small datasets and therefore can hardly exploit the full potential of deep neural networks or further improve accuracy. Exploiting the advantages of deep learning in image segmentation requires large datasets, yet no large public dataset of wound images exists, because building one requires manual labeling, which is time-consuming and labor-intensive. In this study, a wound image segmentation method based on transfer learning is proposed. Specifically, a ResNet50 network is first trained on a large public dataset to serve as a feature extractor; the feature extractor is then connected to two parallel attention mechanisms and retrained on a small wound image dataset. Experiments show that the proposed method substantially improves the mean intersection over union (IoU) of the segmentation results, and it alleviates, to some extent, the low accuracy of wound image segmentation caused by the lack of large wound image datasets.
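The mean IoU reported above is computed per class and then averaged. A minimal NumPy sketch of that metric (the two-class wound/background setting, function name, and toy masks are illustrative assumptions, not the paper's code):

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection over union for integer label maps.

    pred, target: arrays of class indices with the same shape.
    Classes absent from both prediction and ground truth are
    skipped so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class appears nowhere: skip it
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 masks: wound = 1, background = 0.
pred   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
target = np.array([[0, 1, 1, 1],
                   [0, 1, 1, 1]])
score = mean_iou(pred, target)  # background IoU 2/4, wound IoU 4/6
```

Averaging only over classes that actually appear matters in this setting, since some wound images may contain only background pixels.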

References
    [1] Blanco G, Bedo MVN, Cazzolato MT, et al. A label-scaled similarity measure for content-based image retrieval. Proceedings of 2016 IEEE International Symposium on Multimedia. San Jose: IEEE, 2016. 20–25.
[2] Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Medical Image Analysis, 2017, 42: 60–88. [doi: 10.1016/j.media.2017.07.005]
    [3] Seixas JL, Barbon S, Mantovani RG. Pattern recognition of lower member skin ulcers in medical images with machine learning algorithms. Proceedings of the 2015 IEEE 28th International Symposium on Computer-based Medical Systems. Sao Carlos: IEEE, 2015. 50–53.
    [4] Song B, Sacan A. Automated wound identification system based on image segmentation and Artificial Neural Networks. Proceedings of 2012 IEEE International Conference on Bioinformatics and Biomedicine. Philadelphia: IEEE, 2012. 1–4.
    [5] Fauzi MFA, Khansa I, Catignani K, et al. Computerized segmentation and measurement of chronic wound images. Computers in Biology and Medicine, 2015, 60: 74–85.
[6] Li FZ, Wang CJ, Liu XH, et al. A composite model of wound segmentation based on traditional methods and deep neural networks. Computational Intelligence and Neuroscience, 2018, 2018: 4149103. [doi: 10.1155/2018/4149103]
[7] Wang CB, Anisuzzaman DM, Williamson V, et al. Fully automatic wound segmentation with deep convolutional neural networks. Scientific Reports, 2020, 10(1): 21897. [doi: 10.1038/s41598-020-78799-w]
    [8] Sandler M, Howard A, Zhu ML, et al. MobileNetV2: Inverted residuals and linear bottlenecks. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018. 4510–4520.
    [9] He KM, Zhang XY, Ren SQ, et al. Deep residual learning for image recognition. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016. 770–778.
    [10] Fu J, Liu J, Tian HJ, et al. Dual attention network for scene segmentation. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 3141–3149.
    [11] Yosinski J, Clune J, Bengio Y, et al. How transferable are features in deep neural networks? Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal: NIPS, 2014. 3320–3328.
    [12] Oquab M, Bottou L, Laptev I, et al. Learning and transferring mid-level image representations using convolutional neural networks. Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition. Columbus: IEEE, 2014. 1717–1724.
    [13] Jaderberg M, Simonyan K, Zisserman A, et al. Spatial transformer networks. Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal: NIPS, 2015. 2017–2025.
    [14] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018. 7132–7141.
    [15] Garcia-Garcia A, Orts-Escolano S, Oprea S, et al. A review on deep learning techniques applied to semantic segmentation. arXiv: 1704.06857, 2017.
    [16] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich: Springer, 2015. 234–241.
    [17] Oktay O, Schlemper J, Le Folgoc L, et al. Attention U-Net: Learning where to look for the pancreas. arXiv: 1804.03999, 2018.
Citation: Chen ZW, Zhao K, Cao JL, Sun J, Ma HM. Wound image segmentation based on transfer learning. Computer Systems & Applications, 2022, 31(8): 259–264.

History
  • Received: November 04, 2021
  • Revised: December 02, 2021
  • Online: May 31, 2022