Liver Segmentation Based on Improved UNETR++
    Abstract:

    In standardized fat quantification on liver MRI images, a region of interest in the liver usually has to be sampled manually, but manual sampling is time-consuming and its results vary between operators. Compared with manually delineated regions of interest, whole-liver segmentation based on deep learning shows lower variability and uncertainty and performs better in quantitative fat analysis. To improve performance on the whole-liver segmentation task, this study builds on the UNETR++ model. The method combines the advantages of convolutional neural networks and the Transformer architecture, adding convolutional branches to supplement local features, and introduces a gated attention mechanism that suppresses irrelevant background information so that the features of the target region stand out. The improved method achieves better DSC and HD95 scores than UNETR++ and other segmentation models.
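The gated attention mechanism described above follows the familiar additive attention-gate pattern: the decoder's gating signal and the encoder's skip features are each projected by a 1×1×1 convolution, summed, passed through a ReLU, collapsed to a single-channel map by another 1×1×1 convolution, and squashed with a sigmoid; the resulting map rescales the skip features. A minimal NumPy sketch of such a gate, using channel-last tensors (the weight names `Wx`, `Wg`, `psi` are illustrative, not from the paper):

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate (illustrative sketch).

    x   : skip features from the encoder, shape (D, H, W, Cx)
    g   : gating signal from the decoder,  shape (D, H, W, Cg)
    Wx  : (Cx, Ci) 1x1x1 projection of the skip features
    Wg  : (Cg, Ci) 1x1x1 projection of the gating signal
    psi : (Ci, 1)  1x1x1 projection to a one-channel attention map
    """
    q = np.maximum(x @ Wx + g @ Wg, 0.0)      # additive fusion + ReLU
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))  # sigmoid -> values in (0, 1)
    return x * alpha                          # background features are damped

# Toy example with random features and weights
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 4, 8))
g = rng.standard_normal((4, 4, 4, 16))
out = attention_gate(x, g,
                     Wx=rng.standard_normal((8, 4)),
                     Wg=rng.standard_normal((16, 4)),
                     psi=rng.standard_normal((4, 1)))
```

Because `alpha` lies strictly in (0, 1), the gate can only attenuate skip features, never amplify them, which is how irrelevant background information is suppressed while the segmented region's features are kept prominent.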

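The two metrics reported above, DSC (Dice similarity coefficient) and HD95 (95th-percentile Hausdorff distance), can be computed from binary segmentation masks as follows. This is a generic NumPy/SciPy sketch, not the paper's evaluation code:

```python
import numpy as np
from scipy import ndimage

def dsc(pred, gt):
    """Dice similarity coefficient of two binary volumes."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    # surface voxels: mask minus its one-voxel erosion
    p_surf = np.logical_xor(pred, ndimage.binary_erosion(pred))
    g_surf = np.logical_xor(gt, ndimage.binary_erosion(gt))
    # distance of each surface voxel of one mask to the other mask's surface
    d_to_g = ndimage.distance_transform_edt(~g_surf, sampling=spacing)[p_surf]
    d_to_p = ndimage.distance_transform_edt(~p_surf, sampling=spacing)[g_surf]
    return np.percentile(np.concatenate([d_to_g, d_to_p]), 95)

# Toy example: a 4x4x4 cube and the same cube shifted by one voxel
gt = np.zeros((8, 8, 8), dtype=bool)
gt[2:6, 2:6, 2:6] = True
pred = np.zeros((8, 8, 8), dtype=bool)
pred[3:7, 2:6, 2:6] = True
```

Higher DSC (closer to 1) and lower HD95 (closer to 0, in the units of `spacing`) both indicate better agreement with the ground-truth liver mask.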
Get Citation

Ma L, Wang J, Liang XH, Hao JH. Liver segmentation based on improved UNETR++. Computer Systems & Applications, 2024, 33(2): 246–252.
History
  • Received: July 05, 2023
  • Revised: August 24, 2023
  • Online: December 26, 2023
  • Published: February 05, 2024
Copyright: Institute of Software, Chinese Academy of Sciences Beijing ICP No. 05046678-3