Quantitative Susceptibility Mapping and T1-weighted Image Registration Based on Residual Fusion Network
Author:
Affiliation:
Author Bio:
Corresponding Author:
CLC Number:
Fund Project: National Natural Science Foundation of China (62161004); Sino-French Cai Yuanpei Project (N.41400TC); Guizhou Provincial Science and Technology Program (ZK[2021] Key 002, [2018]5301)




    Abstract:

    Medical image registration plays a crucial role in medical image processing and analysis. Because of the large differences in gray-scale and texture information between quantitative susceptibility mapping (QSM) and T1-weighted images, existing medical image registration algorithms struggle to register the two modalities accurately and efficiently. Therefore, this study proposes an unsupervised deep learning registration model based on residual fusion, RF-RegNet (residual fusion registration network). RF-RegNet is composed of an encoder-decoder, a resampler, and a contextual self-similarity feature extractor. The encoder-decoder extracts features from the image pair to be registered and estimates their displacement vector field (DVF). The resampler warps the moving QSM image according to the estimated DVF, and the contextual self-similarity feature extractor separately extracts the contextual self-similarity features of the reference T1-weighted image and the resampled QSM image to describe the similarity of the two images. The mean absolute error (MAE) between these two sets of features drives the learning of the convolutional neural network (ConvNet). Experimental results show that the proposed method significantly improves the registration accuracy between QSM and T1-weighted images and meets clinical registration requirements.
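The abstract only outlines the pipeline; below is a minimal 2-D sketch of the unsupervised training loop it describes, assuming a recent PyTorch: a placeholder encoder-decoder predicts the DVF, a grid_sample-based resampler warps the moving QSM image, and the MAE between toy self-similarity descriptors of the warped QSM and the fixed T1-weighted image drives learning. The names (TinyRegNet, warp, self_similarity), the architecture, the descriptor, and all hyper-parameters are illustrative stand-ins, not the authors' RF-RegNet implementation.

# Minimal 2-D sketch of the training loop described above (PyTorch assumed).
# Everything here is a simplified placeholder, not the paper's actual model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRegNet(nn.Module):
    # Placeholder encoder-decoder: maps a concatenated (T1, QSM) pair to a
    # 2-channel displacement vector field (DVF), expressed in pixels.
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 2, 4, stride=2, padding=1))

    def forward(self, fixed, moving):
        return self.dec(self.enc(torch.cat([fixed, moving], dim=1)))

def warp(moving, dvf):
    # Resampler: bilinearly sample the moving image at the identity grid plus
    # the predicted displacements (grid_sample expects coordinates in [-1, 1]).
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float().to(moving.device)  # (h, w, 2), x-y order
    new = grid + dvf.permute(0, 2, 3, 1)                            # (n, h, w, 2)
    new_x = 2.0 * new[..., 0] / (w - 1) - 1.0
    new_y = 2.0 * new[..., 1] / (h - 1) - 1.0
    return F.grid_sample(moving, torch.stack([new_x, new_y], dim=-1), align_corners=True)

def self_similarity(img, shifts=((0, 1), (1, 0), (0, -1), (-1, 0))):
    # Toy contextual self-similarity descriptor: squared differences between the
    # image and shifted copies of itself (a stand-in for the paper's extractor).
    return torch.cat([(img - torch.roll(img, s, dims=(2, 3))) ** 2 for s in shifts], dim=1)

net = TinyRegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
t1 = torch.rand(1, 1, 64, 64)    # fixed T1-weighted slice (dummy data)
qsm = torch.rand(1, 1, 64, 64)   # moving QSM slice (dummy data)

for step in range(5):
    dvf = net(t1, qsm)                      # 1) predict the DVF
    warped = warp(qsm, dvf)                 # 2) resample the moving QSM image
    loss = F.l1_loss(self_similarity(warped), self_similarity(t1))  # 3) MAE on descriptors
    opt.zero_grad(); loss.backward(); opt.step()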

Cite this article:

Wang Yi, Tian Lili, Cheng Xinyu, Wang Lihui. Quantitative Susceptibility Mapping and T1-weighted Image Registration Based on Residual Fusion Network. Computer Systems & Applications, 2022, 31(8): 46-54.

History
  • Received: 2021-11-15
  • Revised: 2021-12-13
  • Accepted:
  • Published online: 2022-06-16
  • Publication date: