Supported by the National Natural Science Foundation of China (62161004); Sino-French Cai Yuanpei Program (N.41400TC); Guizhou Provincial Science and Technology Projects (ZK Key 002, 5301)
Medical image registration plays a crucial role in medical image processing and analysis. Because quantitative susceptibility mapping (QSM) and T1-weighted images differ substantially in gray-scale and texture information, existing medical image registration algorithms struggle to register the two modalities accurately and efficiently. Therefore, this study proposes an unsupervised deep learning registration model based on residual fusion, named RF-RegNet (residual fusion registration network). RF-RegNet consists of three components: an encoder-decoder, a resampler, and a context self-similarity feature extractor. The encoder-decoder extracts features from the image pair to be registered and predicts the displacement vector field (DVF) between them. The resampler warps the moving QSM image according to the estimated DVF. The context self-similarity feature extractor separately extracts the context self-similarity features of the reference T1-weighted image and of the resampled QSM image, and the mean absolute error (MAE) between the two feature sets is used as the loss that drives the convolutional neural network (ConvNet) to learn. Experimental results show that the proposed method significantly improves the registration accuracy between QSM and T1-weighted images, meeting clinical registration requirements.
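The training signal described above can be illustrated with a minimal 2-D NumPy sketch: warp the moving image with a DVF via bilinear resampling, build a crude per-pixel self-similarity descriptor (squared differences to a few shifted copies of the image, a simplified stand-in for the paper's context self-similarity features), and take the MAE between the descriptors of the fixed and warped images as the loss. This is a hypothetical illustration under stated assumptions, not the paper's 3-D ConvNet implementation; all function names (`warp_bilinear`, `self_similarity`, `registration_loss`) are ours.

```python
import numpy as np

def warp_bilinear(moving, dvf):
    """Resample a 2-D moving image at locations displaced by dvf (H, W, 2)."""
    H, W = moving.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sample locations = identity grid + displacement, clamped to the image.
    sy = np.clip(ys + dvf[..., 0], 0, H - 1)
    sx = np.clip(xs + dvf[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation of the four neighboring pixels.
    return ((1 - wy) * (1 - wx) * moving[y0, x0]
            + (1 - wy) * wx * moving[y0, x1]
            + wy * (1 - wx) * moving[y1, x0]
            + wy * wx * moving[y1, x1])

def self_similarity(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0))):
    """Per-pixel descriptor: squared differences to shifted copies of img.
    A simplified proxy for context self-similarity features."""
    feats = [(img - np.roll(img, o, axis=(0, 1))) ** 2 for o in offsets]
    return np.stack(feats, axis=-1)

def registration_loss(fixed, moved):
    """MAE between the self-similarity descriptors of the two images."""
    return np.mean(np.abs(self_similarity(fixed) - self_similarity(moved)))
```

Because the loss compares self-similarity structure rather than raw intensities, it stays meaningful even when the two modalities (here QSM and T1-weighted) have very different gray-scale distributions; a zero DVF yields zero loss against the fixed image itself, and any misalignment increases it.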