Abstract: Medical image registration plays a crucial role in medical image processing and analysis. Because of the large differences in grayscale and texture information between quantitative susceptibility mapping (QSM) and T1-weighted images, existing medical image registration algorithms struggle to obtain accurate registration results efficiently. Therefore, this study proposes an unsupervised deep learning registration model based on residual fusion (residual fusion registration network, RF-RegNet). RF-RegNet is composed of an encoder, a decoder, a resampler, and a context-similarity feature extractor. The encoder and decoder extract features from the image pair to be aligned and estimate the displacement vector field (DVF). The moving QSM image is resampled according to the estimated DVF, and the context-similarity feature extractor separately extracts the context-similarity features of the reference T1-weighted image and the resampled QSM image to describe the similarity of the two images. The mean absolute error (MAE) between the context-similarity features of the two images is used to drive the convolutional neural network (ConvNet) learning. Experimental results show that the proposed method significantly improves the registration accuracy between QSM and T1-weighted images and is adequate for clinical demands.
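To make the training objective described above concrete, the following is a minimal PyTorch-style sketch of the MAE loss computed between context-similarity features of the warped QSM image and the reference T1-weighted image. The function name `context_similarity_mae`, the argument `feat_extractor`, and the tensor shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch


def context_similarity_mae(feat_extractor, warped_qsm, fixed_t1w):
    """Hedged sketch of the training loss: MAE between context-similarity
    features of the resampled (warped) QSM image and the reference
    T1-weighted image.

    feat_extractor : a ConvNet mapping an image to a feature volume
                     (assumed shared across both modalities)
    warped_qsm     : moving QSM image after resampling with the DVF,
                     shape [batch, 1, D, H, W]
    fixed_t1w      : reference T1-weighted image, same shape
    """
    # Extract context-similarity features from each image separately.
    feat_moving = feat_extractor(warped_qsm)
    feat_fixed = feat_extractor(fixed_t1w)

    # Mean absolute error between the two feature volumes drives learning.
    return torch.mean(torch.abs(feat_moving - feat_fixed))
```

In this sketch the loss is differentiable with respect to the DVF through the resampling step, so minimizing it trains the encoder-decoder without ground-truth deformations, consistent with the unsupervised setting stated in the abstract.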