To address the insufficient feature extraction of the FSRCNN model and the artificial redundant information introduced by its deconvolution layer, this study proposes an image super-resolution reconstruction algorithm based on a multi-scale fusion convolutional neural network. First, a multi-scale fusion feature extraction channel is designed to make fuller use of image information at different scales. Second, sub-pixel convolution is adopted for up-sampling in the image reconstruction stage, suppressing the artificial redundancy introduced by deconvolution. Compared with the FSRCNN model on the Set5 and Set14 datasets, the proposed algorithm improves the PSNR and SSIM by an average of 0.14 dB and 0.0010 at a scaling factor of 2, and by 0.48 dB and 0.0091 at a scaling factor of 3. Experimental results show that the proposed algorithm better preserves image texture details and improves the overall reconstruction quality.
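The sub-pixel convolution mentioned above upsamples by rearranging channels into spatial positions (depth-to-space), so the scale factor is learned by the preceding convolution rather than by a deconvolution layer, which can introduce checkerboard-style redundant artifacts. As a minimal illustrative sketch (not the paper's actual implementation), the rearrangement step, with semantics matching the standard PixelShuffle operation, can be written in NumPy:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into a (C, H*r, W*r) image.

    This is the channel-to-space shuffle at the core of sub-pixel
    convolution: each group of r*r channels supplies the r*r pixels of
    one upsampled block, avoiding a learned deconvolution layer.
    """
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    # split channels into (C, r, r), then interleave into spatial dims
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# toy example: 4 feature channels, scale factor 2 -> one 2x-upsampled map
feat = np.arange(16, dtype=float).reshape(4, 2, 2)
out = pixel_shuffle(feat, 2)
print(out.shape)  # (1, 4, 4)
```

In a full network, a final convolution would first expand the feature maps to C*r^2 channels; the shuffle itself is parameter-free, which is one reason it is cheaper than deconvolution at the same scale factor.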