Abstract: Image super-resolution reconstruction technology can improve image resolution, which plays an important role in medical, military, and other fields. The traditional super-resolution generative adversarial network (SRGAN) algorithm for image super-resolution reconstruction converges slowly during training, and its excessive sharpening of high-frequency textures distorts some details, which degrades the quality of reconstructed images. To address these problems, the generator network and loss function of the traditional SRGAN model are improved for image super-resolution reconstruction. A sparse residual dense network (SRDN) replaces the traditional SRResNet as the generator network to make fuller use of low-resolution image features. Meanwhile, the sparse connection scheme of SRDN and depthwise separable convolutions are used to reduce the number of model parameters. In addition, a joint perceptual loss that fuses the low-frequency and high-frequency features of VGG is proposed and combined with the mean squared error loss to improve the network's perceptual loss function. Tested on the Set5, Set14, and BSD100 data sets, the results show that the improved SRGAN algorithm outperforms the traditional SRGAN algorithm on three evaluation metrics, namely peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and mean opinion score (MOS), and the details of the reconstructed images are clearer. The improved SRGAN algorithm shows better overall robustness and comprehensive performance.
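The joint perceptual loss described above can be illustrated with a minimal PyTorch sketch, not the authors' released code: it compares VGG features taken from a shallow (low-frequency) slice and a deep (high-frequency) slice of VGG19 between the super-resolved and ground-truth images, and adds a pixel-wise MSE term. The layer cut points, loss weights, and class name below are illustrative assumptions, and input normalization to VGG's ImageNet statistics is assumed to be handled upstream.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class JointPerceptualLoss(nn.Module):
    """Sketch of a joint perceptual loss: MSE + shallow-VGG + deep-VGG feature losses."""

    def __init__(self, low_layer=8, high_layer=35, w_low=0.5, w_high=0.5, w_mse=1.0):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False  # VGG is a fixed feature extractor
        # Shallow slice captures low-frequency content; deep slice captures fine texture.
        # Cut points (after conv2_2 and conv5_4) are assumptions, not the paper's exact layers.
        self.low_slice = vgg[:low_layer]
        self.high_slice = vgg[:high_layer]
        self.w_low, self.w_high, self.w_mse = w_low, w_high, w_mse
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        # sr, hr: (N, 3, H, W) super-resolved and ground-truth images
        loss_pixel = self.mse(sr, hr)
        loss_low = self.mse(self.low_slice(sr), self.low_slice(hr))
        loss_high = self.mse(self.high_slice(sr), self.high_slice(hr))
        return self.w_mse * loss_pixel + self.w_low * loss_low + self.w_high * loss_high


if __name__ == "__main__":
    criterion = JointPerceptualLoss()
    sr = torch.rand(2, 3, 96, 96)  # generator output (fake HR patch)
    hr = torch.rand(2, 3, 96, 96)  # ground-truth HR patch
    print(criterion(sr, hr).item())
```

In this sketch the generator is trained against this criterion together with the usual adversarial loss; the relative weights of the pixel, low-frequency, and high-frequency terms would be tuned as hyperparameters.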