Abstract: To address the challenges posed by fixed network architectures and very deep networks, such as incomplete modeling of complex scenes, high computational cost, and difficulty of deployment, this study proposes a new network called the wide structure dynamic super-resolution network (W-SDNet). First, a residual enhancement block built from shift-convolution residual structures is designed to strengthen hierarchical feature extraction for image super-resolution while reducing computational cost. Next, a wide enhancement block is introduced: it employs a dual-branch, four-layer parallel structure to extract deep information and uses the gating mechanism of a dynamic network to selectively enhance feature expression. This block also incorporates an attention mechanism that integrates edge detection operators to improve the representation of edge details. To prevent interference among components within the wide enhancement block, a refinement block based on group convolution and channel splitting is employed. Finally, high-quality image reconstruction is achieved through a construction block. Experimental results show that W-SDNet outperforms existing mainstream algorithms in peak signal-to-noise ratio (PSNR) at ×4 upscaling on five publicly available test datasets, while the number of model parameters is significantly reduced. These results demonstrate the advantages of W-SDNet in terms of complexity, performance, and reconstruction time for super-resolution.
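To make the gated dual-branch idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' released implementation: the channel width, the four-layer branch depth, the Sobel-based edge attention, and the class name GatedWideBlock are assumptions chosen for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedWideBlock(nn.Module):
    """Illustrative sketch of a dual-branch block with a gating mechanism and an
    edge-aware attention map (Sobel operator). Widths, depths, and the fusion rule
    are assumptions for demonstration, not the paper's exact configuration."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Two parallel branches of four stacked 3x3 conv + ReLU layers ("wide" structure).
        self.branch_a = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(4)
        ])
        self.branch_b = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(4)
        ])
        # Per-channel gate that dynamically weighs the two branches.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        # Fixed Sobel kernels used to build an edge-attention map.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        self.register_buffer("sobel", torch.stack([sobel_x, sobel_y]).unsqueeze(1))  # (2, 1, 3, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.branch_a(x)
        b = self.branch_b(x)
        g = self.gate(x)                        # per-channel gate in [0, 1]
        fused = g * a + (1.0 - g) * b           # gated fusion of the two branches
        # Edge attention: gradient magnitude of the channel-mean feature map.
        mean = fused.mean(dim=1, keepdim=True)
        grads = F.conv2d(mean, self.sobel, padding=1)
        edge = torch.sigmoid(grads.pow(2).sum(dim=1, keepdim=True).sqrt())
        return x + fused * edge                 # residual connection


if __name__ == "__main__":
    # Quick shape check on a dummy 64-channel feature map.
    feats = torch.randn(1, 64, 48, 48)
    print(GatedWideBlock()(feats).shape)  # torch.Size([1, 64, 48, 48])
```

In this sketch the gate plays the role of the dynamic selection described in the abstract, while the Sobel-derived map stands in for the edge-detection-based attention; the actual W-SDNet components may differ in structure and detail.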