Abstract: At present, most image dehazing algorithms ignore the local details of the image and fail to make full use of features at different levels, resulting in color distortion, reduced contrast, and residual haze in the restored haze-free image. To address this problem, this study proposes an adaptive feature fusion image dehazing network combined with dense attention. The network takes an encoder-decoder structure as its basic framework, with a feature enhancement part and a feature fusion part embedded between the encoder and the decoder. The feature enhancement part stacks dense feature attention blocks, each composed of a dense residual network and a combined channel-spatial attention module. In this way, the network can attend to the local details of the image, strengthen feature reuse, and effectively prevent vanishing gradients. In the feature fusion part, an adaptive feature fusion module is constructed to fuse low-level and high-level features, preventing the degradation of shallow features as the network deepens. The experimental results show that the proposed algorithm performs well on both synthetic and real hazy image datasets. The peak signal-to-noise ratio and structural similarity reach 35.81 dB and 0.9889, respectively, on the SOTS indoor synthetic dataset, and 22.75 dB and 0.7788, respectively, on the real-world O-HAZE dataset. The proposed algorithm effectively alleviates the problems of color distortion, reduced contrast, and residual haze.
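The abstract does not give the internal details of the channel-spatial attention module or the adaptive feature fusion module. The following is only a minimal illustrative sketch, assuming a common PyTorch-style realization of such components: squeeze-and-excitation-style channel gating, a convolutional spatial gate, and a learned convex combination of low-level and high-level features. All class names, kernel sizes, and the reduction ratio are assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: assumed structure of a channel-spatial attention
# module and an adaptive low-/high-level feature fusion, not the paper's code.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel gate: global average pooling -> 1x1 conv bottleneck -> sigmoid weights
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial gate: per-pixel weight map from a wide convolution
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # re-weight channels
        x = x * self.spatial_gate(x)   # re-weight spatial positions
        return x


class AdaptiveFeatureFusion(nn.Module):
    """Fuse low-level and high-level features with a learned, content-dependent weight."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel, per-channel fusion weight from the concatenated features
        self.weight = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        w = self.weight(torch.cat([low, high], dim=1))
        return w * low + (1.0 - w) * high  # convex combination of the two levels


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)
    attn = ChannelSpatialAttention(64)
    fuse = AdaptiveFeatureFusion(64)
    out = fuse(feats, attn(feats))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch the fusion weight is produced per pixel and per channel, so the network can decide locally how much to rely on shallow detail versus deep semantic features; the actual fusion rule used in the paper may differ.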