Abstract: Images captured at night or under other low-light conditions suffer from excessive darkness and loss of detail, which hinders both the understanding of image content and the extraction of image features. Enhancing such images to restore their brightness, contrast, and detail is therefore valuable for digital photography and for downstream computer vision tasks. This study proposes a U-Net-based generative adversarial network. The generator is a U-Net model equipped with a hybrid attention mechanism that combines the global context captured by asymmetric Non-local attention with the channel weights produced by channel attention, improving the network's feature representation ability. A fully convolutional model based on PatchGAN serves as the discriminator, judging different local regions of the image separately. We further introduce a multi-loss weighted fusion scheme that guides the network to learn the mapping from low-light to normal-light images from multiple perspectives. Experiments show that the proposed method achieves better results on objective metrics such as peak signal-to-noise ratio and structural similarity, and restores brightness, contrast, and detail plausibly, noticeably improving the perceived quality of the images.
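As a rough illustration only (not the paper's actual implementation), the channel-attention half of such a hybrid module can be sketched as squeeze-and-excitation-style channel reweighting: global average pooling "squeezes" each channel to a scalar, a small two-layer bottleneck produces a per-channel gate in (0, 1), and the feature map is rescaled channel-wise. The minimal NumPy sketch below assumes this SE-style formulation; the layer shapes and reduction ratio are illustrative assumptions.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention sketch (illustrative, not the paper's code).

    x:  feature map of shape (C, H, W)
    w1: squeeze weights of shape (C // r, C), r = reduction ratio
    w2: excitation weights of shape (C, C // r)
    Returns x with each channel scaled by a learned gate in (0, 1).
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate -> (C,)
    h = np.maximum(w1 @ s, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Reweight: broadcast the per-channel gate over the spatial dimensions
    return x * gate[:, None, None]

# Usage with random features and weights (C=8, reduction ratio r=4)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = channel_attention(x, w1, w2)
```

In the full hybrid module described by the abstract, this channel gate would be combined with an asymmetric Non-local branch that aggregates global spatial context; only the channel branch is sketched here.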