Low-light Image Enhancement Algorithm Based on GAN and U-Net
    Abstract:

    Images captured at night or under other low-light conditions are often too dark and lose detail, which hinders both the interpretation of image content and the extraction of image features. Enhancing such images to restore their brightness, contrast, and detail is therefore valuable for digital photography and for downstream computer vision tasks. This study proposes a U-Net-based generative adversarial network. The generator is a U-Net equipped with a hybrid attention mechanism; the hybrid attention module combines asymmetric non-local global information with the channel weights produced by channel attention to strengthen the network's feature representation. A fully convolutional network based on PatchGAN serves as the discriminator, scoring different local regions of the image separately. We also introduce a multi-loss weighted fusion scheme that guides the network to learn the mapping from low-light to normal-light images from multiple perspectives. Experiments show that the method achieves better results on objective metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and plausibly restores the brightness, contrast, and detail of the images, yielding a clear improvement in perceived quality.
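    The channel-weighting branch of the hybrid attention module described above follows the familiar squeeze-and-excitation pattern: globally pool each channel, pass the result through a small bottleneck, and rescale the feature map by the resulting per-channel gates. A minimal NumPy sketch, assuming illustrative weights `w1`/`w2` and a reduction ratio of 4 (not the paper's actual parameters):

```python
import numpy as np

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    feat: feature map of shape (C, H, W); w1: (C//r, C); w2: (C, C//r).
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Rescale each channel by its learned weight
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

    In the paper's module these gates are fused with asymmetric non-local attention, which supplies the complementary global spatial context.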
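    The key idea of the PatchGAN discriminator mentioned above is that it emits a grid of per-patch real/fake scores rather than a single scalar, so each local region of the image is judged independently. A toy sketch of that per-patch scoring, assuming a hypothetical linear scorer `w` standing in for the fully convolutional network and an arbitrary patch size:

```python
import numpy as np

def patch_scores(img: np.ndarray, patch: int, w: np.ndarray) -> np.ndarray:
    """Score each non-overlapping patch, PatchGAN-style (illustrative).

    img: (H, W) grayscale image; w: (patch*patch,) linear scorer.
    Returns a (H//patch, W//patch) grid of per-patch probabilities.
    """
    gh, gw = img.shape[0] // patch, img.shape[1] // patch
    out = np.empty((gh, gw))
    for i in range(gh):
        for j in range(gw):
            p = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch].ravel()
            out[i, j] = 1.0 / (1.0 + np.exp(-p @ w))  # per-patch "real" probability
    return out

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))
w = rng.standard_normal(16) * 0.1
scores = patch_scores(img, 4, w)
print(scores.shape)  # (2, 2)
```

    Penalizing each patch separately pushes the generator to restore realistic local texture everywhere, not just on average.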
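    The multi-loss weighted fusion can be pictured as a weighted sum of complementary terms, each constraining the mapping from a different angle. A minimal sketch, assuming a pixel-fidelity term, a crude gradient-based structure term, and a generator-side adversarial term; the specific terms and weights here are illustrative, not the paper's:

```python
import numpy as np

def fused_loss(pred: np.ndarray, target: np.ndarray, d_score: float,
               weights: tuple = (1.0, 0.5, 0.1)) -> float:
    """Weighted fusion of pixel, structure, and adversarial loss terms."""
    w_pix, w_grad, w_adv = weights
    l_pix = np.abs(pred - target).mean()                 # L1 pixel fidelity
    # crude structure term: match horizontal gradients
    l_grad = np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    l_adv = -np.log(d_score + 1e-8)                      # generator adversarial term
    return w_pix * l_pix + w_grad * l_grad + w_adv * l_adv

x = np.zeros((4, 4))
loss0 = fused_loss(x, x, 1.0)   # perfect output fooling the discriminator
loss1 = fused_loss(x + 1.0, x, 0.5)
```

    Each weight trades off one supervisory signal against the others, which is what lets the network learn the low-light to normal-light mapping "from multiple angles."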

Citation: Li CX, Li J. Low-light image enhancement algorithm based on GAN and U-Net. Computer Systems & Applications, 2022, 31(5): 174-183.
History
  • Received: July 13, 2021
  • Revised: August 04, 2021
  • Online: April 11, 2022