Image Inpainting Method Based on GAN Prior
Abstract:

Designing and utilizing good image priors is an important route to image inpainting. A generative adversarial network (GAN) is a powerful generative model whose generator can learn rich semantic information about images from large datasets; a pre-trained GAN is therefore a good choice of image prior. Using multiple latent codes, this study adds adaptive weights to both the channels and the feature maps at a middle layer of the pre-trained generator and fine-tunes the generator parameters during training, so that the pre-trained GAN can be exploited more effectively for image inpainting. Comparative experiments on image reconstruction and image inpainting, analyzed both qualitatively and quantitatively, show that the proposed method effectively mines the prior knowledge of the pre-trained model and accomplishes high-quality image inpainting.
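
As a rough illustration of the approach described above, the following PyTorch sketch fuses the intermediate feature maps produced by several latent codes with learnable channel weights and feature-map weights, then optimizes the latent codes, the fusion weights, and the later generator layers against the known pixels of a damaged image. The split of the generator into gen_front/gen_back, the L1 reconstruction loss, and all names here are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFeatureFusion(nn.Module):
    """Fuse the intermediate features of several latent codes with learnable
    channel weights (alpha) and per-feature-map weights (beta)."""

    def __init__(self, num_codes: int, num_channels: int):
        super().__init__()
        # One weight per (latent code, channel) and one per latent code.
        self.alpha = nn.Parameter(torch.ones(num_codes, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.ones(num_codes, 1, 1, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (num_codes, C, H, W), one intermediate feature map per code.
        weighted = feats * self.alpha * self.beta
        return weighted.sum(dim=0, keepdim=True)  # fused (1, C, H, W) features


def inpaint(gen_front, gen_back, image, mask,
            num_codes=10, latent_dim=512, steps=500, lr=1e-2):
    """gen_front / gen_back: the pre-trained generator split at a middle layer.
    image: (1, 3, H, W) damaged image; mask: (1, 1, H, W), 1 = known pixel."""
    z = torch.randn(num_codes, latent_dim, requires_grad=True)
    with torch.no_grad():
        num_channels = gen_front(z).shape[1]  # probe channel count at the split
    fusion = AdaptiveFeatureFusion(num_codes, num_channels)

    # Optimize the latent codes and fusion weights, and fine-tune the
    # parameters of the later generator layers.
    opt = torch.optim.Adam([z, *fusion.parameters(), *gen_back.parameters()],
                           lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        out = gen_back(fusion(gen_front(z)))
        # Reconstruction loss on the known pixels only.
        loss = F.l1_loss(out * mask, image * mask)
        loss.backward()
        opt.step()

    with torch.no_grad():
        out = gen_back(fusion(gen_front(z)))
    # Keep the known pixels and take the generated content in the hole.
    return image * mask + out * (1 - mask)

In practice the pixel-wise loss would typically be combined with a perceptual or adversarial term, and the pre-trained generator would come from a model such as a progressively grown GAN; those details are omitted from this sketch.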

Get Citation

Lu SJ, Hao WN, Yu XH, Yu K. Image inpainting method based on GAN prior. Computer Systems & Applications, 2022, 31(10): 397-403.

History
  • Received: January 04, 2022
  • Revised: January 29, 2022
  • Online: June 28, 2022