融合空间域和频率域信息的图像去模糊
作者: 邢艳, 陈晓璐, 徐启奥, 黄睿
基金项目: 国家自然科学基金(62172418); 中央高校基本科研业务费项目中国民航大学专项(3122020045); 中国民航大学科研启动项目(2017QD15X, 2017QD17X); 中国民航大学学科经费(2012/230123006002)

Image Deblurring by Fusing Information of Spatial and Frequency Domains
Author:
    摘要:

    现有的图像去模糊方法通常直接采用图像的空间域或频率域信息恢复清晰图像, 忽略了空间域信息和频率域信息的互补性. 利用图像的空间域信息可以有效地恢复物体结构, 而利用图像的频率域信息可以有效地恢复纹理细节. 本文提出了一种简单、有效的图像去模糊框架, 能够充分利用图像的空间域和频率域信息, 产生高质量的清晰图像. 首先采用两个结构相同但相互独立的网络, 分别从图像的空间域和频率域中学习模糊图像到清晰图像的映射关系; 然后使用一个单独的融合网络, 充分融合空间域和频率域的图像信息, 进一步提升清晰图像的质量. 3个网络连接形成一个端到端的、可学习的大网络, 不同网络之间相互影响, 通过联合优化最终得到高质量的清晰图像. 在公共图像去模糊数据集GoPro、Köhler以及RWBI上, 本文方法在峰值信噪比、结构相似度、平均绝对误差3个指标上均优于9种先进的图像去模糊方法. 大量实验结果验证了本文提出的融合空间域和频率域信息的图像去模糊方法的有效性.
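    下面给出一个极简的 PyTorch 结构草图, 用于示意上述"空间域分支 + 频率域分支 + 融合网络"的三网络数据流. 该草图并非论文的原始实现: 其中的网络层数、通道数, 以及用快速傅里叶变换(torch.fft)代表频率域变换等细节均为假设, 仅供理解整体框架.

    import torch
    import torch.nn as nn

    class SimpleBranch(nn.Module):
        """极简的卷积分支, 仅作示意; 论文中的实际网络结构更复杂."""
        def __init__(self, in_ch, out_ch, width=32):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, out_ch, 3, padding=1))

        def forward(self, x):
            return self.body(x)

    class SpatialFrequencyDeblur(nn.Module):
        """空间域分支 + 频率域分支 + 融合网络构成的端到端框架(示意)."""
        def __init__(self):
            super().__init__()
            self.spatial_branch = SimpleBranch(3, 3)   # 在像素域学习模糊到清晰的映射
            self.freq_branch = SimpleBranch(6, 6)      # 在频谱(实部+虚部)上学习映射
            self.fusion_net = SimpleBranch(6, 3)       # 融合两个分支的输出

        def forward(self, blurry):
            # 空间域分支: 直接从模糊图恢复初步的清晰图
            spatial_out = self.spatial_branch(blurry)

            # 频率域分支: FFT -> 频谱域处理 -> 逆FFT(以FFT表示频率域变换, 属假设)
            spec = torch.fft.rfft2(blurry, norm="ortho")
            feat = self.freq_branch(torch.cat([spec.real, spec.imag], dim=1))
            real, imag = feat.chunk(2, dim=1)
            freq_out = torch.fft.irfft2(torch.complex(real, imag),
                                        s=blurry.shape[-2:], norm="ortho")

            # 融合网络: 拼接两个分支的结果, 输出最终的清晰图像
            fused = self.fusion_net(torch.cat([spatial_out, freq_out], dim=1))
            return fused, spatial_out, freq_out

    训练时可在 fused、spatial_out、freq_out 三个输出上共同施加重建损失, 对应摘要中所述的联合优化; 具体损失函数的选择同样只是此处的假设.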

    Abstract:

    Existing image deblurring methods typically use either the spatial-domain or the frequency-domain information of an image to restore the clear image, ignoring the complementarity of the two: spatial-domain information is effective for recovering object structures, whereas frequency-domain information is effective for recovering texture details. This study proposes a simple and effective image deblurring framework that fully exploits both the spatial-domain and frequency-domain information of images to produce high-quality clear images. First, two independent networks with the same structure are employed to learn the mapping from blurred images to clear images in the spatial and frequency domains, respectively. Then, a separate fusion network fully integrates the image information from the two domains to further improve the quality of the restored images. The three networks are linked into one large end-to-end trainable network in which they interact with each other, and joint optimization finally yields high-quality clear images. On the public image deblurring datasets GoPro, Köhler, and RWBI, the proposed method surpasses nine state-of-the-art image deblurring methods in terms of peak signal-to-noise ratio, structural similarity, and mean absolute error. Extensive experiments verify the effectiveness of the proposed image deblurring method, which fuses spatial-domain and frequency-domain information.
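    As an illustration of the three reported metrics, the sketch below computes PSNR, SSIM, and MAE for a single restored/ground-truth image pair using NumPy and scikit-image. This is generic evaluation code rather than the authors' implementation: the function name evaluate_pair is ours, and the channel_axis argument assumes scikit-image 0.19 or later. In practice the scores are averaged over all test images of each dataset.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(restored: np.ndarray, sharp: np.ndarray) -> dict:
        """PSNR, SSIM, and MAE between a restored image and its sharp ground
        truth; both inputs are HxWx3 uint8 arrays with values in [0, 255]."""
        psnr = peak_signal_noise_ratio(sharp, restored, data_range=255)
        ssim = structural_similarity(sharp, restored, channel_axis=-1, data_range=255)
        mae = float(np.mean(np.abs(sharp.astype(np.float64) - restored.astype(np.float64))))
        return {"PSNR": psnr, "SSIM": ssim, "MAE": mae}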

引用本文

邢艳, 陈晓璐, 徐启奥, 黄睿. 融合空间域和频率域信息的图像去模糊. 计算机系统应用, 2024, 33(2): 1–12

历史
  • 收稿日期:2023-08-09
  • 最后修改日期:2023-09-15
  • 在线发布日期: 2023-12-25
  • 出版日期: 2024-02-05