Super-resolution Reconstruction of Remote Sensing Image Based on Swin Transformer
Authors: Kong Rui, Ran Youhong

Abstract:

Because objects in remote sensing images are uncertain and feature information differs greatly between images, existing super-resolution methods reconstruct them poorly. This study therefore proposes NG-MAT, a model that combines the Swin Transformer with an N-gram model to achieve super-resolution of remote sensing images. First, a multi-attention module is connected in parallel with the self-attention branch of the original Transformer to extract global feature information and activate more pixels. Second, the N-gram model from natural language processing is transferred to image processing, where a trigram model strengthens information interaction between windows. On the selected dataset, the proposed method achieves peak signal-to-noise ratios of 34.68 dB, 31.03 dB, and 28.99 dB at scale factors of 2, 3, and 4, respectively, and structural similarity indices of 0.9266, 0.8444, and 0.7734 at the same factors. Experimental results show that the proposed method outperforms comparable methods on all metrics.
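The abstract describes strengthening interaction between Swin-style non-overlapping attention windows with a trigram scheme borrowed from NLP. The paper's implementation is not reproduced here; the following is a minimal pure-Python sketch of the underlying idea — partitioning a feature map into non-overlapping windows and gathering a "trigram" of a window plus its neighbors. The function names and the choice of horizontal neighbors are illustrative assumptions, not the authors' code.

```python
def window_partition(feat, win):
    """Split an H x W map (list of lists) into non-overlapping win x win
    windows, returned as a dict keyed by (row_block, col_block)."""
    h, w = len(feat), len(feat[0])
    assert h % win == 0 and w % win == 0, "H and W must be divisible by win"
    windows = {}
    for bi in range(h // win):
        for bj in range(w // win):
            windows[(bi, bj)] = [
                row[bj * win:(bj + 1) * win]
                for row in feat[bi * win:(bi + 1) * win]
            ]
    return windows

def trigram_context(windows, bi, bj):
    """Illustrative trigram: a window together with its left and right
    neighbors (clamped at the image border), mimicking n-gram context."""
    cols = sorted({k[1] for k in windows})
    ctx = []
    for cj in (bj - 1, bj, bj + 1):
        cj = min(max(cj, cols[0]), cols[-1])  # clamp at the border
        ctx.append(windows[(bi, cj)])
    return ctx

# A 4x4 feature map split into 2x2 windows gives a 2x2 grid of windows.
feat = [[r * 4 + c for c in range(4)] for r in range(4)]
wins = window_partition(feat, 2)
print(len(wins))            # 4
print(wins[(0, 0)])         # [[0, 1], [4, 5]]
print(len(trigram_context(wins, 0, 1)))  # 3
```

In the actual model, each such context would feed a window-level interaction before attention; this sketch only shows the indexing.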
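The reported results use peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). As a hedged illustration of the first metric (not the paper's evaluation code), PSNR for 8-bit images is 10·log10(MAX²/MSE):

```python
import math

def psnr(ref, test, max_val=255.0):
    """PSNR between two equal-sized images given as flat pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Two tiny 2x2 "images" differing by 1 at every pixel (MSE = 1).
print(round(psnr([0, 64, 128, 255], [1, 65, 127, 254]), 2))  # 48.13
```

SSIM additionally compares local luminance, contrast, and structure over sliding windows, which is why the two metrics can rank reconstructions differently.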

Cite this article:

Kong R, Ran YH. Super-resolution Reconstruction of Remote Sensing Image Based on Swin Transformer. Computer Systems & Applications, 2024, 33(9): 85–94.

History
  • Received: 2024-03-05
  • Revised: 2024-04-03
  • Published online: 2024-07-26