Multi-scale Crop Classification Based on High-resolution Images

Authors: Guo Jin, Song Tingqiang, Gong Chuanjiang, Sun Yuanyuan, Ma Xinglu, Fan Haisheng

Funding: Shandong Provincial Key R&D Program (2019GGX101047); Shandong Provincial Natural Science Foundation (ZR2021QC120)

    Abstract:

    Ground images acquired by unmanned aerial vehicle (UAV) platforms have high spatial resolution, but the rich detail they provide also introduces considerable "interference" into crop classification. In particular, when deep models are used for crop recognition, insufficient edge-information extraction and misclassification of similarly textured crops lead to poor classification results. This study therefore builds a model around multi-scale attention feature extraction to extract edge information effectively and improve crop classification accuracy. The proposed multi-scale attention network (MSAT) obtains crop information at different scales within the same level through multi-scale patch embedding. The multi-scale feature maps are mapped into multiple sequences that are fed independently into a factorized attention module, which strengthens attention to crop context and improves the model's extraction of plot edge information. Convolutional relative position encoding built into the factorized attention module enhances the modeling of local information within patches and improves the ability to distinguish similarly textured crops. Finally, both coarse and fine information are captured by fusing local and global features. Classification results on five crops (rice, sugarcane, corn, banana, and citrus) show that MSAT reaches a mean intersection over union (MIoU) of 0.816 and an overall accuracy (OA) of 98.10%, verifying that fine-grained crop classification based on high-resolution images is feasible at a low equipment cost.
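The abstract combines two ideas: embedding the same feature map at several patch sizes to get one token sequence per scale, and a factorized (linear-complexity) attention over each sequence. The sketch below is only an illustration of those two generic techniques, not the paper's implementation; the pooling-based embedding, function names, and shapes are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factorized_attention(q, k, v):
    """Linear-complexity attention: instead of softmax(Q K^T) V,
    aggregate values first with a softmax over the token axis of K,
    then weight by Q -- O(N d^2) rather than O(N^2 d)."""
    n, d = q.shape
    context = softmax(k, axis=0).T @ v      # (d, d) global summary
    return (q / np.sqrt(d)) @ context       # (n, d)

def multi_scale_sequences(feat, patch_sizes=(4, 8)):
    """Embed the same feature map at several patch sizes, producing
    one token sequence per scale (here by simple average pooling)."""
    h, w, c = feat.shape
    seqs = []
    for p in patch_sizes:
        patches = feat[:h - h % p, :w - w % p].reshape(h // p, p, w // p, p, c)
        tokens = patches.mean(axis=(1, 3)).reshape(-1, c)  # (num_patches, c)
        seqs.append(tokens)
    return seqs

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 16, 8))
# One independent attention pass per scale, as the abstract describes.
outs = [factorized_attention(s, s, s) for s in multi_scale_sequences(feat)]
```

Each scale yields its own sequence (16 tokens at patch size 4, 4 tokens at patch size 8 for a 16×16 map), attended to independently; the convolutional relative position encoding mentioned in the abstract is omitted here for brevity.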
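The reported MIoU and OA follow from a standard per-pixel confusion matrix. A minimal NumPy sketch of those two metrics (the function names and toy labels are illustrative, not from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix
    from flattened ground-truth and predicted label maps."""
    idx = num_classes * y_true.astype(int) + y_pred.astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_and_oa(cm):
    """Mean intersection over union and overall accuracy."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)   # guard against absent classes
    return iou.mean(), intersection.sum() / cm.sum()

# Toy 2-class segmentation example
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
cm = confusion_matrix(y_true, y_pred, 2)
miou, oa = miou_and_oa(cm)   # miou = 0.5, oa = 4/6
```

For the paper's five-class setting the same computation runs over all test pixels with `num_classes = 5`.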

Cite this article:

Guo J, Song TQ, Gong CJ, Sun YY, Ma XL, Fan HS. Multi-scale crop classification based on high-resolution images. Computer Systems & Applications, 2023, 32(7): 84–94.
History
  • Received: 2022-12-28
  • Revised: 2023-02-13
  • Published online: 2023-05-22