Received: January 05, 2023    Revised: February 09, 2023
Chinese abstract: Hyperspectral images have many bands with strong inter-band correlation, but their spatial texture and geometric information are weakly expressed. Traditional classification models suffer from insufficient spatial-spectral feature extraction and heavy computation, leaving room for improved classification performance. To address this problem, a multi-scale, multi-resolution attention feature fusion convolutional network based on the wavelet transform (wavelet transform convolutional attention network, WTCAN) is proposed. Following the idea of the wavelet transform, the spectral bands are decomposed four times, and extracting spectral features hierarchically reduces the computational cost. The network includes a spatial information extraction module and introduces a pyramid attention mechanism; through a reverse skip-connection structure, it captures spatial position features at multiple scales and strengthens the expression of spatial texture, effectively remedying the defects of traditional 2D-CNNs such as single-scale feature extraction and the neglect of spatial texture details. The proposed WTCAN model is evaluated on hyperspectral datasets of different spatial resolutions: Indian Pines (IP), WHU_Hi_HanChuan (HanChuan), and WHU_Hi_HongHu (HongHu). Compared with the SVM, 2D-CNN, DBMA, DBDA, and HybridSN models, WTCAN achieves better classification results, with overall accuracies of 98.41%, 99.64%, and 99.67% on the three datasets, which can provide a reference for research on hyperspectral image classification.
Abstract: Hyperspectral images have numerous bands with strong inter-band correlation, but their spatial texture and geometric information are poorly expressed. Traditional classification models extract spatial-spectral features insufficiently and are computationally expensive, so their classification performance needs improvement. To solve this problem, a multi-scale and multi-resolution attention feature fusion convolutional network based on the wavelet transform (WTCAN) is proposed. The idea of the wavelet transform is applied to decompose the spectral bands four times, and the hierarchical extraction of spectral features reduces the amount of computation. The network contains a spatial information extraction module and introduces a pyramid attention mechanism. Through a reverse skip-connection structure, it obtains spatial position features at multiple scales and enhances the expression of spatial texture, effectively remedying the defects of traditional 2D-CNN feature extraction, such as its single scale and its neglect of spatial texture details. The proposed WTCAN model is evaluated on hyperspectral datasets with different spatial resolutions: Indian Pines (IP), WHU_Hi_HanChuan (HanChuan), and WHU_Hi_HongHu (HongHu). Compared with the SVM, 2D-CNN, DBMA, DBDA, and HybridSN models, WTCAN achieves excellent classification results, with overall classification accuracies of 98.41%, 99.64%, and 99.67% on the three datasets respectively, which can provide a valuable reference for research on the classification of hyperspectral images.
Keywords: hyperspectral imagery (HSI) classification; feature extraction; wavelet transform; two-dimensional convolutional neural network (2D-CNN); attention mechanism
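The four-fold hierarchical spectral decomposition described in the abstract can be sketched as follows. This is a minimal illustration only: the Haar filter, the band count, and the stop-at-four-levels scheme are assumptions for demonstration, not the paper's exact implementation.

```python
import numpy as np

def haar_dwt_spectral(cube):
    """One-level Haar DWT along the spectral (last) axis of an H x W x B cube.

    Returns (approximation, detail), each with half the bands.
    """
    even = cube[..., 0::2]
    odd = cube[..., 1::2]
    approx = (even + odd) / np.sqrt(2.0)  # low-frequency spectral summary
    detail = (even - odd) / np.sqrt(2.0)  # high-frequency spectral residual
    return approx, detail

def hierarchical_decompose(cube, n_levels=4):
    """Repeatedly decompose the approximation, mirroring the abstract's
    four-fold spectral decomposition; each level halves the band count,
    which is what reduces the downstream computation."""
    outputs = []
    current = cube
    for _ in range(n_levels):
        current, detail = haar_dwt_spectral(current)
        outputs.append((current, detail))
    return outputs

# Toy cube: 8 x 8 pixels, 144 bands (divisible by 2**4 so all levels fit)
cube = np.random.rand(8, 8, 144).astype(np.float32)
levels = hierarchical_decompose(cube, n_levels=4)
for i, (a, d) in enumerate(levels, 1):
    print(f"level {i}: approximation bands = {a.shape[-1]}")
# band counts halve each level: 72, 36, 18, 9
```

Because the Haar pair is orthonormal, each level preserves the signal energy (the squared sums of approximation and detail add up to the input's), so the hierarchy discards no spectral information while shrinking the representation fed to later layers.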
Funding: Youth Program of the Natural Science Foundation of Shandong Province (ZR2021QC120); Shandong Province Science and Technology R&D Program (2019GGX101047)
Citation:
GONG Chuan-Jiang, ZANG De-Hou, GUO Jin, SUN Yuan-Yuan, SONG Ting-Qiang. Hyperspectral Image Classification Based on Wavelet Convolution Network. COMPUTER SYSTEMS APPLICATIONS, 2023, 32(7): 23-34