Received: August 19, 2022    Revised: September 22, 2022
Abstract: To address the discrepancy between the visible-light and thermal-infrared modalities and to make full use of multimodal information for pedestrian detection, this study proposes a YOLO-based pedestrian detection method with multimodal feature differential attention fusion. The method first extracts the features of each modality with the feature extraction backbone of the YOLOv3 deep neural network. A modality feature differential attention module is then embedded between the corresponding multimodal feature layers to fully mine the difference information between the modalities; an attention mechanism strengthens the representation of the difference features, which improves the quality of feature fusion. The difference information is then fed back to each multimodal feature extraction backbone to improve the network's ability to learn and fuse complementary multimodal information. Next, the multimodal features are fused layer by layer to obtain fused multi-scale features. Finally, detection is performed on these multi-scale feature layers to predict the probability and location of pedestrian targets. Experimental results on the public KAIST and LLVIP multimodal pedestrian detection datasets show that the proposed method effectively addresses the discrepancy between the modalities, makes full use of multimodal information, and achieves high detection accuracy and speed, which makes it valuable for practical application.
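The differential attention feedback described in the abstract — computing the difference between the two modality features, re-weighting it with an attention mechanism, and feeding it back into both branches — can be sketched as follows. This is an illustrative simplification in NumPy under assumed details (a global-average-pooled channel attention with a sigmoid, and a symmetric additive feedback), not the paper's exact module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def differential_attention_fuse(f_vis, f_ir):
    """Sketch of one modality-difference attention step.

    f_vis, f_ir: feature maps of shape (C, H, W) from the visible and
    thermal branches at the same backbone stage.
    """
    # Difference information between the two modalities.
    diff = f_vis - f_ir
    # Hypothetical channel attention: weight each channel by the
    # sigmoid of its globally pooled difference response.
    w = sigmoid(diff.mean(axis=(1, 2)))[:, None, None]   # shape (C, 1, 1)
    # Feed the re-weighted difference back into each branch.
    f_vis_out = f_vis + w * diff
    f_ir_out = f_ir - w * diff
    return f_vis_out, f_ir_out

rng = np.random.default_rng(0)
f_vis = rng.standard_normal((8, 4, 4))
f_ir = rng.standard_normal((8, 4, 4))
out_vis, out_ir = differential_attention_fuse(f_vis, f_ir)
print(out_vis.shape, out_ir.shape)  # (8, 4, 4) (8, 4, 4)
```

With this symmetric form, the feedback redistributes the difference between the two branches while leaving their sum unchanged; in the actual method, the attention module is learned, and each strengthened difference map is injected back into the corresponding backbone stage before hierarchical fusion.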
Citation:
WANG Zhao, XIE Wen-Bin, WEN Jiang. Pedestrian Detection Based on Multimodal Feature Differential Attention Fusion and YOLO. COMPUTER SYSTEMS APPLICATIONS, 2023, 32(4): 329-338