Cross-modality Person Re-identification Based on Attention Feature Fusion
Abstract:

Cross-modality person re-identification, which aims to match visible-light and infrared images of the same person, is widely used in intelligent security surveillance systems. Owing to the inherent discrepancy between the visible and infrared modalities, the task remains challenging in practice. Researchers have proposed many effective methods to alleviate this modality gap; however, existing methods extract features from the two modalities without incorporating the corresponding modality information, so the resulting features lack discriminability. To improve the discriminability of the extracted features, this study proposes a cross-modality person re-identification method based on attention feature fusion. By designing an efficient feature extraction network and an attention feature fusion module, and by jointly optimizing multiple loss functions, the method fuses and aligns information from the two modalities, thereby improving matching accuracy. Experimental results show that the proposed method achieves strong performance on multiple datasets.
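    The page does not include the authors' code, so the following is only a minimal sketch of what an attention feature fusion block of this kind might look like in PyTorch. The module name AttentionFeatureFusion, the channel-attention design, and all tensor dimensions are illustrative assumptions, not the paper's actual implementation.

    # Minimal sketch (assumed, not the authors' code): fuse visible and infrared
    # feature maps with channel attention so that each fused channel is re-weighted
    # according to both modalities.
    import torch
    import torch.nn as nn

    class AttentionFeatureFusion(nn.Module):
        """Fuse visible and infrared feature maps using channel attention."""

        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Channel-attention branch: squeeze spatial dims, then predict
            # per-channel weights from the concatenated two-modality features.
            self.attention = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(2 * channels, (2 * channels) // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d((2 * channels) // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            # 1x1 convolution to project the concatenated features back to `channels`.
            self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

        def forward(self, visible: torch.Tensor, infrared: torch.Tensor) -> torch.Tensor:
            # Concatenate modality-specific features along the channel dimension.
            joint = torch.cat([visible, infrared], dim=1)
            # Attention weights decide how much each fused channel contributes.
            weights = self.attention(joint)
            fused = self.project(joint)
            return fused * weights

    if __name__ == "__main__":
        vis = torch.randn(2, 256, 24, 8)   # visible-modality feature map
        ir = torch.randn(2, 256, 24, 8)    # infrared-modality feature map
        fusion = AttentionFeatureFusion(channels=256)
        print(fusion(vis, ir).shape)       # torch.Size([2, 256, 24, 8])

    In this sketch the fused representation would feed the identity classifier and the multiple loss terms mentioned in the abstract; the exact losses and backbone are not specified on this page.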

Citation

Deng Shuya, Li Haoyuan. Cross-modality Person Re-identification Based on Attention Feature Fusion. Computer Systems & Applications, 2024, 33(9): 269-275.

History
  • Received: March 04, 2024
  • Revised: April 03, 2024
  • Online: July 24, 2024