Computer Systems & Applications, 2020, 29(1): 164-170
Cross-View Gait Feature Extraction Using Generative Adversarial Networks
(College of Computer and Information, Hohai University, Nanjing 211100, China)
Received: June 25, 2019    Revised: July 16, 2019
Abstract (translated from Chinese): Gait is a biometric feature that enables identity recognition at a distance and without subject cooperation. In real-world scenarios, however, gait is easily affected by factors such as camera view angle, walking environment, occlusion, and clothing. For the cross-view recognition problem, existing methods focus only on transforming gait templates from multiple views to a fixed view, and transformation errors accumulate as the view-angle span grows. To extract gait features effective for cross-view gait recognition, this paper proposes a cross-view gait feature extraction method based on generative adversarial networks. By training only one model, the method can transform a gait template to the normal walking state at any view angle while preserving the original identity information to the greatest extent, thereby improving the accuracy of gait recognition. Experimental results on the CASIA-B and OUMVLP datasets show that the method is robust and feasible for cross-view gait recognition.
Abstract: Gait is a biometric feature that enables identity recognition at a long distance and without subject cooperation. However, the performance of gait recognition can be adversely affected by many factors, such as view angle, walking environment, occlusion, and clothing. For cross-view gait recognition, existing methods focus on transforming gait templates to a specific view angle, which can accumulate transformation error over large view-angle variations. To extract view-invariant gait features, we propose a method based on generative adversarial networks. In the proposed method, a gait template can be transformed to any view angle and to the normal walking state by training only one model. At the same time, the method preserves identity information to the greatest extent, improving the accuracy of gait recognition. Experiments on the CASIA-B and OUMVLP datasets indicate that, compared with several published approaches, the proposed method achieves competitive performance and is more robust and interpretable for cross-view gait recognition.
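The core idea in the abstract — a single generator, conditioned on a target-view code, that maps a gait template to that view — can be sketched as follows. This is a minimal illustrative toy, not the paper's network: the template size, the one-hot view encoding, and the single linear "generator" layer are all stand-in assumptions (the actual method uses a deep adversarially trained model).

```python
import numpy as np

# Toy sketch of a view-conditioned generator for gait templates.
# CASIA-B provides 11 view angles (0 to 180 degrees in 18-degree steps);
# the 32x32 template size and the linear layer are illustrative only.
N_VIEWS = 11
TEMPLATE_DIM = 32 * 32

rng = np.random.default_rng(0)
# One toy linear layer standing in for the generator network.
W = rng.standard_normal((TEMPLATE_DIM + N_VIEWS, TEMPLATE_DIM)) * 0.01

def transform_view(template, target_view):
    """Map a flattened gait template to a hypothesized target view.

    template: (TEMPLATE_DIM,) array with pixel values in [0, 1]
    target_view: int in [0, N_VIEWS), index of the desired view angle
    """
    view_code = np.zeros(N_VIEWS)
    view_code[target_view] = 1.0              # condition on the target view
    x = np.concatenate([template, view_code])
    out = x @ W                               # toy "generator" forward pass
    return 1.0 / (1.0 + np.exp(-out))         # squash outputs back to [0, 1]

gei = rng.random(TEMPLATE_DIM)                # stand-in for a real Gait Energy Image
synthesized = transform_view(gei, target_view=5)   # e.g. the 90-degree view
print(synthesized.shape)                      # (1024,)
```

Because the view code is an input rather than a fixed target, the same model serves every source/target view pair, which is what lets the method avoid chaining view-to-view transformations and accumulating error.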
Citation:
QIN Yue-Hong, WANG Min. Cross-View Gait Feature Extraction Using Generative Adversarial Networks. Computer Systems & Applications, 2020, 29(1): 164-170.