Cross-View Gait Feature Extraction Using Generative Adversarial Networks
    Abstract:

    Gait is a biometric feature that can be used to recognize identity at a long distance and non-intrusively. However, the performance of gait recognition can be adversely affected by many factors, such as view angle, walking environment, occlusion, and clothing. For cross-view gait recognition, existing methods focus on transforming gait templates to a specific view angle, which may accumulate transformation error under large view-angle variations. To extract view-invariant gait features, we propose a method based on generative adversarial networks. In the proposed method, a gait template can be transformed to any view angle and to the normal walking state by training only one model. At the same time, the method preserves identity information to the greatest extent, improving the accuracy of gait recognition. Experiments on the CASIA-B and OUMVLP datasets indicate that, compared with several published approaches, the proposed method achieves competitive performance and is more robust and interpretable for cross-view gait recognition.
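    The gait templates mentioned in the abstract are typically gait energy images (GEIs, reference [14]), obtained by averaging aligned binary silhouettes over a gait cycle, and the "one model for all views" design presumably follows StarGAN-style target-label conditioning (reference [11]), where a one-hot view label is tiled spatially and concatenated to the input as extra channels. A minimal sketch of both building blocks, with numpy standing in for the actual network input pipeline (function names are illustrative, not from the paper):

    ```python
    import numpy as np

    def gait_energy_image(silhouettes):
        """Average a sequence of aligned binary silhouettes (T, H, W) into a GEI."""
        frames = np.asarray(silhouettes, dtype=np.float64)
        return frames.mean(axis=0)  # pixel value = fraction of frames where pixel is foreground

    def condition_on_view(gei, target_view, num_views):
        """Attach a one-hot target-view label as extra channels, StarGAN-style,
        so a single generator can map a GEI to any requested view angle."""
        h, w = gei.shape
        label_maps = np.zeros((num_views, h, w))
        label_maps[target_view] = 1.0  # the channel for the target view is all ones
        return np.concatenate([gei[np.newaxis], label_maps], axis=0)  # (1 + num_views, H, W)
    ```

    With, say, the 11 view angles of CASIA-B, the generator would then consume a `(12, H, W)` tensor per sample and be trained adversarially to output the GEI of the same subject at the requested view.
    
    
    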

    Reference
    [1] He YW, Zhang JP. Deep learning for gait recognition: A survey. Pattern Recognition and Artificial Intelligence, 2018, 31(5): 442-452. (in Chinese)
    [2] Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada. 2014. 2672-2680.
    [3] Shiraga K, Makihara Y, Muramatsu D, et al. GEINet: View-invariant gait recognition using a convolutional neural network. Proceedings of 2016 International Conference on Biometrics. Halmstad, Sweden. 2016. 1-8.
    [4] Wu ZF, Huang YZ, Wang L, et al. A comprehensive study on cross-view gait based human identification with deep CNNs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2):209-226. doi:10.1109/TPAMI.2016.2545669
    [5] Liao RJ, Cao CS, Garcia EB, et al. Pose-based temporal-spatial network (PTSN) for gait recognition with carrying and clothing variations. Proceedings of the 12th Chinese Conference on Biometric Recognition. Shenzhen, China. 2017. 474-483.
    [6] Feng Y, Li YC, Luo JB. Learning effective gait features using LSTM. Proceedings of the 2016 23rd International Conference on Pattern Recognition. Cancun, Mexico. 2016. 325-330.
    [7] Makihara Y, Sagawa R, Mukaigawa Y, et al. Gait recognition using a view transformation model in the frequency domain. Proceedings of the 9th European Conference on Computer Vision. Graz, Austria. 2006. 151-163.
    [8] Zheng S, Zhang JG, Huang KQ, et al. Robust view transformation model for gait recognition. Proceedings of the 2011 18th IEEE International Conference on Image Processing. Brussels, Belgium. 2011. 2073-2076.
    [9] Bashir K, Xiang T, Gong SG. Cross-view gait recognition using correlation strength. Proceedings of 2010 British Machine Vision Conference. London, UK. 2010. 1-11.
    [10] Yu SQ, Wang Q, Shen LL, et al. View invariant gait recognition using only one uniform model. Proceedings of the 2016 23rd International Conference on Pattern Recognition. Cancun, Mexico. 2016. 889-894.
    [11] Choi Y, Choi M, Kim M, et al. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT, USA. 2018. 8789-8797.
    [12] Yu SQ, Chen HF, Reyes EBG, et al. GaitGAN: Invariant gait feature extraction using generative adversarial networks. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu, HI, USA. 2017. 532-539.
    [13] He YW, Zhang JP, Shan HM, et al. Multi-task GANs for view-specific feature learning in gait recognition. IEEE Transactions on Information Forensics and Security, 2019, 14(1):102-113. doi:10.1109/TIFS.2018.2844819
    [14] Han J, Bhanu B. Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(2):316-322. doi:10.1109/TPAMI.2006.38
    [15] Yu SQ, Tan DL, Tan TN. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition. Proceedings of the 18th International Conference on Pattern Recognition. Hong Kong, China. 2006. 441-444.
    [16] Takemura N, Makihara Y, Muramatsu D, et al. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 2018, 10(1):4. doi:10.1186/s41074-018-0039-6
Get Citation

Qin YH, Wang M. Cross-view gait feature extraction based on generative adversarial networks. Computer Systems & Applications, 2020, 29(1): 164-170. (in Chinese)
History
  • Received: June 25, 2019
  • Revised: July 16, 2019
  • Online: December 30, 2019
  • Published: January 15, 2020