Abstract: Gait is a biometric trait that enables identification at a long distance and without intrusion. However, the performance of gait recognition can be adversely affected by many factors, such as view angle, walking environment, occlusion, and clothing. Existing cross-view methods focus on transforming gait templates to one specific view angle, which may accumulate transformation errors under large view-angle variations. To extract view-invariant gait features, we propose a method based on generative adversarial networks (GANs). In the proposed method, a gait template can be transformed to any view angle and to the normal walking condition by training only one model. At the same time, the method preserves identity information to the greatest extent, improving the accuracy of gait recognition. Experiments on the CASIA-B and OUMVLP datasets indicate that, compared with several published approaches, the proposed method achieves competitive performance and is more robust and interpretable for cross-view gait recognition.
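To make the core idea concrete, the sketch below shows one plausible shape for a view-conditioned generator: an encoder-decoder network that takes a gait template (e.g., a 64x64 gait energy image) together with a target-view label and outputs the transformed template, so a single model covers all view angles. This is a minimal illustration under assumed design choices; the class name `ViewTransformGenerator`, the layer sizes, and the use of a learned view embedding are hypothetical and not taken from the paper, and the discriminator and identity-preservation losses are omitted.

```python
import torch
import torch.nn as nn

class ViewTransformGenerator(nn.Module):
    """Hypothetical encoder-decoder generator: maps a gait template plus a
    target-view condition to the template rendered at that view."""
    def __init__(self, num_views=11):  # 11 views as in CASIA-B (assumption)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 16 -> 8
        )
        # Inject the target view as a learned embedding broadcast over the feature map,
        # so one set of weights handles every view angle.
        self.view_embed = nn.Embedding(num_views, 128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),    # 32 -> 64
        )

    def forward(self, gei, target_view):
        feat = self.encoder(gei)                               # (B, 128, 8, 8)
        cond = self.view_embed(target_view)[:, :, None, None]  # (B, 128, 1, 1)
        cond = cond.expand(-1, -1, feat.size(2), feat.size(3)) # tile over spatial dims
        return self.decoder(torch.cat([feat, cond], dim=1))

# Usage: transform a batch of dummy templates to one target view.
gen = ViewTransformGenerator(num_views=11)
gei = torch.randn(4, 1, 64, 64)                 # placeholder gait energy images in [-1, 1]
target = torch.full((4,), 5, dtype=torch.long)  # hypothetical index of the target view
fake = gen(gei, target)
print(fake.shape)  # torch.Size([4, 1, 64, 64])
```

Conditioning the generator on the target view, rather than training one view-to-view transformer per angle pair, is what lets a single model map between arbitrary views without chaining transformations and accumulating error.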