Abstract: Person re-identification (ReID) is easily disturbed by pose variation, which causes loss of person information and appearance changes that exceed identity differences, making it challenging for existing ReID methods to learn robust person features. To address this problem, we propose a generative adversarial network based on variational inference and reinforcement learning (RL-VGAN). The core idea of the proposed method is to disentangle person attributes into appearance features and pose features via separate appearance and pose encoders, so that robust identity-related features are learned without interference from pose changes. First, the designed variational generative network leverages a Kullback-Leibler divergence loss to strengthen the appearance encoder in inferring identity-related continuous latent variables. Second, we use reinforcement learning to balance the performance of the generative and discriminative networks during training. Third, for the pose-guided generation task, a novel Inception Score loss is designed to evaluate image synthesis quality in the variational generative network. Experimental results demonstrate the superiority of the proposed RL-VGAN over other methods on benchmark datasets.
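The Kullback-Leibler divergence loss mentioned above is the standard variational-inference penalty that pulls the appearance encoder's posterior toward a prior. A minimal NumPy sketch, assuming a diagonal Gaussian posterior N(mu, sigma^2) and a standard normal prior N(0, I); the function name and tensor shapes are illustrative, not taken from the paper:

```python
import numpy as np

def kl_divergence_loss(mu, logvar):
    """Closed-form KL divergence between a diagonal Gaussian N(mu, sigma^2)
    and the standard normal prior N(0, I), averaged over the batch.

    mu, logvar: arrays of shape (batch, latent_dim) predicted by the
    appearance encoder (hypothetical interface for illustration).
    """
    # Per-dimension KL: 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2);
    # sum over latent dimensions, then average over the batch.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)
    return kl.mean()

# Example: a batch of 4 latent codes with 8 dimensions each.
rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))
logvar = rng.normal(size=(4, 8))
loss = kl_divergence_loss(mu, logvar)
```

When the posterior exactly matches the prior (mu = 0, logvar = 0) the loss is zero, and it grows as the encoded appearance distribution drifts away from the prior, which is what regularizes the identity-related latent space.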