Abstract: Existing trajectory generation methods based on generative adversarial imitation learning (GAIL) mostly model human movement patterns as a Markov decision process (MDP). With limited training data, it is difficult to learn the latent relationship between action selection and locations, and the distance constraints between locations are not considered in the state transition function, so the quality of the generated trajectories suffers. To address this, this study proposes a trajectory generation method based on GAIL. The method first incorporates prior knowledge of the location-dependent action distribution into the generator, helping the model capture how actions change at a specific location and guiding it toward a policy function that matches real-world behavior. In addition, distance constraints are introduced into the state transition function to ensure that the generated trajectories are physically plausible. Experiments on two real-world datasets show that the proposed method achieves a Rank index of 0.0268, a 39% improvement over the best baseline, and 6% higher accuracy on the next-location prediction task.
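To illustrate the distance-constraint idea mentioned in the abstract, the sketch below masks out next-location candidates that lie beyond a distance threshold before renormalizing the policy's transition probabilities. This is a minimal illustrative assumption, not the paper's implementation: the function name, the Euclidean distance metric (standing in for geographic distance), and the threshold parameter are all hypothetical.

```python
import numpy as np

def masked_transition_probs(logits, coords, current_idx, max_dist):
    """Illustrative sketch: renormalize a policy's next-location logits
    after masking candidates farther than max_dist from the current
    location, so sampled transitions respect a distance constraint."""
    # Euclidean distance as a stand-in for real geographic distance.
    dists = np.linalg.norm(coords - coords[current_idx], axis=1)
    # Unreachable candidates get probability zero via a -inf logit.
    masked = np.where(dists <= max_dist, logits, -np.inf)
    # Numerically stable softmax over the surviving candidates.
    masked -= masked.max()
    probs = np.exp(masked)
    return probs / probs.sum()

# Toy example: 4 locations on a plane, agent currently at location 0.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
logits = np.array([0.1, 2.0, 1.0, 3.0])
probs = masked_transition_probs(logits, coords, current_idx=0, max_dist=2.0)
# Location 3 is ~7.07 away, so it receives zero probability even
# though it has the highest raw logit.
```

The design choice here is hard masking before the softmax: constrained candidates are excluded entirely rather than merely down-weighted, which guarantees that no generated transition violates the distance bound.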