Abstract: With wearable devices entering daily life on a large scale, human behavior recognition based on time-series data generated by motion sensors has become a research hotspot in this field. However, current methods cannot capture the temporal and spatial relationships among data from multiple sensors. In addition, when a traditional neural network learns a new task, the new task's parameters overwrite those of old tasks, causing catastrophic forgetting. To this end, this study proposes a human behavior recognition algorithm that fuses a graph attention network with a generative-replay continual learning mechanism. The algorithm extracts features through a convolutional neural network and a graph attention network, enabling the model to attend to temporal and spatial features simultaneously. In addition, the algorithm adopts an episodic-memory continual learning method based on a generative data replay strategy, which remembers historical data distributions through a conditional variational autoencoder, to address the catastrophic forgetting problem. Finally, experiments against different baseline algorithms on multiple public datasets show that the proposed algorithm achieves higher accuracy while mitigating catastrophic forgetting more effectively.
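The generative-replay mechanism described above can be illustrated with a minimal sketch. Here a class-conditional Gaussian sampler stands in for the paper's conditional variational autoencoder; it memorizes the distribution of old-task data and, when a new task arrives, mixes replayed pseudo-samples with the new data so old classes are not overwritten. All class names, dimensions, and data below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianReplayGenerator:
    """Stand-in for a conditional VAE: memorizes per-class mean/std of
    old-task feature vectors and samples pseudo-data from them."""
    def __init__(self):
        self.stats = {}  # label -> (mean, std)

    def fit(self, X, y):
        # Record a simple class-conditional distribution of the old data.
        for label in np.unique(y):
            Xc = X[y == label]
            self.stats[label] = (Xc.mean(axis=0), Xc.std(axis=0) + 1e-6)

    def sample(self, label, n):
        # Generate pseudo-samples that mimic the remembered distribution.
        mean, std = self.stats[label]
        return rng.normal(mean, std, size=(n, mean.shape[0]))

# Old task: two synthetic classes of 8-dimensional sensor features.
X_old = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
                   rng.normal(3.0, 1.0, (100, 8))])
y_old = np.array([0] * 100 + [1] * 100)

gen = GaussianReplayGenerator()
gen.fit(X_old, y_old)

# New task arrives with a previously unseen class. Instead of training
# on it alone (which would cause catastrophic forgetting), build a
# mixed batch of real new-task data and replayed pseudo-samples.
X_new = rng.normal(-3.0, 1.0, (100, 8))
y_new = np.array([2] * 100)

replay_X = np.vstack([gen.sample(0, 50), gen.sample(1, 50)])
replay_y = np.array([0] * 50 + [1] * 50)

X_batch = np.vstack([X_new, replay_X])
y_batch = np.concatenate([y_new, replay_y])
```

In the paper's setting the generator is a trained conditional variational autoencoder rather than this Gaussian stand-in, but the training loop has the same shape: sample replayed data conditioned on old labels, concatenate it with new-task data, and train the classifier on the mixture.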