Abstract: Electroencephalogram (EEG) generation via generative adversarial networks (GANs) suffers from several issues, including invariant features in the generated samples, large amplitude differences, and slow fitting speeds. The quality of the generated signals therefore fails to meet the requirements of deep-learning model training and optimization. To address these issues, this study optimizes the Wasserstein GAN with gradient penalty (WGAN-GP) so that it performs better in EEG generation. The details are as follows: (1) within the WGAN-GP framework, the convolutional neural network (CNN) is replaced by a long short-term memory (LSTM) network to preserve time-dependent features and thereby resolve the invariant-feature issue; (2) real EEGs are normalized before being fed to the discriminator to reduce the amplitude differences; (3) the noisy parts of EEGs are fed to the generator as prior knowledge to increase the fitting speed of the generation model. Sliced Wasserstein distance (SWD), mode score (MS), and EEGNet are applied to evaluate the proposed generation model quantitatively and hierarchically. Compared with the baseline WGAN-GP, the proposed model generates data closer to their real counterparts.
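To make the described setup concrete, the sketch below shows a standard WGAN-GP gradient-penalty term paired with LSTM-based generator and critic modules, as the abstract describes. It is a minimal illustration in PyTorch, not the authors' implementation: the class and function names (LSTMGenerator, LSTMCritic, gradient_penalty), the tensor shapes, the hidden sizes, and the penalty weight lam=10 are all assumptions chosen for readability.

```python
import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    """Hypothetical LSTM generator: maps a noise sequence to an EEG-like sequence."""
    def __init__(self, noise_dim=32, hidden_dim=64, out_channels=1):
        super().__init__()
        self.lstm = nn.LSTM(noise_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_channels)

    def forward(self, z):                  # z: (batch, seq_len, noise_dim)
        h, _ = self.lstm(z)
        return self.proj(h)                # (batch, seq_len, out_channels)

class LSTMCritic(nn.Module):
    """Hypothetical LSTM critic (discriminator) scoring whole sequences."""
    def __init__(self, in_channels=1, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, x):                  # x: (batch, seq_len, in_channels)
        h, _ = self.lstm(x)
        return self.score(h[:, -1])        # scalar critic score per sequence

def gradient_penalty(critic, real, fake, lam=10.0):
    """Standard WGAN-GP penalty computed on interpolates of real and fake samples."""
    # Real EEGs would be normalized before reaching the critic, e.g.
    # real = (real - real.mean()) / (real.std() + 1e-8)   # illustrative only
    alpha = torch.rand(real.size(0), 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(scores.sum(), interp, create_graph=True)[0]
    return lam * ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```

Under this sketch, the noise-as-prior idea in point (3) would correspond to building the generator input z from noisy EEG segments rather than pure random noise; the exact construction is not specified in the abstract.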