Abstract: In most Chinese named entity recognition models, language preprocessing focuses only on the vector representations of individual words and characters while ignoring the semantic relationships between them, and therefore fails to handle polysemy. The Transformer feature-extraction model improves natural language understanding through parallel computation and long-distance modeling, but its fully connected structure makes its computational complexity quadratic in the input length, which limits its effectiveness and efficiency on Chinese named entity recognition. To address these problems, a Chinese named entity recognition method based on the BERT-Star-Transformer-TextCNN-CRF (BSTTC) model is proposed. First, a BERT model pre-trained on a large-scale corpus dynamically generates the word vector sequence according to the input context. Then, the Star-Transformer-TextCNN model further extracts sentence features. Finally, the feature vector sequence is fed into the CRF model to obtain the prediction result. Experimental results on the MSRA Chinese corpus show that the precision, recall, and F1 score of this model are all higher than those of existing models. Moreover, its training time is 65% shorter than that of the corresponding model built on the standard Transformer.
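
To make the pipeline concrete, the following is a minimal PyTorch sketch of the BSTTC architecture described above. It is an illustration under stated assumptions, not the authors' implementation: the `bert-base-chinese` checkpoint, the `pytorch-crf` package, all hyperparameters (hidden size, attention heads, convolution kernel sizes), and the approximation of the Star-Transformer by a standard Transformer layer whose attention is masked to a star topology (one relay node plus a ring of satellite nodes) are assumptions introduced here for illustration.

```python
# Minimal sketch of the BSTTC pipeline: BERT -> Star-Transformer -> TextCNN -> CRF.
# Assumptions (not from the paper): bert-base-chinese weights, the pytorch-crf
# package for the CRF layer, and a masked standard Transformer layer standing in
# for the Star-Transformer's star-shaped attention topology.
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf


def star_attention_mask(seq_len: int) -> torch.Tensor:
    """Boolean mask (True = blocked) over [relay] + seq_len satellite nodes.

    The relay node attends to every position; each satellite attends only to
    itself, its ring neighbours, and the relay. This star topology is what
    keeps attention linear in sequence length instead of quadratic.
    """
    n = seq_len + 1                      # position 0 is the relay node
    mask = torch.ones(n, n, dtype=torch.bool)
    mask[0, :] = False                   # relay sees all nodes
    mask[:, 0] = False                   # all nodes see the relay
    for i in range(1, n):                # ring connections between satellites
        for j in (i - 1, i, i + 1):
            if 1 <= j < n:
                mask[i, j] = False
    return mask


class BSTTC(nn.Module):
    def __init__(self, num_tags: int, hidden: int = 768, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")
        self.relay = nn.Parameter(torch.zeros(1, 1, hidden))  # learned relay node
        self.star = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True)
        # TextCNN: parallel 1-D convolutions over the token dimension.
        out_ch = hidden // len(kernel_sizes)
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, out_ch, k, padding=k // 2) for k in kernel_sizes)
        self.proj = nn.Linear(out_ch * len(kernel_sizes), num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, input_ids, attention_mask):
        # Contextual character/word vectors from pre-trained BERT.
        x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        relay = self.relay.expand(x.size(0), -1, -1)
        x = torch.cat([relay, x], dim=1)             # prepend relay node
        star_mask = star_attention_mask(input_ids.size(1)).to(x.device)
        x = self.star(x, src_mask=star_mask)
        x = x[:, 1:, :]                              # drop relay, keep satellites
        # Multi-scale local features, concatenated along the channel dimension.
        x = torch.cat([conv(x.transpose(1, 2)) for conv in self.convs], dim=1)
        return self.proj(torch.relu(x).transpose(1, 2))  # (batch, seq, num_tags)

    def loss(self, input_ids, attention_mask, tags):
        emissions = self._emissions(input_ids, attention_mask)
        return -self.crf(emissions, tags, mask=attention_mask.bool())

    def decode(self, input_ids, attention_mask):
        emissions = self._emissions(input_ids, attention_mask)
        return self.crf.decode(emissions, mask=attention_mask.bool())
```

In this masked formulation, each satellite token attends only to its ring neighbours and the relay node, while the relay summarizes the whole sentence; this is the design choice that replaces the full Transformer's quadratic attention with cost linear in sequence length, which is the source of the training-time reduction reported above.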