Abstract: While natural language generation (NLG)-based large language models, exemplified by ChatGPT, perform well on many natural language processing tasks, their performance on sequence labeling tasks such as named entity recognition still lags behind that of deep learning models based on bidirectional encoder representations from Transformers (BERT). To address this issue, this study first reformulates the Chinese named entity recognition problem as a machine reading comprehension problem, and then proposes a new named entity recognition method based on in-context learning and fine-tuning, enabling NLG-based language models to achieve good results on named entity recognition without modifying the pre-trained parameters of the base model. Moreover, because named entities are generated by the model rather than extracted by classifying tokens in the original text, entity boundary issues do not arise. To verify the effectiveness of the new framework on named entity recognition tasks, experiments are conducted on several Chinese named entity recognition datasets. On the Resume and Weibo datasets, the F1 scores reach 96.04% and 67.87%, respectively, gains of 0.4 and 2.7 percentage points over state-of-the-art models, confirming that the new framework can effectively exploit the text generation strengths of NLG-based language models to perform named entity recognition.
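To make the reformulation concrete, the following minimal Python sketch illustrates one plausible way to cast NER as machine reading comprehension with in-context demonstrations: each entity type is turned into a natural-language query, a few worked examples are prepended, and a generative model answers with the entity strings directly. The query templates, demonstration format, and function names here are illustrative assumptions, not the paper's exact prompts.

```python
# Illustrative sketch (assumed templates, not the paper's exact prompts):
# recast Chinese NER as machine reading comprehension, where each entity
# type becomes a query and the generative model emits the entity mentions
# instead of classifying token spans.

QUERY_TEMPLATES = {
    "PER": "找出下文中提到的所有人名：",   # "Find all person names mentioned below:"
    "ORG": "找出下文中提到的所有机构名：",  # "Find all organization names mentioned below:"
    "LOC": "找出下文中提到的所有地名：",    # "Find all location names mentioned below:"
}

def build_mrc_prompt(sentence: str, entity_type: str,
                     demonstrations: list[tuple[str, str]]) -> str:
    """Assemble an in-context-learning prompt: a few (sentence, answer)
    demonstrations followed by the target sentence and entity-type query."""
    query = QUERY_TEMPLATES[entity_type]
    parts = []
    for demo_sentence, demo_answer in demonstrations:
        parts.append(f"{query}\n文本：{demo_sentence}\n答案：{demo_answer}")
    # The target sentence ends with an open answer slot for the model to fill.
    parts.append(f"{query}\n文本：{sentence}\n答案：")
    return "\n\n".join(parts)

if __name__ == "__main__":
    demos = [("马云在杭州创办了阿里巴巴。", "马云")]
    prompt = build_mrc_prompt("雷军是小米公司的创始人。", "PER", demos)
    print(prompt)
    # A generative model completing this prompt would emit the entity
    # string directly (e.g. "雷军"), so no span-boundary decoding is needed.
```

Because the model generates the answer string rather than labeling character positions, the boundary-detection step of classification-based taggers is bypassed, which is the property the abstract highlights.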