Computer Systems & Applications, 2022, 31(3): 212-219
Multi-task Hierarchical Fine-tuning Model Toward Machine Reading Comprehension
DING Mei-Rong, LIU Hong-Ye, XU Ma-Yi, GONG Si-Yu, CHEN Xiao-Min, ZENG Bi-Qing
(School of Software, South China Normal University, Foshan 528225, China)
Received: May 31, 2021    Revised: July 07, 2021
Abstract: Machine reading comprehension and question answering have long been considered core problems of natural language understanding, requiring models to select the best answer to a given question from a given passage. With the rise of pre-trained language models such as BERT, great breakthroughs have been made in many natural language processing (NLP) tasks, yet complex reading comprehension tasks still expose shortcomings. To address this, this paper proposes a machine reading comprehension model based on a retrospective reader. The model uses the pre-trained RoBERTa model to encode questions and passages and divides the reading comprehension component into two modules: a word-level intensive reading module and a sentence-level comprehensive reading module. The two modules capture the semantic information of passages and questions at two different granularities, and their predicted answers are finally combined into the output with the highest probability. On the CAIL2020 dataset, the model reaches a joint F1 score of 66.15%, an improvement of 5.38% over the RoBERTa baseline, and ablation experiments confirm the model's effectiveness.
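The implementation details are in the full text; as a rough illustration of the architecture the abstract describes, the sketch below shows a shared RoBERTa encoder feeding a word-level span-prediction head (intensive reading) and a sentence-level scoring head (comprehensive reading). This is a minimal sketch based only on the abstract: the class name, the head designs, the `sent_start_positions` input, and the checkpoint name are all assumptions, not the authors' code.

```python
# Hypothetical sketch of a two-granularity reader on top of RoBERTa.
# Module names and head designs are assumptions inferred from the abstract,
# not the authors' released implementation.
import torch
import torch.nn as nn
from transformers import AutoModel


class HierarchicalReader(nn.Module):
    def __init__(self, model_name: str = "roberta-base"):  # placeholder checkpoint
        super().__init__()
        # Shared encoder over the concatenated question-passage pair.
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Word-level "intensive reading" head: start/end span logits per token.
        self.span_head = nn.Linear(hidden, 2)
        # Sentence-level "comprehensive reading" head: one score per sentence.
        self.sent_head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, sent_start_positions):
        # hidden_states: (batch, seq_len, hidden)
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state

        # Word level: per-token start/end logits, each (batch, seq_len).
        start_logits, end_logits = self.span_head(hidden_states).split(1, dim=-1)

        # Sentence level: score the hidden state at each sentence's first
        # token; sent_start_positions is (batch, num_sents).
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(-1)
        sent_states = hidden_states[batch_idx, sent_start_positions]
        sent_logits = self.sent_head(sent_states).squeeze(-1)

        return start_logits.squeeze(-1), end_logits.squeeze(-1), sent_logits
```

Under this assumed setup, multi-task fine-tuning would sum a cross-entropy loss over the start/end positions with a binary cross-entropy over the sentence scores, which is one common way to combine the two granularities the abstract describes.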
Funding: National Natural Science Foundation of China (61876067); Special Project in Key Fields of Artificial Intelligence of Guangdong Regular Universities (2019KZDZX1033); Special Fund for the Construction of the Guangdong Provincial Key Laboratory of Cyber-Physical Systems (2020B1212060069)
Citation:
DING Mei-Rong, LIU Hong-Ye, XU Ma-Yi, GONG Si-Yu, CHEN Xiao-Min, ZENG Bi-Qing. Multi-task Hierarchical Fine-tuning Model Toward Machine Reading Comprehension. Computer Systems & Applications, 2022, 31(3): 212-219