Text Summarization Quality Evaluation Based on Large Language Model
Author: Tan Chenhan, Jia Kebin, Wang Haoyu
Affiliation:

Fund Project: Beijing Natural Science Foundation (4212001)

    Abstract:

    Automatic text summarization is an important branch of natural language processing (NLP), and one of its main difficulties lies in evaluating the quality of generated summaries quickly, objectively, and accurately. To address the low evaluation accuracy, dependence on reference texts, and heavy computing-resource consumption of existing methods for evaluating text summary quality, this study proposes an evaluation method based on large language models (LLMs). A prompt construction method based on the chain-of-thought principle is designed to improve the performance of LLMs on the summary quality evaluation task; at the same time, a chain-of-thought dataset is generated and a small-scale LLM is trained through fine-tuning, which significantly reduces computing requirements. The proposed method first determines the evaluation dimensions according to the characteristics of text summaries and constructs prompts based on the chain-of-thought principle. The prompts are then used to guide a large-scale LLM to generate chain-of-thought reasoning and evaluation results for summary samples, from which a chain-of-thought dataset is built. This dataset is used to fine-tune a small-scale LLM, and the fine-tuned model finally performs the summary quality evaluation task. Comparative experiments and analyses on the SummEval dataset show that the proposed method significantly improves the evaluation accuracy of small-scale LLMs on the text summary quality evaluation task, yielding a reference-free evaluation method with high accuracy, low computing requirements, and easy deployment.
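    As a minimal sketch of the pipeline described in the abstract, the Python code below illustrates how a chain-of-thought evaluation prompt might be constructed for each evaluation dimension and how a large-scale (teacher) model's reasoning and score could be collected into a fine-tuning dataset. The four dimensions follow the standard SummEval annotation scheme and are assumed here; the prompt wording, the `query_teacher_model` callable, and the JSONL record schema are hypothetical placeholders, not the authors' actual implementation.

```python
import json

# Assumed evaluation dimensions; SummEval annotates summaries on these four axes.
DIMENSIONS = ["coherence", "consistency", "fluency", "relevance"]

# Hypothetical chain-of-thought prompt template for reference-free evaluation:
# the model reasons step by step before committing to a 1-5 score.
COT_PROMPT = """You are a strict evaluator of text summaries.
Source document:
{document}

Candidate summary:
{summary}

Evaluate the summary on the dimension "{dimension}" (1 = worst, 5 = best).
First reason step by step about how well the summary satisfies this dimension,
then output a final line of the form "Score: <integer>".
"""


def build_prompt(document: str, summary: str, dimension: str) -> str:
    """Fill the chain-of-thought template for one (document, summary, dimension) triple."""
    return COT_PROMPT.format(document=document, summary=summary, dimension=dimension)


def make_training_record(document: str, summary: str, dimension: str,
                         query_teacher_model) -> dict:
    """Ask the large-scale (teacher) model for a reasoning chain plus score,
    and package the exchange as one supervised fine-tuning example."""
    prompt = build_prompt(document, summary, dimension)
    cot_and_score = query_teacher_model(prompt)  # hypothetical LLM API call
    return {"instruction": prompt, "output": cot_and_score}


def write_cot_dataset(samples, query_teacher_model, path="cot_dataset.jsonl"):
    """Write one JSON line per (document, summary, dimension) combination."""
    with open(path, "w", encoding="utf-8") as f:
        for document, summary in samples:
            for dim in DIMENSIONS:
                record = make_training_record(document, summary, dim, query_teacher_model)
                f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

    Under these assumptions, the resulting JSONL file would serve as the supervision signal for fine-tuning the small-scale model, after which the same prompt template is reused at inference time with the fine-tuned student model in place of the teacher.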

Cite this article:

Tan Chenhan, Jia Kebin, Wang Haoyu. Text Summarization Quality Evaluation Based on Large Language Model. 计算机系统应用 (Computer Systems & Applications): 1-9.

History
  • Received: 2024-07-15
  • Revised: 2024-08-13
  • Accepted:
  • Available online: 2024-12-19
  • Published: