Extractive Multi-document Summarization Based on Multi-granularity Semantic Interaction

Author:

Funding: National Natural Science Foundation of China (61806221)

    Abstract:

    Information explosion is a common problem of the information age. To extract valuable information rapidly from massive text data, automatic summarization has become a research priority in natural language processing (NLP). Multi-document summarization aims to distill the important content of a group of documents on the same topic, helping users obtain key information quickly. To address the incomplete information and high redundancy of current multi-document summaries, this study proposes an extractive summarization method based on multi-granularity semantic interaction, which combines a multi-granularity semantic interaction network with maximal marginal relevance (MMR). Semantic interaction at different granularities is used to train sentence representations and capture key information at different granularities, ensuring the comprehensiveness of the summary, while a modified MMR keeps its redundancy low. The sentences in the input documents are scored by learning to rank, and the summary sentences are then extracted. Experimental results on the Multi-News dataset show that the proposed extractive multi-document summarization model based on multi-granularity semantic interaction outperforms baseline models such as LexRank and TextRank.
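    The MMR criterion referenced above balances relevance against redundancy when greedily selecting summary sentences. A minimal Python sketch of classic greedy MMR selection over precomputed sentence vectors is shown below; the cosine similarity, the vectors, and the `lam` trade-off weight are illustrative assumptions, and the paper's modified MMR instead uses relevance scores learned by the multi-granularity semantic interaction network.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors; returns 0 if either is all-zero.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

def mmr_select(sent_vecs, doc_vec, k, lam=0.7):
    """Greedy MMR: repeatedly pick the sentence i maximizing
    lam * relevance(i, document) - (1 - lam) * max_similarity(i, selected)."""
    relevance = [cosine(v, doc_vec) for v in sent_vecs]
    selected, candidates = [], list(range(len(sent_vecs)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Redundancy: similarity to the closest already-selected sentence.
            redundancy = max((cosine(sent_vecs[i], sent_vecs[j]) for j in selected),
                             default=0.0)
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

    With a low `lam`, the redundancy penalty dominates, so a near-duplicate of an already-selected sentence is skipped in favor of a less relevant but more novel one.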

    References
    [1] Zhang SY, Xue YH, Yu XM, et al. Research on multi-document short summary generation. Journal of Guangxi Normal University (Natural Science Edition), 2019, 37(2): 60–74.
    [2] Fabbri AR, Li I, She TW, et al. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence: ACL, 2019. 1074–1084.
    [3] Liu Y, Lapata M. Hierarchical transformers for multi-document summarization. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence: ACL, 2019. 5070–5081.
    [4] Radev DR, Jing HY, Styś M, et al. Centroid-based summarization of multiple documents. Information Processing & Management, 2004, 40(6): 919–938.
    [5] Lamsiyah S, El Mahdaouy A, Espinasse B, et al. An unsupervised method for extractive multi-document summarization based on centroid approach and sentence embeddings. Expert Systems with Applications, 2021, 167: 114152. [doi: 10.1016/j.eswa.2020.114152]
    [6] Mihalcea R, Tarau P. TextRank: Bringing order into text. Proceedings of 2004 Conference on Empirical Methods in Natural Language Processing. Barcelona: ACL, 2004. 404–411.
    [7] Erkan G, Radev DR. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 2004, 22: 457–479. [doi: 10.1613/jair.1523]
    [8] Alzuhair A, Al-Dhelaan M. An approach for combining multiple weighting schemes and ranking methods in graph-based multi-document summarization. IEEE Access, 2019, 7: 120375–120386. [doi: 10.1109/ACCESS.2019.2936832]
    [9] Brin S, Page L. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 1998, 30(1–7): 107–117. [doi: 10.1016/S0169-7552(98)00110-X]
    [10] Kleinberg JM. Authoritative sources in a hyperlinked environment. Journal of the ACM, 1999, 46(5): 604–632. [doi: 10.1145/324133.324140]
    [11] Zhang YC, Zhang K, Xu JM, et al. Multi-document summarization algorithm based on graph model. Computer Engineering and Applications, 2020, 56(16): 124–131. [doi: 10.3778/j.issn.1002-8331.1905-0456]
    [12] Cao ZQ, Li WJ, Li SJ, et al. Improving multi-document summarization via text classification. Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco: AAAI, 2017. 3053–3059.
    [13] Yasunaga M, Zhang R, Meelu K, et al. Graph-based neural multi-document summarization. Proceedings of the 21st Conference on Computational Natural Language Learning. Vancouver: ACL, 2017. 452–462.
    [14] Wang DQ, Liu PF, Zheng YN, et al. Heterogeneous graph neural networks for extractive document summarization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020. 6209–6219.
    [15] Cho S, Lebanoff L, Foroosh H, et al. Improving the similarity measure of determinantal point processes for extractive multi-document summarization. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence: ACL, 2019. 1027–1038.
    [16] Hinton GE, Sabour S, Frosst N. Matrix capsules with EM routing. Proceedings of the 6th International Conference on Learning Representations. Vancouver: ICLR, 2018. 1–15.
    [17] Narayan S, Cohen SB, Lapata M. Ranking sentences for extractive summarization with reinforcement learning. Proceedings of 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans: ACL, 2018. 1747–1759.
    [18] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: ACM, 2017. 6000–6010.
    [19] Carbonell J, Goldstein J. The use of MMR, diversity-based reranking for reordering documents and producing summaries. Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. Melbourne: ACM, 1998. 335–336.
    [20] Jin HQ, Wang TM, Wan XJ. Multi-granularity interaction network for extractive and abstractive multi-document summarization. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020. 6244–6254.
    [21] See A, Liu PJ, Manning CD. Get to the point: Summarization with pointer-generator networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver: ACL, 2017. 1073–1083.
    [22] Gehrmann S, Deng YT, Rush AM. Bottom-up abstractive summarization. Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing. Brussels: ACL, 2018. 4098–4109.
    [23] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 2014, 15(1): 1929–1958.
    [24] Lin CY. ROUGE: A package for automatic evaluation of summaries. Proceedings of the Workshop on Text Summarization Branches Out. Barcelona: ACL, 2004. 74–81.
    [25] Lebanoff L, Song KQ, Liu F. Adapting the neural encoder-decoder framework from single to multi-document summarization. Proceedings of 2018 Conference on Empirical Methods in Natural Language Processing. Brussels: ACL, 2018. 4131–4141.
Cite this article:

Tian Y, Hao WN, Chen G, Jin DW, Zou A. Extractive multi-document summarization based on multi-granularity semantic interaction. Computer Systems & Applications, 2022, 31(7): 186–193.
History
  • Received: 2021-10-05
  • Revised: 2021-11-08
  • Published online: 2022-06-02