Text Similarity Matching Model Based on Positive and Negative Samples and Bi-LSTM

Authors: 周艳平, 朱小虎

Funding: National Natural Science Foundation of China (61402246); Shandong Province Higher Educational Science and Technology Program (J14LN31)

    Abstract:

    Similarity matching is an important branch of natural language processing and one of the main ways a question answering system extracts answers. This study proposes a text similarity matching model based on positive and negative samples and Bi-LSTM. First, to raise the similarity between a question and its correct answer, the model constructs positive and negative question-answer pairs for training. Second, to reduce the experimental error caused by word segmentation mistakes, it pre-trains with a dual-layer word vector embedding. Third, to counter the backward offset of feature vectors introduced by the attention mechanism, it applies an inner attention mechanism before feature extraction. Then, to retain important temporal characteristics, it trains on a Bi-LSTM neural network. Finally, to compute similarity at the semantic level, it proposes a similarity calculation function that incorporates semantic information. The proposed model is evaluated on the public dataset DuReader and compared with other models. The experimental results show that it achieves both high accuracy and good robustness, with a top-1 accuracy of 78.34%.
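The training scheme the abstract outlines — encode a question and candidate answers, then rank candidates by a similarity score learned from positive/negative question-answer pairs — can be sketched minimally as follows. This is an illustrative stand-in, not the authors' implementation: the mean-pooled random embeddings replace the paper's dual-layer pre-trained vectors and inner-attention Bi-LSTM encoder, plain cosine similarity replaces the paper's semantic similarity function, and the `margin` and dimension values are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table (stand-in for the paper's dual-layer
# pre-trained word vectors; values are random for illustration).
EMB_DIM = 8
vocab = {"what": 0, "is": 1, "bilstm": 2, "a": 3,
         "recurrent": 4, "network": 5, "apple": 6, "fruit": 7}
emb = rng.normal(size=(len(vocab), EMB_DIM))

def encode(tokens):
    """Mean-pool word vectors into one sentence vector.
    (The paper instead uses an inner-attention Bi-LSTM encoder.)"""
    vecs = emb[[vocab[t] for t in tokens]]
    return vecs.mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two sentence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def ranking_loss(q, pos, neg, margin=0.2):
    """Hinge loss over one (question, positive answer, negative answer)
    triple: push sim(q, pos) above sim(q, neg) by at least `margin`."""
    return max(0.0, margin - cosine(q, pos) + cosine(q, neg))

q = encode(["what", "is", "bilstm"])
pos = encode(["a", "recurrent", "network"])
neg = encode(["apple", "fruit"])
loss = ranking_loss(q, pos, neg)
```

At inference time, top-1 accuracy (the metric reported above) simply checks whether the candidate with the highest `cosine(q, ·)` score is the correct answer.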

Cite this article:

周艳平, 朱小虎. Text Similarity Matching Model Based on Positive and Negative Samples and Bi-LSTM. 计算机系统应用 (Computer Systems & Applications), 2021, 30(4): 175–180
History
  • Received: 2020-07-30
  • Revised: 2020-08-26
  • Published online: 2021-03-31