Received: August 13, 2021    Revised: September 13, 2021
Abstract (Chinese, translated): Text representation learning is a fundamental task in natural language processing. Through the successive development of the vector space model, word embedding models, and contextual distributed representations, its semantic representation ability has achieved major breakthroughs, directly driving continuous performance gains in downstream tasks such as machine reading and text retrieval. However, pre-trained language models, currently the most advanced text representation learning method, incur high time and space complexity in both the training and prediction stages, creating a high barrier to use. This paper therefore proposes a new text representation learning method based on deep hashing and pre-training, which aims to achieve the strongest possible text representation ability at a lower computational cost. Experimental results show that, at a limited cost in performance, the proposed method substantially reduces the model's computational complexity in the prediction stage and greatly improves its efficiency at prediction time.
Abstract: As a cornerstone of natural language processing, text representation learning has made great breakthroughs in semantic representation ability through the successive development of the vector space model, word embedding models, and contextual distributed representations, directly driving continuous performance gains in downstream tasks such as machine reading and text retrieval. However, the pre-trained language model, currently the most advanced text representation learning method, has high time and space complexity in both the training and prediction stages, which results in a high barrier to use. Therefore, this study proposes a new text representation learning method based on deep hashing and pre-training, which aims to achieve the strongest possible text representation ability with less computation. Experimental results show that, at a limited cost in performance, the proposed method remarkably reduces computational complexity in the prediction stage and greatly improves the model's efficiency at prediction time.
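The abstract combines two ideas: a pre-trained encoder supplies dense semantic vectors, and a deep hashing head compresses them into short binary codes so that prediction-time comparison is cheap. The sketch below illustrates this general pattern only; it is not the paper's architecture. The encoder stand-in, the `DeepHashHead` class, the 64-bit code length, and the tanh relaxation are all assumptions chosen for illustration (a real system would use an actual pre-trained model such as BERT and train the projection end to end).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pre-trained encoder: in a real system this
# would be, e.g., the sentence vector from a Transformer; here it is random.
def encode(texts, dim=768):
    return rng.standard_normal((len(texts), dim))

class DeepHashHead:
    """Minimal hashing head: linear projection + tanh relaxation,
    binarized to +/-1 codes at prediction time (an assumed design)."""
    def __init__(self, in_dim=768, code_bits=64):
        self.W = rng.standard_normal((in_dim, code_bits)) / np.sqrt(in_dim)

    def relax(self, x):
        # Differentiable surrogate a training loop would optimize.
        return np.tanh(x @ self.W)

    def hash(self, x):
        # Discrete binary codes used at prediction time.
        return np.sign(self.relax(x)).astype(np.int8)

def hamming(a, b):
    # Hamming distance between +/-1 codes: number of disagreeing bits.
    return int(np.sum(a != b))

texts = ["query", "doc_a", "doc_b"]
codes = DeepHashHead().hash(encode(texts))
# Retrieval now compares short binary codes instead of dense float vectors.
dists = [hamming(codes[0], c) for c in codes[1:]]
```

The efficiency claim in the abstract rests on this last step: Hamming distance over a 64-bit code is far cheaper than a 768-dimensional floating-point dot product, which is where the reduction in prediction-stage complexity comes from.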
Keywords: deep hashing; pre-trained language models; Transformer; text representation learning; deep learning; attention mechanism
Funding: National Natural Science Foundation of China (61806221)
Citation:
ZOU Ao, HAO Wen-Ning, TIAN Yuan. Text Representation Learning Based on Deep Hashing. COMPUTER SYSTEMS APPLICATIONS, 2022, 31(6): 158-166