Incorporating BERT and Graph Attention Network for Multi-label Text Classification
Author: 郝超, 裘杭萍, 孙毅
Affiliation:

Fund Project:


    Abstract:

    Multi-label text classification is an important branch of multi-label classification. Existing methods often ignore the relationships between labels, so the correlations between labels can hardly be exploited, which degrades classification performance. To address this problem, this study proposes HBGA (hybrid BERT and graph attention), a model that fuses BERT and a graph attention network. First, BERT is employed to obtain the contextual vector representation of the input text, and a Bi-LSTM and a capsule network extract the global and local features of the text, respectively; the text feature vector is then constructed through feature fusion. Meanwhile, the correlations between labels are modeled as a graph whose nodes represent the word embeddings of the labels, and a graph attention network maps these label vectors to a set of interdependent classifiers. Finally, the classifiers are applied to the text features produced by the feature extraction module and trained end to end, and the classifier outputs are combined with the feature information to obtain the final predictions. Comparative experiments on the Reuters-21578 and AAPD datasets show that the proposed model achieves effective improvements on multi-label text classification tasks.
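
    The pipeline described in the abstract can be made concrete with a short sketch. The PyTorch code below is illustrative only and is not the authors' implementation: it substitutes a plain embedding layer for pretrained BERT, uses a Conv1d with max-pooling as a simplified stand-in for the capsule-network branch, fuses global and local features by concatenation, and implements a minimal single-head graph attention layer. The names HBGASketch and SimpleGATLayer, all dimensions, and the fully connected label adjacency are assumptions made only for this example.

```python
# Minimal, self-contained sketch of the HBGA dataflow described in the abstract.
# Assumptions (not from the paper): hidden sizes, the convolutional stand-in for
# the capsule branch, and concatenation as the feature-fusion step.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGATLayer(nn.Module):
    """Single-head graph attention layer over a label graph."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x:   (L, in_dim)  label node features
        # adj: (L, L)       adjacency, nonzero where two labels are related
        h = self.W(x)                                   # (L, out_dim)
        L = h.size(0)
        # Pairwise attention logits e_ij = a([h_i || h_j])
        hi = h.unsqueeze(1).expand(L, L, -1)
        hj = h.unsqueeze(0).expand(L, L, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))      # attend only to neighbors
        alpha = torch.softmax(e, dim=-1)                # (L, L)
        return F.elu(alpha @ h)                         # (L, out_dim)


class HBGASketch(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=128, feat_dim=256):
        super().__init__()
        # Stand-in for BERT: in the paper, token vectors come from pretrained BERT.
        self.token_encoder = nn.Embedding(vocab_size, emb_dim)
        # Global features: Bi-LSTM over the token sequence.
        self.bilstm = nn.LSTM(emb_dim, feat_dim // 2, batch_first=True,
                              bidirectional=True)
        # Local features: Conv1d + max-pool as a stand-in for the capsule network.
        self.conv = nn.Conv1d(emb_dim, feat_dim, kernel_size=3, padding=1)
        # Fusion of global and local features into one text feature vector.
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Label graph: learnable label embeddings refined by the GAT layer.
        self.label_emb = nn.Embedding(num_labels, emb_dim)
        self.gat = SimpleGATLayer(emb_dim, feat_dim)

    def forward(self, token_ids, label_adj):
        tok = self.token_encoder(token_ids)             # (B, T, emb_dim)
        # Global feature: mean over Bi-LSTM hidden states.
        global_feat = self.bilstm(tok)[0].mean(dim=1)   # (B, feat_dim)
        # Local feature: max over convolutional responses.
        local_feat = self.conv(tok.transpose(1, 2)).max(dim=-1).values  # (B, feat_dim)
        text_feat = torch.tanh(self.fuse(torch.cat([global_feat, local_feat], dim=-1)))
        # GAT maps label embeddings to interdependent per-label classifiers.
        classifiers = self.gat(self.label_emb.weight, label_adj)        # (L, feat_dim)
        return text_feat @ classifiers.t()              # (B, L) logits


if __name__ == "__main__":
    num_labels = 5
    model = HBGASketch(vocab_size=1000, num_labels=num_labels)
    token_ids = torch.randint(0, 1000, (2, 16))         # batch of 2 texts, 16 tokens
    label_adj = torch.ones(num_labels, num_labels)      # fully connected label graph
    logits = model(token_ids, label_adj)
    loss = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
    print(logits.shape, loss.item())
```

    A plausible refinement, not specified in the abstract, is to build the label adjacency from label co-occurrence statistics on the training set; the binary cross-entropy loss in the __main__ block reflects the standard end-to-end training setup for multi-label classification.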

Cite this article:

郝超, 裘杭萍, 孙毅. 融合BERT和图注意力网络的多标签文本分类. 计算机系统应用, 2022, 31(6): 167-174.

History
  • Received: 2021-08-13
  • Revised: 2021-09-13
  • Published online: 2022-05-26