Ensemble Training Model Integrating Knowledge
Authors: Wang Yongpeng, Zhou Xiaolei, Ma Huimin, Cao Jilong
Funding: National Major Science and Technology Project for Water Pollution Control and Treatment (2012ZX07505)

    Abstract:

    In Internet-based medical services, AI-assisted department triage is a key step: patients are assigned to the appropriate department according to their condition descriptions, disease attributes, medications, and other information. BERT, a language model pre-trained with a deep bidirectional Transformer, can be used to enrich character-level semantics; however, patients' condition descriptions are information-sparse, which hinders BERT from fully learning their features. This paper presents DNNBERT, a jointly trained model that integrates knowledge. By combining the strengths of a deep neural network (DNN) and the Transformer, DNNBERT learns richer semantics from text. Experiments show that DNNBERT runs 1.7 times faster than BERT-large, and its F1 score is 0.12 higher than that of ALBERT and 0.17 higher than that of TextCNN. This work offers a new approach to learning from sparse features and to deploying deep Transformer-based models in production.
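    The abstract states that DNNBERT fuses a DNN branch with a Transformer (BERT) representation but gives no architectural details. The following is a minimal NumPy sketch of one plausible fusion scheme, not the paper's actual design: a precomputed BERT sentence vector is concatenated with the output of a small feed-forward branch over auxiliary features (e.g., drug or symptom indicators), then fed to a linear department classifier. All dimensions, parameter names, and the concatenation step are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def dnn_branch(x, w1, b1, w2, b2):
        """Two-layer feed-forward branch over sparse auxiliary features."""
        h = np.maximum(0.0, x @ w1 + b1)   # ReLU hidden layer
        return h @ w2 + b2

    def fuse_and_classify(bert_vec, feat_vec, params):
        """Concatenate a (precomputed) BERT sentence vector with the DNN
        branch output, then apply a linear classifier over departments."""
        dnn_out = dnn_branch(feat_vec, params["w1"], params["b1"],
                             params["w2"], params["b2"])
        fused = np.concatenate([bert_vec, dnn_out])    # joint representation
        logits = fused @ params["w_cls"] + params["b_cls"]
        e = np.exp(logits - logits.max())              # numerically stable softmax
        return e / e.sum()

    # Hypothetical dimensions: 768-d BERT output, 32-d auxiliary features,
    # 64-d DNN branch output, 10 candidate departments.
    params = {
        "w1": rng.normal(size=(32, 128)) * 0.1, "b1": np.zeros(128),
        "w2": rng.normal(size=(128, 64)) * 0.1, "b2": np.zeros(64),
        "w_cls": rng.normal(size=(768 + 64, 10)) * 0.1, "b_cls": np.zeros(10),
    }
    probs = fuse_and_classify(rng.normal(size=768), rng.normal(size=32), params)
    print(probs.shape)  # probability distribution over 10 departments
    ```

    In a real system the random vectors would be replaced by a BERT encoder's pooled output and engineered features, and the parameters would be trained jointly, which is the "joint training" the abstract refers to.
    
    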

Cite this article: Wang YP, Zhou XL, Ma HM, Cao JL. Ensemble training model integrating knowledge. Computer Systems & Applications, 2021, 30(7): 50-56. (in Chinese)
History
  • Received: 2020-11-04
  • Revised: 2020-12-12
  • Published online: 2021-07-02