Incremental Knowledge Construction and Mask Replay Strategy in NLP Scenario
Abstract:

In incremental learning, as the number of tasks grows, the knowledge a model has learned on old tasks is catastrophically forgotten once the model is trained on new tasks, owing to problems such as the step-by-step migration of training data; as a result, performance on the old tasks degrades. To address this problem, this study proposes a class-incremental learning method based on knowledge decoupling. The method hierarchically learns the knowledge that is common across tasks and the knowledge unique to each task, dynamically combines the two, and applies the combined representation to downstream classification tasks. In addition, the masking strategy of the natural language model is used during replay learning, which prompts the model to quickly recall knowledge from previous tasks. In class-incremental experiments on the NLP datasets AGNews, Yelp, Amazon, DBPedia, and Yahoo, the proposed method effectively reduces forgetting and improves accuracy and other metrics across tasks.
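To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of a knowledge-decoupled classifier and a mask-replay step, written only from the description above. All names and design details (KnowledgeDecoupledClassifier, the GRU backbone, the gating layer, mask_replay, the 15% masking rate) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KnowledgeDecoupledClassifier(nn.Module):
    """Shared encoder for task-common knowledge, per-task adapters for
    task-unique knowledge, and a learned gate that combines the two."""
    def __init__(self, vocab_size: int, hidden: int = 256, num_classes: int = 5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.shared = nn.GRU(hidden, hidden, batch_first=True)  # common knowledge
        self.adapters = nn.ModuleDict()                         # unique knowledge per task
        self.gate = nn.Linear(2 * hidden, 1)                    # dynamic combination weight
        self.classifier = nn.Linear(hidden, num_classes)

    def add_task(self, task_id: str, hidden: int = 256) -> None:
        # Register a new task-specific adapter when a new task arrives.
        self.adapters[task_id] = nn.Linear(hidden, hidden)

    def forward(self, token_ids: torch.Tensor, task_id: str) -> torch.Tensor:
        x = self.embed(token_ids)                      # (batch, seq, hidden)
        common, _ = self.shared(x)
        common = common.mean(dim=1)                    # pooled shared representation
        unique = torch.tanh(self.adapters[task_id](common))
        alpha = torch.sigmoid(self.gate(torch.cat([common, unique], dim=-1)))
        fused = alpha * common + (1 - alpha) * unique  # dynamic knowledge combination
        return self.classifier(fused)

def mask_replay(token_ids: torch.Tensor, mask_token_id: int,
                mask_prob: float = 0.15) -> torch.Tensor:
    """MLM-style masking for replayed old-task samples: randomly replace
    tokens with [MASK] so the model reconstructs, rather than memorizes,
    knowledge from previous tasks."""
    masked = token_ids.clone()
    noise = torch.rand(token_ids.shape, device=token_ids.device)
    masked[noise < mask_prob] = mask_token_id
    return masked
```

In this reading, the gate decides per example how much of the shared representation versus the task-specific one reaches the classifier, while mask_replay applies BERT-style random masking to buffered old-task samples before they are replayed, e.g. model(mask_replay(batch_ids, mask_token_id=103), task_id="agnews").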

Get Citation

ZHOU Hang, HUANG Zhen-Hua. Incremental knowledge construction and mask replay strategy in NLP scenario. Computer Systems & Applications, 2023, 32(8): 269-277

History
  • Received: January 12, 2023
  • Revised: February 09, 2023
  • Online: June 09, 2023