Using Word Clustering to Improve Recurrent Neural Network Language Model
Author: LIU Zhang, CHEN Xiao-Ping
Abstract:

Previous studies have shown that adding part-of-speech (POS) tag information to the input layer of a neural network language model can significantly improve its performance. However, POS tagging requires hand-annotated data to train the tagger, which is costly, and the extra tagger also makes the model more complicated. To address this problem, this article proposes feeding the output of unsupervised Brown clustering, instead of POS tag information, into the input layer of the recurrent neural network language model. On the Penn Treebank corpus, the relative improvement over the original recurrent neural network language model reaches 8%-9%.
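For concreteness, the sketch below shows one way the idea in the abstract could be wired up: each word's Brown cluster ID is fed into the input layer alongside the word itself. This is an illustrative assumption, not the paper's code; the class name ClusterAugmentedRNNLM, the layer dimensions, and the use of learned cluster embeddings (the paper may instead append one-hot cluster features to the input, as in the original RNNLM-with-features setup) are all hypothetical.

```python
import torch
import torch.nn as nn

class ClusterAugmentedRNNLM(nn.Module):
    """Hypothetical sketch: an RNN language model whose input layer
    concatenates a word embedding with a Brown-cluster embedding.
    Cluster IDs are assumed to come from an unsupervised Brown
    clustering of the training corpus (a word -> cluster lookup)."""

    def __init__(self, vocab_size, num_clusters,
                 word_dim=100, cluster_dim=20, hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.cluster_emb = nn.Embedding(num_clusters, cluster_dim)
        # Simple Elman-style recurrence, as in the original RNNLM.
        self.rnn = nn.RNN(word_dim + cluster_dim, hidden_dim,
                          batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, cluster_ids, hidden=None):
        # Join word and cluster features at the input layer:
        # shape (batch, seq_len, word_dim + cluster_dim).
        x = torch.cat([self.word_emb(word_ids),
                       self.cluster_emb(cluster_ids)], dim=-1)
        h, hidden = self.rnn(x, hidden)
        return self.out(h), hidden  # logits over the next word

# Usage: cluster_ids[i][j] is the Brown cluster of word_ids[i][j].
model = ClusterAugmentedRNNLM(vocab_size=10000, num_clusters=200)
words = torch.randint(0, 10000, (4, 20))   # a batch of 4 sequences
clusters = torch.randint(0, 200, (4, 20))  # their Brown cluster IDs
logits, _ = model(words, clusters)         # (4, 20, 10000)
```

Because Brown clustering is learned from raw text, this setup needs no hand-annotated data: the cluster lookup table replaces the POS tagger that the earlier approaches required.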

Get Citation

LIU Zhang, CHEN Xiao-Ping. Using Word Clustering to Improve Recurrent Neural Network Language Model. Computer Systems & Applications, 2014, 23(5): 101-106.

History
  • Received: September 12, 2013
  • Revised: November 11, 2013
  • Online: May 29, 2014