Music Mood Classification Method Based on Deep Belief Network and Multi-Feature Fusion
Abstract:

This paper examines two key components of music emotion classification: feature selection and classifier design. For feature selection, a single feature cannot fully capture musical emotion in traditional algorithms; the multi-feature fusion proposed in this paper addresses this limitation by combining acoustic and prosodic features into a single representation of music emotion. For the classifier, a deep belief network (DBN), which has performed well in audio retrieval, is trained to classify music emotions. Experimental results show that the proposed algorithm outperforms both single-feature classification and SVM-based classification in music emotion classification.
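The abstract gives no implementation details, so the sketch below is illustrative only, not the paper's code. It min-max scales and concatenates hypothetical acoustic and prosodic feature matrices (multi-feature fusion), then pretrains a single Bernoulli RBM layer with one-step contrastive divergence (CD-1), the greedy building block from which a DBN is stacked. Feature dimensions and hyperparameters are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(acoustic, prosodic):
    # Multi-feature fusion: min-max scale each feature column to [0, 1]
    # so neither group dominates, then concatenate into one descriptor.
    # (Scaling choice is an assumption; the paper does not specify one.)
    def scale01(v):
        span = v.max(axis=0) - v.min(axis=0)
        return (v - v.min(axis=0)) / np.where(span > 0, span, 1.0)
    return np.hstack([scale01(acoustic), scale01(prosodic)])

def train_rbm(data, n_hidden, lr=0.1, epochs=100, seed=0):
    # CD-1 training of one restricted Boltzmann machine layer; a DBN is
    # built by greedily stacking such layers and fine-tuning on top.
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W + b_h)                       # hidden probabilities
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
        v1 = sigmoid(h0 @ W.T + b_v)                      # reconstruction
        ph1 = sigmoid(v1 @ W + b_h)
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
        b_v += lr * (v0 - v1).mean(axis=0)
        b_h += lr * (ph0 - ph1).mean(axis=0)
    return W, b_v, b_h

# Toy data: 20 clips, 13 MFCC-like acoustic coefficients + 4 prosodic
# statistics (e.g. pitch mean/std, energy mean/std) -- all hypothetical.
rng = np.random.default_rng(1)
acoustic = rng.normal(size=(20, 13))
prosodic = rng.normal(size=(20, 4))
fused = fuse_features(acoustic, prosodic)
W, b_v, b_h = train_rbm(fused, n_hidden=8)
print(fused.shape, W.shape)  # (20, 17) (17, 8)
```

In a full DBN the hidden activations of this layer would feed a second RBM, and a softmax over emotion classes would be fine-tuned on top with backpropagation.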

Citation: Gong An, Ding Mingbo, Dou Fei. Music mood classification method based on DBN and multi-feature fusion. 计算机系统应用 (Computer Systems & Applications), 2017, 26(9): 158-164.
History
  • Received: December 28, 2016
  • Online: October 31, 2017
Copyright: Institute of Software, Chinese Academy of Sciences