Computer Systems & Applications, 2017, 26(9): 158-164
Music Mood Classification Method Based on Deep Belief Network and Multi-Feature Fusion
GONG An, DING Ming-Bo, DOU Fei
(College of Computer and Communication Engineering, China University of Petroleum (East China), Qingdao 266580, China)
Received: December 28, 2016
Abstract: This paper examines two key components of music emotion classification: feature selection and classifier design. For feature selection, because a single feature cannot fully capture music emotion in traditional algorithms, we propose a multi-feature fusion approach that combines timbre features and prosodic features into a single representation of music emotion. For the classifier, we adopt a deep belief network (DBN), which has performed well in audio retrieval, to train on and classify music emotions. Experimental results show that the proposed method outperforms both single-feature classification and SVM-based classification for music emotion classification.
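The abstract describes the method only at a high level, so the following is a minimal sketch of how timbre and prosodic features might be fused and passed to a DBN-style classifier. It assumes librosa for feature extraction and uses a stack of scikit-learn BernoulliRBM layers followed by logistic regression as a stand-in for the deep belief network; the specific features, layer sizes, and training settings are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the pipeline, not the authors' code.
# Assumptions: librosa for audio features; scikit-learn's BernoulliRBM
# stack approximates the DBN; feature set and layer sizes are illustrative.
import numpy as np
import librosa
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler


def fused_features(path, sr=22050):
    """Concatenate timbre (MFCC, spectral centroid) and prosody-related
    (zero-crossing rate, RMS energy, tempo) descriptors into one vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # timbre
    zcr = librosa.feature.zero_crossing_rate(y)                 # articulation
    rms = librosa.feature.rms(y=y)                              # energy/dynamics
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)              # rhythm
    return np.hstack([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        centroid.mean(axis=1), zcr.mean(axis=1), rms.mean(axis=1),
        [float(np.atleast_1d(tempo)[0])],
    ])


# Two stacked RBMs plus logistic regression serve as a DBN-like classifier.
dbn_like = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=30)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=30)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Usage (hypothetical file list and mood labels):
# X = np.vstack([fused_features(p) for p in audio_paths])
# dbn_like.fit(X, mood_labels)
# predictions = dbn_like.predict(X_test)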
Citation:
GONG An, DING Ming-Bo, DOU Fei. Music Mood Classification Method Based on Deep Belief Network and Multi-Feature Fusion. Computer Systems & Applications, 2017, 26(9): 158-164.