Received: September 18, 2021    Revised: October 29, 2021
Abstract: At present, most facial expression recognition research uses a convolutional neural network (CNN) to extract facial features and classify them. The disadvantage of a CNN is that its network structure is complex and consumes substantial computing resources. In response, this study uses the Mixer Layer network structure, which is based on the multilayer perceptron (MLP), for facial expression recognition. Data augmentation and transfer learning are employed to address the shortage of dataset samples, and Mixer Layer networks with different numbers of layers are built. In experimental comparisons, the recognition accuracies of the 4-layer Mixer Layer network on the CK+ and JAFFE datasets reach 98.71% and 95.93%, respectively, and the accuracy of the 8-layer Mixer Layer network on the Fer2013 dataset reaches 63.06%. The experimental results show that the convolution-free Mixer Layer network exhibits good learning and generalization abilities in facial expression recognition tasks.

Keywords: deep learning; transfer learning; expression recognition; Mixer Layer; image recognition
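The Mixer Layer referenced in the abstract is the building block of the MLP-Mixer architecture: each layer applies a token-mixing MLP across image patches and a channel-mixing MLP across feature channels, with layer normalization and skip connections, and no convolutions. Below is a minimal PyTorch sketch of one such block; the patch count, channel width, hidden sizes, and the 7-class head in the usage example are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MlpBlock(nn.Module):
    """Two-layer MLP with GELU, applied along the last dimension."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))

class MixerLayer(nn.Module):
    """One Mixer Layer: token-mixing MLP across patches,
    then channel-mixing MLP across features, each with a skip connection."""
    def __init__(self, num_patches, channels, token_hidden, channel_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_patches, token_hidden)
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = MlpBlock(channels, channel_hidden)

    def forward(self, x):                       # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)       # (batch, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)   # mix across patches
        return x + self.channel_mlp(self.norm2(x))  # mix across channels

# Illustrative usage (hypothetical sizes): 4 stacked Mixer Layers over a
# 7x7 grid of patches with 256 channels, mean-pooled into a 7-way
# expression classifier (the 7 basic expressions).
layers = nn.Sequential(*[MixerLayer(49, 256, 128, 1024) for _ in range(4)])
feats = layers(torch.randn(8, 49, 256))         # (batch, patches, channels)
logits = nn.Linear(256, 7)(feats.mean(dim=1))   # (batch, 7)
```

In a full model, a per-patch linear embedding would precede the stack and a classification head would follow it; the paper compares stacks of different depths (4 layers for CK+/JAFFE, 8 for Fer2013).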
Funding: Joint project of the Beijing Natural Science Foundation and the Beijing Municipal Education Commission (KZ202010015021); scientific research program of the Beijing Municipal Education Commission (KM201910015003); scientific research project of Beijing Institute of Graphic Communication (Ec202002, Eb202103)
Citation:
JIAN Teng-Fei, WANG Jia, CAO Shao-Zhong, YANG Shu-Lin, ZHANG Han. Facial Expression Recognition Based on Mixer Layer. COMPUTER SYSTEMS APPLICATIONS, 2022, 31(7): 128-134