Facial Expression Recognition of Infants Based on MIFNet
    Abstract:

    The intelligent recognition of infant facial expressions can help caregivers better attend to the physical and mental health of infants. Because infants' facial lines are smooth and their facial features are weakly defined, the inter-class similarity of infant facial expressions is higher than that of adults. To address this high inter-class similarity, this study proposes a multi-scale information fusion network (MIFNet). The network operates in two stages. In the first stage, a fusion module combines local and global features in both the spatial and channel dimensions to enhance the expressive power of the features. In the second stage, a self-adaptive deep center loss uses an attention mechanism to estimate weights for the fused features; these weights guide the center loss and promote intra-class compactness and inter-class separation of infant expression features. Experimental results show that the multi-scale information fusion network achieves a recognition accuracy of 95.46% on the infant facial expression dataset, with 99.07%, 95.88%, and 95.89% on the three evaluation metrics of AUC, recall, and F1 score, respectively, outperforming existing facial expression recognition networks. In generalization experiments on a public facial expression dataset, the network achieves an accuracy of 89.87%.
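The first-stage fusion of local and global features in both the channel and spatial dimensions can be illustrated with a minimal sketch. The gating scheme below (global-average-pool channel gate, channel-average spatial gate) is a common dual-dimension fusion pattern and is an assumption for illustration, not the paper's exact module; all names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_local_global(local_feat, global_feat):
    """Fuse a local and a global feature map, each of shape (C, H, W),
    in both the channel and spatial dimensions (hypothetical sketch).

    Channel branch: the global feature is squeezed over space to a
    per-channel gate that reweights the local feature channel-wise.
    Spatial branch: the local feature is squeezed over channels to a
    per-position gate that reweights the global feature spatially.
    The two gated branches are summed into one fused map.
    """
    # Channel attention gate from the global branch: (C,)
    chan_gate = sigmoid(global_feat.mean(axis=(1, 2)))
    chan_fused = local_feat * chan_gate[:, None, None]     # (C, H, W)

    # Spatial attention gate from the local branch: (H, W)
    spat_gate = sigmoid(local_feat.mean(axis=0))
    spat_fused = global_feat * spat_gate[None, :, :]       # (C, H, W)

    return chan_fused + spat_fused
```

In a full network these gates would be produced by small learned layers rather than plain averages; the sketch only shows how the two dimensions contribute complementary reweightings of the same pair of feature maps.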
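The second-stage idea, a center loss whose per-dimension contribution is weighted by attention scores so that discriminative feature dimensions dominate the pull toward class centers, can be sketched as follows. This is a generic attention-weighted center loss under stated assumptions (the attention vector is taken as an input here, whereas the paper estimates it from the fused features); names are hypothetical.

```python
import numpy as np

def adaptive_center_loss(features, labels, centers, attn):
    """Attention-weighted center loss (hypothetical sketch).

    features: (N, D) fused feature vectors for a batch
    labels:   (N,) integer class labels
    centers:  (K, D) learnable class centers
    attn:     (D,) nonnegative per-dimension attention weights

    Standard center loss is 0.5 * mean_i ||x_i - c_{y_i}||^2; here
    each squared difference is scaled by its dimension's attention
    weight before summing, so high-attention dimensions are pulled
    more strongly toward their class center.
    """
    diff = features - centers[labels]                     # (N, D)
    per_sample = np.sum(attn * diff ** 2, axis=1)         # (N,)
    return 0.5 * per_sample.mean()
```

With uniform attention this reduces (up to a constant factor) to the ordinary center loss; non-uniform attention concentrates the intra-class compacting force on the dimensions the attention mechanism deems discriminative.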

    References
    [1] Fang CY, Ma CW, Chiang ML, et al. An infant emotion recognition system using visual and audio information. Proceedings of the 4th International Conference on Industrial Engineering and Applications (ICIEA). Nagoya: IEEE, 2017. 284–291.
    [2] Zhang LR, Xu C, Li S. Facial expression recognition of infants based on multi-stream CNN fusion network. Proceedings of the 5th International Conference on Signal and Image Processing. Nanjing: IEEE, 2020. 37–41.
    [3] Zamzami G, Ruiz G, Goldgof D, et al. Pain assessment in infants: Towards spotting pain expression based on infants’ facial strain. Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). Ljubljana: IEEE, 2015. 1–5.
    [4] Messinger DS, Mahoor MH, Chow SM, et al. Automated measurement of facial expression in infant-mother interaction: A pilot study. Infancy, 2009, 14(3): 285–305. [doi: 10.1080/15250000902839963]
    [5] Matsugu M, Mori K, Mitari Y, et al. Subject independent facial expression recognition with robust face detection using a convolutional neural network. Neural Networks, 2003, 16(5–6): 555–559.
    [6] Li XX, Liang RH. A survey on occluded face recognition: From subspace regression to deep learning. Chinese Journal of Computers, 2018, 41(1): 177–207. [doi: 10.11897/SP.J.1016.2018.00177]
    [7] Lin JD, Wu XY, Chai Y, et al. A survey on structure optimization of convolutional neural networks. Acta Automatica Sinica, 2020, 46(1): 24–37.
    [8] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: Curran Associates Inc., 2017. 6000–6010.
    [9] Liu Z, Lin YT, Cao Y, et al. Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021. 9992–10002.
    [10] Xu W, Zheng H, Yang ZX. Apex frame micro-expression recognition based on a dual attention model and transfer learning. CAAI Transactions on Intelligent Systems, 2021, 16(6): 1015–1020. [doi: 10.11992/tis.202010031]
    [11] Wang K, Peng XJ, Yang JF, et al. Region attention networks for pose and occlusion robust facial expression recognition. IEEE Transactions on Image Processing, 2020, 29: 4057–4069. [doi: 10.1109/TIP.2019.2956143]
    [12] Gera D, Balasubramanian S. Landmark guidance independent spatio-channel attention and complementary context information based facial expression recognition. Pattern Recognition Letters, 2021, 145: 58–66. [doi: 10.1016/j.patrec.2021.01.029]
    [13] Wang K, Peng XJ, Yang JF, et al. Suppressing uncertainties for large-scale facial expression recognition. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 6896–6905.
    [14] Yu NG, Bai DG. Facial expression recognition by jointly partial image and deep metric learning. IEEE Access, 2020, 8: 4700–4707. [doi: 10.1109/ACCESS.2019.2963201]
    [15] Wen YD, Zhang KP, Li ZF, et al. A discriminative feature learning approach for deep face recognition. Proceedings of the 14th European Conference on Computer Vision. Amsterdam: Springer, 2016. 499–515.
    [16] Cai J, Meng ZB, Khan AS, et al. Island loss for learning discriminative features in facial expression recognition. Proceedings of 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018). Xi’an: IEEE, 2018. 302–309.
    [17] Li YJ, Lu Y, Li JX, et al. Separate loss for basic and compound facial expression recognition in the wild. Proceedings of the 11th Asian Conference on Machine Learning. Nagoya: PMLR, 2019. 897–911.
    [18] He KM, Zhang XY, Ren SQ, et al. Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016. 770–778.
    [19] Zhang H, Goodfellow IJ, Metaxas DN, et al. Self-attention generative adversarial networks. Proceedings of the 36th International Conference on Machine Learning. Long Beach: PMLR, 2019. 7354–7363.
    [20] Li X, Wang WH, Hu XL, et al. Selective kernel networks. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019. 510–519.
    [21] Maack JK, Bohne A, Nordahl D, et al. The Tromsø infant faces database (TIF): Development, validation and application to assess parenting experience on clarity and intensity ratings. Frontiers in Psychology, 2017, 8: 409. [doi: 10.3389/fpsyg.2017.00409]
    [22] Webb R, Ayers S, Endress A. The city infant faces database: A validated set of infant facial expressions. Behavior Research Methods, 2018, 50(1): 151–159. [doi: 10.3758/s13428-017-0859-9]
    [23] Deng JK, Guo J, Ververas E, et al. RetinaFace: Single-shot multi-level face localisation in the wild. Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020. 5202–5211.
    [24] Li S, Deng WH, Du JP. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017. 2584–2593.
    [25] Zeng JB, Shan SG, Chen XL. Facial expression recognition with inconsistently annotated datasets. Proceedings of the 15th European Conference on Computer Vision (ECCV). Munich: Springer, 2018. 227–243.
    [26] Farzaneh AH, Qi XJ. Facial expression recognition in the wild via deep attentive center loss. Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2021. 2401–2410.
    [27] Zhang YH, Wang CR, Deng WH. Relative uncertainty learning for facial expression recognition. Proceedings of the 35th Conference on Neural Information Processing Systems. 2021. 17616–17627.
    [28] Wen ZY, Lin WZ, Wang T, et al. Distract your attention: Multi-head cross attention network for facial expression recognition. arXiv:2109.07270, 2021.
    [29] Fard AP, Mahoor MH. Ad-Corre: Adaptive correlation-based loss for facial expression recognition in the wild. IEEE Access, 2022, 10: 26756–26768. [doi: 10.1109/ACCESS.2022.3156598]
    [30] Heidari N, Iosifidis A. Learning diversified feature representations for facial expression recognition in the wild. arXiv:2210.09381, 2022.
    [31] Zhao ZQ, Liu QS, Zhou F. Robust lightweight facial expression recognition network with label distribution training. Proceedings of the 35th AAAI Conference on Artificial Intelligence. AAAI Press, 2021. 3510–3519.
Citation:
Geng L, Qi TT, Zhang F, Xiao ZT, Li YL. Facial expression recognition of infants based on MIFNet. Computer Systems & Applications, 2023, 32(8): 42–53.
History
  • Received: January 16, 2023
  • Revised: February 13, 2023
  • Online: June 09, 2023
Copyright: Institute of Software, Chinese Academy of Sciences