Pilot's Gaze Zone Classification Based on Multi-modal Data Fusion
Authors: Duan Gaole, Wang Changyuan, Wu Gongpu, Wang Hongyan

Fund Project: National Natural Science Foundation of China (52072293)



    Abstract:

    To address the problems of eye image loss and inaccurate head pose estimation during image capture, a non-contact eye information acquisition method is employed to collect facial images, and the pilot's current gaze direction is determined from a single image frame. In addition, to remedy the poor classification performance of existing networks, which ignore the occlusion of the eyes caused by head movement, a multimodal data fusion network for pilot gaze zone classification is proposed by combining facial images with head pose features on the basis of an improved MobileViT model. First, a multimodal data fusion module is introduced to address the overfitting caused by size imbalance during feature concatenation. Second, an inverted residual block based on a parallel-branch SE mechanism is proposed to fully exploit the spatial and channel feature information in the shallow layers of the network, and the global attention mechanism of the Transformer is incorporated to capture multi-scale features. Finally, the Mobile Block structure is redesigned with depthwise separable convolutions to reduce model complexity. The new model is compared with mainstream baseline models on the self-built FlyGaze dataset. Experimental results show that the proposed PilotT model achieves classification accuracies above 92% for gaze zones 0, 3, 4, and 5 and adapts well to face rotation. These findings have practical value for improving flight training quality and for pilot intention recognition and fatigue assessment.
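    The implementation is not included on this page, so the following is a minimal PyTorch sketch of two components the abstract names: an inverted residual block whose SE branch runs in parallel with a depthwise separable convolution branch, and a fusion step that projects a low-dimensional head pose vector before concatenating it with image features to balance feature sizes. The branch layout, dimensions, and all identifiers (ParallelSEInvertedResidual, GazeFusionHead, the six-zone output) are assumptions for illustration, not the authors' released PilotT code.

```python
# Hypothetical sketch of components described in the abstract (not the
# authors' released code). Shapes and names are assumptions.
import torch
import torch.nn as nn

class ParallelSEInvertedResidual(nn.Module):
    """Inverted residual block whose SE branch runs in parallel with the
    depthwise convolution branch, so channel attention is computed from
    the expanded features alongside the spatial branch (assumed layout)."""
    def __init__(self, in_ch: int, out_ch: int, expand: int = 4, se_ratio: int = 8):
        super().__init__()
        hid = in_ch * expand
        self.expand = nn.Sequential(           # 1x1 pointwise expansion
            nn.Conv2d(in_ch, hid, 1, bias=False),
            nn.BatchNorm2d(hid), nn.SiLU(),
        )
        self.dw = nn.Sequential(               # 3x3 depthwise (spatial branch)
            nn.Conv2d(hid, hid, 3, padding=1, groups=hid, bias=False),
            nn.BatchNorm2d(hid), nn.SiLU(),
        )
        self.se = nn.Sequential(               # parallel SE (channel branch)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hid, hid // se_ratio, 1), nn.SiLU(),
            nn.Conv2d(hid // se_ratio, hid, 1), nn.Sigmoid(),
        )
        self.project = nn.Sequential(          # 1x1 pointwise projection
            nn.Conv2d(hid, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.use_skip = in_ch == out_ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.expand(x)
        h = self.dw(h) * self.se(h)            # fuse spatial and channel branches
        out = self.project(h)
        return out + x if self.use_skip else out

class GazeFusionHead(nn.Module):
    """Projects a low-dimensional head pose vector (e.g. yaw/pitch/roll)
    up to the image-feature width before concatenation, so one modality
    does not dominate the fused representation (assumed design)."""
    def __init__(self, img_dim: int = 256, pose_dim: int = 3, n_zones: int = 6):
        super().__init__()
        self.pose_proj = nn.Sequential(nn.Linear(pose_dim, img_dim), nn.ReLU())
        self.classifier = nn.Linear(img_dim * 2, n_zones)

    def forward(self, img_feat: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_feat, self.pose_proj(pose)], dim=1)
        return self.classifier(fused)          # logits over gaze zones

if __name__ == "__main__":
    block = ParallelSEInvertedResidual(32, 32)
    feat = block(torch.randn(2, 32, 56, 56))
    head = GazeFusionHead()
    logits = head(torch.randn(2, 256), torch.randn(2, 3))
    print(feat.shape, logits.shape)            # -> (2, 32, 56, 56) (2, 6)
```

    Under these assumptions, multiplying the depthwise branch output by the parallel SE branch lets shallow layers weight spatial and channel information jointly, which is one plausible reading of the parallel-branch SE mechanism described above; consult the full paper for the actual PilotT architecture.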

Cite this article:

Duan GL, Wang CY, Wu GP, Wang HY. Pilot's gaze zone classification based on multi-modal data fusion. Computer Systems & Applications, 2024, 33(11): 1-14.

History
  • Received: 2024-04-21
  • Revised: 2024-05-20
  • Published online: 2024-09-27