Computer Systems & Applications, 2020, 29(12): 126-134
Algorithm of Moving Target Handover Based on Deep Learning
(1. School of Information and Electrical Engineering, Shandong Jianzhu University, Jinan 250101, China; 2. Shandong Provincial Key Laboratory of Intelligent Building Technology, Jinan 250101, China)
Received: April 06, 2020    Revised: April 28, 2020
Abstract: To address the discontinuity and uncertainty of moving targets across multiple cameras with non-overlapping fields of view, this paper proposes a handover algorithm for moving pedestrian targets based on deep learning. First, a face feature extraction model is built on a deep convolutional neural network and trained to extract accurate face features. Second, two commonly used similarity measures are compared and the more suitable one is selected, so that the face matching process is carried out optimally and matching accuracy is improved. Finally, the best-matching face is found by matching face features across different cameras, accomplishing the handover of the moving target. Experiments show that the deep neural network reduces the probability of losing a moving target, accurately extracts the face features of moving targets, and effectively completes the handover tracking task across multiple cameras.
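The abstract compares two common similarity measures for matching face features across cameras but does not name them on this page; for CNN face embeddings, the usual candidates are cosine similarity and Euclidean distance. Below is a minimal illustrative sketch of such cross-camera matching, not the authors' implementation: the function names, the 128-dimensional embedding size, and the 0.5 acceptance threshold are assumptions made purely for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face feature vectors (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two face feature vectors (lower = more alike)."""
    return float(np.linalg.norm(a - b))

def handover_match(query: np.ndarray, gallery: np.ndarray, threshold: float = 0.5):
    """Match a face feature from camera A (`query`) against the features of
    faces detected by camera B (`gallery`, one row per face). Returns the
    index of the best-matching face, or None if no candidate clears the
    threshold (hypothetical value; it would be tuned on validation data)."""
    scores = [cosine_similarity(query, g) for g in gallery]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# Toy usage: random 128-D vectors stand in for CNN face embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))
query = gallery[2] + 0.05 * rng.normal(size=128)  # same person, slight noise
print(handover_match(query, gallery))  # -> 2
```

Cosine similarity is often preferred over Euclidean distance for deep face embeddings because it depends only on the angle between vectors and is insensitive to their magnitudes.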
Foundation items: Key Research and Development Program of Shandong Province (2019GSF111054, 2019GGX104095); Major Science and Technology Innovation Project of Shandong Province (2019JZZY010120)
Citation:
CAO Jian-Rong, WU Xin-Ying, LYU Jun-Jie, WANG Ya-Meng, YANG Hong-Juan, ZHANG Xu. Algorithm of Moving Target Handover Based on Deep Learning. COMPUTER SYSTEMS APPLICATIONS, 2020, 29(12): 126-134