计算机系统应用 (Computer Systems & Applications), 2019, Vol. 28, Issue (2): 184-189

Detection of Moving Objects in UAV Video Based on Single Gaussian Model and Optical Flow Analysis
FAN Chang-Jun, WEN Ling-Yan, MAO Quan-Yong, ZHU Zhong-Ke
The 52nd Research Institute, CETHIK Group Co. Ltd., Hangzhou 310012, China
Abstract: To meet the real-time requirements of moving object detection in Unmanned Aerial Vehicle (UAV) video, and to cope with a moving background and variable illumination, a moving object detection technique based on the Single Gaussian Model (SGM) and optical flow is presented. First, an improved SGM is applied to model the background of the images captured by the moving camera, and the corresponding models of the previous frame are fused to compensate for the camera motion. Second, the obtained foreground is used as a mask to extract feature points for optical flow computation, and the resulting sparse points are clustered to detect the objects. Experimental results demonstrate that the proposed approach prevents the SGM background model from being contaminated by the foreground and copes with illumination changes; it also updates the background model quickly and localizes moving objects precisely.
Key words: moving object detection; Single Gaussian Model (SGM); motion compensation; optical flow analysis; hierarchical clustering

1 Overall Framework

 Figure 1. Overall framework

2 Improved Single Gaussian Model

2.1 Life Value of the Single Gaussian Model

 $\mu _i^t = \frac{{\widetilde l_i^{t - 1}}}{{\widetilde l_i^{t - 1} + 1}}\widetilde \mu _i^{t - 1} + \frac{1}{{\widetilde l_i^{t - 1} + 1}}U_i^t$ (1)
 $\sigma _i^t = \frac{{\widetilde l_i^{t - 1}}}{{\widetilde l_i^{t - 1} + 1}}\widetilde \sigma _i^{t - 1} + \frac{1}{{\widetilde l_i^{t - 1} + 1}}V_i^t$ (2)
 $l_i^t = \widetilde l_i^{t - 1} + 1$ (3)

 $U_i^t = \frac{1}{{\left| {S_i^t} \right|}}\sum\limits_{j \in S_i^t} {I_j^t}$ (4)
 $V_i^t = \mathop {\max }\limits_{j \in S_i^t} {(\mu _i^t - I_j^t)^2}$ (5)
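The age-weighted update of Eqs. (1)–(5) can be sketched in pure Python. This is a minimal illustration, not the paper's implementation; the function name and the per-block pixel list are our own, and the model holds one (mean, variance, life) triple per block:

```python
def sgm_update(mean_prev, var_prev, life_prev, pixels):
    """One update step of the improved single Gaussian model.

    mean_prev, var_prev, life_prev: compensated model from the previous
    frame (the tilde quantities in Eqs. (1)-(3)); pixels: intensities of
    the current frame inside the model's block S_i (Eqs. (4)-(5)).
    """
    u = sum(pixels) / len(pixels)             # Eq. (4): block-mean observation
    rho = 1.0 / (life_prev + 1.0)             # learning rate shrinks as life grows
    mean = (1.0 - rho) * mean_prev + rho * u  # Eq. (1)
    v = max((mean - p) ** 2 for p in pixels)  # Eq. (5): max squared deviation
    var = (1.0 - rho) * var_prev + rho * v    # Eq. (2)
    life = life_prev + 1.0                    # Eq. (3): age increases by one
    return mean, var, life
```

Because the blending weight is 1/(life + 1), a young model adapts quickly while a long-lived model changes slowly, which is what lets the background stabilize over time.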
2.2 Model Selection

 ${(U_i^t - \mu _{C,i}^t)^2} < {\theta _S}\sigma _{C,i}^t$ (6)
 ${(U_i^t - \mu _{O,i}^t)^2} < {\theta _S}\sigma _{O,i}^t$ (7)
 $l_{C,i}^t < l_{O,i}^t$ (8)
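One plausible reading of Eqs. (6)–(8) is a dual-model test: the observation first tries to match the apparent (background) model $O$, then the candidate model $C$, and the candidate is promoted to background once its life value exceeds the apparent model's. The sketch below encodes that reading in pure Python; the function name, tuple layout, and the exact swap rule are our assumptions, not taken verbatim from the paper:

```python
def select_model(u, apparent, candidate, theta_s):
    """Decide which Gaussian the observation u belongs to, per Eqs. (6)-(8).

    apparent / candidate: (mean, var, life) tuples; theta_s: match
    threshold. Returns the (possibly swapped) models and a tag naming
    the matched model ("apparent", "candidate", or "none").
    """
    def matches(model):
        mean, var, _ = model
        return (u - mean) ** 2 < theta_s * var  # squared-distance test, Eqs. (6)/(7)

    if matches(apparent):
        return apparent, candidate, "apparent"
    if matches(candidate):
        # Assumed promotion rule: once the candidate outlives the
        # apparent model (i.e. Eq. (8) no longer holds), swap them.
        if candidate[2] > apparent[2]:
            apparent, candidate = candidate, apparent
            return apparent, candidate, "apparent"
        return apparent, candidate, "candidate"
    return apparent, candidate, "none"
```

An observation matching neither model would typically reinitialize the candidate, which is omitted here for brevity.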

2.3 Motion Compensation

 $\widetilde \mu_i^{t - 1} = \sum\limits_{k \in G_i^t} {{\omega _k}\mu_k^{t - 1}}$ (9)
 $\widetilde \sigma _i^{t - 1} = \sum\limits_{k \in G_i^t} {{\omega _k}[\sigma _k^{t - 1} + {{(\mu _k^{t - 1})}^2} - {{(\widetilde \mu _i^{t - 1})}^2}]}$ (10)
 $\widetilde l_i^{t - 1} = \sum\limits_{k \in G_i^t} {{\omega _k}l_k^{t - 1}}$ (11)

 ${\omega _k} \propto \iint\limits_{\widetilde {S_i^{t - 1}} \cap G_i^t} {dxdy} \quad \text{and} \quad \sum\limits_k {{\omega _k}} = 1$ (12)
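The mixing step of Eqs. (9)–(11) can be sketched as follows. The weights are the overlap areas of Eq. (12), normalized to sum to one inside the function; the function name and tuple layout are ours:

```python
def compensate(models, weights):
    """Mix the previous-frame Gaussians that a warped block overlaps.

    models: list of (mean, var, life) triples for the overlapped blocks;
    weights: their overlap areas with the warped block (Eq. (12)).
    Returns the compensated (tilde) model used by Eqs. (1)-(3).
    """
    total = sum(weights)
    w = [x / total for x in weights]  # Eq. (12): weights sum to 1
    mean = sum(wk * m for wk, (m, _, _) in zip(w, models))  # Eq. (9)
    # Eq. (10): mixture variance, i.e. weighted second moments minus
    # the squared mixed mean.
    var = sum(wk * (v + m * m - mean * mean)
              for wk, (m, v, _) in zip(w, models))
    life = sum(wk * l for wk, (_, _, l) in zip(w, models))  # Eq. (11)
    return mean, var, life
```

Note that Eq. (10) is exactly the mixture-variance identity: for two equally weighted models with means 10 and 14 and variance 1, the compensated variance is 5, since the spread between the two means is folded into the variance.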

3 Sparse Optical Flow Analysis

 $N = \sum\limits_p {\left| {I(p) - I(o)} \right|} < \xi$ (13)

 Figure 2. Corner detection template

1) Read in a frame and convert it to grayscale. If it is the first frame, only initialize the relevant parameters; otherwise go to step 2);

2) Use the FAST algorithm to extract the feature point set PreFeaturePtSet of the previous frame, and retrieve PreTrackPtSet, the previous-frame positions of the tracking points on the existing trajectories;

3) For each feature point in PreFeaturePtSet, check whether any tracking point in PreTrackPtSet lies within a Euclidean-distance threshold of it. If none does, treat the feature point as a newly appeared point to be tracked and add it to PreTrackPtSet;

4) Run pyramidal Lucas-Kanade (LK) optical flow on the points in PreTrackPtSet between the previous frame and the current frame, obtaining their current-frame position set CurTrackPtSet;

5) Process each kind of tracking point according to the detection result. For a point that already has a trajectory: if optical flow is detected, update its current-frame position to the corresponding point in CurTrackPtSet and set its last_update_index to the current frame number; if no optical flow is detected, inherit the previous frame's tracking result and leave last_update_index unchanged. For a new tracking point whose optical flow is detected, create the data structure for its trajectory and update it in the same way;

6) For every tracking point, compute the difference between the current frame number and its last_update_index. If the difference exceeds a threshold, the trajectory has not been updated for a long time, and the point is deleted;

7) Examine each trajectory's position changes across frames. If a tracking point barely moves between frames over time, delete it. Return to step 1).
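The bookkeeping in steps 5)–7) can be sketched in pure Python. Feature detection and pyramidal LK flow (steps 2) and 4)) would come from a vision library and are not shown; the `Track` class, function names, and the endpoint-based movement test are our simplifications of the trajectory data structure the paper describes:

```python
class Track:
    """Trajectory of one tracked point (steps 5)-7) above)."""

    def __init__(self, pt, frame_idx):
        self.points = [pt]                  # position history, one entry per frame
        self.last_update_index = frame_idx  # last frame where flow was detected

    def update(self, pt, frame_idx, flow_found):
        if flow_found:
            self.points.append(pt)          # step 5): take the LK result
            self.last_update_index = frame_idx
        else:
            self.points.append(self.points[-1])  # inherit the previous position


def prune(tracks, frame_idx, stale_thresh, move_thresh):
    """Steps 6)-7): drop tracks unmatched for too long or nearly static."""
    alive = []
    for t in tracks:
        if frame_idx - t.last_update_index > stale_thresh:
            continue                        # step 6): stale trajectory, delete
        x0, y0 = t.points[0]
        x1, y1 = t.points[-1]
        if len(t.points) > 1 and abs(x1 - x0) + abs(y1 - y0) < move_thresh:
            continue                        # step 7): near-stationary point, delete
        alive.append(t)
    return alive
```

Step 7) as described accumulates per-frame displacements over time; the endpoint comparison above is a coarser stand-in for that test.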

4 Hierarchical Clustering

1) Put each feature point into its own cluster, yielding $N$ clusters of one point each; the distance between two clusters is the distance between the feature points they contain;

2) Find the two closest clusters and merge them into one, reducing the number of clusters by one;

3) Recompute the distances between the newly merged cluster and all remaining clusters;

4) Repeat steps 2) and 3) until the distance between the two closest clusters is no smaller than the distance threshold;

 ${D_{\min }}({c_i},{c_j}) = \mathop {\min }\limits_{p \in {c_i},{p'} \in {c_j}} \left\{ {\left| {p - {p'}} \right|} \right\}$ (14)
 ${D_{\max }}({c_i},{c_j}) = \mathop {\max }\limits_{p \in {c_i},{p'} \in {c_j}} \left\{ {\left| {p - {p'}} \right|} \right\}$ (15)
 ${D_{\rm{avg}}}({c_i},{c_j}) = \frac{1}{{{n_i}{n_j}}}\sum\limits_{p \in {c_i},{p'} \in {c_j}} {\left\{ {\left| {p - {p'}} \right|} \right\}}$ (16)
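The agglomerative procedure of steps 1)–4) can be sketched in pure Python using the minimum inter-cluster distance of Eq. (14) (single linkage). The function name is ours, and for brevity the pairwise distances are recomputed each iteration rather than cached, which is O(N^3) but faithful to steps 2)–3):

```python
def hcluster(points, dist_thresh):
    """Agglomerative clustering of 2-D feature points (steps 1)-4) above)."""
    clusters = [[p] for p in points]        # step 1): one point per cluster

    def d_min(ci, cj):                      # Eq. (14): single-linkage distance
        return min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                   for (px, py) in ci for (qx, qy) in cj)

    while len(clusters) > 1:
        # step 2): find the closest pair of clusters
        d, i, j = min((d_min(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        if d >= dist_thresh:                # step 4): stop at the threshold
            break
        clusters[i] += clusters.pop(j)      # step 3): merge; distances are
                                            # recomputed on the next pass
    return clusters
```

Swapping `d_min` for the maximum (Eq. (15)) or average (Eq. (16)) linkage changes only the inner distance function, which is why the paper can compare the three criteria on the same framework.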
5 Experimental Results and Analysis

 Figure 3. Comparison of the moving object detection results of the three methods

 Figure 4. Effect of illumination changes on detection results

6 Conclusion
