Robust Superpixel Tracking Method
Authors: Guo Li, Zhou Shengzong, Fu Lusi, Yu Zhigang
    Abstract:

    In object tracking, the traditional superpixel tracking algorithm, once occlusion occurs, marks non-target superpixels as target and adds them to the feature space. When computing candidate-sample confidence, using only the single nearest superpixel in the feature space to decide the cluster membership of a sample's superpixels then produces errors, while relying on too many neighboring superpixels accumulates classification error. To solve these problems, this paper proposes a robust superpixel tracking algorithm built on a Bayesian framework. First, the initial frames are segmented into superpixels; features are extracted, classified with the mean-shift clustering algorithm and a superpixel-based appearance model, assigned class confidences, and stored in the feature space. Second, the optimal number of neighbors is determined from the average center error over the next few frames. Finally, during tracking, the designated region of each acquired frame is segmented into superpixels, whose features are extracted, soft-classified, and assigned confidences; candidate samples are drawn by Gaussian sampling around the previous frame's target location, and each sample's confidence is obtained by accumulating the confidences of the superpixels it contains. Under severe occlusion, the sliding-window update and the appearance-model modification are suspended, and tracking continues with the current model. Compared with the traditional nearest-neighbor superpixel algorithm, the proposed method effectively improves the tracking success rate and reduces the average center error.
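    The tracking step described above — soft-classifying each superpixel by its k nearest neighbors in the feature space, accumulating superpixel confidences into a sample confidence, and gating the model update under severe occlusion — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the exponential distance weighting, and the zero occlusion threshold are all assumptions made for the sketch.

    ```python
    import numpy as np

    def superpixel_confidence(feat, bank_feats, bank_labels, cluster_conf, k=5):
        """Soft-classify one superpixel by its k nearest neighbors in the
        feature space; return a signed confidence (positive = target)."""
        d = np.linalg.norm(bank_feats - feat, axis=1)
        nn = np.argsort(d)[:k]                 # indices of the k nearest neighbors
        w = np.exp(-d[nn])                     # closer neighbors weigh more (assumed weighting)
        # weighted average of the neighbors' signed cluster confidences
        return float(np.sum(w * cluster_conf[bank_labels[nn]]) / np.sum(w))

    def sample_confidence(sp_feats, bank_feats, bank_labels, cluster_conf, k=5):
        """Confidence of one candidate sample = accumulated confidence of the
        superpixels it contains."""
        return sum(superpixel_confidence(f, bank_feats, bank_labels,
                                         cluster_conf, k) for f in sp_feats)

    def track_step(samples, bank, occlusion_thresh=0.0):
        """Pick the best Gaussian-sampled candidate; suppress the sliding-window
        model update when the best confidence signals severe occlusion."""
        confs = [sample_confidence(s, *bank) for s in samples]
        best = int(np.argmax(confs))
        update_model = confs[best] > occlusion_thresh   # occlusion gate
        return best, confs[best], update_model

    # Toy feature space: cluster 0 = target (confidence +1), cluster 1 = background (-1).
    bank = (np.array([[0., 0.], [.1, 0.], [0., .1], [5., 5.], [5.1, 5.], [5., 5.1]]),
            np.array([0, 0, 0, 1, 1, 1]),
            np.array([1.0, -1.0]))
    near_target = np.array([[.05, .05], [0., .1]])    # candidate over target superpixels
    near_bg     = np.array([[5., 5.], [5.05, 5.]])    # candidate over background superpixels
    best, conf, update = track_step([near_target, near_bg], bank)
    ```

    The neighbor count k is the quantity the paper tunes from the average center error of the early frames: with k too small, a single mislabeled superpixel in the feature space flips a cluster assignment; with k too large, classification errors from distant neighbors accumulate.
    
    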

Cite this article

Guo L, Zhou SZ, Fu LS, Yu ZG. Robust superpixel tracking method. Computer Systems & Applications, 2017, 26(12): 130-136. (in Chinese)

History
  • Received: 2017-03-16
  • Revised: 2017-04-05
  • Published online: 2017-12-07