Computer Systems & Applications, 2020, 29(10): 267-273
Vehicle Detection and Classification Based on Highway Monitoring Video
CAO Fu-Kui, BAI Tian, XU Xiao-Long
(1. School of Software Engineering, University of Science and Technology of China, Hefei 230027, China; 2. Information Office, Highway Administration of Xiamen, Fujian Province, Xiamen 361008, China)
Received: January 15, 2020    Revised: February 13, 2020
Abstract: After studying existing detection and classification algorithms, we design a scheme that fuses an improved Gaussian Mixture Model (GMM) with a classification network (GoogLeNet) for vehicle detection and classification. To address the slow model initialization and high computational complexity of the GMM, we improve the initialization algorithm to raise initialization efficiency. A five-frame difference method performs preliminary vehicle extraction, and the GMM is then applied only within the extracted vehicle regions to obtain vehicle images; combining the two shrinks the modeling area, speeds up detection, and improves the real-time performance of the system. Finally, GoogLeNet classifies the detected vehicles. Experiments show that, compared with existing vehicle detection and classification methods, the proposed approach greatly improves both detection speed and classification accuracy, meeting the real-time requirements for vehicle detection and classification of highway surveillance video in real scenarios.
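The pipeline the abstract describes maps onto off-the-shelf tooling. Below is a minimal Python sketch of the same idea, not the authors' implementation: OpenCV's MOG2 subtractor stands in for the paper's improved GMM, torchvision's ImageNet-pretrained GoogLeNet stands in for the vehicle-class network the authors trained, and the video path, blob-size threshold, and frame-difference threshold are illustrative assumptions.

```python
# Sketch: five-frame difference proposes moving regions, a GMM background
# model confirms foreground there, and GoogLeNet classifies each crop.
from collections import deque

import cv2
import torch
from torchvision import models, transforms


def five_frame_difference(frames, thresh=25):
    """Coarse motion mask from five consecutive grayscale frames."""
    d = [cv2.absdiff(frames[i + 1], frames[i]) for i in range(4)]
    # AND adjacent differences, then OR the pairs: suppresses noise while
    # keeping regions that moved throughout the five-frame window.
    combined = cv2.bitwise_or(cv2.bitwise_and(d[0], d[1]),
                              cv2.bitwise_and(d[2], d[3]))
    _, mask = cv2.threshold(combined, thresh, 255, cv2.THRESH_BINARY)
    return mask


gmm = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                         detectShadows=False)
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture("highway.mp4")   # placeholder video source
window = deque(maxlen=5)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    window.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if len(window) < 5:
        continue
    motion = five_frame_difference(list(window))
    # The paper restricts GMM modeling itself to the extracted regions;
    # masking the full-frame GMM output is the simplest stand-in here.
    fg = cv2.bitwise_and(gmm.apply(frame), motion)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 1500:                # drop small blobs (guessed threshold)
            continue
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = net(preprocess(crop).unsqueeze(0))
        label = int(logits.argmax(1))   # ImageNet index, not the paper's vehicle classes
        print((x, y, w, h), label)
cap.release()
```

In the paper's actual system the GMM is built only inside the regions the frame difference proposes, which is what yields the reported speedup; the post-hoc masking above merely approximates that behavior in a few lines.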
Fund Program: Science and Technology Development Project of the Fujian Provincial Department of Transportation (201431)
Citation:
CAO Fu-Kui, BAI Tian, XU Xiao-Long. Vehicle Detection and Classification Based on Highway Monitoring Video. Computer Systems & Applications, 2020, 29(10): 267-273.