COMPUTER SYSTEMS APPLICATIONS, 2018, 27(4): 88-93
Human Traffic Analysis Based on Video for Urban Quantitative Research
(1.College of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China;2.Chengdu Institute of Planning & Design, Chengdu 610081, China)
Received: July 26, 2017    Revised: August 14, 2017
Abstract: In modern urban planning research, in-depth analysis of information centered on people is crucial. Applying effective video analysis techniques to process and analyze surveillance video can greatly expand the basic data available on pedestrians, which is of great significance for urban quantitative research. The method studied here processes video of the same street captured over a period of time. A deep learning approach based on a feed-forward convolutional neural network model is used to detect pedestrians within a specified monitoring area of the video. To ensure the accuracy of the pedestrian information, the detected pedestrians are tracked, with detection and handling of lost tracking targets added. Finally, the number of pedestrians, their movement direction, dwell time, and movement speed are quantified and analyzed. Experimental results show that the method can effectively quantify pedestrian information and provide accurate and effective data support for quantitative urban studies.
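The paper does not include code, but the detect-track-quantify pipeline described in the abstract can be illustrated with a minimal sketch. The sketch below substitutes OpenCV's built-in HOG+SVM people detector for the paper's feed-forward CNN detector and uses a toy greedy centroid tracker with a lost-frame threshold in place of the paper's tracking and target-loss handling; the video path, monitoring region, and matching thresholds are assumptions for illustration only.

```python
# Hypothetical sketch (not the authors' implementation): per-frame pedestrian
# detection, simple centroid tracking with lost-target handling, and
# quantification of count, direction, dwell time, and speed in a monitored region.
import math
import cv2

MONITOR_ROI = (100, 100, 500, 400)   # x1, y1, x2, y2 of monitored area (assumed)
MAX_MATCH_DIST = 60                  # px; matching threshold for the toy tracker (assumed)
MAX_LOST_FRAMES = 15                 # drop a track after this many missed frames (assumed)

# OpenCV's HOG+SVM pedestrian detector stands in for the paper's CNN detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

tracks = {}      # track_id -> {"positions": [(frame_idx, cx, cy)], "lost": int}
finished = []    # completed tracks kept for the final statistics
next_id = 0

def center(rect):
    x, y, w, h = rect
    return (x + w / 2.0, y + h / 2.0)

cap = cv2.VideoCapture("street.mp4")          # placeholder video path
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    x1, y1, x2, y2 = MONITOR_ROI
    centers = [c for c in map(center, rects) if x1 <= c[0] <= x2 and y1 <= c[1] <= y2]

    # Greedy nearest-neighbour association between existing tracks and detections.
    unmatched = list(centers)
    for tid, tr in list(tracks.items()):
        last = tr["positions"][-1]
        best, best_d = None, MAX_MATCH_DIST
        for c in unmatched:
            d = math.hypot(c[0] - last[1], c[1] - last[2])
            if d < best_d:
                best, best_d = c, d
        if best is not None:
            tr["positions"].append((frame_idx, best[0], best[1]))
            tr["lost"] = 0
            unmatched.remove(best)
        else:
            tr["lost"] += 1                    # simple "target lost" handling
            if tr["lost"] > MAX_LOST_FRAMES:
                finished.append(tr)
                del tracks[tid]
    for c in unmatched:                        # unmatched detections start new tracks
        tracks[next_id] = {"positions": [(frame_idx, c[0], c[1])], "lost": 0}
        next_id += 1
    frame_idx += 1

cap.release()
finished.extend(tracks.values())

# Quantify pedestrian count, dwell time, dominant horizontal direction, and speed.
print("pedestrians counted:", len(finished))
for tr in finished:
    pts = tr["positions"]
    dwell_s = (pts[-1][0] - pts[0][0]) / fps
    dx = pts[-1][1] - pts[0][1]
    direction = "right" if dx >= 0 else "left"
    dist_px = sum(math.hypot(b[1] - a[1], b[2] - a[2]) for a, b in zip(pts, pts[1:]))
    speed = dist_px / dwell_s if dwell_s > 0 else 0.0   # px/s; camera calibration needed for m/s
    print(f"dwell={dwell_s:.1f}s direction={direction} speed={speed:.1f}px/s")
```

Speeds are reported in pixels per second; converting to real-world units would require the kind of camera-to-street calibration a full system would supply.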
Funding: Chengdu Science and Technology Benefiting-the-People Project (2015-HM01-00293-SF); National Natural Science Foundation of China (61471248)
Citation:
CAO Cheng, QING Lin-Bo, HAN Long-Mei, HE Xiao-Hai. Human Traffic Analysis Based on Video for Urban Quantitative Research. COMPUTER SYSTEMS APPLICATIONS, 2018, 27(4): 88-93