Computer Systems & Applications, 2019, 28(11): 176-181
Video Keyframe Extraction Method Based on Fusion Feature
(School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China)
Received: 2019-04-17    Revised: 2019-05-20
Abstract: Video analysis is usually performed on individual frames, but consecutive frames are highly redundant, so keyframe extraction is crucial. Existing hand-crafted extraction methods often miss keyframes or retain redundant ones. With the development of deep learning, deep convolutional networks can extract image features far more effectively than hand-crafted methods. This study therefore proposes a keyframe extraction method that combines deep features extracted by a convolutional neural network with hand-crafted content features. First, a convolutional neural network extracts deep features from the video frames; then content features are extracted with a traditional hand-crafted method; finally, the content features and deep features are fused to select the keyframes. Experimental results show that the proposed method outperforms previous keyframe extraction methods.
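The pipeline the abstract describes (hand-crafted content feature + deep feature, fused, then keyframes kept where the fused feature changes sharply) can be sketched as below. This is a minimal illustration, not the paper's implementation: the colour histogram stands in for the hand-crafted content feature, a fixed random projection stands in for the CNN embedding (which would be, e.g., a pretrained network's pooled features), and the `alpha` fusion weight and distance `threshold` are hypothetical parameters.

```python
import numpy as np

def color_histogram(frame, bins=16):
    # Hand-crafted content feature: per-channel colour histogram, L1-normalised.
    hist = np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(frame.shape[-1])
    ]).astype(float)
    return hist / (hist.sum() + 1e-8)

def make_deep_extractor(frame_shape, dim=32, seed=0):
    # Stand-in for the deep-feature branch: a fixed random projection of the
    # flattened frame. In the paper this would be a CNN's learned features.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(dim, int(np.prod(frame_shape))))
    def extract(frame):
        v = w @ (frame.astype(float).ravel() / 255.0)
        return v / (np.linalg.norm(v) + 1e-8)
    return extract

def fused_feature(frame, deep_fn, alpha=0.5):
    # Fuse the two feature types by weighted concatenation.
    return np.concatenate([alpha * color_histogram(frame),
                           (1.0 - alpha) * deep_fn(frame)])

def extract_keyframes(frames, deep_fn, threshold=0.3):
    # Keep a frame whenever its fused feature moves far enough (Euclidean
    # distance) from the last kept keyframe's feature.
    keys = [0]
    last = fused_feature(frames[0], deep_fn)
    for i in range(1, len(frames)):
        cur = fused_feature(frames[i], deep_fn)
        if np.linalg.norm(cur - last) > threshold:
            keys.append(i)
            last = cur
    return keys
```

On a synthetic two-shot clip (several identical dark frames followed by several identical bright frames), this selects the first frame of each shot and discards the redundant ones, which is the behaviour the fusion scheme is meant to achieve.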
Citation: ZHANG Xiao-Yu, ZHANG Yun-Hua. Video Keyframe Extraction Method Based on Fusion Feature. Computer Systems & Applications, 2019, 28(11): 176-181.
