Video Keyframe Extraction Method Based on Fusion Feature
Abstract:

    Video analysis is usually performed on individual video frames, but consecutive frames contain a great deal of redundancy, which makes keyframe extraction crucial. Existing methods based on hand-crafted features often miss keyframes or retain redundant ones. With the development of deep learning, deep convolutional networks have greatly improved image feature extraction compared with hand-crafted approaches. This study therefore proposes a keyframe extraction method that combines deep features of video frames with traditional hand-crafted features. First, a convolutional neural network extracts deep features from each video frame; then content features are extracted with a traditional hand-crafted method; finally, the content features and deep features are fused, and keyframes are selected from the fused representation. Experimental results show that the proposed method outperforms previous keyframe extraction methods.
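
    The following is a minimal sketch of the three-step pipeline the abstract describes (deep features, hand-crafted content features, fusion, then keyframe selection). The concrete choices are assumptions, since the abstract does not specify them: a pretrained ResNet-50 as the CNN backbone, an HSV color histogram as the hand-crafted content feature, vector concatenation as the fusion step, and a distance threshold between consecutive fused vectors as the selection rule.

```python
# Sketch only: backbone, histogram feature, fusion, and threshold are assumed,
# not taken from the paper.
import cv2
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained CNN with the classification head removed, used as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_feature(frame_bgr: np.ndarray) -> np.ndarray:
    """2048-D deep feature from the CNN backbone (assumed choice), L2-normalized."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        feat = backbone(preprocess(rgb).unsqueeze(0)).squeeze(0).numpy()
    return feat / (np.linalg.norm(feat) + 1e-8)

def content_feature(frame_bgr: np.ndarray, bins: int = 32) -> np.ndarray:
    """Hand-crafted content feature: normalized HSV color histogram (assumed choice)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [bins, bins, bins],
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-8)

def extract_keyframes(video_path: str, threshold: float = 0.35, sample_rate: int = 5):
    """Keep a sampled frame as a keyframe when its fused feature differs
    sufficiently from the last selected keyframe (illustrative rule)."""
    cap = cv2.VideoCapture(video_path)
    keyframes, last_fused, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_rate == 0:
            # Fusion step: simple concatenation of deep and content features.
            fused = np.concatenate([deep_feature(frame), content_feature(frame)])
            if last_fused is None or np.linalg.norm(fused - last_fused) > threshold:
                keyframes.append(idx)
                last_fused = fused
        idx += 1
    cap.release()
    return keyframes
```

    As a usage example, extract_keyframes("input.mp4") returns the indices of the selected keyframes; the threshold and sampling rate would need tuning per dataset.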

Get Citation

Zhang Xiaoyu, Zhang Yunhua. Video Keyframe Extraction Method Based on Fusion Feature. Computer Systems & Applications, 2019, 28(11): 176-181
History
  • Received: April 17, 2019
  • Revised: May 20, 2019
  • Online: November 08, 2019
  • Published: November 15, 2019