Key Frame Extraction in the Summary Space
Authors: Li, Xuelong (1); Zhao, Bin (2); Lu, Xiaoqiang (1)
Department: Center for Optical Imagery Analysis and Learning
Issued Date: 2018-06
Source Publication: IEEE TRANSACTIONS ON CYBERNETICS
ISSN: 2168-2267
Volume: 48  Issue: 6  Pages: 1923-1934
Contribution Rank: 1
Abstract

Key frame extraction is an efficient way to create a video summary, which helps users obtain a quick comprehension of the video content. Generally, the key frames should be representative of the video content and, at the same time, diverse, so as to reduce redundancy. Based on the assumption that the video data lie near a subspace of a high-dimensional space, a new approach, named key frame extraction in the summary space, is proposed in this paper. The proposed approach aims to find the representative frames of the video and to filter out similar frames from the representative frame set. First, the video data are mapped to a high-dimensional space, named the summary space. Then, a new representation is learned for each frame by analyzing the intrinsic structure of the summary space. Specifically, the learned representation reflects the representativeness of the frame and is utilized to select representative frames. Next, the perceptual hash algorithm is employed to measure the similarity of representative frames, and the key frame set is obtained by filtering out similar frames from the representative frame set. Finally, the video summary is constructed by arranging the key frames in temporal order. Additionally, the ground truth, created by filtering out similar frames from human-created summaries, is utilized to evaluate the quality of the video summary. Experimental results on 80 videos from two datasets indicate that the proposed approach outperforms several traditional approaches.
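
The perceptual hash is the only component of the pipeline the abstract names concretely, and even there the specific variant is not stated in this record. The sketch below is therefore an illustration under assumptions, not the authors' implementation: it uses the common average-hash (aHash) variant, and the helper names (`average_hash`, `filter_similar`) and the Hamming-distance threshold `max_hamming` are hypothetical choices made for the example. It shows how near-duplicate representative frames could be filtered and the survivors kept in temporal order, matching the final two steps described above.

```python
import numpy as np

def average_hash(frame, hash_size=8):
    """Average-hash (aHash) fingerprint of a 2-D grayscale frame.

    The frame is block-averaged down to hash_size x hash_size cells;
    each cell is thresholded against the global mean, giving a binary
    vector of length hash_size**2 (64 bits by default).
    """
    h, w = frame.shape
    ys = np.linspace(0, h, hash_size + 1, dtype=int)
    xs = np.linspace(0, w, hash_size + 1, dtype=int)
    cells = np.array([[frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(hash_size)]
                      for i in range(hash_size)])
    return (cells > cells.mean()).ravel()

def filter_similar(representatives, max_hamming=10):
    """Drop representative frames whose hash is close to one already kept.

    `representatives` is a list of (frame_index, grayscale_array) pairs.
    Returns the surviving frame indices sorted in temporal order, i.e.
    the key frame set arranged chronologically.
    """
    kept_indices, kept_hashes = [], []
    for idx, frame in representatives:
        fp = average_hash(frame)
        # Keep the frame only if it differs from every kept frame by
        # more than max_hamming bits (a hypothetical threshold).
        if all(np.count_nonzero(fp != other) > max_hamming
               for other in kept_hashes):
            kept_indices.append(idx)
            kept_hashes.append(fp)
    return sorted(kept_indices)
```

With a 64-bit hash, a threshold of roughly 10 differing bits treats frames as near-duplicates; the threshold actually used in the paper is not given in this record.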

Keywords: Diverse; Key Frame; Representative; Summary Space
Subject Area: Automation & Control Systems
DOI: 10.1109/TCYB.2017.2718579
Indexed By: SCI; EI
Language: English
WOS Research Area: Automation & Control Systems; Computer Science
WOS ID: WOS:000435342400020
EI Accession Number: 20172903947231
Citation Statistics
Cited Times (WOS): 2
Document Type: Journal article
Identifier: http://ir.opt.ac.cn/handle/181661/30402
Collection: Center for Optical Imagery Analysis and Learning
Corresponding Author: Lu, Xiaoqiang (Chinese Acad Sci, Ctr Opt Imagery Anal & Learning, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China)
Affiliations:
1. Chinese Acad Sci, Ctr Opt Imagery Anal & Learning, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
2. Northwestern Polytech Univ, Ctr Opt Imagery Anal & Learning, Xian 710072, Shaanxi, Peoples R China
Recommended Citation
GB/T 7714: Li, Xuelong, Zhao, Bin, Lu, Xiaoqiang. Key Frame Extraction in the Summary Space[J]. IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48(6): 1923-1934.
APA: Li, Xuelong, Zhao, Bin, & Lu, Xiaoqiang. (2018). Key Frame Extraction in the Summary Space. IEEE TRANSACTIONS ON CYBERNETICS, 48(6), 1923-1934.
MLA: Li, Xuelong, et al. "Key Frame Extraction in the Summary Space." IEEE TRANSACTIONS ON CYBERNETICS 48.6 (2018): 1923-1934.
Files in This Item:
File Name: Key Frame Extraction in the Summary Space.pdf (2123 KB)
Format: Adobe PDF
DocType: Journal article (author's accepted manuscript)
Access: Open Access
License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.