From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning
Song, Jingkuan (1); Guo, Yuyu (1); Gao, Lianli (1); Li, Xuelong (2); Hanjalic, Alan (3); Shen, Heng Tao (1)
Department: Center for Optical Imagery Analysis and Learning
2019-10
Source Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X; 2162-2388
Volume: 30  Issue: 10  Pages: 3047-3058
Contribution Rank: 2
Abstract

Video captioning is, in essence, a complex natural process that is affected by various uncertainties stemming from video content, subjective judgment, and so on. In this paper, we build on the recent progress in using the encoder-decoder framework for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states, and such complex uncertainty cannot be modeled efficiently by deterministic models. We propose a generative approach, referred to as the multimodal stochastic recurrent neural network (MS-RNN), which models the uncertainty observed in the data using latent stochastic variables. MS-RNN can therefore improve the performance of video captioning and generate multiple sentences to describe a video under different random factors. Specifically, a multimodal long short-term memory (LSTM) is first proposed to interact with both visual and textual features to capture a high-level representation. Then, a backward stochastic LSTM is proposed to support uncertainty propagation by introducing latent variables. Experimental results on the challenging Microsoft Video Description and Microsoft Research Video-to-Text data sets show that our proposed MS-RNN approach outperforms state-of-the-art video captioning methods.
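To make the two ideas named in the abstract concrete, the sketch below illustrates (1) a multimodal LSTM step that fuses a visual feature with a word embedding and (2) a latent stochastic variable injected into the hidden state via the reparameterization trick. This is a minimal illustration, not the authors' implementation; all module names, dimensions, and the concatenation-based fusion are assumptions made for the example.

```python
# Hypothetical sketch of a multimodal stochastic decoding step (not the paper's code).
import torch
import torch.nn as nn

class MultimodalStochasticStep(nn.Module):
    def __init__(self, visual_dim, word_dim, hidden_dim, latent_dim):
        super().__init__()
        # Assumption: fuse modalities by concatenation + projection before the LSTM update.
        self.fuse = nn.Linear(visual_dim + word_dim, hidden_dim)
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)
        # Gaussian posterior over the latent variable z_t, conditioned on the hidden state.
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Map the sampled z_t back into the state used for word prediction.
        self.merge = nn.Linear(hidden_dim + latent_dim, hidden_dim)

    def forward(self, visual_feat, word_emb, state):
        h, c = state
        x = torch.tanh(self.fuse(torch.cat([visual_feat, word_emb], dim=-1)))
        h, c = self.lstm(x, (h, c))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        h_out = torch.tanh(self.merge(torch.cat([h, z], dim=-1)))
        return h_out, (h, c), (mu, logvar)

# Example: one decoding step for a batch of 4 videos with illustrative dimensions.
step = MultimodalStochasticStep(visual_dim=2048, word_dim=300, hidden_dim=512, latent_dim=64)
state = (torch.zeros(4, 512), torch.zeros(4, 512))
h_out, state, (mu, logvar) = step(torch.randn(4, 2048), torch.randn(4, 300), state)
print(h_out.shape)  # torch.Size([4, 512])
```

Because z_t is resampled at every step, repeated decoding of the same video can yield different captions, which is how a stochastic decoder can produce multiple descriptions where a deterministic one cannot.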

Keywords: Recurrent neural network (RNN); uncertainty; video captioning
DOI: 10.1109/TNNLS.2018.2851077
Indexed By: SCI
Language: English
WOS ID: WOS:000487199000014
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation statistics
Cited Times (WOS): 6
Document Type: Journal Article
Identifier: http://ir.opt.ac.cn/handle/181661/31880
Collection: Center for Optical Imagery Analysis and Learning
Affiliation:
1. Univ Elect Sci & Technol China, Ctr Future Media, Chengdu 611731, Sichuan, Peoples R China
2. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
3. Delft Univ Technol, Dept Intelligent Syst, Delft, Netherlands
Recommended Citation
GB/T 7714: Song, Jingkuan, Guo, Yuyu, Gao, Lianli, et al. From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30(10): 3047-3058.
APA: Song, Jingkuan, Guo, Yuyu, Gao, Lianli, Li, Xuelong, Hanjalic, Alan, & Shen, Heng Tao. (2019). From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 30(10), 3047-3058.
MLA: Song, Jingkuan, et al. "From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 30.10 (2019): 3047-3058.
Files in This Item:
File Name/Size: From Deterministic t (3046KB)
DocType: Journal Article
Version: Published version
Access: Restricted Access
License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.