From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning
Song, Jingkuan (1); Guo, Yuyu (1); Gao, Lianli (1); Li, Xuelong (2); Hanjalic, Alan (3); Shen, Heng Tao (1)
Author department: Laboratory of Spectral Imaging Technology
Publication date: 2019-10
Published in: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X; 2162-2388
Volume: 30, Issue: 10, Pages: 3047-3058
Affiliation ranking: 2
Abstract

Video captioning is, in essence, a complex natural process that is affected by various uncertainties stemming from video content, subjective judgment, and so on. In this paper, we build on recent progress in using the encoder-decoder framework for video captioning and address what we find to be a critical deficiency of existing methods: most decoders propagate deterministic hidden states, and such complex uncertainty cannot be modeled efficiently by deterministic models. We propose a generative approach, referred to as multimodal stochastic recurrent neural networks (MS-RNNs), which models the uncertainty observed in the data using latent stochastic variables. Therefore, MS-RNN can improve the performance of video captioning and generate multiple sentences to describe a video under different random factors. Specifically, a multimodal long short-term memory (LSTM) is first proposed to interact with both visual and textual features to capture a high-level representation. Then, a backward stochastic LSTM is proposed to support uncertainty propagation by introducing latent variables. Experimental results on two challenging data sets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), show that our proposed MS-RNN approach outperforms the state-of-the-art video captioning methods.
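The abstract describes two components: a multimodal LSTM that fuses visual and textual features into a high-level representation, and a backward stochastic LSTM that propagates uncertainty through latent variables so that multiple captions can be generated for one video. The following is a minimal, hypothetical PyTorch sketch of that second idea using VAE-style reparameterization; the class name StochasticCaptionDecoder, the latent_dim parameter, and the simple concatenation-based fusion are illustrative assumptions, not the authors' MS-RNN implementation.

```python
import torch
import torch.nn as nn

class StochasticCaptionDecoder(nn.Module):
    """Hypothetical sketch: an LSTM decoder whose word distribution is
    conditioned on a latent Gaussian variable z sampled at every step."""

    def __init__(self, vocab_size, visual_dim, embed_dim=512, hidden_dim=512, latent_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # "Multimodal" step: word embedding and pooled visual feature are concatenated.
        self.lstm = nn.LSTMCell(embed_dim + visual_dim, hidden_dim)
        # Gaussian parameters of the latent variable z, predicted from the hidden state.
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        # Word logits conditioned on both the hidden state and the sampled z.
        self.out = nn.Linear(hidden_dim + latent_dim, vocab_size)

    def forward(self, visual_feat, captions):
        """visual_feat: (B, visual_dim) pooled video feature; captions: (B, T) word ids."""
        B, T = captions.shape
        h = visual_feat.new_zeros(B, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        logits, kl = [], []
        for t in range(T):
            x = torch.cat([self.embed(captions[:, t]), visual_feat], dim=1)
            h, c = self.lstm(x, (h, c))
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            # Reparameterization trick keeps the sampling step differentiable.
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            logits.append(self.out(torch.cat([h, z], dim=1)))
            # KL divergence to a standard normal prior (VAE-style regularizer).
            kl.append(-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1))
        return torch.stack(logits, dim=1), torch.stack(kl, dim=1)
```

At inference time, drawing different samples of z for the same video feature yields different candidate sentences, which is the generative behavior the abstract motivates.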

Keywords: Recurrent neural network (RNN); uncertainty; video captioning
DOI: 10.1109/TNNLS.2018.2851077
Indexed by: SCI
Language: English
WOS accession number: WOS:000487199000014
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation statistics
Times cited (WOS): 182
Document type: Journal article
Identifier: http://ir.opt.ac.cn/handle/181661/31880
Collection: Laboratory of Spectral Imaging Technology
Author affiliations:
1. Univ Elect Sci & Technol China, Ctr Future Media, Chengdu 611731, Sichuan, Peoples R China
2. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
3. Delft Univ Technol, Dept Intelligent Syst, Delft, Netherlands
Recommended citation
GB/T 7714
Song, Jingkuan, Guo, Yuyu, Gao, Lianli, et al. From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30(10): 3047-3058.
APA: Song, Jingkuan, Guo, Yuyu, Gao, Lianli, Li, Xuelong, Hanjalic, Alan, & Shen, Heng Tao. (2019). From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 30(10), 3047-3058.
MLA: Song, Jingkuan, et al. "From Deterministic to Generative: Multimodal Stochastic RNNs for Video Captioning". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 30.10 (2019): 3047-3058.
Files in this item
File name/size: From Deterministic t (3046KB)
Document type: Journal article
Version: Published version
Access: Restricted access
License: CC BY-NC-SA