OPT OpenIR > Spectral Imaging Technology Research Laboratory
Sound Active Attention Framework for Remote Sensing Image Captioning
Lu, Xiaoqiang1; Wang, Binqiang1,2; Zheng, Xiangtao1
Author's Department: Spectral Imaging Technology Research Laboratory
2020-03
Journal: IEEE Transactions on Geoscience and Remote Sensing
ISSN: 0196-2892; 1558-0644
Volume: 58  Issue: 3  Pages: 1985-2000
Affiliation Rank: 1
Abstract

Attention-mechanism-based image captioning methods have achieved good results in the remote sensing field, but they are driven by tagged sentences, a scheme called passive attention. However, different observers may pay different levels of attention to the same image, so an observer's attention at test time may not be consistent with the attention encoded during training. As a direct and natural form of human-machine interaction, speech is much faster than typing sentences, and sound can represent the attention of different observers; this is called active attention. Active attention can describe an image in a more targeted way: in disaster assessment, for example, the situation can be grasped quickly and the affected areas can be located according to the specific disaster. A novel sound active attention framework is proposed to generate more specific captions according to the observer's interest. First, sound is modeled by mel-frequency cepstral coefficients (MFCCs) and the image is encoded by a convolutional neural network (CNN). Then, to handle the continuous nature of sound, a sound module and an attention module are designed based on gated recurrent units (GRUs). Finally, the sound-guided image feature produced by the attention module is fed into the output module to generate a descriptive sentence. Experiments on both fake and real sound data sets show that the proposed method can generate sentences that capture the focus of human observers.
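The pipeline sketched in the abstract (MFCC-encoded sound driving attention over CNN image features) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the shapes, the single-layer GRU, the bilinear scoring function, and all names (`gru_step`, `sound_guided_attention`, `W_att`) are illustrative assumptions, and randomly initialised weights stand in for trained parameters.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One standard GRU update over input x with hidden state h."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

def sound_guided_attention(image_regions, mfcc_frames, hidden=16, seed=0):
    """Run a GRU over MFCC frames, then attend over CNN region features.

    image_regions: (num_regions, d_img) array, one CNN feature per region.
    mfcc_frames:   (num_frames, d_mfcc) MFCC sequence of the spoken query.
    Returns (attention_weights, sound_guided_context).
    """
    rng = np.random.default_rng(seed)
    d_mfcc = mfcc_frames.shape[1]
    d_img = image_regions.shape[1]
    # Random weights stand in for trained parameters in this sketch.
    Wz, Wr, Wh = [rng.standard_normal((hidden, d_mfcc)) * 0.1 for _ in range(3)]
    Uz, Ur, Uh = [rng.standard_normal((hidden, hidden)) * 0.1 for _ in range(3)]
    h = np.zeros(hidden)
    for x in mfcc_frames:  # sound module: GRU encodes the MFCC sequence
        h = gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh)
    # Attention module: score each image region against the final sound state.
    W_att = rng.standard_normal((hidden, d_img)) * 0.1
    scores = image_regions @ W_att.T @ h      # (num_regions,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax attention weights
    context = weights @ image_regions         # sound-guided image feature
    return weights, context

# Toy inputs: a 7x7 CNN grid of 512-d features and 40 frames of 13 MFCCs.
regions = np.random.default_rng(1).standard_normal((49, 512))
mfcc = np.random.default_rng(2).standard_normal((40, 13))
weights, context = sound_guided_attention(regions, mfcc)
```

In the full framework the resulting context vector would condition the output module that decodes the caption word by word; here it simply shows how the spoken query, rather than the training sentences, determines where attention lands.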

Keywords: Active attention; remote sensing image captioning; semantic understanding
DOI: 10.1109/TGRS.2019.2951636
Indexed by: SCI; EI
Language: English
WOS ID: WOS:000519598700037
Publisher: Institute of Electrical and Electronics Engineers Inc.
EI Accession Number: 20201008273153
Citation Statistics
Times Cited (WOS): 50
Document Type: Journal article
Identifier: http://ir.opt.ac.cn/handle/181661/93309
Collection: Spectral Imaging Technology Research Laboratory
Corresponding Author: Lu, Xiaoqiang
Affiliations:
1. Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China;
2. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation:
GB/T 7714: Lu, Xiaoqiang, Wang, Binqiang, Zheng, Xiangtao. Sound Active Attention Framework for Remote Sensing Image Captioning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(3): 1985-2000.
APA: Lu, Xiaoqiang, Wang, Binqiang, & Zheng, Xiangtao. (2020). Sound Active Attention Framework for Remote Sensing Image Captioning. IEEE Transactions on Geoscience and Remote Sensing, 58(3), 1985-2000.
MLA: Lu, Xiaoqiang, et al. "Sound Active Attention Framework for Remote Sensing Image Captioning." IEEE Transactions on Geoscience and Remote Sensing 58.3 (2020): 1985-2000.
Files in This Item:
File Name/Size: Sound Active Attenti(5034KB) | Document Type: Journal article | Version: Published | Access: Restricted | License: CC BY-NC-SA (request full text)
 

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.