Sound Active Attention Framework for Remote Sensing Image Captioning
Lu, Xiaoqiang1
Department | Spectral Imaging Technology Research Laboratory
Date Issued | 2020-03
Journal | IEEE Transactions on Geoscience and Remote Sensing
ISSN | 0196-2892; 1558-0644
Volume | 58, Issue 3, Pages 1985-2000
Affiliation Rank | 1
Abstract | Attention mechanism-based image captioning methods have achieved good results in the remote sensing field, but they are driven by tagged sentences, which is called passive attention. However, different observers may give different levels of attention to the same image, so the attention of observers during testing may not be consistent with the attention during training. As a direct and natural form of human-machine interaction, speech is much faster than typing sentences, and sound can represent the attention of different observers; this is called active attention. Active attention can describe the image in a more targeted way; for example, in disaster assessment, the situation can be obtained quickly and the corresponding disaster areas can be located according to the specific disaster. A novel sound active attention framework is proposed to generate more specific captions according to the interest of the observer. First, sound is modeled by mel-frequency cepstral coefficients (MFCCs) and the image is encoded by convolutional neural networks (CNNs). Then, to handle the continuity characteristic of sound, a sound module and an attention module are designed based on gated recurrent units (GRUs). Finally, the sound-guided image feature processed by the attention module is imported into the output module to generate a descriptive sentence. Experiments on both fake and real sound data sets show that the proposed method can generate sentences that capture the focus of humans.
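The pipeline in the abstract (MFCC sound features guiding attention over CNN image regions) can be sketched minimally as follows. This is an illustrative NumPy sketch, not the authors' implementation: the dimensions, the linear projection `W_s`, and the `tanh` state update standing in for the paper's GRU-based sound module are all assumptions for demonstration.

```python
# Hedged sketch: sound-guided attention over CNN image region features.
# All shapes and the simplified tanh "GRU" state are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Stand-ins for real features: 49 image regions (e.g. a 7x7 CNN grid,
# 512-d each) and a 13-d MFCC summary of the observer's speech query.
regions = rng.standard_normal((49, 512))    # CNN-encoded image regions
mfcc = rng.standard_normal(13)              # MFCC summary of the sound

W_s = rng.standard_normal((512, 13)) * 0.1  # project sound into region space
sound_h = np.tanh(W_s @ mfcc)               # simplified recurrent hidden state

scores = regions @ sound_h                  # relevance of each region to the sound
alpha = softmax(scores)                     # attention weights over regions
context = alpha @ regions                   # sound-guided image feature (512,)
```

In the actual framework the attention weights would be recomputed at each GRU step while decoding the caption; this sketch shows only a single attention step.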
Keywords | Active attention; remote sensing image captioning; semantic understanding
DOI | 10.1109/TGRS.2019.2951636 |
Indexed By | SCI; EI
Language | English
WOS ID | WOS:000519598700037
Publisher | Institute of Electrical and Electronics Engineers Inc.
EI Accession Number | 20201008273153
Document Type | Journal article
Identifier | http://ir.opt.ac.cn/handle/181661/93309
Collection | Spectral Imaging Technology Research Laboratory
Corresponding Author | Lu, Xiaoqiang
Affiliations | 1. Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China; 2. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation (GB/T 7714) | Lu, Xiaoqiang, Wang, Binqiang, Zheng, Xiangtao. Sound Active Attention Framework for Remote Sensing Image Captioning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(3): 1985-2000.
APA | Lu, Xiaoqiang, Wang, Binqiang, & Zheng, Xiangtao. (2020). Sound Active Attention Framework for Remote Sensing Image Captioning. IEEE Transactions on Geoscience and Remote Sensing, 58(3), 1985-2000.
MLA | Lu, Xiaoqiang, et al. "Sound Active Attention Framework for Remote Sensing Image Captioning". IEEE Transactions on Geoscience and Remote Sensing 58.3 (2020): 1985-2000.
Files in This Item |
File Name/Size | Document Type | Version | Access | License
Sound Active Attenti(5034KB) | Journal article | Published version | Restricted access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.