Spatial attention based visual semantic learning for action recognition in still images
Zheng, Yunpeng 1,2; Zheng, Xiangtao 1
Department | Spectral Imaging Technology Laboratory
2020-11-06
Journal | NEUROCOMPUTING
ISSN | 0925-2312; 1872-8286
Volume | 413
Pages | 383-396
Institution Rank | 1
Abstract | Visual semantic parts play crucial roles in still image-based action recognition. A majority of existing methods require additional manual annotations, such as human bounding boxes and predefined body parts, besides action labels to learn action-related visual semantic parts. However, labeling these manual annotations is rather time-consuming and labor-intensive. Moreover, not all manual annotations are effective when recognizing a specific action; some of them can be irrelevant or even misleading. To address these limitations, this paper proposes a multi-stage deep learning method called Spatial Attention based Action Mask Networks (SAAM-Nets). The proposed method does not need any additional annotations besides action labels to obtain action-specific visual semantic parts. Instead, we propose a spatial attention layer injected into a convolutional neural network to create a specific action mask for each image with only action labels. Moreover, based on the action mask, we propose a region selection strategy to generate a semantic bounding box containing action-specific semantic parts. Furthermore, to effectively combine the information of the whole scene and the semantic box, two feature attention layers are adopted to obtain more discriminative representations. Experiments on four benchmark datasets demonstrate that the proposed method achieves promising performance compared with state-of-the-art methods. (C) 2020 Elsevier B.V. All rights reserved.
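The abstract names three components: a spatial attention layer that learns an action mask from action labels alone, a region-selection strategy that derives a semantic bounding box from that mask, and two feature attention layers that fuse scene and box features. As a reading aid, the sketch below illustrates the first two ideas in PyTorch. It is not the authors' implementation: the module names (`SpatialAttention`, `select_region`), the 1x1-conv attention form, and the threshold `tau` used for box selection are all assumptions about one plausible realization.

```python
# Minimal sketch of a spatial attention layer that learns a soft "action
# mask" supervised only by action labels, plus a region-selection step that
# turns the mask into a bounding box. Illustrative only; not the authors'
# SAAM-Nets code. All names and the thresholding heuristic are assumptions.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """1x1 conv + sigmoid: collapse (B, C, H, W) features to a (B, 1, H, W) mask."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        mask = torch.sigmoid(self.score(feats))  # soft action mask in (0, 1)
        return feats * mask, mask                # re-weighted features + mask


def select_region(mask2d: torch.Tensor, tau: float = 0.5):
    """Bounding box (x0, y0, x1, y1) of the cells of an (H, W) mask above tau."""
    ys, xs = torch.nonzero(mask2d > tau, as_tuple=True)
    if len(xs) == 0:                             # nothing fired: use full map
        return 0, 0, mask2d.shape[1] - 1, mask2d.shape[0] - 1
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()


# Training uses only the action label: the mask is supervised indirectly
# through an ordinary classification loss on the attention-weighted features.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(14))  # stand-in for a real CNN
attn = SpatialAttention(64)
head = nn.Linear(64, 10)                            # 10 hypothetical actions

x = torch.randn(2, 3, 224, 224)                     # dummy image batch
feats, mask = attn(backbone(x))
logits = head(feats.mean(dim=(2, 3)))               # global average pooling
loss = nn.functional.cross_entropy(logits, torch.tensor([3, 7]))
box = select_region(mask[0, 0])                     # box in feature-map cells
```

Note that the returned box is in feature-map cells and would still need rescaling to image coordinates; the paper's actual selection strategy and the two feature attention layers that fuse scene and box representations are not reproduced here.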
Keywords | Still image-based action recognition; Spatial attention; Semantic parts; Deep learning
DOI | 10.1016/j.neucom.2020.07.016
Indexed By | SCI
Language | English
WOS ID | WOS:000579803700032
Publisher | ELSEVIER
Document Type | Journal article
Identifier | http://ir.opt.ac.cn/handle/181661/93762
Collection | Spectral Imaging Technology Laboratory
Corresponding Author | Zheng, Xiangtao
Affiliation | 1. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Key Lab Spectral Imaging Technol CAS, Xian 710119, Shaanxi, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended Citation (GB/T 7714) | Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, et al. Spatial attention based visual semantic learning for action recognition in still images[J]. NEUROCOMPUTING, 2020, 413: 383-396.
APA | Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, & Wu, Siyuan. (2020). Spatial attention based visual semantic learning for action recognition in still images. NEUROCOMPUTING, 413, 383-396.
MLA | Zheng, Yunpeng, et al. "Spatial attention based visual semantic learning for action recognition in still images". NEUROCOMPUTING 413 (2020): 383-396.
Files in This Item:
File Name/Size | Document Type | Version | Access | License
Spatial attention ba (5211KB) | Journal article | Published version | Restricted access | CC BY-NC-SA
Unless otherwise stated, all content in this repository is protected by copyright, with all rights reserved.