OPT OpenIR > Key Laboratory of Spectral Imaging Technology
Spatial attention based visual semantic learning for action recognition in still images
Zheng, Yunpeng1,2; Zheng, Xiangtao1; Lu, Xiaoqiang1; Wu, Siyuan1
Author's department: Key Laboratory of Spectral Imaging Technology
2020-11-06
Journal: NEUROCOMPUTING
ISSN: 0925-2312; 1872-8286
Volume: 413, Pages: 383-396
Affiliation rank: 1
Abstract

Visual semantic parts play crucial roles in still image-based action recognition. A majority of existing methods require additional manual annotations, such as human bounding boxes and predefined body parts, besides action labels to learn action-related visual semantic parts. However, labeling these manual annotations is rather time-consuming and labor-intensive. Moreover, not all manual annotations are effective when recognizing a specific action; some of them can be irrelevant and even misleading. To address these limitations, this paper proposes a multi-stage deep learning method called Spatial Attention based Action Mask Networks (SAAM-Nets). The proposed method does not need any additional annotations besides action labels to obtain action-specific visual semantic parts. Instead, we propose a spatial attention layer injected into a convolutional neural network to create a specific action mask for each image with only action labels. Moreover, based on the action mask, we propose a region selection strategy to generate a semantic bounding box containing action-specific semantic parts. Furthermore, to effectively combine the information of the whole scene and the semantic box, two feature attention layers are adopted to obtain more discriminative representations. Experiments on four benchmark datasets have demonstrated that the proposed method can achieve promising performance compared with state-of-the-art methods. (C) 2020 Elsevier B.V. All rights reserved.
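The two core steps of the abstract (a spatial attention mask over a feature map, followed by region selection for a semantic bounding box) can be illustrated with a minimal, dependency-free sketch. The per-channel weight vector, the softmax normalization over spatial locations, and the threshold-based box selection below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def spatial_attention_mask(features, weights):
    """Collapse a C x H x W feature map into an H x W attention mask.

    A 1x1 convolution (here: a per-channel weight vector, an assumed
    parameterization) scores each spatial location; a softmax over all
    H*W locations normalizes the scores into a mask that sums to 1.
    """
    C, H, W = len(features), len(features[0]), len(features[0][0])
    scores = [[sum(weights[c] * features[c][i][j] for c in range(C))
               for j in range(W)] for i in range(H)]
    m = max(max(row) for row in scores)  # subtract max for stability
    exp = [[math.exp(s - m) for s in row] for row in scores]
    z = sum(sum(row) for row in exp)
    return [[e / z for e in row] for row in exp]

def select_region(mask, threshold):
    """Hypothetical region selection: return the tightest box
    (top, left, bottom, right) covering all locations whose attention
    exceeds `threshold`; the paper's strategy may differ in detail."""
    coords = [(i, j) for i, row in enumerate(mask)
              for j, v in enumerate(row) if v > threshold]
    if not coords:
        return None
    rows = [i for i, _ in coords]
    cols = [j for _, j in coords]
    return (min(rows), min(cols), max(rows), max(cols))
```

For example, a 1-channel 3x3 feature map with a strong central activation yields a mask concentrated at the center, and thresholding it selects the single-cell box around that location.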

Keywords: Still image-based action recognition; Spatial attention; Semantic parts; Deep learning
DOI: 10.1016/j.neucom.2020.07.016
Indexed by: SCI
Language: English
WOS ID: WOS:000579803700032
Publisher: ELSEVIER
Citation statistics: cited 12 times [WOS]
Document type: Journal article
Identifier: http://ir.opt.ac.cn/handle/181661/93762
Collection: Key Laboratory of Spectral Imaging Technology
Corresponding author: Zheng, Xiangtao
Affiliations:
1. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Key Lab Spectral Imaging Technol CAS, Xian 710119, Shaanxi, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended citation:
GB/T 7714: Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, et al. Spatial attention based visual semantic learning for action recognition in still images[J]. NEUROCOMPUTING, 2020, 413: 383-396.
APA: Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, & Wu, Siyuan. (2020). Spatial attention based visual semantic learning for action recognition in still images. NEUROCOMPUTING, 413, 383-396.
MLA: Zheng, Yunpeng, et al. "Spatial attention based visual semantic learning for action recognition in still images". NEUROCOMPUTING 413 (2020): 383-396.
Files in this item:
File name/size: Spatial attention ba (5211KB) | Document type: Journal article | Version: Published version | Access: Restricted | License: CC BY-NC-SA

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.