Unsupervised Salient Object Detection via Inferring From Imperfect Saliency Models
Authors: Quan, Rong (1); Han, Junwei (1); Zhang, Dingwen (1); Nie, Feiping (2,3); Qian, Xueming (4); Li, Xuelong (5)
Author's Department: Center for Optical Imagery Analysis and Learning
Date Issued: 2018-05-01
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
ISSN: 1520-9210
Volume: 20; Issue: 5; Pages: 1101-1112
Institution Ranking: 5
Abstract

Visual saliency detection has become an active research direction in recent years, and a large number of saliency models that can automatically locate objects of interest in images have been developed. Because these models rely on different prior assumptions, image features, and computational methodologies, each has its own strengths and weaknesses and may cope well with only one or a few types of images. Motivated by these observations, this paper proposes a novel salient object detection approach that infers a superior model from a variety of existing imperfect saliency models by optimally leveraging the complementary information among them. The proposed approach consists of three main steps. First, a number of existing unsupervised saliency models are adopted to provide weak (imperfect) saliency predictions for each region in the image. Then, a fusion strategy combines each image region's weak saliency predictions into a strong one by simultaneously considering the performance differences among the weak predictions and the characteristics of different image regions. Finally, a local spatial consistency constraint, which enforces similar saliency labels for neighboring image regions with similar features, is applied to refine the results. Comprehensive experiments on five public benchmark datasets and comparisons with a number of state-of-the-art approaches demonstrate the effectiveness of the proposed approach.
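
To make the three-step pipeline described above concrete, the following is a minimal illustrative sketch in Python (NumPy only): several weak region-level saliency predictions are fused with per-model weights, and the fused scores are then smoothed so that neighboring regions with similar features receive similar saliency, as a stand-in for the local spatial consistency constraint. The function names, the uniform fusion weights, and the Gaussian neighbor affinity below are assumptions made for illustration, not the authors' actual formulation.

```python
# Illustrative sketch only (not the paper's exact method): fuse several weak
# region-level saliency predictions, then smooth the result so that
# neighbouring, feature-similar regions get similar scores.
import numpy as np


def fuse_weak_predictions(weak_maps, weights=None):
    """Weighted fusion of M weak saliency predictions over N regions.

    weak_maps : (M, N) array, each row one model's per-region saliency in [0, 1].
    weights   : (M,) non-negative model weights; uniform if None (placeholder
                for the paper's learned, region-aware fusion).
    Returns an (N,) fused saliency vector.
    """
    weak_maps = np.asarray(weak_maps, dtype=float)
    m = weak_maps.shape[0]
    weights = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ weak_maps


def spatial_consistency_refine(saliency, features, adjacency, sigma=0.5, alpha=0.5, iters=20):
    """Pull each region's score toward a feature-weighted average of its
    spatial neighbours (a simple graph-smoothing stand-in for the paper's
    local spatial consistency constraint).

    saliency  : (N,) fused saliency scores.
    features  : (N, D) region feature vectors (e.g. mean colour).
    adjacency : (N, N) binary matrix, 1 where two regions are neighbours.
    """
    saliency = np.asarray(saliency, dtype=float).copy()
    features = np.asarray(features, dtype=float)
    # Gaussian affinity, restricted to spatially adjacent regions.
    dist2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-dist2 / (2.0 * sigma ** 2)) * np.asarray(adjacency, dtype=float)
    row_sum = w.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    p = w / row_sum                      # row-normalised propagation matrix
    fused = saliency.copy()
    for _ in range(iters):
        # Blend neighbour-propagated scores with the original fused scores.
        saliency = alpha * (p @ saliency) + (1.0 - alpha) * fused
    return saliency


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_regions, n_models = 6, 3
    weak = rng.random((n_models, n_regions))                 # three imperfect models
    feats = rng.random((n_regions, 3))                       # toy region features
    adj = np.eye(n_regions, k=1) + np.eye(n_regions, k=-1)   # chain of regions
    fused = fuse_weak_predictions(weak)
    refined = spatial_consistency_refine(fused, feats, adj)
    print("fused  :", np.round(fused, 3))
    print("refined:", np.round(refined, 3))
```

In the paper, the contribution of each weak model and the treatment of each region are determined by the fusion strategy itself rather than fixed in advance; the uniform weights and the hand-set smoothing parameters above are only placeholders for that learned behaviour.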

Article Type: Article
Keywords: Salient Object Detection; Weak Prediction; Fusion Strategy; Local Spatial Consistency Constraint
Subject Area: Computer Science, Information Systems
WOS Headings: Science & Technology; Technology
DOI: 10.1109/TMM.2017.2763780
Indexed By: SCI; EI
WOS Keywords: REGION DETECTION; IMAGE SEGMENTATION; VISUAL-ATTENTION
Language: English
WOS Research Areas: Computer Science; Telecommunications
Funding Project: National Science Foundation of China (61473231)
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS Record ID: WOS:000430728400007
EI Accession Number: 20174304304322
Citation Statistics
Times Cited (WOS): 27
Document Type: Journal Article
Identifier: http://ir.opt.ac.cn/handle/181661/30075
Collection: Spectral Imaging Technology Laboratory
Corresponding Author: Han, JW (reprint author), Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China.
Affiliations:
1.Northwestern Polytech Univ, Sch Automat, Xian 710072, Shaanxi, Peoples R China
2.Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
3.Northwestern Polytech Univ, Ctr Opt IMagery Anal & Learning, Xian 710072, Shaanxi, Peoples R China
4.Xi An Jiao Tong Univ, Sch Elect & Informat Engn, Xian 710049, Shaanxi, Peoples R China
5.Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
Recommended Citation
GB/T 7714
Quan, Rong, Han, Junwei, Zhang, Dingwen, et al. Unsupervised Salient Object Detection via Inferring From Imperfect Saliency Models[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20(5): 1101-1112.
APA: Quan, Rong., Han, Junwei., Zhang, Dingwen., Nie, Feiping., Qian, Xueming., ... & Han, JW. (2018). Unsupervised Salient Object Detection via Inferring From Imperfect Saliency Models. IEEE TRANSACTIONS ON MULTIMEDIA, 20(5), 1101-1112.
MLA: Quan, Rong, et al. "Unsupervised Salient Object Detection via Inferring From Imperfect Saliency Models". IEEE TRANSACTIONS ON MULTIMEDIA 20.5 (2018): 1101-1112.
Files in This Item:
File Name/Size: Unsupervised Salient (1329KB)
Document Type: Journal article
Version: Author's accepted manuscript
Access: Restricted access
License: CC BY-NC-SA
 
