Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation
Long, Yang1; Liu, Li2,3; Shen, Fumin4; Shao, Ling2,3; Li, Xuelong5
Author Department | Center for OPTical IMagery Analysis and Learning (OPTIMAL) |
Date Issued | 2018-10 |
Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE |
ISSN | 0162-8828;1939-3539 |
Volume | 40 |
Issue | 10 |
Pages | 2498-2512 |
Affiliation Ranking | 5 |
Abstract | Sufficient training examples are the fundamental requirement for most learning tasks. However, collecting well-labelled training examples is costly. Inspired by Zero-Shot Learning (ZSL), which can make use of visual attributes or natural-language semantics as an intermediate-level clue to associate low-level features with high-level classes, we propose a novel extension of this idea: synthesising training data for novel classes using only semantic attributes. Despite the simplicity of this idea, there are several challenges. First, how can we prevent the synthesised data from over-fitting to the training classes? Second, how can we guarantee that the synthesised data are discriminative for ZSL tasks? Third, we observe that only a few dimensions of the learnt features gain high variances, whereas most of the remaining dimensions are not informative. The question is thus how to make the concentrated information diffuse over most dimensions of the synthesised data. To address these issues, we propose a novel embedding algorithm named Unseen Visual Data Synthesis (UVDS), which projects semantic features into the high-dimensional visual feature space. Two main techniques are introduced in the proposed algorithm. (1) We introduce a latent embedding space that reconciles the structural difference between the visual and semantic spaces while preserving the local structure. (2) We propose a novel Diffusion Regularisation (DR) that explicitly forces the variances to diffuse over most dimensions of the synthesised data. Through an orthogonal rotation (more precisely, an orthogonal transformation), DR removes redundant correlated attributes and further alleviates the over-fitting problem. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data for zero-shot learning. Extensive experimental results suggest that our proposed approach significantly outperforms the state-of-the-art methods. |
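The sketch below is a minimal, self-contained illustration of the general pipeline the abstract describes: learn a mapping from class attributes to visual feature space on seen classes, synthesise visual data for unseen classes from their attributes, and classify by nearest synthesised prototype. It is not the authors' UVDS/DR implementation; a plain ridge regression stands in for the paper's latent embedding with Diffusion Regularisation, and the toy data, dimensions, and regularisation weight are assumptions made purely for illustration.

```python
# Minimal sketch (NOT the authors' UVDS/DR algorithm): synthesise unseen-class
# visual features from semantic attributes and classify by nearest prototype.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 5 seen classes, 2 unseen classes, 85-D attributes, 1024-D visual features.
n_seen, n_unseen, d_attr, d_vis = 5, 2, 85, 1024
A_seen = rng.random((n_seen, d_attr))        # seen-class attribute vectors
A_unseen = rng.random((n_unseen, d_attr))    # unseen-class attribute vectors

# Simulated seen-class visual features: 30 samples per class around a linear "ground truth".
W_true = rng.normal(size=(d_attr, d_vis))
X_seen = np.vstack([A_seen[c] @ W_true + 0.1 * rng.normal(size=(30, d_vis))
                    for c in range(n_seen)])
y_seen = np.repeat(np.arange(n_seen), 30)

# Class-level regression targets: the mean visual feature of each seen class.
P_seen = np.vstack([X_seen[y_seen == c].mean(axis=0) for c in range(n_seen)])

# Ridge regression from attribute space to visual space
# (a stand-in for the paper's latent embedding; lambda is arbitrary).
lam = 1.0
W = np.linalg.solve(A_seen.T @ A_seen + lam * np.eye(d_attr), A_seen.T @ P_seen)

# Synthesise unseen-class "visual data" and classify a query by nearest synthesised prototype.
proto_unseen = A_unseen @ W
query = A_unseen[0] @ W_true + 0.1 * rng.normal(size=d_vis)
pred = int(np.argmin(np.linalg.norm(proto_unseen - query, axis=1)))
print("predicted unseen class:", pred)

# Crude look at the variance-concentration issue the abstract raises: how much of the
# synthesised features' total variance sits in their 10 highest-variance dimensions.
synth_all = np.vstack([A_seen @ W, proto_unseen])
var = synth_all.var(axis=0)
print("share of variance in top-10 dims: %.3f" % (np.sort(var)[::-1][:10].sum() / var.sum()))
```

In the paper, the embedding additionally passes through a latent space and adds the Diffusion Regularisation term, which uses an orthogonal transformation to spread the variance over most dimensions of the synthesised data; the plain ridge stand-in above includes neither.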
Keywords | Zero-shot Learning; Data Synthesis; Diffusion Regularisation; Visual-semantic Embedding; Object Recognition |
DOI | 10.1109/TPAMI.2017.2762295 |
Indexed By | SCI; EI |
Language | English |
WOS Accession Number | WOS:000443875500016 |
Publisher | IEEE COMPUTER SOC |
EI Accession Number | 20174304296448 |
Document Type | Journal Article |
Identifier | http://ir.opt.ac.cn/handle/181661/30620 |
Collection | Spectral Imaging Technology Laboratory |
Corresponding Author | Shao, Ling |
Author Affiliations | 1. Univ Newcastle, Sch Comp Sci, OpenLab, Newcastle Upon Tyne NE4 5TG, Tyne & Wear, England; 2. Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates; 3. Univ East Anglia, Sch Comp Sci, Norwich NR4 7TJ, Norfolk, England; 4. Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Sichuan, Peoples R China; 5. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China |
Recommended Citation (GB/T 7714) | Long, Yang, Liu, Li, Shen, Fumin, et al. Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40(10): 2498-2512. |
APA | Long, Yang, Liu, Li, Shen, Fumin, Shao, Ling, & Li, Xuelong. (2018). Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 40(10), 2498-2512. |
MLA | Long, Yang, et al. "Zero-Shot Learning Using Synthesised Unseen Visual Data with Diffusion Regularisation." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 40.10 (2018): 2498-2512. |
Files in This Item
File Name/Size | Document Type | Version | Access Type | License |
Zero-Shot Learning U(1288KB) | Journal Article | Published Version | Restricted Access | CC BY-NC-SA |