Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval
Authors: Xu, Xing (1); Shen, Fumin (1); Yang, Yang (1); Shen, Heng Tao (1); Li, Xuelong (2)
Author Department: Center for OPTical IMagery Analysis and Learning
Date Issued: 2017-05-01
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN: 1057-7149
Volume: 26  Issue: 5  Pages: 2494-2507
Institution Rank: 2
Abstract

Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
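The abstract describes an alternating scheme: modality-specific hash functions map heterogeneous features to a set of unified binary codes, and those codes are simultaneously tied to class labels through a discriminative (classification) term, with the discrete constraint kept during optimization. The snippet below is a minimal, illustrative sketch of such a scheme and not the authors' implementation: the names (dch_sketch, hash_codes), the two-modality setup (feature matrices X1, X2 and a one-hot label matrix Y), and the simple sign()-based code update are assumptions; the paper develops a dedicated discrete solver for the binary codes rather than this relaxed thresholding step.

# Illustrative DCH-style alternating scheme (hypothetical sketch, NOT the authors' code).
# Assumed setup: X1 (n x d1) image features, X2 (n x d2) text features,
# Y (n x c) one-hot class labels, r-bit unified codes B in {-1, +1}^(n x r).
import numpy as np

def dch_sketch(X1, X2, Y, r=32, lam=1.0, mu=1.0, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    n = X1.shape[0]
    B = np.sign(rng.standard_normal((n, r)))            # random initial codes
    B[B == 0] = 1
    for _ in range(iters):
        # 1) Modality-specific hash projections: ridge regression of each
        #    modality's features onto the current unified codes.
        P1 = np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ B)
        P2 = np.linalg.solve(X2.T @ X2 + lam * np.eye(X2.shape[1]), X2.T @ B)
        # 2) Linear classifier from codes to labels -- the discriminative term
        #    that treats the codes as features for classification.
        W = np.linalg.solve(B.T @ B + lam * np.eye(r), B.T @ Y)
        # 3) Unified binary codes: combine the label term and both modality
        #    projections, then binarize. (A simplified stand-in for the
        #    paper's discrete optimization, which handles the constraint
        #    more carefully.)
        B = np.sign(Y @ W.T + mu * (X1 @ P1 + X2 @ P2))
        B[B == 0] = 1
    return B, P1, P2, W

def hash_codes(X, P):
    # Out-of-sample hashing for one modality: project, then binarize.
    H = np.sign(X @ P)
    H[H == 0] = 1
    return H

For retrieval, a query from one modality would be hashed (e.g., hash_codes applied with P1 for images) and database codes from the other modality ranked by Hamming distance; the classification term is what pushes codes of different classes apart.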

Article Type: Article
Keywords: Cross-modal Retrieval; Hashing; Discrete Optimization; Discriminant Analysis
WOS Headings: Science & Technology; Technology
DOI: 10.1109/TIP.2017.2676345
Indexed By: SCI; EI
WOS Keywords: IMAGE RETRIEVAL; SEMANTICS; SEARCH; SPACE
Language: English
WOS Research Areas: Computer Science; Engineering
Funding: National Natural Science Foundation of China (61602089, 61502081, 61572108, 61632007, 61472063); National Thousand-Young-Talents Program of China; Fundamental Research Funds for the Central Universities (ZYGX2014Z007, ZYGX2015J055, ZYGX2016KYQD114)
WOS Categories: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000399396400031
Citation Statistics
Times Cited (WOS): 324
Document Type: Journal Article
Identifier: http://ir.opt.ac.cn/handle/181661/28861
Collection: Spectral Imaging Technology Laboratory
Corresponding Author: Shen, Fumin (fumin.shen@gmail.com)
作者单位1.Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 610051, Peoples R China
2.Chinese Acad Sci, Ctr OPT IMagery Anal & Learning, Xian Inst Opt & Precis Mech, State Key Lab Transient Opt & Photon, Xian 710119, Peoples R China
Recommended Citation:
GB/T 7714: Xu, Xing, Shen, Fumin, Yang, Yang, et al. Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26(5): 2494-2507.
APA: Xu, Xing, Shen, Fumin, Yang, Yang, Shen, Heng Tao, & Li, Xuelong. (2017). Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval. IEEE TRANSACTIONS ON IMAGE PROCESSING, 26(5), 2494-2507.
MLA: Xu, Xing, et al. "Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval". IEEE TRANSACTIONS ON IMAGE PROCESSING 26.5 (2017): 2494-2507.
Files in This Item:
File Name/Size: Learning Discriminat(2126KB); Document Type: Journal Article; Version: Author's Accepted Manuscript; Access: Restricted; License: CC BY-NC-SA