Title: Fast Scalable Supervised Hashing
Authors: Luo, Xin; Nie, Liqiang; He, Xiangnan; Wu, Ye; Chen, Zhen-Duo; Xu, Xin-Shun
Corresponding author: Xu, XS
Affiliations: [Luo, Xin; Nie, Liqiang; Wu, Ye; Chen, Zhen-Duo; Xu, Xin-Shun] Shandong Univ, Jinan, Shandong, Peoples R China.; [He, Xiangnan] Natl Univ Singapore
Conference: 41st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)
Conference dates: JUL 08-12, 2018
Source: ACM/SIGIR PROCEEDINGS 2018
Publication year: 2018
Pages: 735-744
DOI: 10.1145/3209978.3210035
Keywords: Learning to Hash; Large-Scale Retrieval; Discrete Optimization; Supervised Hashing
Abstract: Despite significant progress in supervised hashing, existing methods share three common limitations. First, most pioneering methods learn hash codes discretely, bit by bit, making the learning procedure rather time-consuming. Second, to reduce the high cost of handling the n × n pairwise similarity matrix, most methods apply sampling strategies during training, which inevitably results in information loss and suboptimal performance; some recent methods try to replace the large matrix with a smaller one, but its size remains large. Third, among the methods that leverage the pairwise similarity matrix, most encode only the semantic label information when learning the hash codes, failing to fully capture the characteristics of the data. In this paper, we present a novel supervised hashing method, called Fast Scalable Supervised Hashing (FSSH), which circumvents the use of the large similarity matrix by introducing a pre-computed intermediate term whose size is independent of the size of the training data. Moreover, FSSH learns the hash codes from not only the semantic information but also the features of the data. Extensive experiments on three widely used datasets demonstrate its superiority over several state-of-the-art methods in both accuracy and scalability. Our experiment code is available at: https://lcbwlx.wixsite.com/fssh.
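Note: The abstract's central trick, avoiding the explicit n × n similarity matrix by regrouping matrix products into a small pre-computed intermediate term, can be illustrated with a minimal sketch. The sketch below is not the authors' FSSH implementation; it assumes a simplified pairwise objective tr(Bᵀ(YYᵀ)B) for an n × c label matrix Y and an n × r code matrix B, and shows that the c × r intermediate M = YᵀB yields the same value without O(n²) memory.

    import numpy as np

    # Minimal sketch (NOT the authors' FSSH implementation): with an
    # n x c label matrix Y and n x r binary code matrix B, the pairwise
    # term tr(B^T (Y Y^T) B) equals ||Y^T B||_F^2, so only the c x r
    # intermediate M = Y^T B is needed; its size is independent of n.

    rng = np.random.default_rng(0)
    n, c, r = 2000, 10, 32                 # samples, classes, code length (illustrative)
    Y = rng.integers(0, 2, size=(n, c)).astype(float)  # binary label matrix
    B = np.sign(rng.standard_normal((n, r)))           # hash codes in {-1, +1}

    # Naive evaluation: materializes the n x n similarity matrix.
    S = Y @ Y.T                                        # O(n^2) memory
    naive = np.trace(B.T @ S @ B)

    # Regrouped evaluation: pre-compute the small c x r intermediate term.
    M = Y.T @ B                                        # O(c * r) memory
    fast = np.linalg.norm(M, "fro") ** 2

    assert np.isclose(naive, fast)                     # same value, far less memory

The actual FSSH objective and intermediate term differ from this toy case; the sketch only demonstrates why such a regrouping removes the dependence on the training-set size n.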
Indexed in: CPCI-S
Document type: Conference paper