Title: Cross-modal hashing based on category structure preserving
Authors: Dong, Fei; Nie, Xiushan; Liu, Xingbo; Geng, Leilei; Wang, Qian
Affiliations: [Dong, Fei] Shandong Normal Univ, Sch Journalism & Commun, Jinan, Shandong, Peoples R China.; [Nie, Xiushan; Geng, Leilei; Wang, Qian] Shandong Univ …
Corresponding author: Nie, Xiushan (Nie, XS)
Corresponding author address: [Nie, XS] Shandong Univ Finance & Econ, Sch Comp Sci & Technol, Jinan, Shandong, Peoples R China.
Source: JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
Year: 2018
Volume: 57
Pages: 28-33
DOI:10.1016/j.jvcir.2018.10.006
Keywords: Cross-modal retrieval; Supervised hashing; Category-specific structure preserving
Abstract: Cross-modal hashing has made great progress in cross-modal retrieval owing to its substantial reduction in computational cost and storage. Generally, projections that map each modality's heterogeneous data into a common space are used to bridge the gap between different modalities. However, category-specific distributions are usually ignored during the projection. To address this issue, we propose a novel cross-modal hashing method, termed Category Structure Preserving Hashing (CSPH), for cross-modal retrieval. In CSPH, the category-specific distribution is preserved by a structure-preserving regularization term during hash learning. Compared with existing methods, CSPH not only preserves the local structure of each category, but also generates more stable hash codes with less training time. Extensive experiments were conducted on three benchmark datasets, and the results demonstrate the superiority of CSPH under various cross-modal scenarios. (C) 2018 Elsevier Inc. All rights reserved.
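To make the general recipe described in the abstract concrete — learning per-modality projections into a common space, regularizing them, and binarizing to obtain hash codes — the following is a minimal, hedged sketch. It is not the paper's CSPH algorithm: the structure-preserving regularization term is replaced here by a plain ridge penalty, the alternating least-squares scheme and toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: 100 samples with image features (d1=8) and text features (d2=6).
X_img = rng.normal(size=(100, 8))
X_txt = rng.normal(size=(100, 6))
k = 4  # hash code length (assumed)

# Learn linear projections P_img, P_txt so that X_img @ P_img ≈ X_txt @ P_txt
# in a common k-dimensional space, via alternating ridge regression.
P_img = rng.normal(size=(8, k))
P_txt = rng.normal(size=(6, k))
lam = 0.1  # ridge regularizer; CSPH instead uses a structure-preserving term

for _ in range(20):
    # Fix P_txt and solve the regularized least-squares problem for P_img,
    # then swap roles and solve for P_txt.
    target = X_txt @ P_txt
    P_img = np.linalg.solve(X_img.T @ X_img + lam * np.eye(8), X_img.T @ target)
    target = X_img @ P_img
    P_txt = np.linalg.solve(X_txt.T @ X_txt + lam * np.eye(6), X_txt.T @ target)

# Binarize the common-space embeddings with sign() to obtain hash codes,
# so both modalities can be compared by Hamming distance.
B_img = np.sign(X_img @ P_img)
B_txt = np.sign(X_txt @ P_txt)

# Fraction of disagreeing bits between paired codes (lower = better aligned).
hamming = (B_img != B_txt).mean()
print(B_img.shape, hamming)
```

The key design point the abstract highlights is where the regularizer goes: replacing `lam * np.eye(...)` with a term that pulls same-category samples together in the common space is what distinguishes CSPH from a generic alignment like this one.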
Indexed in: EI; SCOPUS; SCIE
Resource type: Journal article
Link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85055198420&doi=10.1016%2fj.jvcir.2018.10.006&partnerID=40&md5=79c80d56a7af0a119892043815203b8e