Deep Common Semantic Space Embedding for Sketch-Based 3D Model Retrieval
Abstract

Sketch-based 3D model retrieval has become an important research topic in many applications, such as computer graphics and computer-aided design. Although sketches and 3D models exhibit large inter-domain discrepancies in visual perception, and sketches of the same object show remarkable intra-domain diversity, 3D models and sketches of the same class share common semantic content. Motivated by these observations, we propose a novel approach to sketch-based 3D model retrieval that constructs a deep common semantic space embedding using a triplet network. First, a common data space is constructed by representing every 3D model as a group of views. Second, a common modality space is generated by translating views into sketches according to a cross-entropy evaluation. Third, a common semantic space embedding for the two domains is learned with a triplet network. Finally, based on the learned features of sketches and 3D models, four distance metrics between sketches and 3D models are designed, and sketch-based 3D model retrieval is performed. Experimental results on the Shape Retrieval Contest (SHREC) 2013 and SHREC 2014 datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
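The triplet objective at the core of the third step can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the margin value and the toy 2D embeddings are assumptions, and a real system would learn the embeddings with a deep network rather than fix them by hand.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: pull the anchor toward the positive
    (same semantic class) and push it away from the negative
    (different class) by at least `margin` in squared distance.
    The margin of 0.2 is an illustrative choice, not from the paper."""
    d_pos = np.sum((anchor - positive) ** 2)   # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # anchor-negative distance
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: a sketch anchor, a rendered view of the same class,
# and a view of a different class (all hypothetical values).
sketch = np.array([1.0, 0.0])
same_class_view = np.array([0.9, 0.1])
other_class_view = np.array([0.0, 1.0])

# A well-separated triplet incurs zero loss; a violated one is penalized.
loss = triplet_loss(sketch, same_class_view, other_class_view)
```

During training, such triplets are sampled across the two domains so that sketches and 3D-model views of the same class end up close in the shared embedding, which is what makes the later distance-metric retrieval step meaningful.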
Bai, J.; Wang, M.; Kong, D. Deep Common Semantic Space Embedding for Sketch-Based 3D Model Retrieval. Entropy 2019, 21, 369.