Assembling Deep Neural Networks for Medical Compound Figure Detection
Abstract
Compound figure detection on figures and their associated captions is the first step toward making medical figures from the biomedical literature available for further analysis. The performance of traditional methods is limited by the choice of hand-engineered features and prior domain knowledge. We train multiple convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks on top of pre-trained word vectors to learn textual features from captions, and employ deep CNNs to learn visual features from figures. We then identify compound figures by combining the textual and visual predictions. Our proposed architecture obtains strong performance in three run types (textual, visual, and mixed) and achieves improved results on the ImageCLEF2015 and ImageCLEF2016 benchmarks.
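The final step described above, combining textual and visual predictions into a mixed decision, can be sketched as a simple late fusion of the two models' probabilities. This is an illustrative sketch only: the function name, weight `alpha`, and threshold are hypothetical and not taken from the paper.

```python
# Hypothetical late-fusion sketch: combine per-figure probabilities from a
# caption model (e.g., LSTM/GRU over word vectors) and a figure model
# (e.g., a deep CNN). `alpha` and `threshold` are illustrative values.

def fuse_predictions(p_text: float, p_visual: float,
                     alpha: float = 0.5, threshold: float = 0.5) -> bool:
    """Return True if the figure is predicted to be compound."""
    p_mixed = alpha * p_text + (1.0 - alpha) * p_visual
    return p_mixed >= threshold

# Example: the caption model is confident, the visual model less so.
print(fuse_predictions(0.9, 0.4))  # fused score 0.65 >= 0.5 -> True
```

Equal weighting is just one choice; in practice the fusion weight could be tuned on a validation set to balance the two modalities.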
Yu, Y.; Lin, H.; Meng, J.; Wei, X.; Zhao, Z. Assembling Deep Neural Networks for Medical Compound Figure Detection. Information 2017, 8, 48.