Captioning Transformer with Stacked Attention Modules
Abstract
Image captioning is a challenging task, and it matters for helping machines better understand the meaning of images. In recent years, image captioning models have usually used a long short-term memory (LSTM) network as the decoder to generate the sentence, and these models show excellent performance. Although the LSTM can memorize long-range dependencies, its structure is complicated and inherently sequential across time. To address these issues, recent work has shown the benefits of the Transformer for machine translation. Inspired by its success, we develop a Captioning Transformer (CT) model with stacked attention modules, introducing the Transformer to the image captioning task. The CT model contains only attention modules, with no dependence on time steps: it can not only capture dependencies within the sequence but also be trained in parallel. Moreover, we propose multi-level supervision to help the Transformer achieve better performance. Extensive experiments are carried out on the challenging MSCOCO dataset, and the proposed Captioning Transformer achieves competitive performance compared with some state-of-the-art methods.
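To make the architecture described in the abstract concrete, below is a minimal sketch of a Transformer-style caption decoder with stacked attention modules in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the class name, the use of nn.TransformerDecoder, the hyperparameters, and the feature shapes are all hypothetical choices, and the paper's multi-level supervision is only noted in a comment.

```python
# A minimal sketch (not the authors' code) of a Transformer caption decoder,
# assuming the image is already encoded as a set of CNN region features
# projected to d_model dimensions. All names here are hypothetical.
import torch
import torch.nn as nn

class CaptionTransformerSketch(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6,
                 max_len=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # A learned positional embedding stands in for the recurrence an
        # LSTM would use to encode word order.
        self.pos = nn.Parameter(torch.zeros(max_len, 1, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead)
        # Stacked attention modules: self-attention over the words plus
        # cross-attention over the image features, repeated num_layers times.
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # The paper's multi-level supervision could correspond to attaching
        # this output head to intermediate layers as well and summing the
        # per-layer losses during training (an assumption, not confirmed).
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, image_feats):
        # tokens: (T, B) word ids; image_feats: (S, B, d_model), e.g. a
        # flattened CNN feature map serving as the cross-attention memory.
        T = tokens.size(0)
        # Causal mask: -inf above the diagonal stops each position from
        # attending to future words, so the whole ground-truth caption is
        # processed in one parallel pass during training.
        mask = torch.triu(
            torch.full((T, T), float('-inf'), device=tokens.device),
            diagonal=1)
        x = self.embed(tokens) + self.pos[:T]
        h = self.decoder(x, image_feats, tgt_mask=mask)
        return self.out(h)  # (T, B, vocab_size) per-step logits
```

Under this sketch, training feeds the full ground-truth caption in a single forward pass (teacher forcing), which is the parallelism advantage the abstract claims over an LSTM's step-by-step unrolling; at inference, words are still generated autoregressively, one at a time.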
Cite This Article
Zhu, X.; Li, L.; Liu, J.; Peng, H.; Niu, X. Captioning Transformer with Stacked Attention Modules. Appl. Sci. 2018, 8, 739.