Keyword-driven Image Captioning via Context-dependent Bilateral LSTM

Xiaodan Zhang, Shuqiang Jiang, Qixiang Ye, Jianbin Jiao, Rynson W.H. Lau
(ICME2017)
[PDF] [Oral Slides]

Abstract

Image captioning has recently received much attention. Existing approaches, however, are limited to describing images with simple contextual information: they typically generate a single sentence per image, with only one contextual emphasis. In this paper, we address this limitation from a user perspective. Given some keywords as additional input, the proposed method generates different descriptions according to the provided guidance, so that descriptions with different focuses can be produced for the same image. Our method is based on a new Context-dependent Bilateral Long Short-Term Memory (CDB-LSTM) model, which predicts a keyword-driven sentence by considering word dependence. Word dependence is explored externally, with a bilateral pipeline, and internally, with a unified joint training process. Experiments on the MS COCO dataset demonstrate that the proposed approach not only significantly outperforms the baseline method but also shows good adaptation and consistency across various keywords.
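To make the bilateral idea concrete, here is a minimal toy sketch of keyword-driven decoding: the sentence grows outward from the keyword, with one decoder predicting the words preceding it and another predicting the words following it. This is only an illustration of the decoding scheme; the `backward_step` and `forward_step` callables stand in for the two LSTM directions of the actual CDB-LSTM, whose architecture and joint training are described in the paper.

```python
from typing import Callable, List

def bilateral_decode(
    keyword: str,
    backward_step: Callable[[List[str]], str],
    forward_step: Callable[[List[str]], str],
    max_len: int = 10,
    eos: str = "<eos>",
) -> List[str]:
    """Grow a sentence outward from the keyword in both directions.

    `backward_step` predicts the word *preceding* the current sequence;
    `forward_step` predicts the word *following* it. Both are placeholders
    for the two halves of a bilateral language model.
    """
    left: List[str] = []   # words before the keyword
    while len(left) < max_len:
        w = backward_step(left + [keyword])
        if w == eos:
            break
        left = [w] + left  # prepend: decoding runs right-to-left here

    right: List[str] = []  # words after the keyword
    while len(right) < max_len:
        w = forward_step([keyword] + right)
        if w == eos:
            break
        right.append(w)

    return left + [keyword] + right

# Canned "predictors" purely for demonstration (hypothetical, not learned).
def demo_backward(seq: List[str]) -> str:
    table = {"dog": "a", "a": "<eos>"}
    return table.get(seq[0], "<eos>")

def demo_forward(seq: List[str]) -> str:
    table = {"dog": "runs", "runs": "fast", "fast": "<eos>"}
    return table.get(seq[-1], "<eos>")

print(bilateral_decode("dog", demo_backward, demo_forward))
# -> ['a', 'dog', 'runs', 'fast']
```

With a different keyword (and the corresponding prediction tables), the same procedure would yield a sentence emphasizing that keyword, which is the behavior the paper targets.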


  • Xiaodan Zhang, Shengfeng He, Xinhang Song, Pengxu Wei, Shuqiang Jiang, Qixiang Ye, Jianbin Jiao, Rynson W.H. Lau. Keyword-driven Image Captioning via Context-dependent Bilateral LSTM. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME 2017), July 10-14, 2017, Hong Kong.