Bundled Object Context for Referring Expressions

Xiangyang Li, Shuqiang Jiang
(IEEE Transactions on Multimedia 2018)
(Accepted February 9, 2018)


Referring expressions are natural language descriptions of objects within a given scene. Context is of crucial importance for a referring expression, as the description not only depicts the properties of the object but also involves the relationships of the referred object with other objects. Most previous work uses either the whole image or one particular contextual object as the context. However, the context in these approaches is holistic and insufficient, as a referring expression often describes relationships among multiple objects in an image. To leverage rich context information from all objects in an image, we propose a novel scheme composed of a visual context Long Short-Term Memory (LSTM) module and a sentence LSTM module to model bundled object context for referring expressions. All contextual objects are ordered by their spatial locations and progressively fed into the visual context LSTM module to acquire and aggregate the context features. The concatenation of the learned context features and the features of the referred object is then fed into the sentence LSTM module to learn the probability of a referring expression. The feedback connections and internal gating mechanism of the LSTM cells enable our model to selectively propagate relevant contextual information through the whole network. Experiments on three benchmark datasets show that our method achieves promising results compared to state-of-the-art methods. Moreover, visualization of the internal states of the visual context LSTM cells shows that our method can automatically select the pertinent context objects.
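The pipeline in the abstract can be illustrated with a minimal sketch: a visual context LSTM consumes contextual object features one by one to aggregate a context representation, which is then concatenated with the referred object's features before being passed to the sentence LSTM. All dimensions, weights, and the spatial ordering below are hypothetical placeholders, not the paper's actual configuration.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden state h."""
    H = h.shape[0]
    z = W @ x + U @ h + b                # stacked pre-activations, shape (4*H,)
    i = 1.0 / (1.0 + np.exp(-z[:H]))     # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))  # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])                 # candidate cell state
    c = f * c + i * g                    # gated update of the cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 8, 16                             # hypothetical feature and hidden dims
# Hypothetical (untrained) weights for the visual context LSTM
Wv = 0.1 * rng.normal(size=(4 * H, D))
Uv = 0.1 * rng.normal(size=(4 * H, H))
bv = np.zeros(4 * H)

# Contextual object features, assumed already ordered by spatial location
context_objects = [rng.normal(size=D) for _ in range(5)]

h, c = np.zeros(H), np.zeros(H)
for obj in context_objects:              # progressively aggregate context
    h, c = lstm_step(obj, h, c, Wv, Uv, bv)
context_feature = h                      # aggregated bundled-object context

referred = rng.normal(size=D)            # features of the referred object
# This concatenation is what the sentence LSTM would condition on
sentence_input = np.concatenate([context_feature, referred])
```

In the full model, `sentence_input` would condition a sentence LSTM that scores each word of the referring expression; the gating inside `lstm_step` is what lets the context module emphasize or suppress individual contextual objects.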

  • Xiangyang Li, Shuqiang Jiang, "Bundled Object Context for Referring Expressions", IEEE Transactions on Multimedia, vol. 20, no. 10, pp. 2749-2760, 2018.