Attribute-Guided Feature Learning for Few-Shot Image Recognition

Yaohui Zhu, Weiqing Min, Shuqiang Jiang
(IEEE Transactions on Multimedia 2020)

We propose an attribute-guided two-layer learning framework capable of learning general feature representations. Attribute learning serves as an auxiliary objective for few-shot image recognition within a multi-task learning framework: few-shot recognition is trained at the task level while attribute learning is trained on individual images, and the two share the same network. Moreover, under the guidance of attribute learning, features from different layers represent attributes at different levels, allowing few-shot recognition to be performed from multiple perspectives. We therefore establish an attribute-guided two-layer learning mechanism to capture more discriminative representations; compared with a single-layer mechanism, the two-layer mechanism yields complementary representations. The proposed framework is model-agnostic: two typical approaches, metric-based few-shot methods and meta-learning methods, can both be plugged into it.

Abstract

Few-shot image recognition has become an essential problem in machine learning and image recognition, and has attracted increasing research attention. Typically, most few-shot image recognition methods are trained across tasks. However, these methods tend to learn an embedding network that is discriminative only for the training categories, and thus generalize poorly to novel categories. To establish connections between training and novel categories, we use attribute-related representations for few-shot image recognition and propose an attribute-guided two-layer learning framework capable of learning general feature representations. Specifically, few-shot image recognition trained over tasks and attribute learning trained over images share the same network in a multi-task learning framework. In this way, few-shot image recognition learns feature representations guided by attributes, which are less sensitive to novel categories than representations learned with category supervision alone. Meanwhile, the multi-layer features associated with attributes are aligned with category learning at multiple levels. We therefore establish a two-layer learning mechanism guided by attributes to capture more discriminative representations, which are complementary to those obtained by a single-layer learning mechanism. Experimental results on the CUB-200, AWA, and Mini-ImageNet datasets demonstrate that our method effectively improves performance.
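The core idea of the framework, a shared embedding trained jointly with an episodic few-shot objective and a per-image attribute objective, can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the paper's implementation: the backbone is a single tanh layer, the few-shot loss is prototypical-style cross-entropy over negative distances, the attribute loss is binary cross-entropy, and the trade-off weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Toy shared embedding (stand-in for the shared CNN backbone)."""
    return np.tanh(x @ W)

def log_softmax(logits):
    m = logits.max(axis=1, keepdims=True)
    return logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))

def episode_loss(support, support_y, query, query_y, W, n_way):
    """Task-level few-shot loss: cross-entropy over negative squared
    distances from query embeddings to class prototypes."""
    zs, zq = embed(support, W), embed(query, W)
    protos = np.stack([zs[support_y == c].mean(axis=0) for c in range(n_way)])
    dists = ((zq[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    logp = log_softmax(-dists)
    return -logp[np.arange(len(query_y)), query_y].mean()

def attribute_loss(x, attrs, W, V):
    """Image-level attribute loss: binary cross-entropy between sigmoid
    attribute scores and ground-truth binary attribute vectors."""
    p = 1.0 / (1.0 + np.exp(-(embed(x, W) @ V)))
    eps = 1e-9
    return -(attrs * np.log(p + eps) + (1 - attrs) * np.log(1 - p + eps)).mean()

# --- toy 2-way 1-shot episode: 4-dim inputs, 3 binary attributes ---
d_in, d_emb, n_attr, n_way = 4, 8, 3, 2
W = rng.normal(scale=0.5, size=(d_in, d_emb))    # shared backbone weights
V = rng.normal(scale=0.5, size=(d_emb, n_attr))  # attribute head weights

support = rng.normal(size=(2, d_in)); support_y = np.array([0, 1])
query   = rng.normal(size=(4, d_in)); query_y   = np.array([0, 0, 1, 1])
attrs   = rng.integers(0, 2, size=(4, n_attr)).astype(float)

lam = 0.5  # hypothetical trade-off between the two objectives
total = episode_loss(support, support_y, query, query_y, W, n_way) \
        + lam * attribute_loss(query, attrs, W, V)
print("combined loss:", float(total))
```

Because both objectives backpropagate through the same `W`, the embedding is pushed toward attribute-aware features rather than features discriminative only for the training categories; the paper's two-layer mechanism additionally attaches such attribute supervision at more than one depth of the backbone.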


  • Yaohui Zhu, Weiqing Min, Shuqiang Jiang. “Attribute-Guided Feature Learning for Few-Shot Image Recognition”, IEEE Transactions on Multimedia (TMM), 2020.