Multi-scale multi-feature context modeling for scene recognition in the semantic manifold

Xinhang Song, Shuqiang Jiang, Luis Herranz


Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, etc.). One such approach is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned with weak supervision from image labels, which leads to scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns, which are consistent across the images in that category; discovering and modeling these patterns is therefore critical to improving recognition performance in this representation. Since the emergence of large datasets such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from data. In this paper we address several limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks, and further propose a hybrid architecture in which the semantic manifold is built on top of multi-scale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To jointly combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the resulting optimization problem we analyze global and local approaches, among which a top-down hierarchical algorithm performs best. Experimental results show that jointly exploiting different types of contextual relations consistently improves recognition accuracy.
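The core idea of the semantic manifold representation can be illustrated with a minimal sketch: per-patch category scores are mapped onto the probability simplex (e.g., via a softmax), and an image becomes a point in the same simplex by aggregating its patch posteriors. The function names and mean-pooling aggregation below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def patches_to_simplex(patch_scores):
    """Map raw per-patch category scores (n_patches x n_categories)
    to points on the probability simplex: each row is non-negative
    and sums to 1."""
    return softmax(patch_scores, axis=1)

def image_point(patch_probs):
    """Aggregate patch posteriors into a single image-level point
    on the simplex; mean pooling is one simple, illustrative choice."""
    return patch_probs.mean(axis=0)

# Toy example: 8 patches scored against 5 scene categories.
rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 5))
probs = patches_to_simplex(scores)   # patch points in the simplex
img = image_point(probs)             # image point in the simplex
```

In this geometry, weakly supervised patch models from co-occurring categories place a patch near the interior of the simplex rather than at a vertex, which is why category-specific co-occurrence patterns carry discriminative information.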

  • Xinhang Song, Shuqiang Jiang, Luis Herranz. “Multi-scale multi-feature context modeling for scene recognition in the semantic manifold.” IEEE Transactions on Image Processing (TIP), 2017, CCF A