Visual Information Processing and Learning (VIPL) Group, Institute of Computing Technology, Chinese Academy of Sciences


News / Events

VIPL's 2 papers are accepted by NeurIPS 2021

Date of publication: 2021-11-03


Congratulations! The laboratory has had 2 papers accepted by NeurIPS 2021. The Conference on Neural Information Processing Systems, abbreviated NeurIPS, is a top international conference on machine learning. The two papers are summarized as follows:


1. When False Positive is Intolerant: End-to-End Optimization with Low FPR for Multipartite Ranking (Peisong Wen, Qianqian Xu, Zhiyong Yang, Yuan He, Qingming Huang)


In some application scenarios of multipartite ranking, e.g., medical diagnosis and content review, the True Positive Rate (TPR) is meaningful only under a low False Positive Rate (FPR). The commonly used evaluation metric, Area Under the receiver operating characteristics Curve (AUC), is therefore inconsistent with the expected performance in such scenarios. Based on this consideration, we consider an alternative metric for multipartite ranking that evaluates the TPR at a given FPR, denoted TPR@FPR. Unfortunately, the challenge of direct TPR@FPR optimization is two-fold: (a) the original objective function is not differentiable, making gradient backpropagation impossible; (b) the loss function cannot be written as a sum of independent instance-wise terms, making mini-batch based optimization infeasible. To address these issues, we propose a novel deep learning framework named Cross-Batch Approximation for Multipartite Ranking (CBA-MR). To address (a), we propose a differentiable surrogate optimization problem in which the instances having a short-time effect on the FPR are assigned different weights based on the random walk hypothesis. To tackle (b), we propose a fast ranking-estimation method in which the full-batch loss evaluation is replaced by a delayed update scheme with the help of an embedding cache. Finally, experimental results on four real-world benchmarks demonstrate the effectiveness of the proposed method.
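The target metric itself is simple to state even though optimizing it is hard. Below is a minimal NumPy sketch of evaluating TPR@FPR from scores and binary labels; it illustrates only the metric, not the paper's differentiable surrogate or cross-batch embedding cache, and the function name is ours:

```python
import numpy as np

def tpr_at_fpr(scores, labels, max_fpr):
    """Highest TPR attainable at any threshold whose FPR <= max_fpr.

    scores: higher means 'more positive'; labels: 1 = positive, 0 = negative.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)          # sort instances by descending score
    labels = labels[order]
    tps = np.cumsum(labels == 1)         # true positives at each cut point
    fps = np.cumsum(labels == 0)         # false positives at each cut point
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    mask = fpr <= max_fpr                # keep only the low-FPR operating points
    return float(tpr[mask].max()) if mask.any() else 0.0
```

With scores [0.9, 0.4, 0.8, 0.3] and labels [1, 1, 0, 0], the top-ranked negative (0.8) outranks one positive (0.4), so TPR@FPR is 0.5 at FPR 0 and 1.0 once FPR 0.5 is allowed. This step-function dependence on the ranking is exactly why the objective is non-differentiable.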




2. See More for Scene: Pairwise Consistency Learning for Scene Classification (Gongwei Chen, Xinhang Song, Bohan Wang, and Shuqiang Jiang)

 

Scene classification is a valuable classification subtask with its own characteristics, and it still needs more in-depth study. Scene characteristics are distributed over the whole image, which creates the need to "see" comprehensive and informative regions. Previous works mainly focus on region discovery and aggregation, and rarely consider the inherent properties of CNNs or their potential to satisfy the requirements of scene classification. In this paper, we propose to understand scene images and scene classification CNN models in terms of the focus area. From this new perspective, we find that a large focus area is preferred in scene classification CNN models as a consequence of learning scene characteristics. Meanwhile, an analysis of existing training schemes helps us understand the effects of the focus area, and also raises the question of the optimal training method for scene classification. To make better use of scene characteristics, we propose a new learning scheme with a tailored loss whose goal is to activate a larger focus area on scene images. Since supervision of the target regions to be enlarged is usually unavailable, our alternative learning scheme is to erase the already-activated area and allow the CNN models to activate more area during training. The proposed scheme is implemented by keeping pairwise consistency between the output of the erased image and that of its original. In particular, a tailored loss is proposed to keep such pairwise consistency by leveraging category-relevance information. Experiments on Places365 show significant improvements of our method with various CNNs. Our method yields an inferior result on the object-centric dataset ImageNet, which experimentally indicates that it captures the unique characteristics of scenes.
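The pairwise-consistency idea can be illustrated with a plain KL-divergence term between the class distributions predicted for an image and its erased counterpart; this NumPy sketch is an assumption for illustration only, since the paper's tailored loss additionally weights the term by category-relevance information:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_orig, logits_erased, eps=1e-12):
    """Mean KL(p_orig || p_erased) between predictions for an image
    and for the same image with its most-activated regions erased.
    Minimizing it pushes the model to keep its prediction without
    relying on the erased area, i.e., to activate more of the scene."""
    p = softmax(logits_orig)
    q = softmax(logits_erased)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return float(np.mean(kl))
```

The loss is zero when erasing does not change the prediction and grows as the erased image's distribution drifts away, which is what drives the model to spread its focus area during training.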





Visual Information Processing and Learning (VIPL) Group, Institute of Computing Technology, Chinese Academy of Sciences
  • Address: No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing
  • Postcode: 100190
  • Phone: 010-62600514
  • Email: yi.cheng@vipl.ict.ac.cn
