Visual Information Processing and Learning


News / Events

VIPL's paper on person re-identification is accepted by IEEE TNNLS

Date of publication: 2020-08-10

Recently, a paper on person re-identification was accepted by IEEE TNNLS (IEEE Transactions on Neural Networks and Learning Systems), a leading international journal on machine learning with a 2019 impact factor of 8.793. The paper information is as follows:

Ruibing Hou, Bingpeng Ma, Hong Chang, Xinqian Gu, Shiguang Shan and Xilin Chen, “IAUnet: Global Context-Aware Feature Learning for Person Re-Identification”, IEEE Transactions on Neural Networks and Learning Systems, 2020. (Accepted).

In person re-identification, the feature generated for a pedestrian sequence is often corrupted by mis-detected frames. In addition, different identities may share similar local parts, which makes the two persons difficult to distinguish. In this work, we propose to leverage spatial-temporal context information to suppress such local distractions and enhance the target feature representation. Specifically, we present a novel Interaction-Aggregation-Update (IAU) block. As shown in the figure below, we first use a part division unit to extract part features for each frame, and then perform interaction and aggregation over the part features. We consider two types of relations: spatial and temporal. The spatial relation captures the contextual dependencies between different body parts within a single frame, while the temporal relation captures the contextual dependencies between the same body part across all frames. With the help of the spatial and temporal contexts, the features of corrupted parts can be adaptively corrected to better describe the target person. The proposed IAU block is lightweight, end-to-end trainable, and can be easily plugged into existing CNNs to form IAUnet. We conduct extensive experiments on both image- and video-based reID benchmarks. Experimental results show that IAUnet performs favorably against state-of-the-art methods on five reID datasets.
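The spatial/temporal relation idea above can be sketched with self-attention over part features. The following is a minimal NumPy toy, not the paper's implementation: the tensor layout `(T frames, P parts, C channels)`, the dot-product affinities, and the residual update are all simplifying assumptions made for illustration; the actual IAU block operates on CNN feature maps and is trained end-to-end.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def iau_block(parts):
    """Toy Interaction-Aggregation-Update over part features.

    parts: array of shape (T, P, C) -- T frames, P body parts,
    C channels per part (hypothetical layout for illustration).
    """
    T, P, C = parts.shape
    scale = np.sqrt(C)

    # Spatial relation: dependencies between different body parts
    # within a single frame -- attend over the P parts of each frame.
    spatial = np.empty_like(parts)
    for t in range(T):
        X = parts[t]                      # (P, C)
        A = softmax(X @ X.T / scale)      # (P, P) part-to-part affinities
        spatial[t] = A @ X                # aggregate context per part

    # Temporal relation: the same body part across all frames --
    # attend over the T occurrences of each part.
    temporal = np.empty_like(parts)
    for p in range(P):
        X = parts[:, p]                   # (T, C)
        A = softmax(X @ X.T / scale)      # (T, T) frame-to-frame affinities
        temporal[:, p] = A @ X

    # Update: residual fusion, so a corrupted part feature is
    # adaptively adjusted by its spatial and temporal context.
    return parts + spatial + temporal

feats = np.random.rand(8, 6, 32)   # 8 frames, 6 parts, 32-dim features
out = iau_block(feats)
print(out.shape)                   # (8, 6, 32)
```

Because the block maps a `(T, P, C)` tensor to a tensor of the same shape, it can in principle be dropped between existing layers of a network, which is the plug-and-play property the paper highlights.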




Related Papers:

[1] Ruibing Hou, Bingpeng Ma, Hong Chang, Xinqian Gu, Shiguang Shan and Xilin Chen. IAUnet: Global Context-Aware Feature Learning for Person Re-Identification, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020. (Accepted).

[2] Ruibing Hou, Bingpeng Ma, Hong Chang, Xinqian Gu, Shiguang Shan and Xilin Chen. Interaction-and-Aggregation Network for Person Re-identification, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

