Affective Computing
Leader: Jiabei Zeng / Shiguang Shan (Professor)
Email: jiabei.zeng [at] vipl.ict.ac.cn; sgshan [at] ict.ac.cn
Introduction to the Research Group

  Affective Computing is computing that relates to, arises from, or deliberately influences emotion or other affective phenomena (Picard, MIT Press, 1997). The Affective Computing group focuses on perceiving and analyzing people's facial expressions, emotions, and other affective phenomena, mainly based on visual information.

Research

The main research topics include:

A. Algorithms and methodologies that address the basic issues in recognizing emotions, detecting facial action units, and estimating affective valence and arousal. For example:

    1) How to train facial expression recognition systems from ill-annotated data (e.g., incorrect or inconsistent labels)? (A minimal illustrative sketch appears at the end of this Research section.)

    2) How to recognize facial expressions in open scenarios (e.g., partially occluded faces, multiple modalities)?

B. Applications

     1) Students’ engagement estimation.

     2) Credit risk assessment from facial expressions.
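
As a concrete, deliberately generic illustration of question A.1 above, the sketch below shows one simple way to train an expression classifier when some labels may be incorrect: a standard PyTorch training step that uses label smoothing to soften the penalty on mislabeled samples. The backbone (ResNet-18), the seven-class setting, and all hyperparameters are assumptions for illustration only; this is not the method of the group's ECCV 2018 paper on inconsistently annotated datasets listed under Papers. Label smoothing in the loss requires PyTorch 1.10 or later.

    # Minimal sketch (illustration only, not the group's published method):
    # training an expression classifier on possibly mislabeled data, using
    # label smoothing as a simple baseline against incorrect labels.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_EXPRESSIONS = 7  # assumed: seven basic expression categories

    # Backbone: any image classifier works; ResNet-18 is used only as an example.
    model = models.resnet18(num_classes=NUM_EXPRESSIONS)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    # Label smoothing spreads a little probability mass over all classes,
    # which reduces over-confident fitting to wrong or inconsistent annotations.
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

    def train_step(images: torch.Tensor, noisy_labels: torch.Tensor) -> float:
        """One optimization step on a batch whose labels may be unreliable."""
        model.train()
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, noisy_labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage example with random tensors standing in for face crops and labels.
    if __name__ == "__main__":
        dummy_images = torch.randn(8, 3, 224, 224)
        dummy_labels = torch.randint(0, NUM_EXPRESSIONS, (8,))
        print(train_step(dummy_images, dummy_labels))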

Papers

Journal Papers

  • Mengyi Liu, Shaoxin Li, Shiguang Shan, Xilin Chen, "AU-inspired Deep Networks for Facial Expression Feature Learning," Neurocomputing, vol. 159, pp. 126-136, 2015.
  • Mengyi Liu, Ruiping Wang, Shiguang Shan, Xilin Chen, "Learning Prototypes and Similes on Grassmann Manifold for Spontaneous Expression Recognition," Computer Vision and Image Understanding, vol. 147, pp. 95-101, 2016.
  • Mengyi Liu, Ruiping Wang, Shaoxin Li, Zhiwu Huang, Shiguang Shan, Xilin Chen, "Video Modeling and Learning on Riemannian Manifold for Emotion Recognition in the Wild," Journal on Multimodal User Interfaces, vol. 10, no. 2, pp. 113-124, 2016.
  • Mengyi Liu, Shiguang Shan, Ruiping Wang, Xilin Chen, "Learning Expressionlets via Universal Manifold Model for Dynamic Facial Expression Recognition," IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5920-5932, 2016.
  • Yong Li, Jiabei Zeng, Shiguang Shan, "Learning Representations for Facial Actions from Unlabeled Videos," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020. (Accepted)

Conference Papers

  • Yong Li, Jiabei Zeng, Shiguang Shan, Xilin Chen, "Self-supervised Representation Learning from Videos for Facial Action Unit Detection," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10924-10933, Long Beach, California, USA, June 16-20, 2019.
  • Xin Cai, Jiabei Zeng, Shiguang Shan, "Landmark-aware Self-supervised Eye Semantic Segmentation," International Conference on Automatic Face and Gesture Recognition (FG), 2021. (Accepted)
  • Yunjia Sun, Jiabei Zeng, Shiguang Shan, Xilin Chen, "Cross-Encoder for Unsupervised Gaze Representation Learning," IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3702-3711, Montreal, Canada, Oct. 11-17, 2021.
  • Mengyi Liu, Xin Liu, Yan Li, Xilin Chen, Alexander Hauptmann, Shiguang Shan, "Exploiting Feature Hierarchies With Convolutional Neural Networks for Cultural Event Recognition," IEEE International Conference on Computer Vision (ICCV 2015) Workshops, pp. 32-37, 2015.
  • Xuran Sun, Jiabei Zeng, Shiguang Shan, "Emotion-aware Contrastive Learning for Facial Action Unit Detection," International Conference on Automatic Face and Gesture Recognition (FG), 2021. (Accepted)
  • Jiabei Zeng, Shiguang Shan, Xilin Chen, "Facial Expression Recognition with Inconsistently Annotated Datasets," European Conference on Computer Vision (ECCV 2018), pp. 1-16, Munich, Germany, 2018.
  • Zijia Lu, Jiabei Zeng, Shiguang Shan, Xilin Chen, "Zero-Shot Facial Expression Recognition with Multi-Label Label Propagation," Asian Conference on Computer Vision (ACCV 2018), Dec. 2-6, 2018, Perth, Western Australia.