Visual Information Processing and Learning
Shuang Yang

Assistant Professor

Research area: Computer vision, Image sequence analysis

Shuang Yang is an assistant professor with the Key Laboratory of Intelligent Information Processing, Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). She received her Ph.D. degree from the Institute of Automation (IA), University of Chinese Academy of Sciences, in 2016. Her research interests cover computer vision, pattern recognition, and machine learning, especially deep learning, Bayesian modeling and inference, and probabilistic graphical models. She currently focuses on topics related to lip reading.

Academic service
Journal services
  • Reviewer for TNNLS, TKDE, and Pattern Recognition
Experience
Educational experience
  • 2011-2016: Ph.D., Institute of Automation, Chinese Academy of Sciences
  • 2008-2011: M.E., Hunan University
  • 2004-2008: B.E., Hunan University
Research content

1.   Problems in Computer Vision and Pattern Recognition

Video Understanding; Action Analysis; Image Sequence Processing

2.   Machine Learning Methods

Probabilistic Graphical Models, Recurrent Neural Networks, LSTM, and so on.
Research project

1.   Recognition of Image Sequences by Combining Probabilistic Graphical Models and Deep Learning

Project type: NSFC (National Natural Science Foundation of China)
Project duration: 2018.01-2020.12
Project leader: Shuang Yang
Books and Papers

Papers

  • Yuanhang Zhang, Shuang Yang, Jingyun Xiao, Shiguang Shan, Xilin Chen: Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition. IEEE FG 2020 (Oral).
  • Mingshuang Luo, Shuang Yang, Shiguang Shan, Xilin Chen: Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading. IEEE FG 2020.
  • Jingyun Xiao, Shuang Yang, Yuanhang Zhang, Shiguang Shan, Xilin Chen: Deformation Flow Based Two-Stream Network for Lip Reading. IEEE FG 2020.
  • Xing Zhao, Shuang Yang, Shiguang Shan, Xilin Chen: Mutual Information Maximization for Effective Lip Reading. IEEE FG 2020.
  • Shuang Yang, Yuanhang Zhang, Dalu Feng, Mingmin Yang, Chenhao Wang, Jingyun Xiao, Keyu Long, Shiguang Shan, Xilin Chen: LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild. IEEE FG 2019 (Oral).
  • Shanru Li, Liping Wang, Shuang Yang, Yuanquan Wang, Chongwen Wang: TinyPoseNet: A Fast and Compact Deep Network for Robust Head Pose Estimation. ICONIP (2) 2017: 53-63.
  • Wen Sun, Chunfeng Yuan, Pei Wang, Shuang Yang, Weiming Hu, Zhaoquan Cai: Hierarchical Bayesian Multiple Kernel Learning Based Feature Fusion for Action Recognition. MPRSS 2016: 85-97.
  • Shuang Yang, Chunfeng Yuan, Baoxin Wu, Weiming Hu, Fangshi Wang: Multi-feature Max-margin Hierarchical Bayesian Model for Action Recognition. CVPR 2015: 1610-1618.
  • Guan Luo, Shuang Yang, Guodong Tian, Chunfeng Yuan, Weiming Hu, Stephen J. Maybank: Learning Human Actions by Combining Global Dynamics and Local Appearance. IEEE Trans. Pattern Anal. Mach. Intell. 36(12): 2466-2482 (2014).
  • Shuang Yang, Chunfeng Yuan, Weiming Hu, Xinmiao Ding: A Hierarchical Model Based on Latent Dirichlet Allocation for Action Recognition. ICPR 2014: 2613-2618.
  • Baoxin Wu, Shuang Yang, Chunfeng Yuan, Weiming Hu, Fangshi Wang: Human Action Recognition Based on Oriented Motion Salient Regions. ACCV Workshops (1) 2014: 113-128.
  • Chunfeng Yuan, Weiming Hu, Guodong Tian, Shuang Yang, Haoran Wang: Multi-task Sparse Learning with Beta Process Prior for Action Recognition. CVPR 2013: 423-429.
  • Shuang Yang, Chunfeng Yuan, Haoran Wang, Weiming Hu: Combining Sparse Appearance Features and Dense Motion Features via Random Forest for Action Detection. ICASSP 2013: 2415-2419.


