Shuang Yang (杨双), Associate Professor
Email: shuang.yang@ict.ac.cn
Address: No. 6 Kexueyuan South Road, Haidian District, Beijing
Research Interests: Audio-Visual Speech Perception and Understanding
Biography

Shuang Yang is an Associate Professor and master's student supervisor at the Institute of Computing Technology (ICT), Chinese Academy of Sciences. She received her Ph.D. from the Institute of Automation, Chinese Academy of Sciences, in 2016. Her research covers computer vision, pattern recognition, and machine learning, with a current focus on audio-visual speech perception and understanding, lip reading, and video analysis. She ranked fourth in the technical innovation track of the first GF Science and Technology Innovation Cup and was named one of ICT's "Academic Stars". Students under her supervision have received honors including the VIPL Best Newcomer Award, the Merit Student Award of the University of Chinese Academy of Sciences (UCAS), the Chinese Academy of Sciences Undergraduate Scholarship, and the UCAS Outstanding Undergraduate Thesis Award. Her team took second place and first place in the ASD (Active Speaker Detection) task of the ActivityNet Challenge at CVPR 2019 and 2021, respectively. The lip-reading system developed under her lead was named an "Innovation Star" in the China AI Multimedia Information Recognition Technology Competition and was featured in a special report on the CCTV program 《机智过人》. She serves as a reviewer for journals and conferences including CVPR, ICCV, BMVC, TMM, CVIU, and PR.


Shuang Yang is an associate professor with the Key Laboratory of Intelligent Information Processing, Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). She received her Ph.D. degree from the Institute of Automation (IA), Chinese Academy of Sciences, in 2016. Her research interests cover computer vision, pattern recognition, and machine learning, including deep learning, Bayesian modeling and inference, and probabilistic graphical models. She mainly focuses on topics related to lip reading at present.

* Google Scholar Homepage: [@Google Scholar]

* Our team webpage: [@GitHub]


Experience

Education

2011-2016: Ph.D., Institute of Automation, Chinese Academy of Sciences

2008-2011: M.E., Hunan University

2004-2008: B.E., Hunan University


Academic Service

Journal Reviewing

T-KDE, CVIU, T-MM

T-NNLS, CSVT

PR, TVCJ

JEI, FCS, KBS

As the number of journals and conferences I review for grows, this list is no longer updated item by item; thank you for your understanding. (Feb. 2021)

Conference Reviewing

CVPR, ICCV, ECCV

BMVC, ICME, ICPR

WACV, IJCB, PRCV

As the number of journals and conferences I review for grows, this list is no longer updated item by item; thank you for your understanding. (Feb. 2021)


Publications

Please refer to my Google Scholar homepage: https://scholar.google.com/citations?user=8wizL74AAAAJ&hl=en

Papers

Yuanhang Zhang, Susan Liang, Shuang Yang, Xiao Liu, Zhongqin Wu, Shiguang Shan, Xilin Chen. UniCon: Unified Context Network for Robust Active Speaker Detection. ACM Multimedia 2021 (Oral).[pdf]


Yuanhang Zhang, Susan Liang, Shuang Yang, Xiao Liu, Zhongqin Wu, Shiguang Shan. ICTCAS-UCAS-TAL Submission to the AVA-ActiveSpeaker Task at ActivityNet Challenge 2021. The ActivityNet Large-Scale Activity Recognition Challenge @ CVPR 2021 (First place).[pdf]


Dalu Feng, Shuang Yang, Shiguang Shan. An Efficient Software for Building Lip Reading Models Without Pains. IEEE International Conference on Multimedia & Expo Workshops (ICMEW), 2021.[pdf]


Mingshuang Luo, Shuang Yang, Shiguang Shan, Xilin Chen, Synchronous Bidirectional Learning for Multilingual Lip Reading, BMVC 2020.[pdf]


Yuanhang Zhang, Shuang Yang, Jingyun Xiao, Shiguang Shan, Xilin Chen, Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition, IEEE FG 2020 (Oral).[pdf]

Mingshuang Luo, Shuang Yang, Shiguang Shan, Xilin Chen, Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading, IEEE FG 2020.[pdf]

Jingyun Xiao, Shuang Yang, Yuanhang Zhang, Shiguang Shan, Xilin Chen, Deformation Flow Based Two-Stream Network for Lip Reading. IEEE FG 2020.[pdf][code]


Xing Zhao, Shuang Yang, Shiguang Shan, Xilin Chen, Mutual Information Maximization for Effective Lip Reading, IEEE FG 2020.[pdf][code]


Shuang Yang, Yuanhang Zhang, Dalu Feng, Mingmin Yang, Chenhao Wang, Jingyun Xiao, Keyu Long, Shiguang Shan, Xilin Chen: LRW-1000: A Naturally-Distributed Large-Scale Benchmark for Lip Reading in the Wild, IEEE FG 2019 (Oral).[pdf][code-A][code-B]


Yuanhang Zhang, Jingyun Xiao, Shuang Yang, Shiguang Shan, Multi-Task Learning for Audio-Visual Active Speaker Detection, The ActivityNet Large-Scale Activity Recognition Challenge 2019.[pdf]


Shanru Li, Liping Wang, Shuang Yang, Yuanquan Wang, Chongwen Wang: TinyPoseNet: A Fast and Compact Deep Network for Robust Head Pose Estimation. ICONIP (2) 2017: 53-63


Wen Sun, Chunfeng Yuan, Pei Wang, Shuang Yang, Weiming Hu, Zhaoquan Cai: Hierarchical Bayesian Multiple Kernel Learning Based Feature Fusion for Action Recognition. MPRSS 2016: 85-97


Shuang Yang, Chunfeng Yuan, Baoxin Wu, Weiming Hu, Fangshi Wang: Multi-feature max-margin hierarchical Bayesian model for action recognition. CVPR 2015: 1610-1618


Guan Luo, Shuang Yang, Guodong Tian, Chunfeng Yuan, Weiming Hu, Stephen J. Maybank: Learning Human Actions by Combining Global Dynamics and Local Appearance. IEEE Trans. Pattern Anal. Mach. Intell. 36(12): 2466-2482 (2014)


Shuang Yang, Chunfeng Yuan, Weiming Hu, Xinmiao Ding: A Hierarchical Model Based on Latent Dirichlet Allocation for Action Recognition. ICPR 2014: 2613-2618


Baoxin Wu, Shuang Yang, Chunfeng Yuan, Weiming Hu, Fangshi Wang: Human Action Recognition Based on Oriented Motion Salient Regions. ACCV Workshops (1) 2014: 113-128


Chunfeng Yuan, Weiming Hu, Guodong Tian, Shuang Yang, Haoran Wang: Multi-task Sparse Learning with Beta Process Prior for Action Recognition. CVPR 2013: 423-429


Shuang Yang, Chunfeng Yuan, Haoran Wang, Weiming Hu: Combining sparse appearance features and dense motion features via random forest for action detection. ICASSP 2013: 2415-2419