* The Audio-Visual Speech Understanding Group was founded in 2017. It takes Lip Reading as its core task, together with auxiliary tasks such as Speech Emotion Analysis, Visual Voice Activity Detection, and Visual Keyword Spotting. The related technologies can be used to assist speech recognition, to achieve more intelligent and robust human-computer interaction, or independently in intelligent education, security verification, public security, and other fields.
For the team's related code and papers, please refer to: https://github.com/yshnny/Collections-of-The-Lip-Reading-Work-of-VIPL-LR
2020.3: Four papers were accepted by IEEE FG 2020. The performance on both LRW and LRW-1000 achieves the state of the art.
2019.8: Our lip-reading system was awarded the “Innovation Star of Artificial Intelligence” at the multimedia information technology competition of the Chinese Congress on Artificial Intelligence.
2019.6: We won 2nd place in the AVA Challenge: Task-2 (Active Speaker) of ActivityNet Challenge 2019, which is often regarded as the “ImageNet” competition of video-based action recognition. The results were released at CVPR 2019.
2019.4: The ACM ICMI 2019-MAVSR competition starts! The competition is jointly organized by researchers from the Institute of Computing Technology (Chinese Academy of Sciences), Imperial College London, the University of Oxford, and the Samsung American Research Institute. For more details about the competition, please refer to MAVSR2019!
2018.10: The LR Group released the large-scale, naturally distributed lip reading dataset LRW-1000. This dataset is not only the largest word-level lip reading dataset to date, but also the only publicly available Mandarin lip reading dataset. For more details, please refer to the data paper.
2018.4~2018.10: The LR Group was invited by CCTV-1 to demonstrate its lip reading technology and system to a nationwide television audience. For more details, please click here.
* Research Topics:
1. Visual Speech Recognition (VSR) | Lip Reading (LR)
This topic mainly focuses on how to use visual information alone to infer what the speaker in a video (with or without sound) is saying. It can help hearing-impaired people, and also plays an important role in many audio-based speech recognition systems, especially in noisy environments.
2. Talking face Generation
This task aims at making given static face images “talk” given words, i.e. generating a video based only on a clip of speech and face images of the target identity.
3. Visual Voice Activity Detection (VVAD)
This topic focuses on how to use visual information for speech activity detection, which is important for many practical speech recognition systems.
4. Multi-modal VSR/ KWS/ VVAD
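The VSR/LR topic above typically follows a word-level pipeline: crop a mouth region of interest from each video frame, extract per-frame visual features, aggregate them over time, and classify the spoken word. A minimal NumPy sketch of that pipeline is below; the fixed crop box, the random linear projection standing in for a CNN frontend, the mean-pooling standing in for a temporal backend, and all names and shapes are illustrative assumptions, not the group's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_mouth_roi(frames, box=(30, 70, 20, 60)):
    """Crop a fixed mouth region of interest from each grayscale frame.

    frames: (T, H, W) video tensor; box: (top, bottom, left, right).
    Real systems locate the ROI with facial landmarks; a fixed box is
    used here only to keep the sketch self-contained.
    """
    t0, t1, l0, l1 = box
    return frames[:, t0:t1, l0:l1]

def frame_features(rois, proj):
    """Stand-in visual frontend: flatten each ROI and project it.

    Real systems use a (3D-)CNN here; a random linear projection is a
    placeholder with the same input/output shape contract.
    """
    flat = rois.reshape(rois.shape[0], -1)  # (T, roi_h * roi_w)
    return flat @ proj                      # (T, feature_dim)

def classify_word(feats, weights):
    """Stand-in temporal backend: mean-pool over time, then a linear
    classifier over the word vocabulary (real systems use recurrent
    or temporal-convolutional backends)."""
    pooled = feats.mean(axis=0)             # (feature_dim,)
    logits = pooled @ weights               # (num_words,)
    return int(np.argmax(logits))

# Toy demo: a 29-frame clip of 96x96 grayscale frames.
frames = rng.random((29, 96, 96))
rois = crop_mouth_roi(frames)               # (29, 40, 40)
proj = rng.standard_normal((rois.shape[1] * rois.shape[2], 64))
weights = rng.standard_normal((64, 500))    # e.g. a 500-word vocabulary
word_id = classify_word(frame_features(rois, proj), weights)
print(word_id)
```

With random weights the predicted word is meaningless; the sketch only shows how the shapes flow from raw frames to a single word-class index.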
* Related Applications:
※ Lip codes/passwords, liveness detection, command recognition, pronunciation correction in intelligent education systems, and so on.