Date of publication: 2018-06-14
Report Title: Advanced Topics in Multi-label Learning
Time: 14:00 to 15:00, June 14
Venue: Room 446, ICT
Summary of the report:
Multi-label learning, in which each instance can belong to multiple labels simultaneously, has attracted significant attention from researchers owing to its wide range of applications, from document classification and automatic image annotation to video annotation. Many multi-label learning models have been developed to capture label dependency. Among them, the classifier chain (CC) model is one of the most popular, thanks to its simplicity and promising experimental results. However, CC raises three important questions: Does the label order affect the performance of CC? Is there a globally optimal classifier chain that achieves the optimal prediction performance for CC? If so, how can that globally optimal classifier chain be found? Answering these questions is non-trivial.

Another important branch of methods for capturing label dependency is the encoding-decoding paradigm. Built on structural SVMs, maximum margin output coding (MMOC) has become one of the most representative encoding-decoding methods and has shown promising results for multi-label classification. Unfortunately, MMOC suffers from two major limitations: 1) inconsistent performance and 2) prohibitive computational cost. It is therefore non-trivial to break the bottlenecks of MMOC and develop efficient, consistent algorithms for multi-label learning tasks.

Finally, the prediction step of most multi-label learning methods either scales linearly with the number of labels or involves an expensive decoding process that usually requires solving a combinatorial optimization problem. Such approaches become unacceptable when tackling thousands of labels, so it is imperative to design an efficient yet accurate multi-label learning algorithm that makes the minimum number of predictions. This report systematically shows how to solve the aforementioned issues.
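To make the classifier chain idea concrete, here is a minimal sketch (not the speaker's actual method): classifier j for label j is trained on the features augmented with labels 0..j-1, and at prediction time earlier predictions feed later classifiers. The base learner here is a deliberately simple lookup table keyed on the feature tuple, purely for illustration; the label order is taken as given, which is exactly the ordering question the talk raises.

```python
class ClassifierChain:
    """Toy classifier chain: one binary classifier per label, chained in order."""

    def __init__(self, n_labels):
        self.n_labels = n_labels
        self.tables = [dict() for _ in range(n_labels)]  # per-label lookup rule

    def fit(self, X, Y):
        # Classifier j is trained on x augmented with the TRUE labels 0..j-1.
        for j in range(self.n_labels):
            counts = {}
            for x, y in zip(X, Y):
                key = tuple(x) + tuple(y[:j])
                pos, tot = counts.get(key, (0, 0))
                counts[key] = (pos + y[j], tot + 1)
            # Majority vote per key stands in for a real base classifier.
            self.tables[j] = {k: int(p * 2 >= t) for k, (p, t) in counts.items()}
        return self

    def predict(self, x):
        # At test time, earlier PREDICTIONS feed later classifiers,
        # so an early mistake can propagate down the chain.
        pred = []
        for j in range(self.n_labels):
            key = tuple(x) + tuple(pred)
            pred.append(self.tables[j].get(key, 0))
        return pred


# Tiny usage example: two binary labels, one binary feature.
X = [(0,), (0,), (1,), (1,)]
Y = [[1, 1], [1, 1], [0, 1], [0, 0]]
cc = ClassifierChain(2).fit(X, Y)
```

Note that the chain's prediction cost grows with the number of labels, which is precisely the scalability issue the abstract highlights for settings with thousands of labels.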
Brief introduction of the speaker:
Liu Weiwei is currently a postdoctoral researcher at the University of New South Wales, Australia. He received his PhD from the University of Technology Sydney (UTS) in August 2017 under the supervision of Prof. Ivor W. Tsang. His main research fields include multi-label learning, clustering, feature selection, and sparse learning. He has published more than 10 papers in top CCF-A journals and conferences, including the Journal of Machine Learning Research (JMLR), Pattern Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), NIPS, AAAI, and IJCAI. He has served as a guest editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS). In 2017, he received the Chinese Government Award for Outstanding Self-financed Students Abroad from the China Scholarship Council.