On the Completeness of Self-Supervised Learning: Group Theory, Causality, and their Combinations
Project Description
In this project, we aim to answer the following three fundamental questions in self-supervised learning and foresee their far-reaching social impact: 1) What does AI learn? 2) How does AI learn? 3) How does AI use the learned knowledge? Unfortunately, today's AI theory is based only on statistics, which cannot answer these fundamental questions. We propose a group-theoretic causal framework to tackle these theoretical challenges. The framework learns sustainable, generalizable, and interpretable feature representations.
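To make the group-theoretic flavour of self-supervised learning concrete, the sketch below shows a toy objective in which the data augmentations form a group (the cyclic group C4 of 90-degree image rotations), so the encoder is encouraged to learn a representation that is stable under that group. This is an illustrative assumption of ours, not the project's actual framework; the encoder architecture, the `c4_rotate` and `info_nce` helpers, and all hyperparameters are hypothetical placeholders.

```python
# Illustrative sketch only (not the project's method): self-supervised
# training where the augmentation set is a group -- here C4, the
# 90-degree rotations -- so the learned features respect that group.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny encoder mapping a 1-channel image to a 32-d feature.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

def c4_rotate(x):
    """Apply a random element of C4 = {0, 90, 180, 270 degrees}."""
    k = int(torch.randint(0, 4, (1,)))
    return torch.rot90(x, k=k, dims=(2, 3))

def info_nce(z1, z2, tau=0.1):
    """Simplified InfoNCE loss: matching batch indices are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

x = torch.randn(8, 1, 32, 32)          # dummy batch of images
loss = info_nce(encoder(c4_rotate(x)),  # two random group-transformed views
                encoder(c4_rotate(x)))
loss.backward()                         # plug into any optimizer as usual
```

Under these assumptions, the two views of each image differ only by a group element, so minimizing the contrastive loss pushes the encoder toward group-invariant (or, with a different head, group-equivariant) features.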
Research Technical Area
- Deep Learning
- Group Theory
- Causality
Benefits to Society
Sustainable, Responsible, and Explainable AI
Project’s Publications
–
Team’s Principal Investigator
Hanwang ZHANG
Nanyang Technological University (NTU)
Hanwang ZHANG is an Assistant Professor at Nanyang Technological University's School of Computer Science and Engineering. His research interests include Computer Vision, Natural Language Processing, Causal Inference, and their combinations. For his contributions to applied causality, he has received numerous awards, including the Singapore President Award Young Scientist 2021, IEEE AI's-10-To-Watch 2020, Alibaba Innovative Research Award 2019, Nanyang Assistant Professorship 2018, and several best paper awards.
Recent Notable Awards
- Singapore President Award Young Scientist 2021
- IEEE AI’s-10-To-Watch 2020