In this project, we aim to answer three fundamental questions in self-supervised learning and to anticipate their far-reaching social impact: 1) What does AI learn? 2) How does AI learn? 3) How does AI use the learned knowledge? Unfortunately, today's AI theory is grounded solely in statistics and cannot answer these fundamental questions. We propose a group-theoretic causal framework to tackle these theoretical challenges, enabling the learning of sustainable, generalizable, and interpretable feature representations.
