Despite the exceptional performance of Deep Neural Networks, it is important that we develop "explainable AI". Explainable AI is critical in multiple ways, e.g., to gain the trust of human users and to produce causal models that suggest model improvements. The challenge is that we must be able to explain AI at a level tailored to the limited bandwidth of humans.
We aim to address this problem by adopting a classic concept in computer science and human reasoning, namely abstraction, as the primary method for reducing the complicated models used in AI applications to simple, transparent, and explainable ones. Fundamentally, humans understand and reason abstractly in a task-dependent way, i.e., they build a simple model containing only the information relevant to making a particular decision. We thus aim to build our solutions for explaining AI at the right level of abstraction, i.e., in an application-dependent and probabilistic way.