Explaining AI with the Right Level of Abstraction

Project Description

Despite the exceptional performance of deep neural networks, it is important that we develop “explainable AI”. Explainability is critical in multiple ways, e.g., to gain trust from human users and to produce causal models that suggest model improvements. The challenge is to explain AI at a level tailored to humans’ limited cognitive bandwidth.

We aim to address this problem by adopting a classic concept from computer science and human reasoning, namely abstraction, as the primary method for reducing the complicated models used in AI applications into simple, transparent and explainable ones. Fundamentally, human brains understand and reason abstractly in a task-dependent way, i.e., by building a simple model that retains only the information relevant to the decision at hand. We thus aim to build our solutions for explaining AI at the right level of abstraction, i.e., in an application-dependent and probabilistic way.
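
To make the approach concrete, here is a minimal, illustrative sketch of one form of probabilistic abstraction; it is an assumption for illustration, not the project’s fixed method. The hidden-state trajectories that a neural network produces while processing inputs are clustered into a few abstract states, and transition frequencies between those states are counted to form a small Markov chain that a human can read. The function name abstract_model, the cluster count, and the synthetic data are all hypothetical choices.

import numpy as np
from sklearn.cluster import KMeans

def abstract_model(trajectories, n_abstract_states=5, seed=0):
    """Abstract concrete hidden states into a small Markov chain.

    trajectories: list of (T_i, d) arrays, e.g., RNN hidden states
    recorded while the network processes input sequences (hypothetical
    setup). Returns (kmeans, P), where P[i, j] estimates the probability
    of moving from abstract state i to abstract state j.
    """
    # Cluster all concrete states into a handful of abstract states.
    all_states = np.vstack(trajectories)
    kmeans = KMeans(n_clusters=n_abstract_states, random_state=seed, n_init=10)
    kmeans.fit(all_states)

    # Count transitions between consecutive abstract states.
    counts = np.zeros((n_abstract_states, n_abstract_states))
    for traj in trajectories:
        labels = kmeans.predict(traj)
        for a, b in zip(labels[:-1], labels[1:]):
            counts[a, b] += 1

    # Normalise rows into probabilities; rows with no outgoing
    # transitions stay all-zero.
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts),
                  where=row_sums > 0)
    return kmeans, P

# Toy usage with synthetic "hidden states" standing in for a real model.
rng = np.random.default_rng(0)
trajs = [rng.normal(size=(20, 8)) for _ in range(30)]
_, P = abstract_model(trajs, n_abstract_states=4)
print(np.round(P, 2))

The resulting 4-by-4 transition matrix is small enough to inspect directly, which is the sense in which abstraction trades model detail for human bandwidth.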

Research Technical Area

  • Knowledge representation and reasoning
  • Reasoning under uncertainty

Benefits to Society

The outcomes of the project have the potential to improve the safety, security and reliability of AI systems across many application domains.

Project’s Publications

Team’s Principal Investigator

Associate Professor Jun Sun

Singapore Management University

Dr. Jun SUN is currently an associate professor at Singapore Management University (SMU). He received his Bachelor’s and Ph.D. degrees in computing science from the National University of Singapore (NUS) in 2002 and 2006, respectively. In 2007, he received the prestigious Lee Kuan Yew Postdoctoral Fellowship. He has been a faculty member since 2010 and was a visiting scholar at MIT from 2011 to 2012. Jun’s research interests include software engineering, cyber-security, formal methods and artificial intelligence.


Recent Notable Awards

  • Technology Cooperation Excellence Award, Huawei, 2019
  • ACM Distinguished Paper Award, 2018
  • 20-Year ICFEM Most Influential System Award, for developing the PAT verification system, 2018

The Team

Co-Principal Investigator

Dr. Lu Wei, Singapore University of Technology and Design

Research Focus: Machine learning, natural language processing, AI


Collaborator

Dr. Wang Jingyi, National University of Singapore

Research Focus: Formal methods, security, AI