Rationalizing Deep Learning Model Decisions
Project Description
In this research, our aim is to provide strong explanatory power without sacrificing the performance of machine learning models. The focus of this research is to develop a general, unified framework that can serve as a principled basis for the interpretability of deep learning models in medical image classification applications. We will provide two-level interpretability in terms of decision confidence and decision justification. Decision confidence refers to a second-order estimate of confidence, separate from the existing first-order classification predictions: while a model may deem an input more likely to belong to one class than to its competitors, it may simultaneously be extremely unsure of this decision. For decision justification, we propose a novel linguistic justification generator that is model-based, intuitive, and precise. In other words, the explanation is derived directly from the model being interpreted and indicates the features that influenced its decision.
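To illustrate the distinction between first-order predictions and second-order confidence, the following is a minimal sketch in Python/PyTorch. It uses Monte Carlo dropout and predictive entropy purely as an illustrative stand-in; the TinyClassifier model, the predict_with_confidence function, and the entropy-based measure are assumptions for this example, not the project's actual method.

import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Hypothetical stand-in for a medical image classifier."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32, 64),
            nn.ReLU(),
            nn.Dropout(p=0.5),  # kept active at test time for MC dropout
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_confidence(model, x, n_samples: int = 20):
    """First-order prediction plus a second-order confidence estimate.

    Runs several stochastic forward passes (MC dropout) and reports:
      - the mean class probabilities (the first-order prediction), and
      - the predictive entropy of that mean (a second-order signal:
        high entropy means the model is unsure of its own decision).
    """
    model.train()  # keep dropout active so each pass is stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

if __name__ == "__main__":
    model = TinyClassifier()
    image = torch.randn(1, 1, 32, 32)  # dummy 32x32 grayscale "scan"
    mean_probs, uncertainty = predict_with_confidence(model, image)
    print("class probabilities:", mean_probs)  # first-order decision
    print("predictive entropy:", uncertainty)  # second-order confidence

In a clinical setting, a thresholded second-order score of this kind could, for instance, flag low-confidence cases for deferral to a human expert even when the first-order prediction favors one class.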
Research Technical Area
Benefits to Society
The output of this research will increase the rate at which clinicians adopt AI systems.
Project’s Publications
–
Team’s Principal Investigator
Professor Wynne Hsu
School of Computing
National University of Singapore