University of Pittsburgh

Transparency and Explanation in Deep Reinforcement Learning Neural Networks

Professor of Information Sciences and Intelligent Systems, University of Pittsburgh
Friday, March 23, 2018 - 1:00pm - 1:30pm

For autonomous AI systems to be accepted and trusted, users should be able to understand the reasoning process of the system; that is, the system should be transparent. System transparency enables humans to form coherent explanations of the system’s decisions and actions. Transparency is important not only for user trust, but also for software debugging and certification. In recent years, deep neural networks have made great advances in multiple application areas. However, deep neural networks are opaque, and while mechanisms for making their behavior more transparent have been proposed and demonstrated, the effectiveness of these methods for tasks beyond classification has not been verified. Deep Reinforcement Learning Networks (DRLNs) have been extremely successful at learning action control in image-input domains, such as Atari games, where successful play can be learned from pixels alone.

Our study extends these methods by:

(a) incorporating explicit object recognition processing into deep reinforcement learning models and

(b) introducing “object saliency maps” to provide visualization of the internal states of DRLNs, thus enabling the formation of “explanations.”

We present computational results and human experiments on matching saliency maps to game play and on predicting the agent’s actions in the immediate future.
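The abstract does not spell out how the object saliency maps are computed, but a common perturbation-based construction is to mask an already-recognized object out of the input frame and measure how much the network’s Q-values change. The sketch below illustrates that idea only as an assumption; the names q_network, object_mask, and background_value are hypothetical, and the talk’s actual method may differ in detail. Repeating this score for every recognized object in a frame yields a per-object saliency map.

```python
import torch

def object_saliency(q_network, frame, object_mask, background_value=0.0):
    """Saliency of one object: how much the network's best Q-value changes
    when that object is blanked out of the input frame.

    frame:       float tensor of shape (C, H, W), the preprocessed screen
    object_mask: bool tensor of shape (H, W), True on the object's pixels
    """
    with torch.no_grad():
        # Best Q-value for the unmodified frame.
        q_original = q_network(frame.unsqueeze(0)).max().item()

        # Remove the object by overwriting its pixels with a background value.
        masked = frame.clone()
        masked[:, object_mask] = background_value
        q_masked = q_network(masked.unsqueeze(0)).max().item()

    # A large change suggests the object strongly influences the chosen action.
    return q_original - q_masked
```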
