Abstract

Debug and Approve Your Deep Networks by Overcoming the Black Box Problem

Session No: SIL8146
Speaker: Tsvi Achler
Type: Accelerated Data Analytics

Date: Thursday, October 18, 2018, 01:00 PM - 01:45 PM
Location: Hall I

Deep Learning AI may learn to perform tasks by cheating in unknown and unexpected ways, which can be a liability for the developer. Feedforward networks are the basis of artificial neural networks such as deep, convolutional, and recurrent networks, and even of machine learning regression methods. However, the internal decision processes of feedforward networks are difficult to explain: they are known as a "black box". This is especially problematic in applications where the consequences of an error can be severe, such as medicine, banking, or self-driving cars. Optimizing Mind has developed a new type of feedback neural network, motivated by neuroscience, that makes the internal decision process easier to understand. By having feedforward networks converted to our Illuminated form, which explains the internal decision process, developers, regulators, and users can better understand their AI and reduce unexpected surprises and liability. We'll demonstrate some of these benefits.
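To make the feedforward-versus-feedback contrast concrete, here is a minimal toy sketch. It is an illustrative assumption, not Optimizing Mind's Illuminated method: the feedback loop below uses a simple multiplicative reconstruction update (in the spirit of NMF/EM-style updates) that keeps adjusting class activations until their reconstruction accounts for the input, leaving intermediate quantities a developer can inspect.

```python
import numpy as np

# Toy example: two "classes" defined by non-negative feature patterns.
# W[i, j] = how strongly class i is expected to activate feature j.
# These numbers are made up purely for illustration.
W = np.array([
    [1.0, 1.0, 0.0, 0.0],   # class 0: features 0 and 1
    [0.0, 1.0, 1.0, 1.0],   # class 1: features 1, 2, and 3
])

def feedforward(x, W):
    """One-shot scoring: a single pass, nothing to inspect afterwards."""
    return W @ x

def feedback_inference(x, W, steps=50, eps=1e-9):
    """Iterative feedback sketch: adjust class activations y until the
    reconstruction W.T @ y accounts for the input x. The update rule is
    an assumption chosen for simplicity, used only to illustrate the
    feedforward-vs-feedback distinction."""
    y = np.ones(W.shape[0])          # start with all classes equally active
    norm = W.sum(axis=1)             # per-class normalizer
    for _ in range(steps):
        x_hat = W.T @ y              # feedback: what the current y predicts
        y *= (W @ (x / (x_hat + eps))) / norm  # boost under-explained classes
    return y, W.T @ y                # activations plus their reconstruction

x = np.array([1.0, 1.0, 0.0, 0.0])   # input matching class 0's pattern
print("feedforward scores:", feedforward(x, W))
y, x_hat = feedback_inference(x, W)
print("feedback activations:", np.round(y, 3))
print("reconstruction of input:", np.round(x_hat, 3))
```

Because the feedback loop ends with both class activations and a reconstruction of the input, a developer can ask which input features each decision actually accounted for; a single feedforward pass offers no comparable handle.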