A research team from Università di Firenze, Università di Siena, the University of Cambridge and Université Côte d'Azur proposes a general approach to explainable artificial intelligence (XAI) in neural architectures, designing interpretable deep learning models called Logic Explained Networks (LENs). The authors report that the approach outperforms established white-box models while providing more compact and meaningful explanations.

For a quick read, see Logic Explained Deep Neural Networks: A General Approach to Explainable AI.

The paper Logic Explained Networks is on arXiv.
