Explainability in AI and interpretability in machine learning are very active research areas. However, one of the key concepts in these domains, namely the hierarchical latent representations of deep neural networks and their characteristics, is far from simple to understand. I propose here a brief definition, extracted from a scientific article.
Tag: Neural networks
Interpretability vs. explainability: Interpretability according to different approaches (2/3)
Last updated: 20.10.202 This article was originally published in French on the Scilog blog related to the magazine “Pour … More
Let’s go back to the basics: Artificial neuron, biological neuron
Last updated: 08.10.2018 This article was originally published in French on the “Intelligence Mecanique” blog, a Scilog blog related to … More