Explainability in AI and interpretability in Machine learning are very active research areas. However, one of the key concepts in these domains, namely the hierarchical latent representations of deep neural networks and their characteristics, is far from simple to understand. I propose here a brief definition, extracted from a scientific article.
Author: Ikram Chraibi Kaadoud
Interpretable Machine learning: some needed definitions
Last updated: 15.03.2022 Short introduction: Interpretability in Machine learning is a very active area. The number of publications is exploding, the …
From Data to Knowledge
Last updated: 27.01.2022 Short introduction: Data is an indispensable element for the development of neural networks in Machine learning and …
Interpretability, bias, ethics and transparency: what is the relationship? (3/3)
Last updated: 30.10.2020 This article was originally published in French on the Scilog blog related to the magazine “Pour la …
Interpretability vs. explainability: Interpretability according to different approaches (2/3)
Last updated: 20.10.2020 This article was originally published in French on the Scilog blog related to the magazine “Pour …
Interpretability vs. explainability: understanding vs. explaining one’s neural network (1/3)
Last updated: 28.08.2020 This article was originally published in French on the Scilog blog related to the magazine “Pour la …
Cognition, or what is cognitive science?
How can a human memorize information? How do we manage to express ourselves, to reason, to solve a problem, to make a decision…? We do not realize it, but human beings evolve throughout their lives by means of mental functions that process the information around them; this is what we call cognition.
Let’s go back to the basics: Artificial neuron, biological neuron
Last updated: 08.10.2018 This article was originally published in French on the “Intelligence Mecanique” blog, a Scilog blog related to …