Explainability in AI and interpretability in machine learning are very active research areas. However, one of the key concepts in these domains, namely the hierarchical latent representations of deep neural networks and their characteristics, is far from simple to understand. I propose here a brief definition, extracted from a scientific article.
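
Before the definition itself, a minimal sketch may help fix the intuition (this is only an illustration of the general idea, not the article's formalism): in a deep network, each layer re-encodes the output of the previous one, so the successive activations form a hierarchy of latent representations, from low-level to more abstract features. The toy network below is hypothetical and assumes PyTorch is available.

```python
# A minimal, illustrative sketch of "hierarchical latent representations":
# each layer maps the previous latent vector to a new one, and the sequence
# of these vectors forms the hierarchy. Layer sizes are arbitrary.
import torch
import torch.nn as nn

layers = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),  # low-level features
    nn.Sequential(nn.Linear(256, 64), nn.ReLU()),   # mid-level features
    nn.Sequential(nn.Linear(64, 10)),                # high-level / task features
])

x = torch.randn(1, 784)   # dummy input (e.g. a flattened image)
latents = []              # one latent representation per layer
h = x
for layer in layers:
    h = layer(h)          # each layer re-encodes the previous representation
    latents.append(h)

for i, z in enumerate(latents, start=1):
    print(f"layer {i}: latent representation of dimension {z.shape[1]}")
```

Each entry in `latents` is one level of the hierarchy; interpretability work often asks what information each of these levels encodes.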