Last update: 04/01/2023
A journey through AI and cognitive science
I am currently an external associate researcher in Human-Centered XAI, at the crossroads of eXplainable Artificial Intelligence (XAI) and cognitive science. For my post-doc, I work with Philippe Lenca and Lina Fahed, both associate researchers in the DECIDE team of the Lab-STICC laboratory, within the engineering school IMT Atlantique (Brest, France) and the French National Center for Scientific Research (CNRS).
My Ph.D. in computer science, specializing in artificial intelligence and cognitive science, was supervised by Frederic Alexandre and Nicolas Rougier from INRIA, LABRI, and the Institute of Neurodegenerative Diseases (IMN, Institut des maladies neurodégénératives). I worked on the design and implementation of interpretability algorithms inspired by human cognition. Passionate about scientific outreach, I seek to make science accessible to all, especially in highly technical fields, and thus to inspire and share with passionate and curious people.
I am also involved in several research projects, all related to explainable AI and interpretable ML:
- Latent and hierarchical representation of recurrent neural networks
- Explainable AI for unlabelled data
- Developmental/cognitive robotics, to understand how knowledge develops in autonomous agents
- The use of explainable AI applied to mental health
- The impact of biases on AI models and project management
- Trust and acceptability of explanations in XAI
I am also part of the editorial board of two French blogs for the general public: Blog Binaire, a popular-science blog dedicated to computer science, and the Scilog blog “Mechanical Intelligence”, dedicated to digital sciences.
Keywords: XAI, Human-centered AI (biases, trust, acceptability), Human-computer interaction (HCI), Interpretable ML, Contextual AI, Context, Knowledge Extraction, Multi-modal explanations, Cognitive biases, Neuro-symbolic AI / Hybrid AI
Selected articles & talks
I. Chraibi Kaadoud, A. Bennetot, B. Mawhin, V. Charisi & N. Díaz-Rodríguez (2022). “Explaining Aha! moments in artificial agents through IKE-XAI: Implicit Knowledge Extraction for eXplainable AI”. Neural Networks, 155, pp. 95-118. DOI: 10.1016/j.neunet.2022.08.002. Journal: Rank A, impact factor 9.657, SJR Q1 Artificial Intelligence, Q1 Cognitive Neuroscience
I. Chraibi Kaadoud, N. P. Rougier, F. Alexandre (2022). “Knowledge extraction from the learning of sequences in a long short term memory (LSTM) architecture”. Knowledge-Based Systems, 235, 107657. DOI: ⟨10.1016/j.knosys.2021.107657⟩. ⟨hal-03437920⟩. Journal: Rank A, impact factor 8.038, SJR Q1 Artificial Intelligence
I. Chraibi Kaadoud, L. Fahed, P. Lenca (2021). “Explainable AI: a narrative review at the crossroad of Knowledge Discovery, Knowledge Representation and Representation Learning”. Twelfth International Workshop on Modelling and Reasoning in Context (MRC) @ IJCAI 2021, Aug 2021, Montréal (virtual), Canada, pp. 28-40. Link to the article: ⟨hal-03343687⟩; Link to the video; Link to the associated poster
I. Chraibi Kaadoud, L. Fahed (2021). “Intelligence artificielle explicable : vers des systèmes transparents, acceptables et éthiques” (Explainable artificial intelligence: towards transparent, acceptable and ethical systems). Women TechMakers 2021, Google, Mar 2021, Montréal and Québec (virtual), Canada. Link to the video (in French)
I. Chraibi Kaadoud (2021). “eXplainable Artificial intelligence: From machine to humans, how to make them collaborate?”. WiDS 2021: Women in Data Science Benguerir @ UM6P: Data science between academia and industry, MSDA (Modeling Simulation & Data Analysis), Mohammed VI Polytechnic University, Mar 2021, Ben Guerir (virtual), Morocco. Link to the video (in English)