Education & Career

Last update: 04/01/2023

Education

2019

Researcher qualification* from the National Council of Universities (in French: Conseil national des universités (CNU)), N°20227322266, section 27 (Computer Science), France

*A necessary step to be eligible to apply for associate professor positions (French equivalent: Maître de conférences, Enseignant-chercheur).

2015-2018

Ph.D. in Computer Science, specializing in Artificial Intelligence and Cognitive Science, University of Bordeaux, Bordeaux, France

2017

Deep Learning Summer School, Bilbao, Spain (July 2017)

2015

"Semantic Web and Web Data": MOOC offered by INRIA's WIMMICS team on the FUN MOOC platform, managed by the French public institution France Université Numérique

2013-2014

Master in Cognitive Science, University of Bordeaux, Bordeaux, France

2008-2013

Computer Engineering degree, Expert in Computer Science and Information Systems, selected option: Software Engineering, RNCP Level 1 title (French degree issued by the French Ministry of Labour), EPSI, Bordeaux, France

2008-2010

Technical degree in business computing (Brevet de Technicien supérieur, BTS en informatique de gestion), Bordeaux, France

Career path

2022-current External Associate Researcher in XAI & Cognitive Sciences

DECIDE Team, IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238 Brest, France

Abstract: In continuation of my postdoc, I work on the extraction of explainable knowledge in order to generate multi-modal explanations adapted to the target audience's level of expertise. Through this topic, I also explore which elements impact the acceptability and perception of what makes a good explanation. Several research topics are conducted in parallel:

  • The generation of multi-modal explanations from heterogeneous data
  • The generation of visual explanations through the interpretability of CNNs
  • Biases and their impact on the management of AI projects: cognitive, statistical, and algorithmic biases
  • The impact of context on the acceptability of an explanation


Keywords: XAI, Human-centered AI, Human-computer interaction (HCI), Interpretable ML, Contextual AI, Knowledge Extraction, Multi-modal Explanations, Cognitive Biases

2021-2022 Post-doctoral researcher in Explainable Artificial Intelligence for heterogeneous time series

DECIDE Team, IMT Atlantique, Lab-STICC, UMR CNRS 6285, F-29238 Brest, France

Context: The LEARN-IA project, whose main objective is to improve the energy performance of infrastructures in industrial environments. The project brings together three partners: two companies and the IMT Atlantique engineering school.

Supervisors: Lina Fahed, Associate Researcher, and Philippe Lenca, Associate Researcher (HDR)

Close collaborators: Tian Tian, post-doc in NLP, and Yannis Haralambous, Associate Researcher (HDR)

Funding: European Regional Development Fund (FEDER, Fonds européen de développement régional) and the Bretagne-Atlantique Region.

Labels: "Images Réseaux" group and "Mer" group

Abstract: Design of explainable models from the analysis of time series representing industrial boiler sensor readings, in two steps: 1) time series analysis: pattern detection and extraction; 2) design of an explainable model generating a knowledge base that represents the business knowledge of experts in the field of energy management of industrial boilers, from heterogeneous (numerical and semantic) time series.

Keywords: XAI, Interpretability, Pattern Extraction, Knowledge Extraction, Time Series, Data Mining


2018-2020 Researcher in Artificial Intelligence and Cognitive Sciences, R&D and AI Consultant

In industry, France

Research Missions:

• Machine Learning Research: Interpretability of neural networks, confidence, biases in AI, NLP, Deep Learning, Knowledge extraction.

• Implementation of recurrent networks in Python with the Keras framework, and analysis of their interpretability

• Publication of scientific and popularization articles

• Organization of scientific and popularization workshops

• Supervision of students on different research topics:

  1. Image analysis for emotion detection and recognition
  2. Study and analysis of workplace well-being in the context of knowledge transformation through training: a study of learner profiles
  3. LSTM network interpretability: application to Java code (see the student's master's thesis on the subject)

Corporate missions: Consulting, and development of training and acculturation content on data, AI, and cognitive sciences


2015-2018 PhD student in computer science, AI, and cognitive science

Mnemosyne Team, INRIA Bordeaux Sud-Ouest, Institute of Neurodegenerative Diseases, UMR CNRS 5293, and LaBRI laboratory, UMR CNRS 5800, Bordeaux-Bidart, France

Funding: Algo’Tech Informatique, a company located at Bidart

Thesis title: "Learning sequences and extracting rules from recurrent networks: application to the drawing of technical diagrams."

Supervisors: Frédéric Alexandre, Head of Mnemosyne team, and Nicolas Rougier, INRIA Researcher

Abstract: The implicit knowledge of an individual is acquired in two ways. The first is the repetition of sequences, which allows the individual to implicitly extract regularities. The second is a migration from explicit to implicit knowledge through the development of expertise. Both are forms of implicit learning. In this work, we studied the sequences of components used in the drawing of technical diagrams, and in particular the problem of extracting the implicit rules in these sequences, an important aspect of extracting business expertise from technical drawings. We placed ourselves in the connectionist framework; in particular, we considered neural models capable of processing sequences. We implemented two recurrent neural networks: Elman's model, the Simple Recurrent Network, and a model with LSTM (Long Short-Term Memory) units. We evaluated these two models on different artificial grammars (Reber's grammar and its variations) in terms of learning, their ability to generalize, and their handling of sequential dependencies. Finally, we also showed that it is possible to extract the rules encoded in the LSTM recurrent network (learned from the sequences) in the form of an automaton. The electrical domain is particularly relevant to this problem because it is more constrained (less combinatorial) than task scheduling in more general cases such as navigation, which could constitute a perspective of this work. For more information: URL

Keywords: Recurrent networks, Interpretability, Rule extraction, LSTM, Sequence learning, Cognitive modeling, Implicit learning


2014-2015 Intern in computational neurosciences: Episodic memory and hippocampus modeling

Mnemosyne Team, INRIA Bordeaux Sud-Ouest and University of Bordeaux

Mission: Study of episodic memory through the interactions between the hippocampus and the cortex, and computer modeling of the hippocampus as hetero-associative and auto-associative memories using the C++ programming language. Master's thesis: "Modelling of information exchanges between the hippocampus and the cortex". For more information: URL


2010-2013 Analyst programmer in a part-time work-study program

ASAPE company and EPSI Bordeaux, Bordeaux, France

Mission: Meeting the company's clients, analyzing their needs, then creating, extending, and improving the existing projects already used by those clients. Engineering school thesis: "Data mining and project management". For more information: URL