Launch of a series of policy briefs: XAI, Cognitive Science, and International Governance: First Brief!

For ten years, I’ve worked in computer science, specifically in explainable AI, building systems that are meant to be understandable to the humans who use them. What I’ve learned is that the hardest problem isn’t technical. It’s that the governance frameworks meant to protect people from AI failures:

  • say almost nothing about whether those people can actually evaluate what they’re given;
  • discuss human-centered approaches, laws, and regulations without considering humans in all their diversity: cognitive, psychological, professional, cultural, geographical, and more.

Put simply: AI governance frameworks define what systems must do. They never define what humans must be capable of doing.

That’s the structural blind spot I’ve been studying for years — first in research labs, then in companies, and now at the scale of global public policy.

Today, I’m launching a series of policy briefs at the intersection of explainable AI (XAI), cognitive science, and international governance, with a multicultural perspective. Each brief is published in an academic-policy version in English on #Substack!

My goal? To explore AI together along the axis of “human cognition > individual/organization > geopolitics,” since AI is no longer merely a technological issue, but also a socio-organizational and geopolitical one.

Brief 1 is available today: Metacognitive Readiness as a Missing Variable in AI Governance

👉 Read Brief 1: https://ikramchraibik.substack.com/p/metacognitive-readiness-as-a-missing

I look forward to reading your thoughts!

A general-interest version in French will be available soon.

#Metacognition #AIGovernance #ExplainableAI #PolicyBrief #EUAIAct #XAI #TrustworthyAI #Ethics #Strategy

I’m Ikram, a researcher in Explainable AI & Cognitive Sciences, trying to keep up with this world!