Working Seminar on Fairness and Explainability in the Digital and ML Environments and Special Seminar on Fairness of Machine Learning

An information-exchange working seminar with a visiting group of researchers affiliated with Inria, the French National Institute for Research in Digital Science and Technology (Part 1), followed by two invited talks on fairness in machine learning (Part 2).

(Note: the program has been partially revised.)

Hybrid meeting: pre-registration is required. Registration remains open until just before the start of Part 1 and Part 2, respectively.


  • Date: Thursday, March 27th, 2025, 13:00 – 18:00 (hybrid)
  • Venue: Open-Lab, 4th floor of the East Building at the East Gate of the Mita Campus of Keio University, and Zoom
    The East Building is #13 on the campus maps below:
    https://www.keio.ac.jp/en/maps/
    https://www.keio.ac.jp/ja/maps/mita.html

Web page: https://abelard.flet.keio.ac.jp/2025/0327_Workshop


Pre-registration Form: https://forms.gle/ygQb8x4fwdcNTzoY8


On the occasion of a visit by French experts in Machine Learning, AI Fairness, and Digital Ethics from our partner groups at Inria, we are organizing a series of sessions, as a follow-up to the one held on March 24. This seminar will focus on Fairness and Explainability, as well as the challenges of fair digital development in developing countries and regions.

Two invited talks on Fairness, Explainability, and Privacy are scheduled for Part 2.


Program

Part 1
13:00  Opening Roundtable discussion

Generative AI: Towards the recombination of symbolic and neural AI

Claude Kirchner (President of the French Consultative National Committee on Digital Ethics; Senior Researcher Emeritus at Inria)

Mitsuhiro Okada (Keio University)

Koji Mineshima (Keio University)


13:20 Special Session: Towards fair development of digital environments in developing countries and regions

Helene Kirchner (Senior Researcher Emeritus at Inria)
The Case of Africa

Interview with Prof. Kazutsuna Yamaji (Director, Research Center for Open Science and Data Platform, NII)

Discussion


14:00 Session: Basic issues of explainability

Mitsuhiro Okada and Koji Mineshima (Keio University)
TBA

Hirohiko Abe (Keio University)
Context-sensitive Explanation for Assurance: From an Epistemological Viewpoint

Yuki Sugawara (Osaka University), Kazuho Kambara (NICT, Ritsumeikan University)
Introducing Many Uses of "Explain" (tentative title; video presentation)

Hirohiko Abe (Keio University), Kentaro Ozeki (University of Tokyo/ Keio University), Risako Ando (Keio University)
Towards evaluating formal reasoning ability and explainability in LLMs

Discussion


15:30 Break


Part 2
15:45 Special Invited Talks on Fairness of Machine Learning (The two abstracts are attached below)

1. Ruta Binkyte (CISPA Helmholtz Center for Information Security)
Causal Methods for AI Fairness and Explainability

2. Catuscia Palamidessi (Inria Saclay)
Trade-off between Fairness and Privacy in Machine Learning.

18:00 Closing

Abstracts

Ruta Binkyte (CISPA Helmholtz Center for Information Security)

  • Title: Causal Methods for AI Fairness and Explainability
  • Abstract: In this presentation, we provide a concise introduction to causal concepts—including structural causal models, interventions, and mediation analyses—and show how these tools can inform fairness in decision-making. We focus especially on path-specific fairness, which distinguishes legitimate from illegitimate pathways by which a sensitive attribute can influence an outcome. Through illustrative examples, we demonstrate how to (1) construct a causal graph that encodes relevant variables, confounders, and mediators; (2) identify and estimate direct and indirect (mediated) effects. We also touch upon recent advances in model interpretability, highlighting the potential of causal frameworks to clarify how complex models make decisions at a mechanistic level. We conclude by discussing the practical challenges of implementing path-specific fairness—such as defining “legitimate” vs. “illegitimate” routes, managing unmeasured confounders, and coping with high-dimensional feature spaces—and emphasize why these nuanced causal approaches can be more aligned with real-world fairness goals than simpler group-level parity metrics.
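As a minimal illustration of the mediation analysis the abstract describes (not code from the talk), the sketch below simulates a hypothetical linear structural causal model in which a sensitive attribute A affects an outcome Y both directly and through a mediator M, and estimates the natural direct and indirect effects by intervention. All coefficients and variable names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear SCM: A -> M -> Y plus a direct path A -> Y.
#   M = 0.8*A + noise,  Y = 0.5*A + 1.2*M + noise
def sample_M(A):
    return 0.8 * A + rng.normal(0.0, 1.0, size=A.shape)

def sample_Y(A, M):
    return 0.5 * A + 1.2 * M + rng.normal(0.0, 1.0, size=A.shape)

A1, A0 = np.ones(n), np.zeros(n)
M0 = sample_M(A0)   # mediator as it would be under A = 0
M1 = sample_M(A1)   # mediator as it would be under A = 1

# Natural direct effect: switch A while holding the mediator at its A=0 value.
nde = np.mean(sample_Y(A1, M0) - sample_Y(A0, M0))
# Natural indirect effect: hold A=1 and let only the mediator respond to A.
nie = np.mean(sample_Y(A1, M1) - sample_Y(A1, M0))

print(f"NDE ~ {nde:.2f}, NIE ~ {nie:.2f}")  # close to 0.5 and 0.8*1.2 = 0.96
```

In a path-specific fairness analysis, one would then deem the direct path (and possibly some mediated paths) illegitimate while permitting others; the simulation simply makes the decomposition of the total effect (here 0.5 + 0.96) concrete.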

Catuscia Palamidessi (Inria Saclay)

  • Title: Trade-off between Fairness and Privacy in Machine Learning.
  • Abstract: Privacy and Fairness are two important ethical issues in machine learning, and several research efforts have been dedicated to trying to understand how they interact, and how they affect accuracy. In this talk, I will summarize the main results in the literature, and present our own study on the effect of local differential privacy on some of the main notions of fairness: Statistical Parity, Conditional Statistical Parity, and Equality of Opportunity.
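As a toy illustration of the interaction the abstract discusses (not the speaker's actual study), the sketch below privatizes a binary sensitive attribute with randomized response, a standard local-DP mechanism, and shows that the statistical-parity gap measured on the privatized attribute is attenuated relative to the true gap. The privacy budget, positive rates, and sample size are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
eps = 1.0  # assumed local-DP privacy budget

# Synthetic population: binary sensitive attribute A, and classifier
# outputs Yhat with a true statistical-parity gap of 0.7 - 0.4 = 0.3.
A = rng.binomial(1, 0.5, n)
Yhat = rng.binomial(1, np.where(A == 1, 0.7, 0.4))

def parity_gap(a, yhat):
    # Statistical parity gap: P(Yhat=1 | a=1) - P(Yhat=1 | a=0)
    return yhat[a == 1].mean() - yhat[a == 0].mean()

# Randomized response: report the true bit with probability e^eps/(1+e^eps).
p_keep = np.exp(eps) / (1 + np.exp(eps))
flip = rng.random(n) > p_keep
A_priv = np.where(flip, 1 - A, A)

true_gap = parity_gap(A, Yhat)
noisy_gap = parity_gap(A_priv, Yhat)
# With a balanced A, the measured gap shrinks by a factor of (2*p_keep - 1).
print(f"true gap ~ {true_gap:.3f}, gap under LDP ~ {noisy_gap:.3f}")
```

The point of the toy example is that auditing fairness on locally privatized attributes systematically underestimates disparity, so the estimate must be debiased before any fairness conclusion is drawn.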

Support

Keio University Global Research Institute (KGRI) Challenge Grant, "Toward the Realization of Explainable Computing Environments"

Organizing Committee

Mitsuhiro Okada, Koji Mineshima, and Hirohiko Abe

Contact

logic@abelard.flet.keio.ac.jp