フェアなデジタル環境・ML環境・フェアな知識共有の実現について:フランスInria情報科学者グループとの対話 (ハイブリッド, 3月24日午後)


France-Japan Meeting Considering Fair Development and Scientific Knowledge & Data Sharing with the Digital Environments of Our Lives and Society (March 24th Afternoon, Hybrid, Keio U. Mita Campus)


(This Workshop is held as an intermediate workshop of the France-Japan Cybersecurity Workshop series with a formal methods WG session on fairness of ML)


最新情報はこちらをご覧ください。 See this URL for the latest information:
https://abelard.flet.keio.ac.jp/2025/0324_Workshop
(URLが未開設の場合は一両日以内に開設されます。If the URL is not yet available, it will be opened within a day or two.)


OA・OSの日仏最新事情, フランスのデジタル倫理事情、機械学習フェアネス研究状況, 発展途上地域のデジタル環境のフェアネス諸問題などを情報科学者と哲学系研究者を含む研究者間で学際的に議論します。

We will discuss the latest Open Access / Open Science landscapes in Japan, France, and the EU, as well as digital ethics, in particular the technical and social issues of machine learning / AI fairness. We will also discuss how to realize fair development of digital and ML/AI environments worldwide, without regional discrimination. Hybrid-format meeting.


ハイブリッド形式・要事前登録。
Hybrid-form Meeting. Pre-registration is required.
Pre-registration Form / 申込フォーム: https://forms.gle/qDF5B3G7WGqfVbes7


日時:3月24日(月) 13:00-18:00 (ハイブリッド開催)
場所:慶應義塾大学三田キャンパス東館6階G-Lab(キャンパスマップの13番の建物)およびZoom
https://www.keio.ac.jp/ja/maps/mita.html

Date: March 24th, 2025, 13:00 – 18:00
Venue: G-Lab, 6th floor of the East Building at the East Gate of the Mita Campus of Keio University, and Zoom
The East Building is #13 on the campus map below:
https://www.keio.ac.jp/en/maps/

Program

12:50  Registration


13:00 Opening Remarks (Mitsuhiro Okada and Koji Mineshima)

Opening address: On the topic of fairness, and the issues and scope of the France-Japan Meeting

Claude Kirchner (President of the French Consultative National Committee on Digital Ethics, Senior Researcher Emeritus at Inria)

Jean-Baptiste Bordes (Attaché, the Science and Technology Division, French Embassy in Japan)


13:20 Session 1: Towards fair development of digital computing and AI environments, and related issues on fair research environments

  • Claude Kirchner (President of the French Consultative National Committee on Digital Ethics, Senior Researcher Emeritus at Inria)
  • Yuko Murakami (Rikkyo University)
  • Helene Kirchner (Senior Researcher Emeritus at Inria)
  • Coordinator: Mitsuhiro Okada

14:10 Break


14:20 Session 2: Discussing fair sharing of knowledge/data and beyond: updating the status of OA/OS in Japan, France/EU, and other regions. Invited speakers:

  • Miho Funamori (NII)
    “Current status on OA/OS in Japan”
  • Claude Kirchner (President of the French Consultative National Committee on Digital Ethics, Senior Researcher Emeritus at Inria)
    “Current status and trends on OA/OS in France and Europe”
  • Roundtable & Discussion
    Coordinator: Mitsuhiro Okada and Koji Mineshima

15:40 Break


15:50 Session 3: Special invited talks on aspects of fairness research in machine learning

  • Ruta Binkyte (Inria and CISPA Helmholtz Center for Information Security)
    Title: Causality in Trustworthy AI (no technical knowledge is assumed)
    Abstract: In this presentation, we examine the trade-offs in trustworthy AI—including fairness, interpretability, robustness, and privacy—and discuss how causal principles can help balance these sometimes competing objectives. Drawing on insights from Causality Is Key to Understand and Balance Multiple Goals in Trustworthy ML and Foundation Models (Binkyte et al.), we illustrate how a causal framework clarifies relationships between design choices and downstream outcomes, enabling more principled compromises across multiple trustworthiness criteria. We then showcase a case study from LLM4GRN: Discovering Causal Gene Regulatory Networks with LLMs — Evaluation through Synthetic Data Generation (Afonja, Sheth, Binkyte et al.), where causal synthetic data is employed to improve model faithfulness, fairness, and privacy. This example demonstrates how carefully designed causal interventions help to preserve essential structure in complex biological data (gene regulatory networks), while also protecting sensitive information. We conclude by highlighting the broader potential of causal methods to guide trustworthy AI development, ensuring that ethical, technical, and real-world constraints are balanced in transparent and robust ways.

  • Catuscia Palamidessi (Inria Saclay)
    Title: Fairness in Machine Learning Predictions
    Abstract: Machine learning models are increasingly used to help in decision tasks that may affect people's lives, such as job application screenings, university admissions, accepting or rejecting credit loan applications, etc. It is therefore extremely important to ensure that these models are not biased against certain groups, for instance on the basis of age, gender, or ethnicity. In this talk, I will introduce the problem of fairness in machine learning, and I will discuss some notions of fairness that have been proposed in the literature. Then, I will focus on the problem of biased training data, which is one of the main sources of unfairness, and I will discuss the typical scenario in which the "right" decision should be based on data that are not directly observable, while the observable data are themselves biased. For example, the decision to accept an application to a prestigious school should be based on merit, but, since merit is not directly observable, admission is typically based on entry tests. Such tests give an idea of the merit, but unfortunately may be biased by factors like social status (a richer student can access better training), insecurity (according to some studies, female students tend to panic more during certain kinds of tests), etc. I will then present a proposal, BaBE (Bayesian Bias Elimination), to mitigate this kind of problem. BaBE is a pre-processing method that helps eliminate the bias from the training data; it is based on Expectation-Maximization, a powerful statistical method for retrieving a latent variable from related observable variables, even when the latter are also influenced by other factors.

Discussion


18:00 Closing




Organizers

Mitsuhiro Okada, Koji Mineshima, and Hirohiko Abe

Contact

logic@abelard.flet.keio.ac.jp