Announcement: France-Japan Open Zoom Seminar on "Disagreement in Logic and Reasoning"


Wednesday, May 10, 2023, 20:00-22:00 Japan time (13:00-15:00 in France)
Zoom: Preregistration required; please register in advance via the form linked below.
This seminar series on "Disagreement in Logic and Reasoning" is jointly organized by a French group from the Université Paris 1 Panthéon-Sorbonne (the Institute for the History and Philosophy of Science and Technology, I.H.P.S.T., and the Philosophy Department), led by Prof. Pierre Wagner, director of I.H.P.S.T., and a Japanese group of organizers from Keio University, Tokyo Metropolitan University, Kyoto University, and others. Each seminar features one speaker from the French group and one from the Japanese group, with talks and discussion from both sides. This session will be held as a remote Zoom seminar.

Open Seminar Website: https://abelard.flet.keio.ac.jp/seminar/A_France_Japan_Workshop_Disagreement_in_Logic_and_ReasoningSecond/

Preregistration Form: https://forms.gle/6LBR9QVH7hdZKeZe6


PROGRAM
Two talks
Emmanuel Picavet (Université Paris 1 Panthéon-Sorbonne, ISJPS)
"On accepted proposals and the design of compromises"
Abstract
This paper examines some of the institutional reasons why the relation to norms of truth or validity remains important in institutional contexts that lead to the acceptance of propositions to which authentic belief is not always attached and which play a role in the formation of compromises. The acceptance of propositions is not systematically linked to a cognitive architecture, especially when the motives are utilitarian, instrumental, or diplomatic. Nevertheless, the relationship to truth or validity is far from irrelevant in these situations, owing to the dynamic role of argument and expertise in the evolution of arrangements.

Tomoumi Nishimura (Kyushu University)
"Distinguishing self / supervisory explainability of AI"
Abstract
In recent discussions of AI ethics, one of the most important topics is the "explainability" of AI. In this talk, I discuss a distinction based on the subject that generates the explanation, and show that this distinction bears on the question of what an objection is. That is, when addressing the issue of explainability, we can consider two problems: (1) whether the AI itself can generate explanations, and (2) whether a supervisor of the AI, such as a developer or manager, can generate explanations. To determine which problem is more relevant, it is necessary to consider what an objection is and what it means to have accountability.
Japanese Organizing Committee:
  • Ryosuke Igarashi (Kyoto University)
  • Onyu Mikami (Tokyo Metropolitan University)
  • Koji Mineshima (Keio University)
  • Mitsuhiro Okada (Keio University)
  • Kengo Okamoto (Tokyo Metropolitan University)
Japanese contact: Mitsuhiro Okada and Koji Mineshima
logic(AT)abelard.flet.keio.ac.jp