PROGRAM
Two talks
Emmanuel Picavet (Université Paris 1
Panthéon-Sorbonne, ISJPS)
"On accepted proposals and the design of
compromises"
Abstract
This paper examines some of the institutional reasons why the relation to norms of truth or validity remains important in institutional contexts that lead to the acceptance of propositions to which authentic belief is not always attached, and that play a role in the formation of compromises. The acceptance of propositions is not systematically linked to a cognitive architecture, especially when the motives are utilitarian, instrumental, or diplomatic. However, the relationship to truth or validity is by no means irrelevant in these situations, owing to the dynamic role of argument and expertise in the evolution of such arrangements.
Tomoumi Nishimura (Kyushu University)
"Distinguishing self / supervisory explainability of
AI"
Abstract
In recent discussions of AI ethics, one of the most important topics is the "explainability" of AI. In this talk, I discuss a distinction based on the subject that generates the explanation, and show that this distinction bears on the question of what an objection is. That is, when addressing the issue of explainability, we can consider two problems: (1) whether the AI itself can generate explanations, and (2) whether a supervisor of the AI, such as a developer or manager, can generate explanations. To determine which problem is more relevant, it is necessary to consider what an objection is and what it means to have accountability.