Japan Association for Philosophy of Science (JAPS) - DLMPST/IUHPST Joint Symposium

Fairness, Integrity and Transparency of Formal Systems: Challenges for a Society Increasingly Dominated by Technology
June 19, 2021 (free advance registration now open)



Additional Supplementary Materials in Japanese

(The symposium will be held in English. This document and the links below collect the Japanese-language materials and the English texts available for advance distribution.)

URL : http://abelard.flet.keio.ac.jp/seminar/symposium-202106-supplement

Contents of this page:
  • Overview and URL information, including the page for free advance registration
  • Selected advance materials (Japanese talk manuscripts and abstracts by the Japan-based speakers, handouts by the overseas speakers, and slides by the Japan-based speakers)
  • The program and the abstracts of the first five talks
Overview
New digital environments enrich our society, our personal lives, and academic research. At the same time, the importance of issues concerning the fairness, integrity, transparency, and explainability of algorithmic systems in these environments has been widely noted. This symposium was planned jointly by the Japan Association for Philosophy of Science and the DLMPST/IUHPST as an opportunity to examine these issues from interdisciplinary perspectives. Discussion will be based on invited talks by researchers in philosophy of science and technology, political philosophy, logic, and information and computer science. We will examine how to frame and evaluate the issues raised by algorithms, including machine learning, in new digital environments, how these issues relate to society, and the future research agenda bridging the humanities and the sciences in this interdisciplinary field.

The first session consists of three invited talks; the second session consists of two invited discussant talks followed by responses and additional comments from the three invited speakers; before the third session, comments and questions from the audience will be collected through the Q&A function of the Zoom Webinar chat; and the third session is a free discussion session opening with comments from two invited questioners. A Zoom Meeting Room will also be available for informal discussion after the symposium closes.


Non-members of the association may also register in advance free of charge. (Members of the Philosophy of Science Society, Japan, and the History of Science Society of Japan have been sent separate participation information; please see the respective society pages.)

For the program and details of free advance registration, please see the following information page.
Symposium “Fairness, Integrity and Transparency of Formal Systems: Challenges for a Society Increasingly Dominated by Technology” (keio.ac.jp)

The Japanese poster on the JAPS top page includes registration instructions for non-members. You can also register in advance from the DLMPST side by following the announcement of this symposium under Activities > 2021 on the DLMPST page.
Selected Advance Materials
The following materials are currently available for advance viewing and download.
  • The abstracts of the five speakers, including the Japanese version of the abstract by Prof. Takayuki Suzuki (one of the Japan-based speakers), are available [here].
  • The Japanese version of the talk manuscript by Prof. Minao Kukita (one of the Japan-based speakers) is available [here].
  • The handout (following the content of the talk) by Prof. Seth Lazar (Australian National University) is available [here].
  • The talk slides by Prof. Minao Kukita are available [here].
  • The talk slides by Prof. Takayuki Suzuki are available [here].
This page and the supplementary URL above will continue to be updated.


Japan Association for Philosophy of Science (JAPS) - DLMPST/IUHPST Joint Symposium
Mitsuhiro Okada, Co-organizer and Discussion Coordinator

Symposium Program
JST 14:30 (CEST 7:30, BST 6:30, AEST 15:30)
Opening Greetings / Remarks
  • Kengo Okamoto (President of JAPS)
  • Benedikt Löwe (Secretary General of DLMPST and Co-organizer)
  • Mitsuhiro Okada (Discussion Coordinator and Co-organizer)
JST 14:40 (CEST 7:40, BST 6:40, AEST 15:40)
Invited Talks
  • Seth Lazar (Australian National University)
  • Gilles Dowek (INRIA and ENS Paris-Saclay)
  • Edith Elkind (Oxford University)
JST 16:00 (CEST 9:00, BST 8:00, AEST 17:00)
Invited Discussant Talks
  • Minao Kukita (Nagoya University)
  • Takayuki Suzuki (The University of Tokyo)
Responses / Comments from the Three Invited Speakers

(We will receive questions and comments from the audience through the Q&A function of the Zoom Webinar chat just before the Discussion Session.)
JST 17:00 (CEST 10:00, BST 9:00, AEST 18:00)
Discussion Session
Comments by Invited Questioners
  • Yuko Murakami (Rikkyo University)
  • Mikiko Yokoyama (University of Tsukuba)
Discussion & Comments
JST 18:15 (CEST 11:15, BST 10:15, AEST 19:15)
Closing the Symposium
(Before closing, we will announce the Zoom Meeting Room URL/ID for those who wish to continue the discussion.)
JST 18:20 (CEST 11:20, BST 10:20, AEST 19:20)
Informal Discussion Room (free informal discussion)
The Informal Discussion Room closes by JST 19:00 (CEST 12:00, BST 11:00, AEST 20:00)

Outline of the Symposium Program
In this symposium, we discuss issues of fairness, integrity, and transparency, including the explainability of AI and algorithms, in the digital environments of our society. We take interdisciplinary viewpoints, with special emphasis on philosophy of science and technology, political philosophy, logic, computer science, and computation theory.
In the first session, we have first-round talks by the three invited speakers. In the second session, we have two talks by the invited discussants, followed by responses and comments from the three speakers. During the break, we then receive questions and comments from the audience through the Q&A function of the Zoom Webinar chat. The third session, the Discussion Session, starts with the invited questioners' comments.
After the Discussion Session, we close the official schedule of the symposium and invite those who would like to continue the discussion for a while to an informal discussion room.

ABSTRACTS and Supplementary Materials

Supplementary Materials in English (Handouts/Slides of some speakers) HERE
Supplementary Materials in Japanese (Japanese versions of the talk manuscript and abstracts) HERE

Abstracts of the (first-round) talks…

Invited Talk 1

“Legitimacy, Authority, and the Political Value of Explanations”
Seth Lazar (Australian National University)
As rapid advances in Artificial Intelligence and the rise of some of history's most potent corporations meet the diminished neoliberal state, we have become increasingly subject to power exercised by means of automated systems. Machine learning, big data, and related computational technologies now underpin vital government services from criminal justice to tax auditing, from public health to social services, from immigration to defence. Two-sided markets connecting consumers and producers are shaped by algorithms proprietary to companies such as Google and Amazon. Google's search algorithm determines, for many of us, how we find out about everything from how to vote to where to get vaccinated; Facebook, Twitter and Google decide which of our fellow citizens' speech we get to see—both what gets taken down, and (more importantly) what gets promoted. We sometimes imagine AI as a far off goal—either the handmaiden to a new post-scarcity world, or else humanity's apocalyptic 'final invention'. But we are already using AI to shape an increasing proportion of our online and offline lives. As the pandemic economic shock ramifies, and the role of technology in our lives grows exponentially, this will only intensify.

We are increasingly subject to Automatic Authorities—automated computational systems that are used to exercise power over us, by substantially determining what we may know, what we may have, and what our options will be. These computational systems promise radical efficiencies and new abilities. But, as is now widely recognised, they also pose new risks. In this paper I focus on one in particular: that the adoption of Automatic Authorities leads us to base increasingly important decisions on systems whose operations cannot be adequately explained to democratic citizens.

Philosophers have long debated the importance of justifications in morality and politics, but they have not done the same for explanations. What's more, the most prominent Automatic Authorities in our lives today are deployed by non-state actors like Google and Facebook, and analytical political philosophy has focused much more on state than non-state power. To make progress on one of the most pressing questions of the age of Automatic Authorities, therefore, we must make substantial first-order progress, on two fronts, in moral and political philosophy.

That is my goal in this paper. My central claim: only if the powerful can adequately explain their decisions to those on whose behalf or by whose licence they act can they exercise power legitimately and with proper authority, and so overcome presumptive objections to their exercise of power grounded in individual freedom, social equality, and collective self-determination. This applies to all authorities, not only automatic ones. But I will demonstrate its application to Automatic Authorities, including those sustained by non-state actors. I will use this account of why explanations matter to address the urgent regulatory questions of to whom explanations are owed, and what kinds of explanations are owed to them.

Invited Talk 2

“Explanation: from ethics to logic”
Gilles Dowek (INRIA and ENS Paris-Saclay)
Explaining decisions is an ethical necessity. For example, neither a person nor a piece of software ought to reject a bank loan application without providing an explanation for the rejection. But defining the notion of explanation is a challenge. Starting from concrete examples, we attempt to understand what such a definition could look like.
A usual definition assumes that what is explained is a statement and what explains it is a logical proof of this statement. For example, a proof in elementary geometry both shows that the statement is true and explains why it is true. But this definition is not sufficient, as some logical proofs are seen as more explanatory than others.

In this talk, we give two successive definitions of the notion of explanation. The first keeps the idea that an explanation is a logical proof, but the explanatory character of a proof relies on the fact that it contains a cut: the proof of a general statement followed by a specialization to a particular one. So, the degree to which a proof is explanatory is measured by the degree of generalization it allows. The second definition uses the algorithmic interpretation of proofs to generalize this definition. An explanation is then a pair formed of a short, fast, and wide algorithm and an input value for this algorithm.
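
As a minimal illustration of the first definition (an editorial example, not drawn from the talk itself), consider two ways of proving the particular statement 7 + 5 = 5 + 7 in a proof assistant: a direct computation, and a proof that passes through a cut on the general commutativity lemma. On the view sketched above, the second proof is the more explanatory one, since the lemma it cuts on covers every instance at once.

    -- Illustrative sketch only (Lean 4), not taken from the talk.
    -- Direct, cut-free style: the particular instance is checked by computation.
    example : 7 + 5 = 5 + 7 := rfl

    -- "Explanatory" style: prove the general statement, then specialize it.
    -- The cut on the general lemma is what carries the explanation.
    theorem add_comm' (x y : Nat) : x + y = y + x := Nat.add_comm x y

    example : 7 + 5 = 5 + 7 := add_comm' 7 5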

Invited Talk 3

“Justified Representation in Approval-Based Committee Voting”
Edith Elkind (Oxford University)
Computational social choice is a rapidly growing research field that studies algorithmic aspects of collective decision-making and preference aggregation. It studies questions such as: can we quickly compute the outcome of a given voting rule? For a given voting rule, can a strategic voter efficiently compute her optimal strategy? Is there a voting rule that satisfies a particular set of desirable properties (axioms) and admits an efficient winner determination algorithm?

In this talk, we consider these questions in the context of multiwinner voting rules with approval ballots. We formulate a fairness axiom, which we call Justified Representation, as well as a strengthening of this axiom. We identify voting rules that satisfy the JR axiom and investigate their algorithmic complexity. Finally, we propose a tractable rule that satisfies the strong version of the axiom.
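
As a rough illustrative sketch (an editorial addition, not part of the abstract), the basic Justified Representation condition studied in this line of work can be checked directly: a size-k committee violates JR if at least n/k of the n voters, none of whom has an approved candidate in the committee, all approve some common candidate. The Python code below assumes approval ballots given as sets; the function name is ours.

    def satisfies_jr(approvals, committee, k):
        """Check the basic Justified Representation (JR) axiom.

        approvals: list of sets, one set of approved candidates per voter.
        committee: set of elected candidates, assumed to have size k.
        Returns True iff no group of at least n/k voters that is left entirely
        unrepresented (no approved candidate in the committee) approves a
        common candidate.
        """
        n = len(approvals)
        candidates = set().union(*approvals)
        # Voters with no approved candidate in the committee are unrepresented.
        unrepresented = [ballot for ballot in approvals if not (ballot & committee)]
        # JR fails if some candidate is approved by at least n/k unrepresented voters.
        for c in candidates:
            if sum(1 for ballot in unrepresented if c in ballot) * k >= n:
                return False
        return True

    # Example: 4 voters, committee size k = 2, so any cohesive group of
    # n/k = 2 voters deserves at least one approved committee member.
    approvals = [{"a"}, {"a"}, {"b"}, {"c"}]
    print(satisfies_jr(approvals, {"a", "b"}, 2))  # True
    print(satisfies_jr(approvals, {"b", "c"}, 2))  # False: the two voters who approve only "a" are unrepresented
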
Two Invited Discussants' Talks

Invited Discussant’s Talk 1

“What are explanations worth in science?”
Minao Kukita (Nagoya University)
Steven Weinberg argues that the exemplar of physics is "a simple set of mathematical principles that govern a wide range of phenomena with precision." As can be seen here, "generality," "precision," and "simplicity" are highly valued in science. Why, then, are they important in science? In this talk, I will discuss the value of generality and simplicity in particular, from the perspective of the importance of communicating information in science. We will thus focus on the value of explanations in science as a way of reducing the cognitive cost of sharing and applying information.

I will argue that the increasing use of artificial intelligence based on big data in the practice of science means that information will no longer be shared and applied in the traditional way. Scientific explanations will then become irrelevant as a means of saving costs in information communication. However, scientific explanations also have aesthetic value beyond being a mere means of saving cognitive costs. How the perception of such aesthetic value will change in the future is an important topic, and I would like to call on people working in the fields of philosophy of science, sociology of science, and STS to explore this topic.

Invited Discussant’s Talk 2

"Transparency in AI: Identifying the Real Issue"
Takayuki Suzuki (The University of Tokyo)
It is often argued that AI lacks transparency and that this poses a potential risk to our society. It is not clear, however, in what sense AI lacks transparency. Many classical algorithms are transparent. Even for deep neural networks, which are typically seen as opaque, we can have some explanation of how they work. Moreover, in areas other than AI, we often don't care much about transparency.

It is true, however, that in some cases how AI works is inscrutable. This is why we sometimes find unexpected responses and biases in AI, especially in deep neural networks. So it seems that there really is a problem of transparency that is unique to AI. We need more work to identify what it is.

It should also be noted that the human mind, as well as AI, is often opaque and biased. It will be more productive to try to overcome our weaknesses with AI than to try to replicate our intelligence with AI, because the weaknesses and strengths of the human mind and those of AI complement each other.