PETAL Conference

ARTIFICIAL INTELLIGENCE & ETHICS

This conference will consist of panel discussions on four topics: safety, justice, morality, and privacy. Each panel discussion will: (i) begin with a 15-minute opening statement from each expert, (ii) continue with a period in which the experts question one another, and (iii) end with a moderated Q&A with the audience.

AI & SAFETY

November 5 @ 9-10:30 am (China)
November 4 @ 8-9:30 pm (North Carolina, U.S.)
Moderator: Daniel Lim (Duke Kunshan)

Zoom & AB3107; snacks will be provided


Zack Cooper
Research Fellow
American Enterprise Institute

I will discuss how artificial intelligence is changing the nature of warfare and the risks this poses both for ethics and for conflict escalation.


Dorsa Sadigh
Assistant Professor of Computer Science
Stanford University

I will be talking about a human-centered approach to challenges in AI safety. Specifically: how can we anticipate human behavior when people interact with AI agents, and what challenges arise when our human-modeling assumptions fail, i.e., when humans don’t act optimally, or when planning for robots in challenging scenarios at the far end of the risk spectrum?


Lan Xue (薛澜)
Chair, National Expert Committee on AI Governance
Professor of Public Policy
Tsinghua University

This presentation will outline the current development of AI in China, discuss governance concerns, including safety issues, and propose ways to address these concerns going forward.

AI & JUSTICE

November 5 @ 9-10:30 pm (China)
November 5 @ 8-9:30 am (North Carolina, U.S.)
Moderator: Vincent Conitzer (Duke)


Timnit Gebru
Co-Leader, Ethical AI Team
Google Research

Computer vision has ceased to be a purely academic endeavor. From law enforcement to border control, employment, healthcare diagnostics, and the assignment of trust scores, computer vision systems are being rapidly integrated into all aspects of society. A critical public discourse surrounding the use of computer-vision-based technologies has been mounting. In this talk, I will highlight some of these issues and the solutions proposed to mitigate bias, as well as how some of those fixes could exacerbate the problem rather than alleviate it.


Rui Guo (郭锐)
Associate Professor of Law
Renmin University of China

Contemporary AI technologies are already deeply “involved” in human decision-making, either because of the nature of the technological application itself or because of the specific roles that society assigns to it as it is deployed. China has initiated an unprecedented project to apply AI in the judicial process. What are the potential ethical risks involved? Are there ways to manage these risks?


Hoda Heidari
Assistant Professor of Machine Learning
Carnegie Mellon University

Numerous studies and articles have raised concerns about the use of Machine Learning (ML) in automating or informing consequential decisions for people. In response, the ML community has proposed various mathematical formulations of “fairness” and algorithmic mechanisms to enforce those definitions throughout the ML pipeline. I will talk about the specific moral assumptions underlying some of the existing mathematical formulations of fairness by interpreting them through the lens of equality of opportunity. I will conclude by noting several critical limitations of any computationally feasible formulation of fairness.


Crystal Yang
Professor of Law
Harvard University

There has been a dramatic increase in the use of predictive algorithms in recent years. Predictive algorithms typically use individual characteristics to predict future outcomes, guiding important decisions in nearly every facet of life. The increasing use of these algorithms has contributed to an active debate on whether algorithms intentionally or unintentionally discriminate against certain groups, in particular racial minorities and other protected classes. In this talk, I will discuss concerns over how bias can be “baked in” to the algorithm, what it means for an algorithm to be fair, and potential solutions to address these concerns.

AI & MORALITY

November 6 @ 9-10:30 am (China)
November 5 @ 8-9:30 pm (North Carolina, U.S.)
Moderator: Walter Sinnott-Armstrong (Duke)

Zoom & AB3107; snacks will be provided


Bertram Malle
Professor of Psychology
Brown University

Robots operating in social roles and contexts must be moral. I will explain what such robot morality would consist of; report research on how people evaluate near-future moral robots; and review some advances in building actual moral robots.


Wendell Wallach
Ethicist, Interdisciplinary Center for Bioethics, Yale University
Senior Advisor, Hastings Center

The study of AI & Morality has been evolving on two very different fronts. On the one hand, there is the prospect of developing moral machines — the implementation of sensitivity to moral considerations and the ability to factor these into decision-making by artificial systems. On the other hand, there is the development of broad principles for the safe and ethical deployment of AI systems, and how these might be enforced through hard and soft governance mechanisms. This talk will outline progress to date on both fronts and make proposals as to how they may further evolve over the next five to ten years.


Yi Zeng (曾毅)
Professor and Deputy Director
Research Center for Brain-Inspired Intelligence
Chinese Academy of Sciences

In this talk, I will start with the relationship between intelligence and the Self. I will introduce recent progress on self-recognition and cognitive empathy in brain-inspired AI. I will then discuss why a certain level of self-consciousness is the foundation and starting point for achieving moral AI, and provide concrete technical examples. Finally, I will summarize grand challenges from both technical and social perspectives.

AI & PRIVACY

November 6 @ 9-10:30 pm (China)
November 6 @ 8-9:30 am (North Carolina, U.S.)
Moderator: Jana Schaich Borg (Duke)


Shouling Ji (纪守领)
Professor of Computer Science
Director, Network System Security & Privacy Research Lab
Zhejiang University

AI is being applied to more and more computing systems and applications. At the same time, AI systems face ever-increasing security and privacy threats, and building a fair AI system remains a challenging task. In this talk, based on our recent research, I will introduce several security, privacy, and fairness issues in AI and discuss potential countermeasures toward secure, privacy-preserving, and fair AI.


Ashwin Machanavajjhala
Associate Professor of Computer Science
Duke University

There is growing demand to analyze troves of information collected from individuals and extract detailed insights using statistical and ML tools. However, we are also seeing an increase in sophisticated attacks that can uncover individuals’ private information from seemingly innocuous statistical insights. This talk will briefly describe the attack landscape, highlight the fundamental tradeoff between privacy and statistical utility, and introduce differential privacy, a breakthrough technology that can enable privacy in AI and ML workflows.


Kevin Macnish
Assistant Professor of Ethics and Information Technology
University of Twente

From deepfakes to facial recognition, social stigmatization, and predictive intelligence, this talk will consider some of the ethically problematic ways in which AI can challenge privacy. We will briefly look at what is meant by privacy and why it is valuable to us as individuals and as a society, before turning to the ethical challenges currently facing the AI community.

Program Committee

Daniel Lim, Associate Professor of Philosophy, Duke Kunshan University
Walter Sinnott-Armstrong, Chauncey Stillman Professor of Practical Ethics, Duke University
Xuan Zhou, Professor of Data Science and Engineering, East China Normal University