Attending the 2019 Artificial Intelligence, Ethics and Society Conference

by Runya Liu

As a member of the Planetary Ethics and Artificial Intelligence Laboratory (PETAL), I had the privilege of attending the Artificial Intelligence, Ethics and Society conference with Duke Kunshan University Professor Daniel Lim in Hawai’i from January 27-28. The invited talks and presentations were wonderful opportunities to learn how people from different fields see the ethical issues surrounding AI. Scholars of law, engineering, computer science, philosophy, and other disciplines were all there to discuss the future development of AI, gathered because they wanted to contribute to problems the whole human race is going to face. The lively discussions sparked by questions during the talks and presentations created a sincere academic atmosphere.

Two invited talks left a deep impression on me. The first was given by Professor Ryan Calo of the University of Washington School of Law. He pointed out that appropriate regulation of AI involves three steps: first, forming ethical judgments about particular issues; second, making laws to regulate them; and third, revising those laws to fit the application of AI technology in society. Developers at many big AI companies do have an ethical conception of the technologies they deploy, but society as a whole does not yet govern those technologies adequately. He argued that ethics alone are not enough; laws are also necessary.

However, there’s an interesting dilemma here. On the one hand, the reason we haven’t come up with laws is that we don’t know enough about AI. On the other hand, if we do make laws, they may alter the trajectory of AI’s development. The core problem is that we have to regulate something we don’t yet understand well.

I find this interesting because there’s an analogy between regulating AI development and the Anthropocene epoch. At a recent DKU colloquium, Professor David Grinspoon argued that humans are at a crucial transition on which our future survival depends. He calls humans who successfully make this transition “Terra Sapiens.” Yet it’s difficult for us to imagine what “Terra Sapiens” would be like, even as it’s important for us to change. We are therefore changing toward something we aren’t certain about. This dilemma isn’t confined to AI; it applies to all kinds of deeply human ethical issues, which call on us to solve problems and change the world without having all the information at hand.

Another inspiring talk was given by Professor David Danks. The main question he raised was “Is AI trustworthy?” The theme of his talk was “trust,” a word interpreted so differently across fields such as social science, social psychology, and philosophy that it’s hard to define. At the same time, trust is extremely important when it comes to AI. Explainability, intelligibility, reliability, and transparency are of value only insofar as they generate and maintain appropriate trust.

I find the trust issue in its psychological context particularly interesting. There are two kinds of trust: one based on behavioral reliability, and the other on understanding values and beliefs. I later read a paper Danks co-wrote with Heather M. Roff, “Trust but Verify: The Difficulty of Trusting Autonomous Weapon Systems,” and found that, in their view, trust should not be a mere yes-or-no matter: depending on the level of autonomy, different levels of trust should be extended to AI weapons. They draw out an interesting tension between technological sophistication and the capacity to be trusted. The better a weapon’s learning and planning abilities, the harder its underlying algorithms are for humans to understand; humans then cannot know the values and beliefs behind its behavior, which makes it difficult to develop deep trust in autonomous weapons. Yet if AI systems lacked sophisticated capabilities, they wouldn’t be developed in the first place. This tension of trust lies in the very nature of AI.

During the conference, I met experts from many different areas who were giving presentations about their own research, and I was able to talk with some of them about their ideas. This conference opened up a new world for me. I realized that the only way to solve the future issues related to AI is to become truly interdisciplinary, combining technical skills with ethical thinking. I find myself drawn to these deep philosophical issues and will dig deeper for answers.