Kathryn Francis, Measuring Morality with Immersive Technology
Institute of State & Law, 7th floor, Národní 18, Prague
Dr. Kathryn Francis is Lecturer in Psychology at the University of Bradford, UK. Prior to this, she was a Postdoctoral Research Fellow in Psychology and Philosophy at the University of Reading, UK. Her interests lie at the intersection of psychology and philosophy, predominantly in moral cognition. She adopts interdisciplinary approaches to the investigation of cognitive and social phenomena.
Moral decision-making has long been studied using text-based vignettes adopted from philosophy. While these allow systematic comparisons of moral principles, they can lack contextual information. To address this, we have created Virtual Reality (VR) and Haptic VR simulations of moral dilemmas to assess moral decision-making. How we measure morality has implications for a number of fields in moral psychology, moral philosophy, and beyond. We will review the interplay between technologies and moral decision-making and consider the implications of this emerging field of research.
Technology Ethics in Central Europe: A New Hope in Prague
Patrick Lin interviews David Cerny and Tomas Hribek for Forbes, Sep 9, 2019
In the business world, money might be the lingua franca to connect us all, but there’s really no equivalent in academic and policy circles. Language barriers often mean different rates of concern for emerging issues, such as in technology ethics. And this means different levels of readiness in adapting to new innovations and the social disruptions they can bring.
Consider the ethics of autonomous driving, for instance. Besides possible cultural differences in values, the needs of European cities may diverge from those of US cities, since the former existed for hundreds (and sometimes thousands) of years before highways ever did, whereas the latter were mostly built around and co-evolved with highways. Self-driving cars would likely impact different cities in different ways.
That’s why it’s critical to bring such conversations closer to the affected populations. Enter a new center in the heart of Central Europe that’s helping to bring the old geography into the modern world. Housed in the Czech Academy of Sciences, the Karel Čapek Center for Values in Science and Technology (CEVAST) was formed last year and has immediately begun convening important meetings on artificial intelligence and robot ethics.
Fun facts: The center is named after the Czech novelist and playwright Karel Čapek, who gave the word “robot” to the world in 1920; in Czech, “robota” means drudgery or hard, forced labor, that is, work we humans don’t want to do. The center’s home city of Prague is more than a thousand years old and was twice the capital of the Holy Roman Empire.
Founded by philosophers David Cerny and Tomas Hribek, along with computer scientist Jiri Wiedermann, CEVAST hosted a workshop on the ethics of self-driving cars earlier this summer, as the Czech Republic and other regional states look harder at adopting the technology. In this interview, they share some key insights from the expert meeting, which helped push the conversation in new directions:
~ ~ ~
Q: Hi David and Tomas, thanks for inviting me to your workshop and to be an external member of your center. Let’s start here: why Prague for a workshop on robot cars?
Tomas Hribek: Well, Bohemia, or the Czech Lands if you will, used to be the industrial base of the old Austrian Empire in the latter half of the 19th century, and this tradition carried over to Czechoslovakia, established in 1918. There were many talented entrepreneurs and technology innovators, including a strong automotive industry with brands such as Škoda, which has more recently become a successful component of the Volkswagen Group. Nowadays there is also a strong local research interest in artificial intelligence, so the topic of autonomous driving has a lot of natural appeal for us.
Q: And I noticed that policymakers and other government representatives were at the workshop, too. What were some of the highlights of the meeting?
David Cerny: The workshop was called “Autonomous Vehicle Ethics: Beyond the Trolley Problem.” We had realized that the primary focus of past discussions was the familiar “trolley problem”, but that framing already felt rather limiting, and we wanted to push past it.
Q: Can you remind the readers about the trolley problem, and specifically how it applies to autonomous driving?
David Cerny: The trolley problem refers to a class of thought-experiments in ethics, the original form of which goes roughly as follows: You see a runaway trolley hurtling down a track on which there is a group of five people, all of whom will die if the trolley runs them over. However, you can pull a lever that would redirect the trolley onto a side-track on which there is a single person, who will of course also die if run over. So you have two options: either do nothing, let the trolley continue down the main track, and let five people die; or pull the lever, redirecting the trolley to the side-track where it will kill one person.
There are of course countless ways to interpret the trolley problem; originally it was meant to point out the inadequacy of utilitarian ethics. However, this model has recently been applied with new urgency to discussions about how to program autonomous vehicles. It’s possible that ethical dilemmas similar to the trolley problem could happen in the real world—that is, choosing between two evils, such as weighing the value of a car’s passenger versus a nearby pedestrian, or in any number of related scenarios. Hence it seemed important to figure out how autonomous vehicles should navigate such situations.
Q: Critics often point out that the trolley situations are so rare or implausible that developers of autonomous vehicles don’t need to be concerned or distracted by them. And I’ve said that this misses the point; even ordinary decisions about how much room to give another car, or a bicyclist, or a child playing on the street can be an ethical dilemma—maybe not as dramatic as the original trolley problem, but definitely more realistic and commonplace. Is that what you meant by wanting to go “beyond” the trolley problem?
Tomas Hribek: Yes, that’s part of it. Even if it is true that the trolley problem, with its dramatic choice of five human lives over one, is exceptional in real life, autonomous vehicles will be involved in moral choices practically all the time, such as in navigating through traffic and making risk decisions on behalf of nearby road users, as you just suggested.
But we also want to go beyond crash and navigation dilemmas. So we invited experts to help explore other moral, and even political, dimensions of autonomous driving. The thing is, ethicists must consider not just what happens on the road, narrowly speaking, but all kinds of other ways in which autonomous vehicles will impact human health, preferences, and opportunities. Indeed, autonomous driving is likely to have a political, not only ethical dimension—meaning that it will affect the design of our institutions and communities, not merely interpersonal relations.
Q: Could you give a specific example? This could be new territory for many ethicists and policymakers in the space.
Tomas Hribek: Well, autonomous driving will most likely involve the elimination of lots of jobs for human drivers. On the other hand, autonomous driving might improve the mobility of many people, and thus also their chances to access other kinds of jobs. Other issues concerning autonomous vehicles that transcend the trolley problem emerge at the borderline between ethics and sociology: there is a whole collection of issues connected with building new infrastructure, and possibly changing urban design, to accommodate the needs of autonomous vehicles.
Q: You mean, the needs of vehicles, instead of people’s needs?
Tomas Hribek: Right, that’s a good point. One of the participants at the workshop, the Italian legal scholar Ugo Pagallo, argued that autonomous vehicles could help solve the problem that cars created for traditional European cities, which, unlike cities in the United States, were just not built for cars. Some innovative European cities, in particular Amsterdam, are already exploring the possibility of allowing access only to public-service autonomous vehicles, thus solving the problem of traffic congestion created by traditional cars. This is probably less of a priority in American cities built around highways.
Q: Are there other differences between the North American and European approaches to the ethics of autonomous driving?
Tomas Hribek: At the workshop, we had a number of distinguished participants from several European countries. Christoph Lütge from the Technical University of Munich discussed the recent code of ethics for autonomous cars that was proposed in Germany. These guidelines are very much opposed to the utilitarian solutions of trolley problems that are perhaps more popular in the US, or in the Anglophone world in general. However, Lütge made it clear that the German approach is motivated not just by a different philosophical tradition, but also by practical concerns, such as the inability of today’s autonomous cars to distinguish among the different people involved in a crash scenario, for instance the ages of possible victims.
Also remarkable was the contribution of Giuseppe Contissa, a noted Italian specialist in computer law from the University of Bologna, who presented the idea of the so-called ethical knob. This is a proposal that autonomous cars could offer their passengers a spectrum of possibilities, within certain legally defined limits, of making their particular vehicle more or less egoistic in situations of collision. This might help make autonomous cars more acceptable to customers. Of course, the proposal faces a number of criticisms, including those you raised in a previous article.
David Cerny: I should like to mention another outstanding contribution from the workshop. It does not directly concern the US-versus-Europe contrast, because the author is from Israel, but it still presents an alternative to the approaches prevalent in the US:
Saul Smilansky, a philosopher from the University of Haifa, suggested that the case of autonomous driving is a great testing ground for his ambitious idea of normative pluralism. Smilansky notes that much of the work on the ethics of autonomous driving, especially in the US, is driven by a monistic idea that one ethical theory must be right—either utilitarianism, or contractualism, or what have you. For instance, one of the distinguished American participants, Derek Leben from the University of Pittsburgh at Johnstown, defended contractualism as a universal ethic, and feared that Smilansky’s pluralism would lead to nihilism. But Smilansky thinks we can avoid nihilism because some ethical choices are just plain wrong, while many other choices are reasonable; it’s not a case of “anything goes.” This framing might support many different ways of programming autonomous vehicles. One way could prioritize safety, but another way could give higher priority to speed, and so on.
Tomas Hribek: There were other intriguing ideas, for instance, from the Canadian scholar Jason Millar from the University of Ottawa. He made use of the framework of so-called basic rights, originally worked out by the great American political philosopher John Rawls. Millar derives a non-basic right to transportation from basic rights such as the rights to freedom, property, and free assembly.
And there were excellent contributions by other speakers as well. What may be unusual about our meeting, compared to a typical philosophy workshop, is that we had speakers from industry, namely Barbara Wege of Audi and Guillaume de la Roche of Renault Group, which is important for keeping the ethical discussion grounded in reality.
Q: Thanks for those highlights and for pointing readers to experts, if they want to learn more. Your workshop was, as far as I know, the most ambitious event thus far organized by your new center, is that right? Tell me more about that institution.
David Cerny: Well, at least at the moment, it’s more of a shared platform than a self-standing institution. We have observed the trends abroad, like the work of your own Ethics + Emerging Sciences Group at Cal Poly, and realized a need to do something similar in Central Europe. There are now similar initiatives in the vicinity, though, such as in Munich, Germany; Vienna, Austria; and Budapest, Hungary. But we believe a comparative advantage of the Karel Čapek Center is that it is a truly interdisciplinary platform, because it unites researchers from several institutes of the Czech Academy of Sciences: philosophers, computer scientists, legal theorists, and more.
Q: As a last question, do you have any future plans you can share?
David Cerny: We definitely plan to continue the work on autonomous vehicles, as we have only begun exploring various aspects of this topic. However, we are also currently conducting research on sexbots, and another project on the desirability of the human form in robotics. As for conferences and such, we have just started to prepare a major event for the summer of 2020 here in Prague: the next conference of the International Association for Computing and Philosophy (IACAP).
~ ~ ~
David Cerny is a research fellow at the Institute of State & Law and the Institute of Computer Science, both under the Academy of Sciences of the Czech Republic. He works primarily on topics in applied ethics. His forthcoming book on the principle of double effect will be published by Routledge.
Tomas Hribek is a research fellow at the Institute of Philosophy of the Academy of Sciences of the Czech Republic. He has done work mainly on issues in philosophy of mind, more recently also on bioethics.