
Autonomous Vehicle Ethics: Beyond the Trolley Problem

Academy of Sciences, Prague, Czech Republic, June 27-28, 2019

Villa Lanna

Most of the work in the ethics of autonomous vehicles has so far concentrated on the variety of trolley problems that arise in this area. However, it is time to move beyond these issues. The upcoming conference will explore, among other topics, the following issues:

  • how the algorithms in autonomous vehicles might embed ethical assumptions in cases other than crash scenarios;
  • how autonomous vehicles might interact with human drivers in mixed or “hybrid” traffic environments;
  • how autonomous vehicles might reshape our cities and landscapes;
  • what unique security or privacy concerns are raised by autonomous vehicles;
  • whether long periods spent in autonomous vehicles might lead to new kinds of alienation or loneliness;
  • how the benefits and burdens of this new technology will be distributed throughout society.


Scientific committee members:

  • Prof. Jiří Wiedermann, Institute of Computer Science, the Czech Academy of Sciences, Prague
  • Dr. Ján Matejka, Institute of State and Law, the Czech Academy of Sciences, Prague
  • Prof. Patrick Lin, the Ethics + Emerging Sciences Group, Cal Poly, San Luis Obispo, California
  • Prof. Colin Allen, University of Pittsburgh, Pennsylvania, USA


Program committee members:

  • Dr. David Černý, Institute of State and Law, the Czech Academy of Sciences, Prague
  • Dr. Tomáš Hříbek, Institute of Philosophy, the Czech Academy of Sciences, Prague
  • Dr. Daniel Novotný, University of South Bohemia in České Budějovice


The conference is organized by the Karel Čapek Center for Values in Science and Technology (Prague, Czech Republic), the Institute of State and Law of the Czech Academy of Sciences, the Institute of Computer Science of the Czech Academy of Sciences, the Institute of Philosophy of the Czech Academy of Sciences, the Faculty of Science of Charles University, the Department of Philosophy of the University of Haifa (Israel), and the Ethics + Emerging Sciences Group at the California Polytechnic State University (San Luis Obispo, CA).


Conference registration will open in March 2019.


Registration fee

The registration fee covers access to all workshop presentations, the lunches and coffee breaks specified in the conference program, and the conference dinner.

The registration fee is €50 (1,260 Kč) per person.

Payment details will be sent to registered participants within 48 hours of registering through the form below.

Payment options: bank transfer or on-site payment.
The information provided will be used only to distribute information about the workshop. It will not be given or sold to any other organization for any other purpose.

June 27

12:00 – 12:10 Welcome Address from the Organizers

12:10 – 14:00 Lunch

14:00 – 14:10 Václav Kobera (Ministry of Transportation of the Czech Republic), Welcome Address

14:10 – 15:00 Patrick Lin (California Polytechnic State University, San Luis Obispo), Algorithmic Bias in Autonomous Vehicles: What Is It Exactly?

15:00 – 15:10 Coffee Break

15:10 – 16:00 Derek Leben (University of Pittsburgh at Johnstown), Discrimination in the Trolley Problem

16:00 – 16:50 Christoph Lütge (Technical University of Munich), Autonomous Vehicles Ethics: Beyond the Trolley Problem

16:50 – 17:00 Coffee Break

17:00 – 17:50 Giuseppe Contissa (University of Bologna), The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law

17:50 – 18:40 Barbara Wege (AUDI AG), Joining Forces to Shape the Future

19:00 – 21:00 Dinner


June 28

09:30 – 10:20 Saul Smilansky (University of Haifa), Autonomous Vehicles and Normative Pluralism

10:20 – 11:10 Ugo Pagallo (University of Turin), The Ethics of Autonomous Vehicles, and their Ecosystem: A Guide for a Good AV Society

11:10 – 11:20 Coffee Break

11:20 – 12:10 Nicholas Evans (University of Massachusetts, Lowell), Distributive Justice and Autonomous Vehicles

12:10 – 13:00 Jason Millar (University of Ottawa), People Packets: Mobility Neutrality in the Age of Automated and Connected Mobility

13:00 – 14:30 Lunch

14:30 – 15:20 Guillaume de la Roche (Groupe Renault), AI with Autonomous Cars: How to Be Ready to Overcome the Ethical Issues?

15:20 – 16:10 Rebecca Davnall (University of Liverpool), When the Car Knows Too Much

16:10 – 17:00 Informal Closing Discussion

Guillaume de la Roche

Dr. Guillaume de la Roche has been working for Renault since 2017 as a senior engineer in Sophia Antipolis, France. His expertise is in the validation of algorithms for ADAS (Advanced Driver Assistance Systems). Prior to that, he was with Infineon (2001-2002, Germany), Sygmum (2003-2004, France), the CITI Laboratory (2004-2007, France), the Centre for Wireless Network Design (2007-2011, United Kingdom), Mindspeed (2011-2014, France), and Intel (2014-2017, France). He holds a Dipl.-Ing. from CPE Lyon, an MSc and PhD from INSA, and an executive MBA from EM-Lyon. He is a co-author of more than 70 publications, including 3 books. He has been an Expert Evaluator for the European Commission (FP7 and Europa 2020) since 2011.

Giuseppe Contissa

Giuseppe Contissa is adjunct professor in Legal Informatics at the University of Bologna, and professor in Legal Informatics and in Legal Theory at LUISS University in Rome. He obtained the Italian National Scientific Qualification for the role of Associate Professor in Philosophy of Law (12/H3). He received his PhD in legal informatics and computer law from the University of Bologna, where he is currently a researcher at CIRSFID. He has been a Max Weber fellow and a research associate at the European University Institute (EUI), Florence, and a resident fellow at the Stanford Center for Computers and the Law (CodeX) at Stanford University. His research interests include artificial intelligence and law, computable models of legal reasoning and knowledge, legal theory, game theory and the law, legislative drafting, and law and automation in socio-technical systems, with a specific focus on the liability issues arising in connection with the use of autonomous systems. He has published widely on these topics and has worked in several national and European projects, while also speaking at national and international conferences.

Rebecca Davnall

Rebecca Davnall received her PhD in 2014 from the University of Liverpool and now lectures there in the Department of Philosophy as part of the 'Philosophy of the Future' initiative.

Her research centers on the practicalities of emerging technologies: what these devices will be able to do now and in the very short-term future, who will pay for them, and whose lives will be affected. She is also interested in how the future is represented in popular and news media and in fiction, what exactly is promised or threatened when someone says the future looks bright or grim, and how, in particular, the power of modern marketing techniques shapes and constrains what it is possible to imagine.

Derek Leben

Derek Leben is the chair of the Philosophy Department and Associate Professor of Philosophy at the University of Pittsburgh at Johnstown. His research focuses on the intersection between ethics, cognitive science, and emerging technologies. In his recent book, "Ethics for Robots: How to Design a Moral Algorithm," Dr. Leben demonstrates how principles from traditional moral theories could be implemented in autonomous machines, such as driverless vehicles.

Patrick Lin

Patrick Lin, of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo, has published several books and papers in the field of technology ethics, including robotics (notably Robot Ethics, MIT Press, 2012, and Robot Ethics 2.0, Oxford University Press, 2017), human enhancement, cyberwarfare, space exploration, nanotechnology, and other areas. He teaches courses in ethics, political philosophy, technology ethics, and philosophy of law. Dr. Lin has appeared in international media such as the BBC, Forbes, National Public Radio (US), Popular Mechanics, Popular Science, Reuters, Science Channel, Slate, The Atlantic, The Christian Science Monitor, The Times (UK), Wired, and others. Dr. Lin is currently or has been affiliated with several other leading organizations, including Stanford Law School's Center for Internet and Society, Stanford's School of Engineering (CARS), the 100 Year Study on AI, the World Economic Forum, the New America Foundation, the UN Institute for Disarmament Research, the University of Notre Dame, the University of Iceland's Centre for Arctic Policy Studies, the US Naval Academy, and Dartmouth College. He earned his BA from the University of California, Berkeley, and his MA and PhD from the University of California, Santa Barbara.

Christoph Lütge

Christoph Lütge holds the Chair of Business Ethics and Global Governance at Technical University of Munich (TUM). He has a background in business informatics and philosophy and has held visiting professorships in Taipei, Kyoto and Venice. He was awarded a Heisenberg Fellowship in 2007. In 2019, Lütge was appointed director of the new TUM Institute for Ethics in Artificial Intelligence.

Among his major publications are: "The Ethics of Competition" (Elgar, 2019), "Order Ethics or Moral Surplus: What Holds a Society Together?" (Lexington, 2015), and the "Handbook of the Philosophical Foundations of Business Ethics" (Springer, 2013). He has commented on political and economic affairs in Times Higher Education, Bloomberg, the Financial Times, the Frankfurter Allgemeine Zeitung, La Repubblica, and numerous other media. Moreover, he has been a member of the Ethics Commission on Automated and Connected Driving of the German Federal Ministry of Transport and Digital Infrastructure, as well as of the European AI Ethics initiative AI4People. He has also done consulting work for the Singapore Economic Development Board and the Canadian Transport Commission.

Barbara Wege

Barbara Wege is Project Leader for the Beyond Initiative at AUDI AG, an interdisciplinary network of international AI experts. Prior to joining the automotive industry in 2014, she worked as a journalist for daily and weekly news publications and their online editions. Reflecting her academic background in journalism, politics, economics, and sociology, her work focuses on the interface of technology, science, and society.

Jason Millar

Jason Millar is an Assistant Professor at the University of Ottawa’s School of Electrical Engineering and Computer Science, and an Affiliate Researcher at the Center for Automotive Research at Stanford, and the Center for Law, Technology and Society (uOttawa). He researches the ethical engineering of robotics and artificial intelligence (AI), with a focus on developing tools and methodologies engineers can use to integrate ethical considerations into their daily engineering workflow. Jason has authored several articles on ethics and robotics/AI, and has provided expert testimony at the UN CCW and the Senate of Canada on the ethics of military robotics. He consults internationally on policy and ethical engineering issues in emerging robotics and AI applications.

Nicholas Evans

Dr. Nicholas G. Evans is Assistant Professor of Philosophy at the University of Massachusetts Lowell, where he conducts research on national security and emerging technologies. His work on "dual-use research" in the life sciences has been published in major US and international journals in philosophy, public policy, and the life sciences; his first sole-authored book, Neuroscience and National Security, will be published by Routledge in 2019. In 2017, Dr. Evans was awarded funding from the National Science Foundation to examine the ethical issues arising in the development and deployment of autonomous vehicles.

Dr. Evans also maintains an active research program on the ethics of infectious disease, with a focus on clinical and public health decision making during disease pandemics. Most recently, he has published on the health security impact of the 2013-2015 Ebola virus disease (EVD) outbreak in Biological Threats in the 21st Century (ed. Lentzos). His 2016 edited volume, Ebola's Message: Public Health and Medicine in the 21st Century, focuses on the clinical, political, and bioethical impact of EVD, and received favourable reviews in Nature.

Prior to his appointment at the University of Massachusetts, Dr. Evans completed postdoctoral work in medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania. In 2015, he held an Emerging Leaders in Biosecurity Fellowship at the UPMC Center for Health Security, Baltimore. Dr. Evans has conducted research at the Monash Bioethics Centre, The Centre for Applied Philosophy and Public Ethics, Australian Defence Force Academy, and the University of Exeter. He has also served as a policy officer with the Australian Department of Health and Australian Therapeutic Goods Administration.

Ugo Pagallo

Ugo Pagallo is a former lawyer and currently Professor of Jurisprudence at the Department of Law, University of Turin (Italy), and a member of the Expert Group on Liability and New Technologies (New Technologies Formation) set up by the European Commission. He is also working with the European Institute for Science, Media, and Democracy (Atomium) to set up AI4People, the first global forum in Europe on the social impacts of artificial intelligence.

He is also collaborating with the IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems; the European Science Foundation of Strasbourg, France; and the Joint International Doctoral (PhD) degree in Law, Science and Technology, part of the EU's Erasmus Mundus Joint Doctorates (EMJDs). The author of eleven monographs and numerous essays in scholarly journals and book chapters, he focuses mainly on artificial intelligence and law, network and legal theory, and information technology law (especially data protection law and copyright). The Japanese edition of his book The Laws of Robots has been available since spring 2018.

Saul Smilansky

Saul Smilansky received his D.Phil. from Oxford University and is a Professor at the Department of Philosophy, University of Haifa, Israel. He works primarily on normative and applied ethics and on the metaphysics of free will and its ethical implications. He is a chief representative of illusionism, which is a theory close to hard determinism, according to which there is no libertarian free will, yet an illusion of such free will is important for maintaining our moral practices and legal institutions. Prof. Smilansky is the author of Free Will and Illusion (Oxford University Press 2000), 10 Moral Paradoxes (Blackwell 2007), and over seventy papers in philosophical journals.


Villa Lanna

Located in the residential quarter of Bubeneč in Prague 6, the Neo-Renaissance landmark building was built in 1872 by the prominent industrialist and art collector Vojtěch Lanna as a summer retreat. The villa stands on the site of the former main road used by the Prague social elite to reach their favourite recreation grounds in Stromovka.

The frescoes on the outside of the building were painted by Viktor Barvitius using studies produced by Josef Mánes; Barvitius also contributed to the fresco decoration in the interior. The ground floor of the villa features a foyer with a reception desk and two social halls with elaborate wall paintings. The smaller hall with a maximum seating capacity of 18 (originally the billiard room) is ideal for ceremonial lunches and dinners. The larger social hall with an outdoor terrace is suitable for conferences, seminars, weddings and family celebrations. Also available for use in the historic building is a lounge on the first floor with a lovely terrace offering fine views of the villa's tended garden.

The newly opened and restored tower with an observation deck provides impressive views of the area surrounding Villa Lanna as well as the Prague Zoo and the Troja Chateau and its vineyards. Those with a little free time on their hands can enjoy the permanent exhibit of works by the famous printmaker Oldřich Kulhánek. The renovated garden with a number of benches and an arbour is a pleasant place to sit or enjoy a stroll at any time of the year.

Secure parking is available on the premises.


The Villa Lanna offers guests a wide range of accommodation options. The first floor of the historic building has a total of seven rooms without private baths; four bathrooms with toilets are located next to the rooms. All rooms are equipped with a refrigerator, a satellite television, a telephone with a direct outside line and a Wi-Fi Internet connection. A lounge with a fireplace, a terrace decorated with frescoes and an unimpeded view of the finely tended garden serves for both accommodation and meetings. In 2000 the accommodation capacity was increased by 17 rooms with the addition of two separate buildings on the grounds. These rooms have private baths and are equipped with a satellite television, a refrigerator, a telephone with a direct outside line and a Wi-Fi Internet connection. A barrier-free room is also available. Guests can use the parking lot on the premises, and the large reconstructed garden is ideal for strolls and sitting in the arbour. The price for accommodation includes breakfast, all taxes, VAT and unlimited internet connection.

Address:
Vila Lanna
V sadech 1
160 00 Prague 6

Abstracts:


The Ethical Knob: Ethically-Customisable Automated Vehicles and the Law

Giuseppe Contissa

Accidents involving autonomous vehicles (AVs) raise difficult ethical dilemmas and legal issues. It has been argued that self-driving cars should be programmed to kill, that is, they should be equipped with pre-programmed approaches to the choice of what lives to sacrifice when losses are inevitable. Here we shall explore a different approach, namely, giving the user/passenger the task (and burden) of deciding what ethical approach should be taken by AVs in unavoidable accident scenarios. We thus assume that AVs are equipped with what we call an ‘‘Ethical Knob’’, a device enabling passengers to ethically customise their AVs, namely, to choose between different settings corresponding to different moral approaches or principles. Accordingly, AVs would be entrusted with implementing users’ ethical choices, while manufacturers/programmers would be tasked with enabling the user’s choice and ensuring implementation by the AV.
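One purely illustrative way to read the proposal is as a user-selected parameter that the AV's planner consults whenever it must weigh harm to the passenger against harm to third parties. The minimal Python sketch below captures that idea; the setting names, numeric weights, and harm-scoring function are hypothetical assumptions for exposition, not the scheme defined in the authors' work.

from enum import Enum

class KnobSetting(Enum):
    # Hypothetical settings and weights, chosen only for illustration.
    ALTRUISTIC = 0.25  # passenger's interests weighted below third parties'
    IMPARTIAL = 0.50   # passenger and third parties weighted equally
    EGOISTIC = 0.75    # passenger's interests weighted above third parties'

def weighted_harm(p_harm_passenger: float, p_harm_others: float,
                  setting: KnobSetting) -> float:
    """Score one candidate manoeuvre; the planner would pick the
    manoeuvre with the lowest score. Purely illustrative weighting."""
    w = setting.value
    return w * p_harm_passenger + (1 - w) * p_harm_others

# The passenger sets the knob once; the AV then applies it whenever
# losses are unavoidable, e.g. when comparing two candidate manoeuvres.
setting = KnobSetting.IMPARTIAL
brake = weighted_harm(p_harm_passenger=0.6, p_harm_others=0.1, setting=setting)
swerve = weighted_harm(p_harm_passenger=0.2, p_harm_others=0.4, setting=setting)
print("brake" if brake <= swerve else "swerve")

On this toy scheme, the division of labour described in the abstract becomes concrete: the user makes the ethical choice by selecting the setting, while the manufacturer's task is limited to exposing that choice and ensuring the planner honours it.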

When the Car Knows Too Much

Rebecca Davnall

Self-driving car trolley problems tend to include information which contemporary self-driving cars will not have reliable access to, and to treat these features as morally salient. For example, it is sometimes suggested that the life of an elderly person is less worthy of preservation than that of a child, and that self-driving cars should thus prefer to hit an elderly rather than a young pedestrian (e.g. Lin, 2016, p.70).

Remove such details, however, and the self-driving car trolley problem becomes much more tractable. A car that swerves is more dangerous than one which performs a straight-line emergency stop. Because tyres do not provide infinite traction, a swerving car is likely to spin out of control (Abe & Manning, 2009, pp.24-5); by suddenly departing from its lane a swerving car breaks familiar patterns of road use and may distract and endanger road users who are otherwise safe. Even absent these risks, swerving extends the car’s stopping distance, resulting in higher-speed, more dangerous collisions (Rosen & Sander, 2009, p.540).
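The traction point can be made concrete with a simple friction-circle sketch: a tyre's total grip is shared between braking and steering, so a car that swerves has less grip left for deceleration and reaches the obstacle at a higher speed. The numbers below (initial speed, gap to the obstacle, friction coefficient, and the share of grip left for braking while swerving) are assumed values for illustration only, not figures taken from the abstract or its references.

import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_speed(v0, gap, mu, brake_share=1.0):
    """Speed (m/s) at which the car reaches an obstacle 'gap' metres ahead.
    brake_share < 1 models a swerving car that spends part of its friction
    budget on lateral (steering) force; returns 0 if the car stops in time."""
    decel = brake_share * mu * G
    v_sq = v0 ** 2 - 2 * decel * gap
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# Assumed scenario: 50 km/h (~13.9 m/s), obstacle 12 m ahead, dry road (mu = 0.8).
v0, gap, mu = 13.9, 12.0, 0.8
print(impact_speed(v0, gap, mu) * 3.6)       # straight-line stop: ~8 km/h at impact
print(impact_speed(v0, gap, mu, 0.7) * 3.6)  # swerving (70% of grip for braking): ~28 km/h

With these assumed numbers the straight-line emergency stop slows the car to roughly walking pace before it reaches the obstacle, while the swerving car still arrives at well over 25 km/h, which is exactly the higher-speed, more dangerous collision the paragraph above describes.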

With minimal information about nearby road users, a self-driving car's safest course of action is to stop as soon as possible within its own lane. Increasing the information available to the car does not consistently reduce the overall risk of a bad decision. Information capture, analysis and storage always add risks that must be balanced against any benefit from the information accumulated. In particular, in order to make use of more information, more complex training datasets will be needed for the car's controlling AI, and greater complexity in this area entails a greater risk that concealed biases in human behaviour will filter through into the car's own behaviour (Noble, 2018).

Even if bias is controlled for, many features posited as relevant to the resolution of high-information cases will not be reliably discernible. For example, age manifests so differently in different people that reliable visual determinations of age are extremely difficult (Yoo et al, 2018, p.808). Any other method by which that information might become available to the car (such as networked access to personal information carried by pedestrians themselves) implies a violation of privacy which carries unacceptable risks compared to the lower-information approach.

Self-driving cars should thus collect only minimal information about their environment, and ensure safety by erring on the side of caution when it comes to emergency stops.

References

Abe, M. and Manning, W. 2009. Vehicle Handling Dynamics. Amsterdam: Butterworth-Heinemann

Lin, P. 2016. Why Ethics Matters for Autonomous Cars. In: Maurer, M., Gerdes, J. C., Lenz, B. and Winner, H. ed. 2016. Autonomous Driving: Technical, Legal and Social Aspects. Berlin: Springer

Noble, S.U. 2018. Algorithms of Oppression. New York: New York University Press

Rosen, E. and Sander, U. 2009. Pedestrian fatality risk as a function of car impact speed. Accident Analysis and Prevention 41: 536-542

Yoo, B., Kwak, Y., Kim, Y., Choi, C. and Kim, J. 2018. Deep Facial Age Estimation Using Conditional Multitask Learning With Weak Label Expansion. IEEE Signal Processing Letters 25(6): 808-812

Distributive Justice and Autonomous Vehicles

Nicholas Evans

Research into the ethics of autonomous vehicles focuses, almost exclusively, on whether decisions by individual cars conform to, reflect, or promote certain values. Less discussed is whether autonomous vehicles are permissible, or how they might be permissibly developed and deployed, from the perspective of distributive justice. Here, I argue that autonomous vehicles—as an object of moral concern and as a component in transportation systems—ought to be considered in terms of distributive justice. I then apply John Rawls’ theory, justice as fairness, to AVs and future transport networks. Transportation connects to mobility, which holds an important role in a scheme of basic liberties and is subject to the difference principle in distributing economic goods and social goods to members of a society under conditions of fair cooperation. I conclude by considering objections, feasibility concerns, and potential alternate schemes of justice.

Discrimination in the Trolley Problem

Derek Leben

When used properly, the trolley problem can be an effective tool for comparing the predictions of moral theories (e.g., Utilitarianism vs. Deontology) and isolating which features have an effect on people’s intuitive judgments about weighing harms (e.g., physical distance, causal influence, past responsibility, etc.). However, the problem itself cannot tell us which predictions are correct, or which features are morally relevant. Many experiments, including the massive MIT “Moral Machine” study, have revealed that factors like gender, social status, and race can have significant effects on people’s intuitive judgments about trolley scenarios. I will employ a Rawlsian Contractarianism to argue that this type of information is morally irrelevant. Demographic facts are morally relevant only when they can potentially influence the distribution of primary goods, such as life, liberty, and opportunity. Thus, age and size may be relevant in estimating the likelihood of survival in a collision, but race and sexual orientation are not. If trolley scenarios are being employed to test moral theories or study moral judgments, then the use of irrelevant information is extremely misleading. At best, the main benefit is to demonstrate how people’s intuitive judgments are systematically biased, and therefore an unreliable guide to designing moral machines.

Autonomous Vehicles Ethics: Beyond the Trolley Problem

Christoph Lütge

The ethics of autonomous cars and automated driving have been a subject of research and public discussion for a number of years. While automated and autonomous cars have a chance of being much safer than human-driven cars in many regards, situations will arise in which accidents cannot be completely avoided. Such situations will have to be dealt with when programming the software of these vehicles. In 2017, an ethics committee for automated and connected driving, appointed by the German Federal Minister of Transport and Digital Infrastructure, presented the world’s first code of ethics for autonomous cars. This code, which moves the ethical discussion on AVs forward beyond the trolley problem, will be presented here.

People Packets: Mobility Neutrality in the Age of Automated and Connected Mobility

Jason Millar

As we move toward a system of automated and connected mobility, we will need to confront important ethical, legal and political questions concerning the rules that will ultimately govern the mobility system. In this talk, the authors describe how we are already well into the process of automating driving, largely the result of our dependence on turn-by-turn navigation and other automation technologies. We describe important gaps in the current mobility governance landscape and discuss several key issues, the answers to which will shape the ethical character of tomorrow’s mobility system. Finally, we draw on historical ethical, legal and political aspects of the net neutrality debate to characterize the challenges we will face with the future of mobility, and suggest strategies for avoiding similar problems.

AI with Autonomous Cars: How to Be Ready to Overcome the Ethical Issues?

Guillaume de la Roche

Following the report by the French deputy Cédric Villani (2018) on the development of artificial intelligence in France, car manufacturers have to face many ethical challenges. In this talk we will first introduce the necessary concepts, including the status of artificial intelligence and the main techniques such as fusion, machine learning and big data. Then we will explain the main ethical problems related to artificial intelligence in autonomous cars. We will detail some significant work in progress (based on a literature review and ongoing projects) aimed at overcoming these ethical challenges. We will see that the problems are considerable when developing artificial intelligence for cars, because many different ethical theories could be applied in different situations; some examples will be given. We will then describe ongoing work on explainable artificial intelligence and on the audit and validation of the algorithms (which is key for responsibility and legal aspects). To conclude, we will give recommendations and perspectives from a car manufacturer's point of view.

Autonomous Vehicles and Normative Pluralism

Saul Smilansky

The prospect of autonomous vehicles has made normative and applied ethical thinking directly relevant. These vehicles create a need to programme the computers that will guide them in decision making that is of manifestly moral import. Moreover, unlike human agents, these vehicles will be able to evaluate moral situations in a much more complex way, taking more factors into consideration, evaluating them more quickly, and communicating with similar vehicles and the general automotive grid in real time. This means that while the initial idea was for autonomous vehicles to mimic human ethical decision making, the prospect of such vehicles suggests a capacity for much stronger, and indeed much more sophisticated, decision making than we far more limited humans are capable of, particularly in challenging driving circumstances. I propose to explore these radically new scenarios, focusing on the implications of normative pluralism and normative relativism for the decision-making of autonomous vehicles in the near future. Some of the possibilities that these new developments open up, and that have been relatively neglected in the literature, involve applying normative pluralism and perhaps relativism to the decision-making of individual autonomous vehicles. The ways in which this could be done, and the advantages and disadvantages of attempting to do so, will broaden our thinking on this topic. In the process, I hope to show the advantages of ways of normative reckoning that I have been developing recently beyond the context of autonomous vehicles.

Algorithmic Bias in Autonomous Vehicles: What Is It Exactly?

Patrick Lin

Unfair bias is a known problem with artificial intelligence and machine learning. But outside of bizarre crash dilemmas, the problem is little discussed in connection with autonomous vehicles (AVs). After Germany's 2017 code of AV ethics (very reasonably) established a strict principle of non-discrimination in decisions made by autonomous vehicles, the problem may seem to have been solved. This talk will argue that bias is often oversimplified, reduced to being agnostic to gender, ethnicity, and so on. Yet in many cases, it is ethically and legally permissible to treat people differently based exactly on these personal characteristics. In fact, people may be put at greater risk of harm if these characteristics are always ignored. Non-discrimination, then, is more nuanced than we might think. I will offer a basic framework for diagnosing discrimination and apply it to possible (non-crash) scenarios involving AVs.

The Ethics of Autonomous Vehicles, and their Ecosystem: A Guide for a Good AV Society

Ugo Pagallo

We have a moral duty to abandon a world in which around 40,000 Americans die in car accidents per year (the victims of road accidents in the EU number around 25,000 per year). The technology of autonomous ground vehicles and systems can help us fulfil this moral duty. Drawing on the AI4People project's final document, "An Ethical Framework for a Good AI Society" (Floridi et al. 2018), the aim of the paper is threefold. First, the intent is to flesh out five ethical principles that should undergird the development and adoption of this technology, that is, "self-driving cars." Second, the focus is on the limits of any "top-down" approach, whether utilitarian or Kantian, to the normative challenges brought about by our moral duty to abandon today's world of human drivers. An Aristotelian attitude may better fit the threats and risks of the transition from the current tragic and cynical situation to a future of self-driving cars (and systems). The third and final goal of the paper is to shed light on how this is feasible, namely, how to assess, develop, incentivize, and support a Good AV Society.

Joining Forces to Shape the Future

Barbara Wege

Helping to make sure that AI is being applied for the benefit of the individual and society – that is the mission of the beyond Initiative. Founded by AUDI AG, the initiative has set up an interdisciplinary network in the past three years with international AI experts from science and startups. Besides questions around the future of work in the age of AI, the initiative’s work focuses on ethical, legal and social aspects of autonomous driving. The talk looks at beyond’s lessons learned and the importance of joining forces in order to shape a good future with AVs.

David Černý (The Karel Čapek Center)
Institute of State & Law, CAS
Národní 18, 11600 Praha
Czech Republic
David Černý david.cerny (at) ilaw.cas.cz

Tomáš Hříbek (The Karel Čapek Center)
Institute of Philosophy, CAS
Jilská 1, 11000 Praha
Czech Republic
Tomáš Hříbek hribek (at) flu.cas.cz