
An Interview with Prof. Patrick Lin

(California Polytechnic State University, San Luis Obispo, CA)

David Černý: Recent years have brought rapid development in the field of artificial intelligence and robotics. Thinking machines already outperform human intelligence in several domains, and many researchers believe that these machines will reach the level of general human intelligence relatively soon. From there, it might be just one more step to creating artificial superintelligence. Today, AI-driven devices and robots, from social and medical robots to autonomous vehicles and drones, play an increasingly significant role in our private and social lives and have already become omnipresent. Patrick, what do you think an ethicist can offer to modern people surrounded by sophisticated technologies? Do the groundbreaking developments in AI and robotics raise specifically ethical questions, or should we place our trust in the hands of scientists and expect them to solve what may seem to belong to the province of ethics?

Patrick Lin: Right, our world is increasingly ruled by technology, but we still have a role in determining our own future. If we’re not deliberate and thoughtful, then we’re “leaving it up to the market” on how technology is developed and used. But the forces that drive the market—such as efficiency, pricing, branding, and so on—are not necessarily the same forces that promote social responsibility or “a good life.”

For example, if we’re not careful, we could see massive labor disruptions by a robotic workforce, and this would be dangerous if there’s no plan to retrain displaced human workers or otherwise take care of their daily needs. Some jobs might be too important to hand over to AI and machines, such as judges and soldiers or even teachers and caregivers. There might be benefits to having robotic workers, but we haven’t had enough conversation about what might be lost.

Related to those concerns is the fact that even AI decisions today can be hard to understand. Most of us don’t know how they work or why they arrived at a decision—it’s a black box. Without that transparency, it’s hard to trust these systems, which means we should be asking ourselves if we should deploy them at all. Programmers themselves also often don’t fully understand how and why machine learning works, which creates a responsibility gap when things go wrong—and they will go wrong, as the entire history of technology shows.

So, the ethical questions raised by emerging technologies are very broad, ranging from the design process to use-cases to unintended impacts and more. This is something like verification and validation in engineering, which ask respectively: did we build it correctly (was the technology built to the specifications) and, at a more basic level, did we build the right thing (was it actually what’s needed)?

Answering these questions is too big a responsibility to impose on or hand over to any particular group of people, as those answers can reshape an entire society. Engineers may be helpful people in general who want to make the world a better place, but that doesn’t mean they understand the nuances of ethics, especially as it relates to programming AI and robots that interact with people or make critical decisions about their lives. Bias can all too easily slip in at many points in the design process, such as in the data used for machine learning.

I wouldn’t hand over this responsibility to only philosophers, either. But philosophers and ethicists must have a seat at the table, and I’m encouraged that this seems to be happening more and more, as government and industry meetings recognize the need for actual expertise in ethics and other social issues.


DČ: Well, it might seem at this point that allowing philosophers and ethicists to have a seat at the table would solve the problem of how to ethically design algorithms for AI and robots. But philosophers are, in general, better at raising questions than answering them. Moreover, there is widespread disagreement over the question “What ethical system is right?” or, more specifically, “What code of moral rules should AI and robots incorporate in their algorithms?” Imagine, for example, autonomous vehicles and possible crash situations. A utilitarian would suggest that AVs should maximize utility; a more deontologically inclined ethicist, on the other hand, would tend to impose some constraints on AVs’ behavior. Could this basic disagreement threaten our best efforts at constructing ethically sound and well-behaved AI and robots?

PL: That’s a fair question, but I don’t think it’s a real problem, only an academic one. I’m not saying that ethics is only academic—far from it; it affects real lives every day. But for some reason, in academic philosophy, there’s the mentality that ethics is something like the rings of power from The Lord of the Rings, that there’s only One Ring to rule them all. Philosophers might say that we need to choose only one theory as the right one, because that’s what’s needed to be principled and intellectually honest. After all, to accept both consequentialism and deontological ethics seems to be accepting a contradiction: that results are the only things that matter and, at the same time, that results don’t matter at all.

But I think all that is just pedantic. Instead, I think there’s something very useful and right in all of the major ethical theories. Sometimes they may be contradictory, but that’s okay—life is often a balancing test among competing core values, such as liberty versus security. So why should ethics be a simple, straightforward formula? It looks to be more of an art than a science. So, I would favor what I’d call a hybrid approach that draws from the best of these theories.

The process is something like this: run an ethical question through one theory, then run it again through another, and continue to do this for the other theories you want to check it against. Think of this as a malware scan: no single anti-malware app is perfect, so it’s useful to run several scans with different apps to cover each other’s gaps. Or think of it as a courtroom of several different judges who are experts in different areas.

Ideally, our ethical theories would converge on the same answer to any given question. But when they don’t, that’s where the hard work—the art—of balancing the different considerations is required. The point of ethics, I believe, isn’t just getting an answer to whether action x is ethical or not; ethics is a process. It’s about explaining how you’ve balanced a given bundle of interests and considerations. These considerations don’t just include the action and its effects; they also include one’s intentions, how well the act or intention promotes a good character, and so on.
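
To make the “malware scan” analogy concrete, here is a minimal illustrative sketch in Python; the theory evaluators and their toy decision rules are hypothetical, not drawn from the interview. It only shows the structure of the hybrid approach: run the same question through several independent theory checks, and flag any divergence for human deliberation rather than resolving it automatically.

```python
# Illustrative sketch only (hypothetical names and toy rules): a "multi-scan"
# pattern in which a proposed action is checked against several ethical-theory
# evaluators, and disagreement is surfaced for human deliberation.

from typing import Callable, Dict

# Each evaluator returns a verdict ("permissible" or "impermissible") for a described action.
Evaluator = Callable[[dict], str]

def consequentialist(action: dict) -> str:
    # Toy rule: permissible if the expected benefit outweighs the expected harm.
    return "permissible" if action["expected_benefit"] > action["expected_harm"] else "impermissible"

def deontological(action: dict) -> str:
    # Toy rule: impermissible if the action violates a hard constraint.
    return "impermissible" if action["violates_constraint"] else "permissible"

def virtue_based(action: dict) -> str:
    # Toy rule: permissible if the action expresses the traits a good agent would show.
    return "permissible" if action["expresses_good_character"] else "impermissible"

def multi_theory_scan(action: dict, evaluators: Dict[str, Evaluator]) -> Dict[str, str]:
    """Run the same question through every theory, like running several malware scanners."""
    return {name: evaluate(action) for name, evaluate in evaluators.items()}

if __name__ == "__main__":
    action = {
        "expected_benefit": 8,
        "expected_harm": 3,
        "violates_constraint": True,
        "expresses_good_character": False,
    }
    verdicts = multi_theory_scan(action, {
        "consequentialism": consequentialist,
        "deontology": deontological,
        "virtue ethics": virtue_based,
    })
    if len(set(verdicts.values())) == 1:
        print("Theories converge:", verdicts)
    else:
        # No automatic tie-breaking: divergence is where the balancing work begins.
        print("Theories diverge; escalate for deliberation:", verdicts)
```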

Given that approach, I don’t think the answer to any particular ethical dilemma—if they’re truly dilemmas—is what’s important. That’s not the real prize we’re after. Instead, what matters is ensuring that a technology developer’s decision to do x is informed by a thoughtful process, and this process may lead different organizations to different answers. And that might be okay, because there may be room for autonomous vehicles that behave differently, just like traffic today still moves ahead despite different driving styles. Some robot cars might drive faster and take more chances than others, and that may be fine if they’re all within the same ethical tolerances.

Anyway, arriving at a definitive answer to a genuine dilemma is a fool’s errand: by definition, there’s no consensus on the “right answer.” That philosophers haven’t been able to “solve” or converge on a single answer, or a single ethical theory, after thousands of years isn’t a bug; it’s a feature of a dilemma. Therefore, there’s no way that Tesla, Waymo, Ford, Zoox, or other car manufacturers—who have no particular expertise in ethics—can solve the problem in the next few years, or even a hundred years.

This isn’t to say that AV manufacturers can do whatever they like. If their products are involved in an accident, those companies will need to defend themselves and explain their thinking about the design elements implicated, e.g., are they prioritizing certain lives or things over others? And this can’t just be a post hoc rationalization; to mount an effective defense, they’ll need to think through these questions proactively, showing that they’ve done their due diligence before their products roll out onto the streets.

Back to your question, philosophers’ disagreements on how best to program AI are not going to hold up technology, because we’re not the ones developing it. At best, philosophers are advising technology developers who will release a product one way or another, with or without our input. Manufacturers won’t wait for us to come to an agreement. Again, this is to say that our basic disagreement on ethics isn’t a real obstacle to developing AI and robotics.

However, we can help pave a smoother, more responsible path for new technologies if we can give sensible guidance to both industry and policymakers. Often, they’re looking for exactly this kind of guidance when the law is unclear—they need a moral North Star to follow when they’re lost in uncharted seas. Philosophers can help anticipate problems, including those that might lead to future lawsuits, and suggest best practices for addressing those problems. We can help guide technology’s use and evolution, as opposed to leaving it to market forces that care only about efficiency and profit. In that regard, philosophers and ethicists have an absolutely crucial role in the future of emerging technologies.


DČ: Many philosophers and scientists warn us against potential dangers connected with AI and rapidly progressing automation. If humanity succeeds in creating an artificial general (super)intelligence, its behavior may well be very unpredictable and even hostile towards any possible threats to its very existence, human beings included (see, e.g., Nick Bostrom’s Superintelligence). Illah Reza Nourbakhsh, professor of robotics at Carnegie Mellon University, describes in his book Robot Futures a possible future scenario in which “democracy is effectively displaced by universal remote control through automatically customized new media.” AI and AI-embedded robots may either turn out to be a blessing for our civilization or be our final invention. What is your opinion, Patrick, about these matters? Are you more optimistic or pessimistic about our AI- and robot-filled future? Do you think that we will be able to implement ethical behavior in AI in a way that precludes (any) inimical attitudes and behavior towards human beings?

PL: This is a very hard question for me, because I don’t consider myself a futurist, and I try not to make faraway predictions. It’s not so much that I can’t, but that no one has a good crystal ball, really. If they say they do, then they’re trying to sell you something.

With AI and robotics, we’re venturing into unknown territory that has very little historical precedent, despite loose comparisons to the Industrial Revolution and so on. Whatever optimism or pessimism one might have, I’d say that all bets are off when it comes to this area, which is emerging and evolving too quickly for thoughtful policy to keep up, especially as international governments don’t seem to have the will to develop coordinated, cooperative policies. This is different from, say, climate change, about which we can make reasonable predictions, and all of them seem bad. Back to the technological race: Martin Luther King Jr. observed, “Our scientific power has outrun our spiritual power. We have guided missiles and misguided men.” Fifty years later, he’s still right: ethics, law, and policy need a turbo-boost to catch up to our technologies that relentlessly march forward.

So, without much optimism or pessimism, I’d tend to agree with the wisdom of Winston Churchill, who said: “It is a mistake to try to look too far ahead. The chain of destiny can only be grasped one link at a time.” This doesn’t mean we can’t try to forecast distant scenarios and plan for them, but we need to be very careful to avoid the anchoring effect and not lock ourselves into certain paths. Related to that is the bias of wish-fulfillment, so we need to be very self-aware about what we wish for.

The most important step is always the next one. Every step must be guided by real science and technology, as well as sober analysis—not science fiction or unjustified hopes and fears. We need a diversity of experts, who bring in new perspectives and can see things that we might miss, to give us sure footing throughout the journey. That’s why I’m excited to see the creation of the Karel Čapek Center, which will help energize the field, as well as reach new experts and stakeholders to expand the conversation globally, as it must be.


DČ: Patrick, thank you for the interview, and I’m happy that you have agreed to become part of our team at the Center.