A scientist’s opinion: Interview with Patrick Lin about autonomous cars
Interview with Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University.
Who should make the moral decisions of autonomous vehicles (AVs): programmers, companies or governments? Is there a consensus on this within the scientific community?
As it is right now, product development is generally left to the company, which responds to market pressures. Sometimes, as in the case of drugs, vehicles, and other potentially dangerous products, regulators become involved, and they respond to public pressures. This arrangement seems correct. The industry is desperately seeking regulatory guidance, since the risks can be large, and companies want assurance that they have permission and can limit their liability.
Expertise also matters. For instance, if you were constructing a house, you would want input from professional architects and engineers. Why would that be different with ethical issues? Admittedly, there’s still disagreement among philosophers on what the “right” ethical theory is, but we don’t need that answer for practical ethics – we can borrow methods from different ethical theories to draw out the various considerations. Ultimately, how an autonomous vehicle should be programmed is a political decision that must include buy-in from the general public. This isn’t to say the public is always right – they’re clearly not. But they are a critical stakeholder, and if they’re not on board or consulted, then industry will continue to suffer setbacks.
While automated vehicles have been manufactured and tested for a while now, does the technology even exist to differentiate between people, animals, and other traffic participants?
Computer vision, including the algorithms that process images, still needs improvement. These systems still have a hard time picking up small objects, like animals, as well as distinguishing shadows and dark spaces, such as potholes. They don’t work well at all in rain and other bad weather, which occur in most parts of the world. That’s why it’s a good idea to have multiple sensing systems, from digital cameras to radar to lidar and other emerging technologies.
Still, they’re surprisingly good and can be made better. They can distinguish a person from a bicyclist from an animal from a car, for instance. And we already know that our laptops and apps have facial recognition technology that can identify specific people; so, in theory, robot cars could do that too, though it’s unproven at highway speeds with the cameras these cars have now.
If it’s important for robot cars to identify the various things on the road, we could also create a vehicle-to-vehicle (V2V) communications system, or vehicle-to-infrastructure (V2I), or both (V2X). This could include putting tags or transmitters on cars, as well as on motorcycle helmets, on smartphones if you’re a pedestrian, and so on. But that’s much more work than making more advanced sensors, and it would open up more ways for abuse or system failures.
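To make the multiple-sensors point concrete, here is a minimal sketch of late sensor fusion – purely illustrative, with made-up labels and confidence values rather than any actual AV stack’s design – in which one rain-blinded camera can be outvoted by other sensors:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "cyclist", "animal", "car"
    confidence: float  # 0.0 to 1.0
    sensor: str        # "camera", "radar", or "lidar"

def fuse(detections: list[Detection], threshold: float = 0.5) -> dict[str, float]:
    """Naive late fusion: average each label's confidence across sensors,
    so that one confused sensor can be outvoted by the others."""
    scores: dict[str, list[float]] = {}
    for d in detections:
        scores.setdefault(d.label, []).append(d.confidence)
    return {label: sum(cs) / len(cs)
            for label, cs in scores.items()
            if sum(cs) / len(cs) >= threshold}

# The rain-blinded camera barely sees a pedestrian; lidar corroborates,
# so the fused estimate keeps the detection. The faint pothole does not
# clear the threshold on camera evidence alone.
readings = [
    Detection("pedestrian", 0.4, "camera"),
    Detection("pedestrian", 0.8, "lidar"),
    Detection("pothole", 0.3, "camera"),
]
print(fuse(readings))  # {'pedestrian': 0.6} (up to float rounding)
```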
Is policy making possible when the effects of AVs are so unpredictable?
Absolutely. We might not be able to foresee all the possible effects, or even effects beyond the near or mid term, but that doesn’t mean that all paths forward are equal. We just have to reason as best we can under these conditions of uncertainty, and more perspectives tend to be better than fewer – there’s wisdom in the crowd. Anyway, if we don’t proactively make policy, that is itself a policy: we’re letting “the market” control our fates. But the forces that drive the market – such as efficiency, pricing, branding, and so on – are not necessarily the same forces that promote social responsibility or “a good life.”
For example, if we’re not careful, we could see massive labor disruptions from a robotic workforce, and this would be dangerous if there’s no plan to retrain displaced human workers or otherwise take care of their daily needs. Truck drivers would seem to be among the first casualties, and theirs is one of the most common jobs in the world, a vital link in our transportation infrastructure. Taxi and other hired drivers may be next, along with traditional auto mechanics who aren’t also computer engineers.
Back to ethical programming, there’s also lots of uncertainty when it comes to a potential crash. But, one way or another, a decision about that programming will be made – either by the programmer and company, or by a more inclusive group that accounts for what the public wants.
Posted with the permission of Patrick Lin.
Living with Robots
Paul Dumouchel and Luisa Damiano
“Living with Robots is a convincing reflection on the increasing presence of robots in society. Designed to operate in an environment shaped and occupied by humans, robots are the new actors in a technical, social, and cultural transformation. The book offers a distinctive and fruitful approach to social robotics through different theoretical frameworks, analyzing the implications of interactions between humans and robots, between humans via robots, and between robots themselves.”
—Zaven Paré, Rio de Janeiro State University
Living with Robots recounts a foundational shift in the field of robotics, from artificial intelligence to artificial empathy, and foreshadows an inflection point in human evolution. Today’s robots engage with human beings in socially meaningful ways, as therapists, trainers, mediators, caregivers, and companions. Social robotics is grounded in artificial intelligence, but the field’s most probing questions explore the nature of the very real human emotions that social robots are designed to emulate.
Social roboticists conduct their inquiries out of necessity—every robot they design incorporates and tests a number of hypotheses about human relationships. Paul Dumouchel and Luisa Damiano show that as roboticists become adept at programming artificial empathy into their creations, they are abandoning the conventional conception of human emotions as discrete, private, internal experiences. Rather, they are reconceiving emotions as a continuum between two actors who coordinate their affective behavior in real time. Rethinking the role of sociability in emotion has also led the field of social robotics to interrogate a number of human ethical assumptions, and to formulate a crucial political insight: there are simply no universal human characteristics for social robots to emulate. What we have instead is a plurality of actors, human and nonhuman, in noninterchangeable relationships.
As Living with Robots shows, for social robots to be effective, they must be attentive to human uniqueness and exercise a degree of social autonomy. More than mere automatons, they must become social actors, capable of modifying the rules that govern their interplay with humans.
Paul Dumouchel is Full Professor of Philosophy at the Graduate School of Core Ethics and Frontier Sciences at Ritsumeikan University in Kyoto, Japan.
Luisa Damiano is Associate Professor of Logic and Philosophy of Science at the University of Messina in Messina, Italy.
Dumouchel, P., Damiano, L. Living with Robots. Harvard University Press, Cambridge (Mass.), 2017. Translated by Malcolm DeBevoise.
Report: Ethics of Hacking Back
It is widely believed that a cyberattack victim should not “hack back” against attackers. Among the chief worries are that hacking back is (probably) illegal and immoral; and if it targets foreign networks, then it may spark a cyberwar between states. However, these worries are largely taken for granted: they are asserted without much argument, without considering the possibility that hacking back could ever be justified. This policy paper offers both the case for and against hacking back—examining six core arguments—to more carefully consider the practice.
- Argument from the rule of law
- Argument from self-defense
- Argument from attribution
- Argument from escalation
- Argument from public health
- Argument from practical effects
Please click here for the full report, funded by the US National Science Foundation.
Would 'Deviant' Sex Robots Violate Asimov's Law of Robotics?
Like it or not, sex robots are already here, and someday they might hurt you, if you ask nicely. As they cater to an ever-increasing range of tastes, some folks predict BDSM types (bondage, discipline, and sadomasochism) in the future bedroom.
But, wait, you might ask: wouldn’t these “deviant” or non-normative types violate the basic robot-ethics principle to not hurt people?
Sci-fi writer Isaac Asimov gave us the First Law of Robotics, which is: a robot may not injure a human being or, through inaction, allow a human being to come to harm. But sex-bots that spank, whip, and tie people up would seem to do exactly that.
Though it might seem silly, this discussion is actually relevant to AI and robotics in many other industries. What constitutes harm will be important for, say, medical and caretaking robots that may be instructed to “do no harm.”
Without that clarity, there’s no hope to translate the First Law into programming code that a robot or AI can properly follow.
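To see the problem, consider a deliberately naive, hypothetical translation of the First Law, a sketch of the literal reading rather than anyone’s actual robot code:

```python
def first_law_permits(predicted_pain: float) -> bool:
    """Literal reading of the First Law: refuse any action that is
    predicted to cause a human any pain at all."""
    return predicted_pain == 0.0

# A consensual light spank and a dentist's drill both predict some pain,
# so this robot refuses both -- whatever the human actually wants.
print(first_law_permits(predicted_pain=0.2))  # False: no spanking, no drilling
print(first_law_permits(predicted_pain=0.0))  # True: handshakes are fine
```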
What is harm? A conceptual analysis.
As the First Law commands, a robot is prohibited from acting in a way that harms a human. For instance, whipping a person, even lightly, might hurt them or cause pain; and pain is usually an indication of harm or injury. Tying a person up tends to make them feel vulnerable and very uneasy, so we usually consider that a negative psychological effect and therefore a harm.
But this is true only if we understand “harm” in a naive, overly simplistic way. Sometimes, the more important meaning is net-harm. For example, it might be painful when a child must have a cavity drilled out or take some awful medicine—the kid might cry inconsolably and even say that she hates you—but we understand that this is for the child’s own good: in the long term, the benefits far outweigh the initial cost. So, we wouldn’t say that taking the kid to a dentist or doctor is “harming” her. We’re actually trying to save her from a greater harm.
This is easy enough for us to understand, but some obvious concepts are notoriously hard to reduce to lines of code. For one thing, determining harm may require that we consider a huge range of future effects in order to tally up the net result. This is an infamous problem for consequentialism, the moral theory that treats ethics as a math problem.
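Here is what that tally might look like in toy form, assuming (implausibly) that we could enumerate the future effects and score them on a common scale, which is exactly the hard part:

```python
def net_harm(effects: list[tuple[float, float, float]]) -> float:
    """Consequentialist bookkeeping: each predicted effect is a
    (harm, benefit, probability) triple; return the expected harm
    minus the expected benefit across every effect we can foresee."""
    return sum(p * (harm - benefit) for harm, benefit, p in effects)

# Drilling a child's cavity: certain pain now, likely health later.
cavity_drill = [
    (5.0, 0.0, 1.0),   # the drilling itself certainly hurts
    (0.0, 20.0, 0.9),  # and very likely prevents a worse infection
]
print(net_harm(cavity_drill))  # -13.0: a net benefit, so not a "harm"
```

The catch, of course, is the effects list: a real tally would have to reach indefinitely into the future, with numbers nobody knows how to assign.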
Thus, any harm inflicted by a BDSM robot is presumably welcomed, because it’s outweighed by a greater pleasure experienced by the person. What’s also at play is the concept of “wrongful harm”—a harm that’s suffered unwillingly and inflicted without justification.
The difference between a wrong and a harm is subtle: if I snuck a peek at your diary without your permission or knowledge, and I’m not using that information against you, then it’s hard to say that you suffered a harm. You might even self-report that everything is fine and unchanged from the moment before. Nonetheless, you were still wronged—I violated your right to privacy, even if you didn’t know it. Had I asked, you wouldn’t have given me permission to look.
Now, a person can also be harmed without being wronged: if we were boxing, and I knocked your tooth out with an ordinary punch, that’s certainly a harm, but I wasn’t wrong to do it—it was within the bounds of boxing’s rules, and so you couldn’t plausibly sue me or have me arrested. You had agreed to box me, and you understood that boxing carries a risk of harm. Thus, you suffered the harm willingly, even if you would have preferred not to.
Back to robots, a BDSM robot would seem to inflict harm onto you, but if you had requested this, then it wasn’t wrongfully done. If the robot were to take it too far, despite your protests and without good reason (as a parent of a child with a cavity might have), then it’s wrongfully harming you because it’s violating your autonomy or wishes. In fact, it’d be doubly wrong, since it violates Asimov’s Second Law of Robotics: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
But assuming the robot is doing what you want, the pain inflicted is harm only in a technical and temporary sense, not in the commonsense way that the First Law should be understood. A computer, of course, can’t read our minds to figure out what we really mean—it can only follow its programming. Ethics is often too squishy to lay out as a precise decision-making procedure, especially given the countless variables and variations around a particular action or intent. And that’s exactly what gives rise to drama in Asimov’s stories.
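To make that squishiness concrete, here is a toy check that folds consent and the net-harm tally from above into the First and Second Laws. It is a hypothetical sketch, and its brittleness (notice that it cannot express the parent-with-a-good-reason exception) is precisely the point:

```python
def robot_may_act(consented: bool, protesting: bool, net_harm_score: float) -> bool:
    """Toy 'commonsense' First Law layered over the Second Law: obey the
    human's request unless acting would *wrongfully* harm them, i.e.,
    harm that is unconsented, currently protested, or not outweighed."""
    if protesting:       # consent withdrawn mid-act: continuing is wrongful,
        return False     # even if some tally calls it "for their own good"
    if not consented:    # unconsented harm is wrongful...
        return net_harm_score <= 0   # ...unless it is no net harm at all
    return True          # consensual, welcomed "harm" is no First Law breach

# A requested spank: painful, consensual, outweighed by pleasure.
print(robot_may_act(consented=True, protesting=False, net_harm_score=-3.0))  # True
# The robot "takes it too far" despite protests: it must stop.
print(robot_may_act(consented=True, protesting=True, net_harm_score=-1.0))   # False
```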
What about the Zeroth Law?
Ok, maybe a BDSM robot could, in principle, comply with Asimov’s First Law to not harm humans, if the directive is properly coded (and the machine is capable enough to pick up our social cues, an entirely separate issue). Machine learning could help an AI grasp nuanced, ineffable concepts, such as harm, without our explicitly unpacking them; but there’d still be the problem of verifying what the AI has learned, which still requires a firm understanding of the concept on the human side, at least.
But what about Asimov’s subtly different “Zeroth Law”, which supersedes the First Law? This Law focuses on the population scale, not the individual scale, stating that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
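In code-sketch terms, and again purely hypothetically, the Laws form a strict priority ordering, with the Zeroth Law as a veto above all the rest:

```python
def laws_permit(harms_humanity: bool, wrongfully_harms_person: bool,
                ordered_by_human: bool) -> bool:
    """Hypothetical precedence check: the Zeroth Law outranks the First,
    which outranks the Second (obedience to orders)."""
    if harms_humanity:            # Zeroth Law veto, population scale
        return False
    if wrongfully_harms_person:   # First Law veto, individual scale
        return False
    return ordered_by_human       # Second Law: otherwise, obey

# An act fine for the individual user but bad for humanity is vetoed at the top.
print(laws_permit(harms_humanity=True, wrongfully_harms_person=False,
                  ordered_by_human=True))  # False
```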
That becomes a much different conversation.
It could be that sex robots in general, and not just BDSM robots, would promote certain human desires that should not be indulged. For instance, if you think sex robots objectify women and even gamify relationships—some sex-bots require a certain sequence of moves or foreplay to get them to comply, a Konami code of sorts—then that might be bad for other people and humanity at large, even if not obviously harmful to the individual user. If sex robots become so compelling that a lot of people no longer have or need human partners, then that can also harm humanity.
On the other hand, many people are unable to form relationships, especially intimate ones. So, it might be better for them, and for humanity, if these folks had options, even if with robots (which may just be glorified sex toys). It’s also possible that sex-bots can help teach users about consent in their human relationships.
But it gets super-tricky when you consider that sex-bots have already been made to look like children. Is that a desire that should be indulged? Strong intuitions may point to “no”, but there’s not a lot of research on therapy for pedophiles. It’s possible that these kinds of robots could be enough to distract would-be predators from targeting actual humans. Or it could go the other way, fueling their dark desires and encouraging them to act those out in the real world.
The analysis of how various types of sex-bots may or may not comply with the Zeroth Law is much more complicated than we can offer here. Entire reports and books grapple with the ethics of sex robots right now, and this article is focused primarily on the First Law, which, again, has relevance far beyond just sex machines.
But the sex industry has long been a harbinger of things to come, and so it deserves our attention. After all, it pioneered technologies from photography to film to the Internet to virtual reality and more. Not just new media, but it also spurred the development of e-commerce—from online credit card purchases to the rise of cryptocurrencies—because nothing sells like the “world’s oldest profession.”
Back to programming, though the Laws of Robotics are really only a literary device, they’re still an important touchstone in robot ethics, and a tribute must be paid. They’re essentially a meme, and memes are very hard to dislodge once they’ve taken root. So, whether or not Asimov’s Laws are the right principles for AI and robotics, they’re a reasonable, maybe irresistible, starting point for a much larger conversation.
Originally published in the Oct. 15, 2018 issue of Forbes.