Would 'Deviant' Sex Robots Violate Asimov's Law of Robotics?

Patrick Lin

Like it or not, sex robots are already here, and someday they might hurt you, if you ask nicely. As they cater to an ever-increasing range of tastes, some folks predict BDSM models (bondage, discipline, and sadomasochism) in the bedroom of the future.

But wait, you might ask: wouldn’t these “deviant” or non-normative robots violate the basic robot-ethics principle not to hurt people?

Sci-fi writer Isaac Asimov gave us the First Law of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm. But sex-bots that spank, whip, and tie people up would seem to do exactly that.

Though it might seem silly, this discussion is actually relevant to AI and robotics in many other industries. What constitutes harm will be important for, say, medical and caretaking robots that may be instructed to “do no harm.”

Without that clarity, there’s no hope to translate the First Law into programming code that a robot or AI can properly follow.
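
To see the problem concretely, here’s a deliberately naive sketch in Python (all names hypothetical, not anyone’s actual implementation): the First Law is trivial to state as a guard clause, but everything hinges on a predicate nobody has defined.

    # A deliberately naive sketch of the First Law as a guard clause.
    # Hypothetical names throughout; the hard part is the undefined predicate.

    def first_law_permits(action, person) -> bool:
        """A robot may not injure a human being."""
        return not causes_harm(action, person)

    def causes_harm(action, person) -> bool:
        # But what counts as harm? Pain? Net long-term damage? Unwanted contact?
        # Defining this stub is the whole problem the rest of this piece examines.
        raise NotImplementedError("'harm' is not yet a well-defined concept")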

What is harm? A conceptual analysis.

As the First Law commands, a robot is prohibited from acting in a way that harms a human. For instance, whipping a person, even lightly, might hurt them or cause pain, and pain is usually an indication of harm or injury. Tying a person up tends to make them feel vulnerable and very uneasy, so we usually consider that a negative psychological effect and therefore a harm.

But this is true only if we understand “harm” in a naive, overly simplistic way. Sometimes, the more important meaning is net-harm. For example, it might be painful when a child must have a cavity drilled out or take some awful medicine—the kid might cry inconsolably and even say that she hates you—but we understand that this is for the child’s own good: in the long term, the benefits far outweigh the initial cost. So, we wouldn’t say that taking the kid to a dentist or doctor is “harming” her. We’re actually trying to save her from a greater harm.

This is easy enough for us to understand, but some obvious concepts are notoriously hard to reduce to lines of code. For one thing, determining harm may require considering a huge range of future effects in order to tally up the net result. This is an infamous problem for consequentialism, the moral theory that treats ethics as a math problem.
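
For what it’s worth, the consequentialist tally itself takes only a few lines, under the heroic assumption that future effects can be enumerated and scored as numbers (the values below are invented purely for illustration):

    # Net harm as a sum over projected effects: negative numbers are harms,
    # positive numbers are benefits. The scoring itself is the hard part.

    def net_harm(effects: list[float]) -> float:
        """Total harm of an action; a negative result means net benefit."""
        return -sum(effects)

    # The dentist example: pain now, a sound tooth later.
    drill_cavity = [-3.0, +10.0]   # drill hurts, but the tooth is saved
    skip_dentist = [+1.0, -15.0]   # comfortable today, worse decay later

    assert net_harm(drill_cavity) < net_harm(skip_dentist)

The enumeration problem is hiding in plain sight here: nothing in the code tells us when the list of effects is complete.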

Thus, any harm inflicted by a BDSM robot is presumably welcomed, because it’s outweighed by a greater pleasure experienced by the person. What’s also at play is the concept of “wrongful harm”—a harm that’s suffered unwillingly and inflicted without justification.

The difference between a wrong and a harm is subtle: if I snuck a peek at your diary without your permission or knowledge, and I’m not using that information against you, then it’s hard to say that you suffered a harm. You might even self-report that everything is fine and unchanged from the moment before. Nonetheless, you were still wronged—I violated your right to privacy, even if you didn’t know it. Had I asked, you wouldn’t have given me permission to look.

Now, a person can also be harmed without being wronged: if we were boxing, and I knocked your tooth out with an ordinary punch, that’s certainly a harm, but I wasn’t wrong to do it. It was within the bounds of boxing’s rules, so you couldn’t plausibly sue me or have me arrested. You had agreed to box me, and you understood that boxing carries a risk of harm. Thus, you suffered the harm willingly, even if you preferred not to.

Back to robots: a BDSM robot would seem to inflict harm on you, but if you had requested this, then it wasn’t wrongfully done. If the robot were to take it too far, despite your protests and without good reason (as a parent of a child with a cavity might have), then it’s wrongfully harming you because it’s violating your autonomy or wishes. In fact, it’d be doubly wrong, since it also violates Asimov’s Second Law of Robotics: a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
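
Pulling those threads together, the distinction can be sketched as a decision procedure (a toy model, with hypothetical names): a harm is wrongful only if it’s neither consented to nor otherwise justified, and an order is obeyed only when that refined First Law permits it.

    # A toy decision procedure for "wrongful harm": harm inflicted without
    # consent and without justification. All names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Action:
        causes_harm: bool   # does it hurt, in the naive sense?
        consented: bool     # did the person request it, without withdrawing?
        justified: bool     # e.g., a parent authorizing the dentist's drill

    def wrongfully_harms(a: Action) -> bool:
        return a.causes_harm and not (a.consented or a.justified)

    def second_law_obeys(order: Action) -> bool:
        """Obey orders, except where they conflict with the (refined) First Law."""
        return not wrongfully_harms(order)

    spank_on_request = Action(causes_harm=True, consented=True, justified=False)
    ignore_protests  = Action(causes_harm=True, consented=False, justified=False)

    assert second_law_obeys(spank_on_request)      # harm, but not wrongful
    assert not second_law_obeys(ignore_protests)   # wrongful: violates autonomy

Note that the diary case fits too: no harm at all, so no wrongful harm, even though a wrong was done, which is exactly the subtlety the code can’t see.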

But assuming the robot is doing what you want, the pain it inflicts is harm only in a technical and temporary sense, not harm in the commonsense way that the First Law should be understood. A computer, of course, can’t read our minds to figure out what we really mean; it can only follow its programming. Ethics is often too squishy to lay out as a precise decision-making procedure, especially given the countless variables and variations around a particular action or intent. And that’s exactly what gives rise to drama in Asimov’s stories.


What about the Zeroth Law?

OK, maybe a BDSM robot could, in principle, comply with Asimov’s First Law to not harm humans, if the directive is properly coded (and the machine is capable enough to pick up on our social cues, an entirely separate issue). Machine learning could help an AI grasp nuanced, ineffable concepts, such as harm, without our explicitly unpacking them; but there’d still be the problem of verifying what the AI has learned, which requires a firm understanding of the concept on the human side, at least.
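
One plausible shape for that verification step, sketched with invented names and data: hold out cases labeled by human judges and measure how often the learned model agrees with them. Which, of course, only works if the judges already grasp the concept themselves.

    # Sketch of verifying a learned "harm" classifier: agreement with human
    # judges on held-out cases. Data and names are hypothetical, and the
    # judges must already understand harm for the score to mean anything.

    def verify(model, held_out) -> float:
        """Fraction of human-labeled cases on which the model agrees."""
        hits = sum(1 for case, human_label in held_out if model(case) == human_label)
        return hits / len(held_out)

    held_out = [
        ({"pain": True, "consented": True},  "not_wrongful"),  # requested spanking
        ({"pain": True, "consented": False}, "wrongful"),      # ignored protests
    ]

    def toy_model(case):
        return "wrongful" if case["pain"] and not case["consented"] else "not_wrongful"

    print(verify(toy_model, held_out))   # 1.0 on this tiny, contrived set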

But what about Asimov’s subtly different “Zeroth Law,” which supersedes the First Law? This law focuses on the population scale rather than the individual scale: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

That becomes a much different conversation.
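
In terms of precedence, at least, the structure is simple: the Zeroth Law sits above the First, which sits above obedience. A sketch of that ordering (hypothetical, and glossing over the enormous question of how humanity-scale harm could ever be measured):

    # Asimov's laws as ordered vetoes: the Zeroth Law (humanity-scale)
    # outranks the First (individual-scale), which outranks obedience.
    # Measuring "harm to humanity" is left wide open, which is the point.

    def action_allowed(harms_humanity: bool, wrongfully_harms_person: bool) -> bool:
        if harms_humanity:             # Zeroth Law veto
            return False
        if wrongfully_harms_person:    # First Law veto (refined sense, above)
            return False
        return True                    # Second Law: obey the order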

It could be that sex robots in general, and not just BDSM robots, would promote certain human desires that should not be indulged. For instance, if you think sex robots objectify women and even gamify relationships—some sex-bots require a certain sequence of moves or foreplay to get them to comply, a Konami code of sorts—then that might be bad for other people and humanity at large, even if not obviously harmful to the individual user. If sex robots become so compelling that a lot of people no longer have or need human partners, then that can also harm humanity.

On the other hand, many people are unable to form relationships, especially intimate ones. So, it might be better for them, and for humanity, if these folks had options, even if with robots (which may just be glorified sex toys). It’s also possible that sex-bots can help teach users about consent in their human relationships.

But it gets super-tricky when you consider that sex-bots have already been made to look like children. Is that a desire that should be indulged? Strong intuitions may point to “no,” but there’s not a lot of research on therapy for pedophiles. It’s possible that these kinds of robots could be enough to distract would-be predators from targeting actual humans. Or it could go the other way, fueling dark desires that get acted out in the real world.

The analysis of how various types of sex-bots may or may not comply with the Zeroth Law is far more complicated than we can offer here. Entire reports and books grapple with the ethics of sex robots right now, and this article is focused primarily on the First Law, which, again, has relevance far beyond sex machines.

But the sex industry has long been a harbinger of things to come, and so it deserves our attention. After all, it pioneered technologies from photography to film to the Internet to virtual reality and more. Beyond new media, it also spurred the development of e-commerce, from online credit-card purchases to the rise of cryptocurrencies, because nothing sells like the “world’s oldest profession.”

Back to programming, though the Laws of Robotics are really only a literary device, they’re still an important touchstone in robot ethics, and a tribute must be paid. They’re essentially a meme, and memes are very hard to dislodge once they’ve taken root. So, whether or not Asimov’s Laws are the right principles for AI and robotics, they’re a reasonable, maybe irresistible, starting point for a much larger conversation.

Originally published in the Oct. 15, 2018 issue of Forbes.