[Robert J. Sawyer] Science Fiction Writer
Hugo and Nebula Winner



On Asimov's Three Laws of Robotics

by Robert J. Sawyer

Copyright © 1991 and 1994 by Robert J. Sawyer
All Rights Reserved.

Isaac Asimov's Three Laws of Robotics
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence, except where such protection would conflict with the First or Second Law.
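Read together, the Laws amount to a strict priority ordering over the consequences of an action. Purely as an illustration — the class and attribute names below are my own invention, not anything Asimov specified or real robotics uses — that ordering can be sketched as a lexicographic comparison:

```python
from dataclasses import dataclass

# Illustrative sketch only: the Three Laws as a strict lexicographic
# priority over an action's consequences. The boolean attributes are
# hypothetical idealizations; no real system computes "harms a human".

@dataclass
class Action:
    harms_a_human: bool = False        # First Law concern (action or inaction)
    disobeys_an_order: bool = False    # Second Law concern
    endangers_the_robot: bool = False  # Third Law concern

def choose(options):
    """Pick the action the Laws prefer: avoid harming humans above all,
    then avoid disobeying humans, then avoid destroying oneself."""
    return min(options, key=lambda a: (a.harms_a_human,
                                       a.disobeys_an_order,
                                       a.endangers_the_robot))
```

Because the comparison is lexicographic, refusing an order beats obeying an order to harm someone (the First Law overrides the Second), and self-sacrifice beats disobedience (the Second overrides the Third).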

People reading my novel Golden Fleece keep asking me: what about Isaac Asimov's Three Laws of Robotics? Aren't they guiding modern artificial-intelligence research?

Nope, they're not. First, remember that Asimov's "Laws" are hardly laws in the sense that physical laws are; rather, they're cute suggestions that made for some interesting puzzle-oriented stories half a century ago. I honestly don't think they will be applied to future computers or robots. We have lots of computers and robots today, and not one of them has even the rudiments of the Three Laws built in. It's extraordinarily easy, after all, for "equipment failure" to result in human death, in direct violation of the First Law.

Asimov's Laws assume that we will create intelligent machines full-blown, out of nothing, and thus be able to impose a series of constraints across the board. Well, that's not how it's happening. Instead, we are approaching artificial intelligence by small degrees, and, as such, nobody is really implementing fundamental safeguards.

Take Eliza, the first computer psychiatric program. There is nothing in its logic to make sure that it doesn't harm the user in an Asimovian sense, by, for instance, re-opening old mental wounds with its probing. Now, we can argue that Eliza is way too primitive to do any real harm, but then that means someone has to say arbitrarily, okay, that attempt at AI requires no safeguards but this attempt does. Who would that someone be?
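Eliza's whole method was shallow keyword matching: spot a phrase, echo it back as a question, and keep probing. A toy responder in that style — my own sketch, not Joseph Weizenbaum's actual program — makes the point concrete: nowhere in the loop is there any check on whether the topic being probed might hurt the user.

```python
import re

# Toy ELIZA-style responder (not Weizenbaum's actual code). It probes
# about mothers and feelings whether or not the topic is traumatic;
# there is no Asimovian harm check anywhere in the loop.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmy (mother|father) (.*)", "Tell me more about your {0}."),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
]

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = re.search(pattern, line, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."
```

For example, `respond("my mother hates me")` cheerfully invites more talk about the user's mother, with no idea whether that wound should be reopened.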

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco, automotive, and nuclear industries. Not one of them said from the outset that fundamental safeguards were necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)

Indeed, given that a huge amount of AI and robotics research is underwritten by the military, it seems that there will never be a general "law" against ever harming human beings. The whole point of the exercise, at least from the funders' point of view, is to specifically find ways to harm those human beings who happen to be on "the other side."

We already live in a world in which Asimov's Three Laws of Robotics have no validity, a world in which every single computer user is exposed to radiation that is considered at least potentially harmful, a world in which machines replace people in the workplace all the time. (Asimov's First Law would prevent that: taking away someone's job absolutely is harm in the Asimovian sense, and therefore a "Three Laws" robot could never do that, but, of course, real robots do it all the time.)

So, what does all this mean? Where's it all going? Ah, that I answer at length — in Golden Fleece.

More Good Reading

  • Rob's editorial from Science on Robot Ethics
  • Rob's interview with Isaac Asimov
  • Rob's thoughts about the future of artificial intelligence
  • A dialog on Ray Kurzweil's The Age of Spiritual Machines
  • On Bill Joy's "Why the Future Doesn't Need Us"



    Copyright © 1995-2024 by Robert J. Sawyer.