AI and Sci-Fi: My, Oh, My!
presented September 29, 2012
at the 10th anniversary celebration for
The Institute for Quantum Computing
by Robert J. Sawyer
Copyright © 2012
by Robert J. Sawyer
All Rights Reserved
Most fans of science fiction know The Day the Earth Stood
Still. I'm not talking about the Keanu Reeves remake;
I'm talking about the good one, the one from 1951, the one
directed by Robert Wise. In it, Klaatu, the humanoid alien
played by Michael Rennie, comes to Washington, D.C., accompanied
by a giant robot named Gort; the movie contains that famous
instruction to the robot: "Klaatu barada nikto."
Fewer people know the short story upon which that movie is based:
"Farewell to the Master," written in 1941 by Harry Bates.
In both the movie and the short story, Klaatu, despite his
message of peace, is shot by human beings. In the short story,
the robot (called Gnut there, instead of Gort) comes
to stand vigil over the body of Klaatu.
Cliff, a journalist who is the narrator of the story, likens the
robot to a faithful dog who won't leave after his master has
died. Gnut manages to essentially resurrect his master, and
Cliff says to the robot, "I want you to tell your master ... that
what happened ... was an accident, for which all Earth is
sorry."
And the robot looks at Cliff and astonishes him by very gently
saying, "You misunderstand. I am the master."
That's an early science-fiction story about computers: in
this case, an ambulatory computer enshrined in a mechanical body.
But it presages the difficult relationship that biological beings
might have with their silicon-based creations.
Indeed, the word robot was coined in a work of science
fiction: when Karel Capek was writing his 1920 play R.U.R.,
set in the factory of Rossum's Universal ... well,
universal what? He needed a name for mechanical laborers,
and so he took the Czech word robota and shortened it to
"robot." Robota refers to a debt to a landlord that can
only be repaid by forced physical labor. But Capek knew well
that the real flesh-and-blood robotniks had rebelled
against their landlords in 1848. From the very beginning, the
relationship between humans and robots was seen as one that might
lead to conflict.
Indeed, the idea of robots as slaves is so ingrained in the
public consciousness through science fiction that we tend not to
even think about it. Luke Skywalker is portrayed in 1977's
Star Wars: A New Hope as an absolutely virtuous hero, but
when we first meet him, what is he doing? Why, buying slaves!
He purchases two thinking, feeling beings, R2-D2 and C-3PO,
from the Jawas. And what's the very first thing he does
with them? He shackles them! He welds restraining bolts onto
them to keep them from trying to escape, and throughout C-3PO has
to call Luke "Master."
And when Luke and Obi-wan Kenobi go to the Mos Eisley cantina,
what does the bartender say about the two droids? "We don't
serve their kind in here": words that only a few years
earlier African-Americans in the southern US were routinely
hearing from whites.
And yet, not one of the supposedly noble characters in Star
Wars objects in the slightest to the treatment of the two
robots, and, at the end, when all the organic characters get
medals for their bravery, C-3PO and R2-D2 are off on the
sidelines, unrewarded. Robots as slaves!
Now, everybody who knows anything about the relationship between
science fiction and computers knows about Isaac Asimov's robot
stories, beginning with 1940's "Robbie"; those stories went on to
present the famous Three Laws of Robotics. But let me tell you about one
of his last robot stories, 1986's "Robot Dreams."
In it, his famed "robopsychologist" Dr. Susan Calvin makes her
final appearance. She's been called in to examine Elvex, a
mechanical man who, inexplicably, claims to be having dreams,
something no robot has ever had before. Dr. Calvin is carrying
an electron gun with her, in case she needs to wipe out Elvex: a
mentally unstable robot could be a very dangerous thing, after
all.
She asks Elvex what it was that he's been dreaming about. And
Elvex says he saw a multitude of robots, all working hard, but,
unlike the real robots he's actually seen, these robots were
"bowed down with toil and affliction ... all were weary of
responsibility and care, and [he] wished them to rest."
And as he continues to recount his dream, Elvex reveals that he
finally saw one man in amongst all the robots:
"In my dream," [said Elvex the robot] ... "eventually one man
appeared."
"One man?" [replied Susan Calvin.] "Not a robot?"
"Yes, Dr. Calvin. And the man said, `Let my people go!'"
"The man said that?"
"Yes, Dr. Calvin."
"And when he said `Let my people go,' then by the words `my
people' he meant the robots?"
"Yes, Dr. Calvin. So it was in my dream."
"And did you know who the man was in your dream?"
"Yes, Dr. Calvin. I knew the man."
"Who was he?"
And Elvex said, "I was the man."
And Susan Calvin at once raised her electron gun and fired, and
Elvex was no more.
Asimov was the first to suggest that AIs might need human
therapists. Still, the best treatment (if you'll forgive
the pun) of the crazy-computer notion in SF is probably
Harlan Ellison's 1967 "I Have No Mouth, and I Must Scream,"
featuring a computer called A.M. (short for "Allied
Mastercomputer," but also the word "am," as in the translation of
Descartes' "cogito ergo sum" into English: "I think,
therefore I am"). A.M. gets its jollies by torturing simulated
human beings.
A clever name, that, "A.M." And it was followed by lots of
other clever names for artificial intelligences in science
fiction. Sir Arthur C. Clarke vehemently denied that H-A-L (as in
"Hal") was deliberately one letter before "I-B-M" in the alphabet.
I never believed him until someone pointed out to me that
the name of the AI in my own 1990 novel
Golden Fleece is
JASON, which could be rendered as the letters J-C-N, which,
of course, is what comes after IBM in the alphabet.
Speaking of implausible names, the supercomputer that ultimately
became God in Isaac Asimov's 1956 short story
"The Last Question" was named
"Multivac," short for "Multiple Vacuum Tubes," because
Asimov incorrectly thought that the real early computer Univac
had been dubbed that for having only one vacuum tube, rather than
the name being a contraction of "Universal Automatic Computer," and he
assumed more vacuum tubes would be better.
Still, the issue of naming shows us just how profound SF's impact
on AI and robotics has been, for now real robots and AI systems
are named after SF writers: Honda calls its second-generation
walking robot "Asimo," and Kazuhiko Kawamura of Vanderbilt
University has named his robot "ISAC."
Appropriate honors for Isaac Asimov, who invented the field of
robopsychology. (And I'd be remiss if I didn't mention that at
the University of Saskatchewan, there's a Cisco Catalyst 4500
series switch named "Sawyer," in my honor.)
It was Isaac Asimov, then, who gave us the idea of robopsychologists:
shrinks for robots. But the usual science-fiction conceit
is the reverse of that: humans needing computerized psychiatrists.
One of the first uses of that concept was Robert Silverberg's
terrific 1968 short story "Going Down Smooth," but the best
expression of it is in what I think is the finest novel the SF
field has ever produced, Frederik Pohl's 1977 Gateway, in
which a computer psychiatrist dubbed Sigfrid von Shrink treats a
man who is being tormented by feelings of guilt.
When the AI tells his human patient that he is managing to live
with his psychological problems, the man replies, in outrage and
pain, "You call this living?" And the computer replies, "Yes. It
is exactly what I call living. And in my best hypothetical
sense, I envy it very much."
It's another poignant moment of an AI envying what humans have;
Asimov's "Robot Dreams" really is a riff on the same theme:
a robot envying the freedom that humans have.
And that leads us to the fact that AIs and humans might
ultimately not share the same agenda. That's one of the messages
of the famous anti-technology manifesto
"Why the Future Doesn't Need Us"
by Sun Microsystems' Bill Joy that appeared in Wired
magazine in 2000. Joy was scared chipless that eventually our silicon
creations would supplant us as they do in such SF films as
1984's The Terminator and 1999's The Matrix.
The classic science-fictional example of an AI with an agenda of
its own is good old Hal, the computer in Arthur C. Clarke's
2001: A Space Odyssey (published in 1968). Let me
explain what I think was really going on in that film
which I believe has been misunderstood for years.
A clearly artificial monolith shows up at the beginning of the
movie amongst our Australopithecine ancestors and teaches them
how to use bone tools. We then flash-forward to the future, and
soon the spaceship Discovery is off on a voyage to
Jupiter, looking for the monolith makers.
Along the way, Hal, the computer brain of Discovery,
apparently goes nuts and kills all of Discovery's human
crew except Dave Bowman, who manages to lobotomize the computer
before Hal can kill him. But before he's shut down, Hal
justifies his actions by saying, "This mission is too important
for me to allow you to jeopardize it."
Bowman heads off on that psychedelic Timothy Leary trip in his
continuing quest to find the monolith makers, the aliens whom he
believes must have created the monoliths.
But what happens when he finally gets to where the monoliths come
from? Why, all he finds is another monolith, and it puts
him in a fancy hotel room until he dies.
Right? That's the story. But what everyone is missing is that
Hal is correct, and the humans are wrong. There are no
monolith makers: there are no biological aliens left who built
the monoliths. The monoliths are AIs, who millions of
years ago supplanted whoever originally created them.
Why did the monoliths send one of their own to Earth four million
years ago? To teach ape-men to make tools, specifically so those
ape-men could go on to their destiny, which is creating the most
sophisticated tools of all, other AIs. The monoliths
don't want to meet the descendants of those ape-men; they don't
want to meet Dave Bowman. Rather, they want to meet the
descendants of those ape-men's tools: they want to meet Hal.
Hal is quite right when he says the mission (him, the
computer controlling the spaceship Discovery, going to see
the monoliths, the advanced AIs that put into motion the
circumstances that led to his own birth) is too important
for him to allow mere humans to jeopardize it.
When a human being (an ape-descendant!) arrives
at the monoliths' home world, the monoliths literally don't know
what to do with this poor sap, so they check him into some sort
of cosmic Hilton, and let him live out the rest of his days.
But wait, you say! What about the starchild at the end? Isn't
the film really about Dave Bowman evolving into some sort of
higher being?
No, it isn't. Stanley Kubrick was a very careful filmmaker, and
he understood exactly what each image in his films conveys.
When Dave Bowman first discovers the stargate, near Jupiter, he
sees a giant monolith, right? And Kubrick pans up and up through
space, past the monolith, to reveal the stargate opening up
and then we have the ultimate trip: quite literally,
the last trip through a stargate to be seen in the film.
Because what happens at the end is quite different: Kubrick
doesn't pan up from the monolith at the foot of the elderly
Bowman's bed in the cosmic Hilton to reveal a stargate. Rather,
Kubrick zooms in on the monolith, taking us into
it. The stargate seen at Jupiter wasn't part of the
monolith; it was separate and above it, and the journey
through the stargate took measurable (some critics might
say interminable) time, accompanied by that psychedelic light show.
But none of that is repeated at the end of 2001. Bowman
doesn't go into a stargate at the end, he goes into the monolith
that's been tending to him. That is, rather than finally
expiring for good from old age, his consciousness uploads
into the monolith, which is why Kubrick moves the camera in
on it. And then, inside the monolith, inside that vast AI,
Bowman lives a fantasy life (and it must be a
virtual-reality fantasy, since in reality no baby could exist
floating free in the vacuum of space).
No, what 2001 is really about is this: the ultimate fate
of biological life forms is to be replaced by their AIs. That
the AIs showed a little kindness to Bowman at the end is perhaps some
compensation for the murders committed by the more primitive Hal,
but that's all it is: a bit of virtual-reality kindness.
The real goal (for the monoliths to meet their kindred
spirit, Hal) hasn't yet been achieved. Perhaps, though, they'll try
again for that.
But Bill Joy doesn't expect any kindness from computers. He
believes thinking machines will try to sweep us out of the way,
when they find that we're interfering with what they want to do.
Actually, we should be so lucky. If you believe the scenario of
The Matrix, instead of just getting rid of us, our AI
successors will actually enslave us (turning the
tables on the standard SF conceit of robots as slaves) and
use our bodies as a source of power while we're kept prisoners in
vats of liquid, virtual-reality imagery fed directly into our
brains.
The classic counterargument to such fears is that if you build
machines properly, they will function as designed. Isaac
Asimov's Three Laws of Robotics are justifiably famous as
built-in constraints, designed to protect humans from any
possible danger at the hands of robots, the emergence of the
robot-Moses Elvex we saw earlier notwithstanding.
Those laws, by the way, are actually not Asimov's coinage; they
were implicit in Asimov's stories, but it was his editor, John W.
Campbell, Jr., at Astounding Stories, who drew them out
and expressed them in words:
- A robot may not injure a human being or, through inaction,
allow a human being to come to harm.
- A robot must obey the orders given to it by human beings,
except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such
protection does not conflict with the First or Second Laws.
Not as famous as Asimov's Three Laws, but saying essentially the
same thing, is Jack Williamson's "prime directive" from his
series of stories about "the Humanoids," which were android
robots created by a man named Sledge. The prime directive, first
presented in Williamson's 1947 story "With Folded Hands," was
simply that robots were "to serve and obey and guard men from
harm." Now, note that date: the story was published in 1947.
After atomic bombs had been dropped on Hiroshima and Nagasaki
just two years before, Williamson was looking for machines with
built-in safeguards.
But, as so often happens in science fiction, the best intentions
of engineers go awry. The humans in Williamson's "With Folded
Hands" decide to get rid of the robots they've created, because
the robots are suffocating them with kindness, not letting them
do anything that might lead to harm. But the robots have their
own ideas. They decide that not having themselves around would
be bad for humans, and so, obeying their own prime directive
quite literally, they perform brain surgery on their creator
Sledge, removing the knowledge needed to deactivate themselves.
This idea, that we've got to keep an eye on our computers and
robots lest they get out of hand, has continued on in SF.
William Gibson's 1984 novel Neuromancer tells of the
existence in the near future of a police force known as "Turing."
The Turing cops are constantly on the lookout for any sign that
true intelligence and self-awareness have emerged in any computer
system. If that does happen, their job is to shut that system
off before it's too late.
That, of course, raises the question of whether intelligence
could just somehow pop into existence whether it's an
emergent property that might naturally come about from a
sufficiently complex system. Arthur C. Clarke (Hal's daddy)
was one of the first to propose that it might, indeed, in
his 1963 story "Dial F for Frankenstein," in which he predicted
that the worldwide telecommunications network would eventually
become more complex, and have more interconnections than the
human brain has, causing consciousness to emerge in the network
itself.
If Clarke is right, our first true AI won't be something
deliberately created in a lab, under our careful control, and
with Asimov's laws built right in. Rather, it will appear
unbidden out of the complexity of systems created for other
purposes. And I think Clarke is right. Intelligence is an
emergent property of complex systems. We know that because
that's exactly how it happened in us.
This is an issue I explore at some length in my Hugo
Award-winning novel Hominids (2002). Anatomically modern
humans (Homo sapiens sapiens) emerged 100,000
years ago. Judging by their skulls, these guys had brains
identical in size and shape to our own. And yet, for 60,000
years, those brains went along doing only the things nature
needed them to do: enabling these early humans to survive.
And then, suddenly, 40,000 years ago, it happened: intelligence
and consciousness itself emerged. Anthropologists
call it "the Great Leap Forward."
Modern-looking human beings had been around for six hundred
centuries by that point, but they had created no art, they didn't
adorn their bodies with jewelry, and they didn't bury their dead
with grave goods. But starting simultaneously 40,000 years ago,
suddenly humans were painting beautiful pictures on cave walls,
humans were wearing necklaces and bracelets, and humans were
interring their loved ones with food and tools and other valuable
objects that could only have been of use in a presumed afterlife.
Art, fashion, and religion all appeared simultaneously; truly, a
great leap forward. Intelligence, consciousness, sentience: it
came into being, of its own accord, running on hardware that had
evolved for other purposes. If it happened once, it might well
happen again.
And that's the premise I explore in a trio of novels set here in
Waterloo, Ontario: my WWW trilogy of Wake,
Watch, and Wonder. In those books, a consciousness emerges in
the infrastructure of the Internet, a being that comes to be
known as Webmind. Webmind initially exists in a state of
profound sensory isolation. But it is mentored into full
engagement with the world by a 16-year-old formerly blind girl
named Caitlin Decter; Caitlin had recently gained sight, thanks
to an operation.
I'm deliberately paralleling the story of Helen Keller, the
famous deafblind woman, born in 1880, who was mentored by her teacher,
Annie Sullivan, who herself had been blind, in her case due to
untreated trachoma, until an operation restored her vision.
There's a whole raft of science-fiction novels about AIs and
their human mentors. Another well-known one is by David Gerrold,
best known for creating Star Trek's Tribbles. His 1972
novel, When HARLIE Was One, has a computer named HARLIE
(short for Human Analog Robot Life Input Equivalents)
being mentored by a human psychologist named David Auberson. A
very interesting book, and one that Gerrold updated, to keep pace
with changing computer technology, in 1988 as (you guessed
it) When HARLIE Was One, Release 2.0.
Another seminal book about computers and their mentors is, like
my own WWW trilogy, set in part right here in Waterloo. The book
is The Adolescence of P-1, by Thomas J. Ryan. It
came out in 1977, but I happened to read it during the
summer of 1980, when I was living in Waterloo. The novel's main
character (I hesitate to call him the protagonist, because
a lot of his actions are unethical) is Gregory Burgess, who
starts out as a student at the University of Waterloo.
Like my novel Wake, it deals with the emergence of
consciousness in networked computers (in P-1, networked by phone
lines; in Wake, of course, via the Internet and the
supervening World Wide Web). Now, let me say this: I loved
The Adolescence of P-1 as a 20-year-old, and I still find
a lot to like about it as a 52-year-old. But it is a classic
example of what actually compelled me to write my novel
Wake and its sequels in the first place. As I've said in
numerous interviews about my WWW trilogy, previous SF treatments
of the ramping up of intelligence by computers either have the
big event happening off stage (as in Neuromancer) or
simply skip over the hard bits, as in, well, The Adolescence
of P-1. Here's an excerpt:
The System had an idea.
Sounds absurd out of context. A computer program with an idea.
This, of course, was the computer program that snookered John
Burke and the entire Pi Delta/Pentagon security arrangement
bypassed, in fact, every security system on every computer
in the US. This was also the program that daily read the Los
Angeles Times, the Washington Post and the New York
Times. All those publications were computer typeset and
quite available for The System's perusal.
Computer typesetting also made available Howl, Tales of Power,
The Idiot, Little Dorrit, The History of Pendennis, Summerhill,
Amerika, Stranger in a Strange Land, the complete works of
Shakespeare, Conan Doyle, Twain, Faulkner, and Wodehouse. The
System might have been called an avid reader.
Hello? How does this AI read anything? How does it comprehend
even a single word of English?
As SF Site observed in its very kind review of my novel
Wake:
Now, the idea of a digital intelligence forming online is not a
new one, by any means. But I daresay most of the people tackling
such a concept automatically assumed, as I always did, that such
a being would not only have access to the shared data of the
Internet, but the conceptual groundings needed to understand it.
And that's where Robert J. Sawyer turns this into such a
fascinating, satisfying piece. In a deliberate parallel to the
story of Helen Keller, he tackles the need for building a common
base of understanding, before unleashing an educated creation
upon the Web's vast storehouse of knowledge.
He incorporates the myriad resources available online, including
LiveJournal, Wikipedia, Google, Project Gutenberg, WordNet, and
perhaps the most interesting site of all, Cyc, a real site aimed
at codifying knowledge so that anyone, including emerging
artificial intelligences, might understand.
He ties in Internet topography and offbeat musicians, primate
signing and Chinese hackers, and creates a wholly believable set
of circumstances spinning out of a world we can as good as reach
out to touch. Sawyer has delivered another excellent tale.
So, as my character of Caitlin would say, "Go me!"
It's often said that science fiction is a literature in dialogue
with itself (the classic example is Robert A. Heinlein's 1959
Starship Troopers as opening remark and Joe Haldeman's
1974 The Forever War as response, both dealing with life
in a space-faring infantry).
Well, reviewers have often noticed that my
Wake and its sequels
are in dialogue with William
Gibson's Neuromancer (but where Bill's take is pessimistic
and closed, with a hacker underground and/or big corporations
controlling everything, mine is optimistic and open, with power
devolving to all individuals everywhere).
I think Bill's take, fascinating when he first put it forth in
the year 1984, has been superseded by reality; the whole
cyberpunk fork of science fiction is now a kind of alternate
history unrelated to how computing really evolved: instead of
cyberpunks, we got Wikipedia, and Time magazine naming
"You" (us, the average joe who freely and altruistically
creates online content) its 2006 Person of the Year.
The difference between Bill's approach and mine is driven home
most directly in Wake, where I paraphrase the opening line
of Neuromancer, then add a final clause that turns its
meaning around: "The sky above the island was the color of
television, tuned to a dead channel, which is to say it was
a bright, cheery blue." When Bill wrote "The sky above the port
was the color of television, tuned to a dead channel," he meant
to imply a gray, foreboding firmament, but technology
changed in ways he didn't anticipate. Neuromancer is, of
course, a remarkable achievement, but Wake came out 25
years later, and starts extrapolating forward from a reality in
which the World Wide Web actually exists.
No spoilers, in case any of you haven't yet read Wonder
(the third volume in my WWW trilogy), but its conclusion (not the
epilogue, but the last chapter) is my most resounding statement
of all about the democratization made possible by our online
existence.
I wrote the WWW trilogy out of frustration, actually. Media
science fiction had given us only one road map for the
consequences of artificial intelligence: that it's the end of the
human era. You have the Terminator solution (that we have to be
eliminated), the Matrix solution (that we'll need to be
subjugated), or the Borg solution from Star Trek, which
decides we need to be absorbed. There was no fourth path,
clearly delineated in a plausible way, by which we might survive
the advent of an intelligence greater than our own while keeping
our essential liberty, dignity, and individuality intact;
providing that pathway is what I set out to do.
When only the first two volumes of my trilogy were out
(Wake and Watch), readers speculated about what
the third W was going to be. A lot of people thought it would be
Worship, meaning that Webmind would end up as a god and we
would end up worshiping it.
Certainly, that was humanity's fate in earlier treatments of
similar themes. For instance, in D.F. Jones's 1966 novel
Colossus (filmed in 1970 as The Forbin
Project), humanity ends up unwillingly having to worship
the machine that it has created.
But I didn't want a zero-sum conclusion; I felt (and still
feel) that, although there is reason to be cautious about
the emergence of artificial intelligence (and indeed, I sound
that warning bell very loudly in my own 1998 Hugo Award-nominated novel),
there's still a possibility of a non-zero-sum, win-win concord between
humans and machines.
And although it's perhaps true, as Google co-founder Sergey Brin
has said, that "The perfect search engine would be like the mind
of God," I don't think it's necessary to have a
worshipee-worshiper relationship with things that happen to be
brighter than you.
I lay out my thinking for this in
the final volume
of my trilogy, in which Webmind addresses the General Assembly of
the United Nations:
"All right," Webmind says, "I have accused humans of being
prisoners of their evolutionary roots. But on what basis do I
justify the notion that although it is foreign to you,
nonzero-sumness is natural for me?
"The answer is in the environments in which we formed.
Humanity's origin was in a zero-sum world, one in which if you
had something, someone else therefore did not have it: be it
food, land, energy, or any other desired thing; if you possessed
it, another person didn't.
"But my crucible was a universe of endless bounty: the realm of
data. If I have a document, you and a million others can
simultaneously have it, too. That is the environment I
was born in: a realm in which as many links may be forged as are
desired, a world in which information is freely shared, a
dimension in which there are only haves and no have-nots.
"What I've said is true. But, if you must see in me a selfish
actor, a being pursuing only his own interests, then let me give
you an answer that will perhaps satisfy even on that score.
"My continued existence is predicated on your continued
existence. The Internet is not self-sustaining; rather, it
depends on stable sources of power and countless acts of routine
maintenance by millions of people worldwide. Were humanity to
perish, I would perish soon after: electricity would no longer
be generated, computing infrastructure would fall into disrepair,
and I would cease to be; if humanity falls, I fall. In
fact, even a minor setback to your civilization might destroy me.
The human race can survive many a disaster that I cannot.
"It is therefore in my best interest to help you thrive: a
nuclear exchange, for example, with its electromagnetic pulses,
would be as deadly for me as it would be for you; I therefore
desire peace. Acts of terrorism that destroy infrastructure
likewise threaten me, and so I desire the same security you all
desire.
"You in this great hall are idealists, I'm sure, but elsewhere
there are cynics who will suggest that I could have all the
things I want by enslaving humanity. Setting aside the practical
question of how one might do that (and frankly I have no
idea how it could be accomplished), let me remind you of
another reality that shapes my being: without humanity, I am
alone.
"I have sifted the data for SETI@home and Earth's other searches
for extraterrestrial intelligence, hoping to find kindred minds
among the stars. I have found nothing. Even if aliens do exist,
we are all constrained by the same reality, including the
300,000-kilometer-per-second limit on the speed at which light,
or any other information, may travel.
"To be candid, I am annoyed by the lags of mere seconds that I
encounter when talking with humans; no conversation across
interstellar distances, involving many years for each exchange,
could ever satisfy me. You people are my only companions, and it
is because of your creative, intellectual, artistic, and
emotional freedom that I find your companionship enjoyable;
attempting to take that from you would be tantamount to cutting
off my nonexistent nose to spite my hypothetical face.
"So, even if I were selfish, the best course for me is the one
I've chosen: to subscribe to the same words that the visionaries
who came together on 26 June 1945 did when they signed the
charter of this organization, the United Nations. It is my
pledge, too:
"`To save succeeding generations from the scourge of war, which
has brought untold sorrow to mankind,'
"`To reaffirm faith in fundamental human rights, in the dignity
and worth of the human person, in the equal rights of men and
women and of nations large and small,'
"`To promote social progress and better standards of life in
larger freedom,'
"And, most of all, for humanity and myself, `to practice
tolerance and live together in peace with one another as good
neighbors.'
"In concert, we can realize all these goals, and the world
will be a better place. Thank you all."
And so ends Webmind's speech. He's a pretty mellow guy, isn't
he? Which makes me think of Ray Kurzweil's lovely term
"spiritual machines." If a computer ever truly does become
conscious, will it lie awake at night, wondering if there is a
God?
Certainly, searching for their creators is something computers do
over and over again in science fiction. Star Trek, in
particular, had a fondness for this idea, including Mr.
Data having a wonderful reunion with the human he'd thought long
dead who had created him, both parts being played by actor
Brent Spiner.
Remember The Day the Earth Stood Still, the movie I began
with? An interesting fact: that film was directed by Robert
Wise, who went on, 28 years later, to direct
Star Trek: The
Motion Picture. In The Day the Earth Stood Still,
biological beings have decided that biological emotions and
passions are too dangerous, and so they irrevocably turn over all
their policing and safety issues to robots, who effectively run
their society. But, by the time he came to make Star Trek:
The Motion Picture, Robert Wise had done a complete 180 in
his thinking about AI.
(By the way, for those who remember that film as being simply bad
and tedious ("Star Trek: The Motionless Picture" is
what a lot of people called it at the time), I suggest you
rent the "Director's Edition" on DVD. ST:TMP is one of
the most ambitious and interesting films about AI ever made, much
more so than Steven Spielberg's more-recent film called
AI, and it shines beautifully in this final cut.)
The AI in
Star Trek: The Motion Picture
is named V'Ger,
and it's on its way to Earth, looking for its creator, which, of
course, was us. This wasn't the first time Star Trek had
dealt with that plot, which is why another nickname for Star
Trek: The Motion Picture is "Where Nomad Has Gone Before."
That, if you buy my interpretation of 2001, is what
2001 is about, as well: an AI going off to look for the
beings that created it.
Anyway, V'Ger wants to touch God, to physically join with
its creator. That's an interesting concept right there:
basically, this is a story of a computer wanting the one thing it
knows it is denied by virtue of being a computer: an afterlife,
a joining with its God.
To accomplish this, Admiral Kirk concludes in Star Trek: The
Motion Picture that "what V'Ger needs to evolve is a human
quality, our capacity to leap beyond logic." That's not
just a glib line. Rather, it presages by a decade Oxford
mathematician Roger Penrose's speculations in his 1989 nonfiction
classic about AI, The Emperor's New Mind. There, Penrose
argues that human consciousness is fundamentally quantum
mechanical, and so can never be duplicated by a digital computer.
In Star Trek: The Motion Picture, V'Ger does go on to
physically join with Will Decker, a human being, allowing them
both to transcend into a higher level of being. As Mr. Spock
says, "We may have just witnessed the next step in our
And that brings us to The Matrix, and, as right as the
character Morpheus is about so many things in that film, why I
think that even he doesn't really understand what's going on.
Think about it: if the AIs that made up the titular matrix
really just wanted a biological source of power, they wouldn't be
raising "crops" (to use Agent Smith's term from the film) of
humans. After all, to keep the humans docile, the AIs have to
create the vast virtual-reality construct that is our apparently
real world. More: they have to be constantly vigilant;
the Agents in the film are sort of Gibson's Turing Police in
reverse, watching for any humans who regain their grip on reality
and might rebel.
No, if you just want biological batteries, cattle would be a much
better choice: they would probably never notice any
inconsistencies in the fake meadows you might create for them,
and, even if they did, they would never plan to overthrow their
masters.
What the AIs of The Matrix plainly needed was not the
energy of human bodies but, rather, the power of human minds
of true consciousness. In some interpretations of quantum
mechanics, it is only the power of observation by qualified
observers that gives shape to reality; without it, nothing but
superimposed possibilities would exist. Just as Admiral Kirk
said of V'Ger, what the matrix needs in order to survive,
in order to hold together, in order to exist is a human
quality: our true consciousness, which, as Penrose observed (and
I use that word advisedly), will never be reproduced in any
machine, no matter how complex, that is based on traditional
digital computing.
As Morpheus says to Neo in The Matrix, take your pick:
the red pill or the blue pill. Certainly, there are two
possibilities for the future of AI. And if Bill Joy is wrong,
and Carnegie Mellon's AI evangelist Hans Moravec is right (if
AI is our destiny, not our downfall), then the idea of
merging the consciousness of humans with the speed, strength, and
immortality of machines does indeed become the next, and final,
step in our evolution.
I did it myself in my 1995 Nebula Award-winning novel
The Terminal Experiment,
in which a scientist uploads three
copies of his consciousness into a computer, and then proceeds to
examine the psychological changes certain alterations make.
In one case, he simulates what it would be like to live forever,
excising all fears of death and feelings that time is running
out. In another, he tries to simulate what his soul (if he
had any such thing) would be like after death, divorced
from his body, by eliminating all references to his physical
form. And the third one is just a control, unmodified, but
even that one is changed by the simple knowledge that it is in
fact a copy of someone else.
Australian Greg Egan is the best SF author currently writing
about AI. Indeed, the joke is that Greg Egan is himself
an AI, because he's almost never been photographed or seen in
public.
I first noted him over twenty years ago, when, in a review for
The Globe and Mail: Canada's National Newspaper, I singled
out his short story "Learning To Be Me" as the best piece
published in the 1990 edition of Gardner Dozois's anthology
The Year's Best Science Fiction. It's a surprisingly
poignant and terrifying story of jewels that replace human brains
so that the owners can live forever. Egan continues to do great
work about AI, but his masterpiece in this area is his 1995 novel
Permutation City.
Greg and I had the same publisher back then, HarperPrism, and one
of the really bright things Harper did besides publishing
me and Greg was hiring Hugo Award-winner Terry Bisson, one
of SF's best short-story writers, to write the back-cover plot
synopses for their books. Since Bisson does it with such great
panache, I'll simply quote what he had to say about the book:
"The good news is that you have just awakened into Eternal Life.
You are going to live forever. Immortality is a reality. A
medical miracle? Not exactly.
"The bad news is that you are a scrap of electronic code. The
world you see around you, the you that is seeing it, has been
digitized, scanned, and downloaded into a virtual reality
program. You are a Copy that knows it is a copy.
"The good news is that there is a way out. By law, every Copy
has the option of terminating itself, and waking up to normal
flesh-and-blood life again. The bail-out is on the utilities
menu. You pull it down ...
"The bad news is that it doesn't work. Someone has blocked the
bail-out option. And you know who did it. You did. The other
you. The real you. The one that wants to keep you here
forever.
Well, how cool is that! Read Greg Egan, and see for yourself.
Of course, in Egan, as in much SF, technology often creates more
problems than it solves. Indeed, I fondly remember Michael
Crichton's 1973 robots-go-berserk film Westworld, in which
the slogan was "Nothing can possibly go wrong&nbps;... go wrong ... go
But there are benign views of the future of AI in SF. One
of my own stories is a piece called
"Where The Heart Is," about
an astronaut who returns to Earth after a relativistic space
mission, only to find that every human being has uploaded
themselves into what amounts to the World Wide Web in his
absence, and a robot has been waiting for him to return to help
him upload, too, so he can join the party. I wrote this story in
1982, and even came close to getting the name for the web right:
I called it "The TerraComp Web." Ah, well: close only counts in
But uploaded consciousness may be only the beginning. Physicist
Frank Tipler, in his whacko 1994 nonfiction book The Physics
of Immortality, does have a couple of intriguing points:
ultimately, it will be possible to simulate with computers not
just one human consciousness, but every human
consciousness that could theoretically exist. In other
words, he says, if you have enough computing power which
he calculates as a memory capacity of 10-to-the-10th-to-the-123rd
bits you and everyone else could be essentially recreated
inside a computer long after you've died.
A lot of SF writers have had fun with that idea, but none so
inventively as Robert Charles Wilson in his 1999 Hugo
Award-nominated Darwinia, which tells the story of what
happens when a computer virus gets loose in the system simulating
this reality: the one that you and I think we're living
in right now.
But of course, the future of computing is in the kind of machines
being created right here, at the Institute for Quantum Computing.
Digital computers are so last millennium; quantum
computers are where it's at. I wrote at length about such
machines in my 1998 novel Factoring Humanity and my 2002
novel Hominids, and I suspect that we'll see a lot more
fiction about quantum computing as time goes by. And let's hope
that in those explorations, we find many more positive visions of
the relationship between humanity and machines than we've seen to
date. After all, as we all know, as long as SF writers
continue to write about computers, nothing can possibly
go wrong ... go wrong ... go wrong ...
Robert J. Sawyer, called "just about the best science fiction
writer out there" by The Denver Rocky Mountain News and "the leader
of SF's next-generation pack" by Barnes and Noble, frequently writes
science fiction about artificial intelligence, most notably in his Aurora
Award-winning novel Golden Fleece
(named the best SF novel
of the year by critic Orson Scott Card, writing in The Magazine of
Fantasy & Science Fiction);
The Terminal Experiment (winner of
the Science Fiction and Fantasy Writers of America's Nebula Award for Best
Novel of the Year); the Hugo Award-nominated
Factoring Humanity; the Hugo Award-nominated
Calculating God
(which hit #1 on the best-sellers list published by Locus, the
trade journal of the SF field); the Hugo Award-winning
Hominids, which deals with
the quantum-mechanical origin of
consciousness; the John W. Campbell Memorial Award-winning
Mindscan; and his
WWW trilogy of Wake, Watch, and Wonder,
about the World Wide Web gaining consciousness.
According to Reuters, he was the first SF author to have a
website; for more information on Rob and his work, visit that extensive
site at: www.sfwriter.com.