5. Well Bless My Cyborg Soul
“But
soon,” he cried with sad and solemn enthusiasm, “I shall die, and
what I now feel be no longer felt. Soon these burning miseries will
be extinct. I shall ascend my funeral pile triumphantly, and exult in
the agony of the torturing flames. The light of that conflagration
will fade away; my ashes will be swept into the sea by the winds. My
spirit will sleep in peace; or if it thinks, it will not surely think
thus. Farewell.”
The speaker of the
foregoing quotation makes assumptions that will be familiar to the
majority of human beings: that he is alive now and will soon die;
that dying will be unpleasant; and that some part of his being will,
or at least might, survive his passing. Or in short, he entertains
the possibility that he has a soul. Also worth noting is that he is
presently suffering. That is, he experiences, and knows himself to
experience, pain. This also is a common human phenomenon, as it is
largely through reflecting on our own suffering and its causes that
we come to understand both ourselves and our world, and the value of
a compassionate relationship between the two.
The speaker, however, is
not human: these are the final words of the Monster in the 1818
edition of Mary Shelley's novel Frankenstein. The character,
then, is a product not of “nature” as commonly understood, but of
technology: as a hybrid of biology and technology, he is a cyborg
according to any current definition of the term. And as I've
mentioned in an earlier posting on this blog (entry # 2), he also
meets Locke's definition of personhood, namely the possession of
reason, self-awareness, and memory. The question of souls, though,
may appear to be a different matter. In my teaching on cyborgs, I've
come across some concern on the subject of the soul. Do made beings
have souls? Can made beings have souls? Does this question
conceivably draw a line between the purely human, if such a being
exists, and the bio-tech hybrid? At some point, given a progressive
hybridization, might a human be understood to have lost her or his
soul? I've long suspected, and in the most recent run of the Human
Nature and Technology course at STU had it more or less confirmed,
that a major source of discomfort in our society with the image of
the cyborg, and the prospect of increasing biotechnological
hybridization, centres precisely on the question of souls. And so far
in my own reading in the field of cyborg studies, I've seen little
explicit mention of the subject.
I suppose the first thing
to do is to lay out what I mean by “soul,” as the term is almost
as slippery as “nature.” While it might be tempting to work with
a specifically Christian conception of the term, I am going to avoid
this temptation in favour of a more general understanding. After all,
nothing in Shelley's novel identifies her own sense of the
term with any specific religious doctrine. The Monster, in his final
words, seems to appeal to a basic dualism that pervades most cultural
traditions, East and West. That is, he at least entertains the
possibility that some essential element of our identities is not
bound to any physical substance, but rather is metaphysical in
nature. It is this basic metaphysical claim that I wish to address.
And to be clear, I will not be making a case in this entry for or
against the existence of the soul. Rather, I will be making a simple
appeal for intellectual consistency.
So ...
Let's begin with the fact
that there is no empirical evidence for the existence of the soul. If
there were, arguments for the soul would not be metaphysical: they
would be physical. So any claim for a supernatural dimension to our being must be
based on something other than evidence—whatever that might be. The sheer ubiquity of the belief in some part of us that is not bound to our
physical form, moreover, is no argument for the correctness of the
position. To accept such an argument is to fall victim to an informal
logical fallacy known as argumentum ad populum, or in more colloquial terms, the bandwagon fallacy.
It is also, as I think anyone reading this blog will recognize upon
self-reflection, extremely difficult to imagine a future from which
all facets of ourselves are absent. Knowing our bodies to have
relatively short expiry dates, it is therefore normal to project a
non-physical type of consciousness onto the world both present and
future—and in many cases, particularly in the East, the past as well.
And while some particularly brilliant thinkers, such as René
Descartes, have argued for the existence of the soul through intensive and systematic skepticism of all outward or physical things, set against the apparent existence and unity of the thinking self,
such arguments fall apart when confronted with the findings of modern
cognitive science, principally that the apparent unity of the mind
is, itself, the illusory product of several discrete systems working
in the brain. Descartes cannot be faulted for his conclusions here:
he simply did not have access to our data, or to the technology that
made it possible.
But what does this have
to do with Frankenstein's Monster? The Monster is, after all, a made
thing: a product of human hands. Here is where the intellectual
consistency part of this entry comes in. The Monster's point of view
on this matter is essentially our own, with the single exception that
he has no illusions of a metaphysical origin: having read the words
of his creator pertaining to his creation, he knows himself to be the
product of physical mechanisms, much as evolutionary science
subsequent to the novel's publication has shown us to be as well. And
yet he imagines, and allows for, the possibility of the metaphysical.
There is no evidence for his supposition—and he does at least have
the intellectual integrity not to assert it as a fact.
So given the lack of
evidence for a soul, what is the difference between the Monster and
us? Quite frankly, there is none. The Monster has as much claim to a
soul as does any human being on the planet. This is not to say that
souls do not exist; it is to say that the intellectually honest
reader of this novel is faced with a choice between two positions: to
deny the Monster a soul is to deny a soul to any human being, while
to claim a soul for oneself is to allow the Monster the same.
The next and most obvious
question is, “So what?” And I think this is one of the most
important questions that the novel raises, because of course the
question extends not just to the Monster but to any potential
intelligent machine as well. That is, when we finally build a machine
that has intelligence comparable to our own, we will have no more or
no less reason to assume ensoulment with our creation than we do with
ourselves. To assert that another intelligent entity does not have a
soul while claiming a soul for oneself is to assume for oneself a
special or privileged position in the absence of any supporting
evidence. This is an essentially racist position—a position that
James Hughes in his 2004 book Citizen Cyborg labels “human
racism”—and as such is no less morally culpable than racism
directed at any other being that qualifies for personhood. Any
intelligent machine that we make will be of a kind with us simply by
virtue of its intelligence, and thus in community with us. Any a
priori assumptions about the presence of a soul in us and its
absence in the thinking and self-aware product of our science will
certainly lead to a mythically based presumption of superiority—and
our own history tells us, over and over again, where that road leads.
Further, should such a creation ultimately become more intelligent
than us, this discriminatory position would be roughly analogous to a
Homo habilis asserting its superiority over Homo sapiens by
virtue of some particular characteristic that it had but that we do
not have, without taking into account the many gains that have been
made over the last two million years of evolution. That is, to create
a definition of “soul” that can only be met by Homo sapiens
would be to proceed ontologically. Such an argument would have about
as much to do with the world outside the arguer's head as does any
other ontological argument, for instance those of Anselm or
Descartes.
So ultimately, what am I
getting at in this rambling little entry? I suppose it is this: given
the equivalence of any argument either for the soul or against the
soul where a multiplicity of intelligent beings, born or made, is
concerned, the question is utterly irrelevant. If I have a soul, then
so do you, and that's just peachy. And if you don't, then I don't
either, and that's fine, too. In any case, the question can have no
bearing on how we treat or respect each other. Or as Confucius put it
in the sixth century B.C., “Do not do to others what you would not
have done to yourself.” As our machines progress more and more
rapidly toward intelligence, this moral admonition becomes
increasingly relevant to beings other than humans. There is likely to
come a point, quite possibly during the current century, when to hit
or not to hit the “off” switch becomes one of our most pressing
ethical questions.
Intriguing post. I'm not going to try to refute anything said here (I don't think I really could) but would just like to comment on a couple points.
Firstly, I think it's important to note, as you do, that there should be no 'scientific evidence' of the soul. The mechanisms of science are explicitly concerned with what the soul is not--a physical, testable, *thing*. As such, it seems to be outside the realms of science. And not to jump on a bandwagon, but it may be said that I believe I have a soul because I can feel it (however un-academic that may be). And I think it is precisely this feature that can distinguish us from progressive machines.
You repeat the idea of machines progressing toward intelligence--and there are likely machines now that have more intelligence than many humans--but that does not necessarily make them more human-like. The Monster was made by a physical process, and the same could be said of humans, being made by the physical process of evolution. But unlike intelligent machines, humans have the capacity to feel deep emotional thrusts of love, grief, despair, ecstasy, and so on. This does not solely rely on intelligence, but on something else. If I recall correctly, you are not the biggest Fukuyama fan, but that little x-factor I find hard to shake. And sometimes I think it is encouraging that we have no better name than that, no knowledge that can identify it and rip it apart. The x-factor is beyond us, beyond intelligence, but yet somehow still resides in the crux of our being. So when you say, "when we finally build a machine that has intelligence comparable to our own, we will have no more or no less reason to assume ensoulment with our creation than we do with ourselves," I think there may still need to be something beyond just intelligence that makes them comparable to ourselves. And to submit that such an entity will inevitably be made I think is an optimistic, if not somewhat conceited, remark of science.
The Monster has the emotional quality, but I don't think any uber-intelligent machine will be made in the likeness of the Monster/Humans, unless this ungraspable, unknowable X can be dominated. Sure, one could say that emotions can be generated by drugs or hormones, but, and maybe I am just being stubborn, I think that fabrication is still lacking, still insufficient, when compared to the primal movings of what could be called the soul. Maybe science will prove me wrong, but I find it hard to imagine an entity will be created by humans that has, regardless of intelligence, equal emotional depth and relations with other humans to qualify as ethically the same.
There is lots left to be said. For instance, if we evolve from animals, do those animals also have souls, and what is the ethical situation there? The difference with machines is that we are their creators (in a way different than parents and children), and consequently the ethical situation has a new twist. I'm starting to lose my mind here so I'll say goodbye and hope this makes some sense. Thanks for the thoughtful start of the day.
So I just finished my comment on this, and I opened another tab to get the link for the Heavy Rain short film and I proceeded to close the tab. So here I am, rewriting. :)
I agree with many of the points you've put forth here, especially the idea that human-created entities could feasibly have a soul. An example of this is Kara from the short film I posted on the Technology Forum this past semester. (vimeo.com/38303600) I also agree that it is ignorant to assume that only humans can have souls. Though I do agree with you here, I have a bit of a sticking point. The difficulty I am having involves the relationship between intelligence and a soul. I suppose it's difficult to define what exactly a soul is, and because of this it is also difficult to understand what it is that gives rise to a soul. For all I know, the soul really could be merely a product of intelligence, though I believe it to be more than that. I have not read Fukuyama and I know nothing of the 'x factor', but I agree with Ross' point that there is more to a soul than intelligence. I think that the soul could be a byproduct of a specific set of circumstances or factors: once these conditions are present, the soul arises. Though I say this, don't expect me to know what these conditions are. :P Great post.
Sam