The Illusions of Meaning in Divination and AI[1]
Although divination does not work, for many centuries humans believed there was meaning in the messages generated by divination techniques. And while we no longer believe in divination per se, similar illusions of meaning operate in our reactions to modern generative machines. One of the main reasons we turn to AI is predictive modeling of climate, economics, or security. Divination was used for the same reasons. (Perhaps, following William Gibson, we could call predicting the future by means of AI neuromancy.) Despite being entirely unsuited to the task (no better than chance when done fairly, and no better than human cleverness when the system was rigged), divination was widely used for millennia. That fact invites alternative explanations for its purpose.
Passing Responsibility
One possibility is that those who use divination aren’t searching for accuracy but for absolution: for someone else to take over decisions that are too psychologically difficult to make oneself. Attributing the decision to the fates could serve as a way to deflect criticism from others in the society. There is a strange paradox in making choices: the more evenly weighted two options are, the more difficult it is to choose between them, but the less difference the choice makes (the same balancing of pros and cons that makes the choice difficult also balances out the outcomes). In such cases, flipping a coin is a good way to break the stalemate and take some action. Children’s games like “eeny meeny miny moe” or rock-paper-scissors bear similarities to divination techniques such as drawing lots, and are used primarily to make a disinterested decision. Divination could have served a similar purpose.
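The mechanism is simple enough to write down. A minimal sketch (the function name is my own) of chance as a disinterested tie-breaker:

```python
import random

def draw_lots(options):
    """Break a stalemate between evenly weighted options.

    Like drawing lots or flipping a coin, this hands the decision
    to chance: no option is favored, and no chooser can be blamed.
    """
    return random.choice(options)

# Two choices whose pros and cons balance out exactly:
print(draw_lots(["take the new job", "keep the old one"]))
```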
Entertainment
Divination was also partly entertainment, exciting because it promised mystery and attention. (Magic 8-Balls and Ouija boards, to take modern examples, are sold as children’s entertainment.)
Dispelling Worry
Just talking with someone about our dreams and worries for the future can be therapeutic. Feeling reassured that everything will turn out all right, or feeling prepared for when things inevitably go wrong, is arguably a healthier state than worried indecision, at least for events over which we have no control.
In addition to these reasons, there are some powerful universal
illusions that contribute to our perception of such devices. Illusions come
from the biases built into the brain: when such a bias is applied in a situation where it doesn’t belong, we call the result an illusion. Illusions are very
helpful to scientists studying perception because they give us clues to what
the brain is doing behind the scenes. (Such biases are often exploited by
people who want to sell you something that reason alone wouldn’t convince you
to buy.) Without understanding how these illusions work, it’s impossible to
understand why people respond in the ways they do when they interact with
devices designed to imitate a mind. What ties all these illusions together is
the fact that a large part of our brain is built for understanding and
interacting with other people, and these modules are reused in other
situations.
Illusion of Intentionality
The perception of meaning where
none is present is an extremely persistent illusion. Just as we find
faces in the clouds, we are primed to recognize order so strongly that we
perceive it even when it isn’t there. Optical illusions are caused by the brain applying the specialized modules of the early visual system in situations where they are inappropriate. Divination systems were convincing because they exploited another kind of illusion, produced by the mental machinery for recognizing intention in others.
We know quite a bit about the part of the brain used in
attributing intentionality. In one experiment, people played
rock-paper-scissors against a generator of random throws. Some were told they
were playing against a random machine; others were told there was another
player on the network. Their brain scans were compared, and the only significant difference appeared in an area called the anterior paracingulate cortex. People with damage to this area of the brain have difficulty predicting how others will behave.
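The “opponent” in such an experiment can be trivially simple. A sketch, with invented names, of the kind of random generator both groups actually faced:

```python
import random

THROWS = ("rock", "paper", "scissors")

def random_opponent():
    """A strategyless, memoryless opponent: each throw is an
    independent uniform draw."""
    return random.choice(THROWS)

# Both groups of subjects play against this same generator;
# only the story they are told about it differs.
print([random_opponent() for _ in range(5)])
```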
This appears to be a universal human trait: we project intention
and personality even when there is none present. It’s inherent in how children
interact with their toys, in how many religions have treated the ideas of Fate or
Fortune, in our response to dramatic performance, and in how we interact with
the simple artificial intelligences in video games.
Experiments have been done since the 1940s with simple geometric
figures (circles, squares, triangles) moving in relation to each other.[2]
When the shapes are moved in certain simple ways, adults describe the action as
one shape “helping” or “attacking” another, saying that some shape “wants” to
achieve a particular goal. Infants stare longer at shapes moving in these purposeful ways, indicating that they are already paying more attention to things that seem to be alive.
Illusion of Accuracy
Another illusion affecting our judgment is the tendency to credit accuracy after the fact, known as “confirmation bias.” Those predictions which happen to come true stick out in the memory more than the others, giving an inflated impression of accuracy.
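A toy simulation makes the arithmetic of this illusion plain. In the sketch below the recall rates are invented for illustration; any memory that favors hits over misses will inflate a chance-level oracle’s apparent accuracy:

```python
import random

random.seed(1)
N = 1000

# A chance-level oracle: each prediction is a coin flip,
# so each one is a hit or a miss with equal probability.
hits = [random.random() < 0.5 for _ in range(N)]

# Biased recall: suppose we remember 90% of the hits
# but only 30% of the misses (rates invented for illustration).
remembered = [h for h in hits if random.random() < (0.9 if h else 0.3)]

print(f"actual hit rate:     {sum(hits) / N:.0%}")                      # ~50%
print(f"remembered hit rate: {sum(remembered) / len(remembered):.0%}")  # ~75%
```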
Illusion of Meaning
The illusion of meaning is another link between board games,
divination, and AI. Even in a game determined entirely by chance (Chutes and Ladders, for example, or Candy Land), children interpret the events of the game as a meaningful story, with setbacks and advantages, defeat and victory. The child is pleased at having won such a game, and feels that in some sense the win shows his or her superiority. It is only after repeated exposure that,
with some conscious effort, we are able to overcome this illusion. Many
gamblers never do get past it, and continue to feel that their desires
influence random events.
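The illusion is easy to see in simulation. The sketch below (a simplified stand-in for Chutes and Ladders, with parameters of my own choosing) contains no player decisions at all, yet it reliably produces the “victories” and “defeats” that children narrate:

```python
import random

def race(goal=30, players=2):
    """A race decided entirely by die rolls, as in Chutes and
    Ladders: no player ever makes a choice."""
    position = [0] * players
    while True:
        for p in range(players):
            position[p] += random.randint(1, 6)
            if position[p] >= goal:
                return p   # the "winner" the children will cheer

wins = [0, 0]
for _ in range(100_000):
    wins[race()] += 1
print(wins)   # near-equal, with a small edge for moving first
```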
Another example is professional sports. We identify with one
arbitrarily chosen set of players over another, and take their victories and
defeats as our own. Yet our actions have very little influence on whether the
team will be successful or not.
Illusion of Authorship
Creativity that we think is coming from a machine may actually
be coming from the author of the program. The creative-writing program Racter, for example, got many of its cleverest phrases directly from its programmers. In 1984, William Chamberlain and Thomas Etter published a book written by Racter called The Policeman’s Beard Is Half Constructed, but it was never entirely clear how much of the writing was generated by the program and how much was in the templates themselves. A sample of Racter’s output:
More than iron, more than lead, more than gold I need electricity.
I need it more than I need pork or lettuce or cucumber.
I need it for my dreams.
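Racter’s actual grammar was proprietary and never fully published, but the general technique is easy to sketch. In the toy version below (templates and word lists are my own invention), whatever wit appears in the output was typed in by the author beforehand:

```python
import random

# The authored material: any cleverness here belongs to the
# programmer, not the program.
TEMPLATES = [
    "More than {solid}, more than {metal}, I need {need}.",
    "I need {need} for my {abstraction}.",
    "My {abstraction} are {adjective}, like {solid} dreaming of {metal}.",
]
WORDS = {
    "metal":       ["iron", "lead", "gold"],
    "solid":       ["pork", "lettuce", "cucumber"],
    "need":        ["electricity", "silence", "gravity"],
    "abstraction": ["dreams", "sorrows", "calculations"],
    "adjective":   ["sinister", "tender", "electric"],
}

def utter():
    """Fill a randomly chosen template with randomly chosen words."""
    template = random.choice(TEMPLATES)
    return template.format(**{k: random.choice(v) for k, v in WORDS.items()})

print(utter())
```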
These illusions are necessary for the success of magic
tricks, and for the success of computer programs that are designed to create. It
may seem strange to draw such a close parallel between machines and magic. However,
both words come from the same root word (the proto-Indo-European root *magh-,
meaning “to be able, to have power”) and have a common purpose.[3] They differ only in whether the effect is achieved by means we understand or by means we don’t. What is hidden from us is occult. Aleister Crowley wrote:
Lo! I put forth my Will, and my Pen moveth upon the Paper, by
Cause that my will mysteriously hath Power upon the Muscle of my Arm, and these
do Work at a mechanical Advantage against the Inertia of the Pen … The Problem
of every Act of Magick is then this: to exert a Will sufficiently powerful to
cause the required Effect, through a Menstruum or Medium of Communication. By
the common Understanding of the Word Magick, we however exclude such Media as
are generally known and understood.[4]
With the invention of the computer, we have built the world
that ancient magicians imagined already existed. It is a world formed by
utterances, a textually constructed reality. The world imaged through the
screen of a ray tracer doesn’t resemble our world—it is instead the world that
Plato described, where a single mathematically perfect Ideal sphere without
location in time or space manifests through many visual spheres, which cast
their flat shadows onto the pixels of the screen. The spheres are hollow: computer graphics is a carefully constructed art of illusion, presenting nothing but surfaces.
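The hollowness is visible in the code itself. Below is a sketch of the standard ray-sphere intersection test at the heart of any ray tracer; the sphere exists only as a center and a radius, and only its surface is ever computed:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Where does the ray origin + t*direction meet the sphere
    |x - center| = radius?  Solves the quadratic for t."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                               # the ray misses entirely
    return (-b - math.sqrt(disc)) / (2.0 * a)     # nearest surface point

# A ray fired down the z-axis at a unit sphere 5 units away:
print(hit_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 5.0), 1.0))
# -> 4.0: the surface is computed; the interior never exists.
```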
The Turing Test
Pioneering computer scientist Alan Turing wrote a paper in 1950 exploring whether a machine can be said to think. He proposed that a blind test, in which a human asks questions in an attempt to elicit inhuman responses, would be the best way to answer the question. If a human interrogator couldn’t tell whether she was having a conversation with a machine or another human, the machine would pass the test and be considered to think. It remains a popular goal line that AI researchers hope someday to cross.
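The structure of the test is simple enough to sketch. In the toy harness below (all names and the evasive stand-in witnesses are my own invention), the machine “passes” when, over many trials, the judge can do no better than chance:

```python
import random

def human_witness(question):
    return "I would rather not say."     # stand-in answers

def machine_witness(question):
    return "I would rather not say."     # indistinguishable stand-in

def imitation_game(ask, judge, rounds=5):
    """One trial of the test: question a hidden witness,
    then guess whether it was the human or the machine."""
    label, witness = random.choice([("human", human_witness),
                                    ("machine", machine_witness)])
    transcript = []
    for i in range(rounds):
        question = ask(i)
        transcript.append((question, witness(question)))
    return judge(transcript) == label    # True: the judge guessed right

# With witnesses this evasive, the judge can only guess: ~50% correct,
# which is exactly the condition under which the machine passes.
trials = [imitation_game(lambda i: f"Question {i}?",
                         lambda t: random.choice(["human", "machine"]))
          for _ in range(1000)]
print(sum(trials) / len(trials))
```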
The point here is that the Turing Test requires a program to be deceitful in order to be successful. Even
a genuinely intelligent machine (whatever that might mean) would still need to
deceive its interrogators into believing it was not a machine but a person in order to
pass the test. The trick of getting people to believe is built into our
understanding of what it means for a machine to exhibit intelligence. Turing
argued that when a computer program could consistently fool people into
believing it was an intelligent human, we would know that it actually was
intelligent. I would argue that that threshold was passed long ago, before the
invention of writing, and that we know nothing of the kind. Divination machines
convinced human interrogators that there was a thinking spirit behind them
thousands of years ago.
It may sound as if I am coming down harshly on AI, saying it is
nothing more than a sham, merely unscientific nonsense. My intention is rather
in the opposite direction: to say that meaning in AI systems comes from the
same root as meaning in many of the most important areas of our lives. Like the
rules we agree to when we sit down to play a game, and like language, money,
law or culture, the meaning in artificially created utterances or artwork only
exists to the extent that we as a society agree to behave as if it does. When
we do, it can be just as real to us as those very real institutions. It can
affect the world, for good or for ill, by the meaning we take it to have.
When we speak a language, the sounds we make don’t really
have any significance in themselves. It is only because we all pretend that a
particular series of sounds stands for a particular idea that the system of
language works. If we lost faith in it, the whole system would fall apart, as in the story of the Tower of Babel. It’s a game, and because we all know and
play by the same rules, it’s a fantastically useful one. The monetary system is
the same way. “Let’s pretend,” we say, “that these pieces of paper are worth
something.” And because we all play
along, the difference between playing and reality fades away. But when we lose
faith in the game, when society realizes that other players have been cheating,
the monetary system collapses. Artificial creativity seems much the same. If
our society acts as if the creative productions of a machine have artistic
value, then they will have value. Value is an aspect of the socially
constructed part of our reality.
In the future, more sophisticated AI systems will be better able
to deal with the meaning of words, whether or not this meaning is grounded in
actual conscious perception.[5] For
many human purposes, though, how well an AI works is irrelevant. The way we
relate to a system is largely unchanged by its accuracy or its humanness of
thought. For those who want to design creative machines, this is both a
blessing and a danger. We will need to
think very carefully about how we design and train machines that may, someday,
be better at getting their own way than we are. Norbert Wiener, the founder of
cybernetics, warned about the potential of learning machines that seem able to
grant our every wish:
"The final wish is that this ghost should go away.
In all these stories the point is that the agencies of magic
are literal-minded; and if we ask for a boon from them, we must ask for what we
really want and not for what we think we want. The new and real agencies of the
learning machine are also literal-minded. If we program a machine for winning a
war, we must think well what we mean by winning. A learning machine must be
programmed with experience… If we are to use this experience as a guide for our
procedure in a real emergency, the values of winning which we have employed in
the programming games must be the same values which we hold at heart in the
actual outcome of a war. We can fail in this only at our immediate, utter, and irretrievable
peril. We cannot expect the machine to follow us in those prejudices and
emotional compromises by which we enable ourselves to call destruction by the
name of victory.
If we ask for victory and do not know what we
mean by it, we shall find the ghost knocking at our door."
[1] For a careful examination of many of the cognitive issues surrounding divination, see Anders Lisdorf, The Dissemination of Divination in Roman Republican Times – A Cognitive Approach (PhD dissertation, University of Copenhagen, 2007).
The connection between AI and divination has often been explored in science fiction; The Postman by David Brin, for example, examines how belief shapes AI, divination, and social structures.
[2] Kuhlmeier, Bloom, and Wynn, “Do 5-month-old infants see humans as material objects?” Cognition, vol. 94, no. 1, November 2004, pp. 95-103.
[3] Joshua Madara, Of Magic and Machine, 2008 (web page).
[4] The Crowley quote is also found in Madara’s essay.