Thursday, September 19, 2013

Excerpt 11: Aleatory writing

When we build machines that deal with meaning, it can be hard to unravel what part of the meaning is created by each of the three participants—the machine, the machine’s creator, and the one reading the machine’s output. A device for doing divination is at a disadvantage because it must not only create meaningful utterances, but also choose its words so that they form a valid reply to the question. There is a field of poetry called aleatory (meaning “chance”) writing that uses similar techniques, but without any pretence of predicting the future. The poet Christian Bök writes, “Aleatory writing almost evokes the mystique of an oracular ceremony—but one in which the curious diviner cannot pose any queries.”
Games, aleatory writing, and divination all have in common the creation of meaning. When poetry is composed with the aid of computers or randomizing elements, it raises questions about the nature and origin of meaning. Discussing such random poetry, Bök writes:
"The reader in the future might no longer judge a poem for the stateliness of its expression, but might rather judge the work for the uncanniness of its production. No longer can the reader ask: “How expressive or how persuasive is this composition?’—instead, the reader must ask: “How surprising or how disturbing is this coincidence?’
…When we throw the dice, we throw down a gauntlet in the face of chance, doing so in order to defy the transcendence of any random series, thereby forcing chance itself to choose sides, either pro or con, with respect to our fortune. Does such a challenge occur when a poet decides to write according to an aleatory protocol? Does the poet wager that, despite the improbable odds, a randomly composed poem is nevertheless going to be more expressive and more suggestive than any poem composed by wilful intent? Is meaning the stake wagered in this game? [1]"

What follows are a few examples of machines designed to generate writing or poetry through the years. Without in any way denigrating the cleverness of their creators, none of them is actually very good at writing. Even with the power of modern computers, it is still impossible to generate a paragraph of sensible text on a topic without following a very strict template. (For example, programs that take financial data or sports scores and generate a daily news report.)
Even such simple systems, however, illustrate that the meaning a reader finds in a text can be completely disconnected from the intentions present when the text was written. Remember the story about the million monkeys typing the works of Shakespeare—a random process is perfectly capable of creating anything that can be written, if we’re willing to put in the effort to sort through all the garbage it generates to find the gems. This demonstrates that the key to creativity, the really hard part, is judgment of quality: selectivity. How do we recognize good creative works when, by the definition of creativity, they are something new that we have never seen before?
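As a rough illustration of this generate-and-select idea, here is a minimal sketch (the word list and scoring rule are my own invented stand-ins, not taken from any of the systems discussed here): a random generator strings words together, and a separate judge keeps only the candidates that score well. Whatever apparent quality the output has comes entirely from the selection step.

import random

# A tiny vocabulary for the random "monkey" to type from (purely illustrative).
WORDS = ["the", "moon", "iron", "dreams", "sleeps", "under", "a",
         "silent", "machine", "river", "cold", "electric", "need"]

def random_line(length=6):
    """Generate a line of text by pure chance."""
    return " ".join(random.choice(WORDS) for _ in range(length))

def crude_score(line):
    """A crude stand-in for human judgment: reward a couple of arbitrary features.
    The point is only that selection, not generation, does the creative work."""
    words = line.split()
    score = len(set(words))          # prefer variety over repetition
    if words[0] == "the":            # prefer lines that start like a sentence
        score += 2
    return score

# Generate a large pile of garbage, then keep only the best-scoring "gems".
candidates = [random_line() for _ in range(10000)]
for line in sorted(candidates, key=crude_score, reverse=True)[:5]:
    print(line)

The hard part, of course, is that no simple scoring function captures what we actually mean by quality; the sketch only makes visible where that judgment would have to go.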



[1] Christian Bök, “Random Poetry,” Harriet: The Poetry Foundation Blog, 2008 (web page)

Wednesday, September 11, 2013

Excerpt 10: The Illusions of Meaning in Divination and AI

The Illusions of Meaning in Divination and AI[1]

Although divination does not work, for many centuries humans nonetheless believed there was meaning in the messages generated by divination techniques. While we no longer believe in divination per se, similar illusions of meaning operate in our reactions to modern generative machines. One of the main reasons we turn to AI is predictive modeling of climate, economics, or security. Divination was used for exactly the same reasons. (Perhaps, following William Gibson, we could call predicting the future by means of AI neuromancy.) Despite being entirely unsuited to the task (no better than chance when done fairly, and no better than human cleverness when the system was rigged), divination was widely used for millennia. That fact invites alternative explanations for its purpose.

Passing Responsibility

One possibility is that those who use divination aren’t searching for accuracy but for absolution: for someone else to take over the making of decisions that are too psychologically difficult to make themselves. Attributing the decision to the fates could serve as a way to avoid criticism from others in the society. There is a strange paradox in making choices: the more evenly weighted two choices are, the more difficult it is to choose between them, but the less difference the choice makes (the same balance of pros and cons that makes the choice difficult also balances out the outcomes). In this case, flipping a coin is a good way to break the stalemate and take some action. Children’s games like “eeny meeny miny moe” or “rock-paper-scissors” bear similarities to divination techniques such as drawing lots, and are used primarily to make a disinterested decision. Divination could have served a similar purpose.

Entertainment

Divination was also used partly for entertainment; it was exciting because it promised mystery and attention. (Magic 8-Balls and Ouija boards, to take modern examples, are sold as children’s entertainment.)

Dispelling Worry

Just talking with someone about our dreams and worries for the future can be therapeutic. Feeling reassured that everything will turn out all right, or being prepared for when things inevitably go wrong, is arguably a healthier state to be in than worried indecision, at least for events over which we have no control.
In addition to these reasons, there are some powerful universal illusions that contribute to our perception of such devices. Illusions come from the biases built into the brain. When such biases are applied in an inappropriate situation, we call the result an illusion. Illusions are very helpful to scientists studying perception because they give us clues to what the brain is doing behind the scenes. (Such biases are often exploited by people who want to sell you something that reason alone wouldn’t convince you to buy.) Without understanding how these illusions work, it’s impossible to understand why people respond in the ways they do when they interact with devices designed to imitate a mind. What ties all these illusions together is the fact that a large part of our brain is built for understanding and interacting with other people, and these modules are reused in other situations.

Illusion of Intentionality

The perception of meaning where none is present is an extremely persistent illusion. Just as we find faces in the clouds, we are primed to recognize order so strongly that we perceive it even when it isn’t present. Optical illusions are caused by the brain applying specialized modules of the early visual system in situations where they are inappropriate. Divination systems were convincing because they exploited another kind of mental illusion, produced by the mental machinery for recognizing intention in others.
We know quite a bit about the part of the brain used in attributing intentionality. In one experiment, people played rock-paper-scissors against a generator of random throws. Some were told they were playing against a random machine; others were told there was another player on the network. Their brain scans were compared, and the only significant difference was in an area called the anterior paracingulate cortex. People with damage to this area of the brain are unable to predict how others will behave.
This appears to be a universal human trait: we project intention and personality even when there is none present. It’s inherent in how children interact with their toys, in how many religions dealt with the ideas of Fate or Fortune, in our response to dramatic performance, and in how we interact with the simple artificial intelligences in video games.
Experiments have been done since the 1940s with simple geometric figures (circles, squares, triangles) moving in relation to each other.[2] When the shapes are moved in certain simple ways, adults describe the action as one shape “helping” or “attacking” another, saying that some shape “wants” to achieve a particular goal. Infants will stare longer at shapes moving in these purposeful ways, indicating that they are already paying more attention to things that seem to be alive.

Illusion of Accuracy

Another illusion affecting our judgment is the tendency to overestimate accuracy in hindsight, a form of “confirmation bias.” The predictions that happen to come true stick out in the memory more than the others, giving an inflated impression of accuracy.

Illusion of Meaning

The illusion of meaning is another link between board games, divination, and AI. Even in a game determined entirely by chance (Chutes and Ladders, for example, or Candy Land), children interpret the events of the game as a meaningful story, with setbacks and advantages, defeat and victory. The child is pleased at having won such a game, and feels that in some sense it shows his or her superiority. It is only after repeated exposure, and with some conscious effort, that we are able to overcome this illusion. Many gamblers never do get past it, and continue to feel that their desires influence random events.
Another example is professional sports. We identify with one arbitrarily chosen set of players over another, and take their victories and defeats as our own. Yet our actions have very little influence on whether the team will be successful or not.

Illusion of Authorship

Creativity that we think is coming from a machine may actually be coming from the author of the program. The creative writing program Racter, for example, got many of its most clever phrases directly from its programmers. In 1984, William Chamberlain and Thomas Etter published a book credited to Racter, The Policeman’s Beard Is Half-Constructed, but it was never entirely clear how much of the writing was generated by the program and how much was built into the templates themselves. A sample of Racter’s output:

More than iron, more than lead, more than gold I need electricity.

I need it more than I need pork or lettuce or cucumber.

I need it for my dreams.
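
To see why it is hard to separate the program’s contribution from the programmer’s, here is a minimal sketch of template-based generation in the general spirit of such systems (the template and word lists are my own invented examples, not Racter’s actual grammar): the most striking turns of phrase can live in the template, and the program only fills in the blanks.

import random

# Invented word lists and template, for illustration only;
# Racter's real grammar was far larger and is not reproduced here.
SUBSTANCES = ["iron", "lead", "gold", "pork", "lettuce", "cucumber"]
NEEDS = ["electricity", "silence", "a story", "my dreams"]

TEMPLATE = "More than {a}, more than {b}, more than {c} I need {need}."

def generate_sentence():
    """Fill the fixed template with randomly chosen words."""
    a, b, c = random.sample(SUBSTANCES, 3)   # three distinct substances
    return TEMPLATE.format(a=a, b=b, c=c, need=random.choice(NEEDS))

for _ in range(3):
    print(generate_sentence())

Almost everything a reader might admire in such output, like the incantatory repetition and the odd yearning of “I need,” was written by whoever wrote the template; the program contributes only the choice of which blanks to fill.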


These illusions are necessary for the success of magic tricks, and for the success of computer programs that are designed to create. It may seem strange to draw such a close parallel between machines and magic. However, both words come from the same proto-Indo-European root (*magh-, meaning “to be able, to have power”) and have a common purpose.[3] They only differ in whether the effect is achieved by means we understand, or by means we don’t. What is hidden from us is occult. Aleister Crowley wrote:
Lo! I put forth my Will, and my Pen moveth upon the Paper, by Cause that my will mysteriously hath Power upon the Muscle of my Arm, and these do Work at a mechanical Advantage against the Inertia of the Pen …The Problem of every Act of Magick is then this: to exert a Will sufficiently powerful to cause the required Effect, through a Menstruum or Medium of Communication. By the common Understanding of the Word Magick, we however exclude such Media as are generally known and understood.[4]
With the invention of the computer, we have built the world that ancient magicians imagined already existed. It is a world formed by utterances, a textually constructed reality. The world imaged through the screen of a ray tracer doesn’t resemble our world—it is instead the world that Plato described, where a single mathematically perfect Ideal sphere without location in time or space manifests through many visual spheres, which cast their flat shadows onto the pixels of the screen. The spheres are hollow: computer graphics is a carefully constructed art of illusion, presenting only on the surface.

The Turing Test

Pioneering computer scientist Alan Turing wrote a paper in 1950 exploring whether a machine can be said to think. He proposed that a blind test, in which a human asks questions in an attempt to elicit inhuman responses, would be the best way to answer this question. If a human interrogator couldn’t tell whether she was having a conversation with a machine or with another human, the machine would pass the test and be considered to think. It remains a popular goal line that AI researchers hope someday to cross.
The point here is that the Turing Test requires a program to be deceitful in order to be successful. Even a genuinely intelligent machine (whatever that might mean) would still need to deceive the users into believing it was not a machine but a person in order to pass the test. The trick of getting people to believe is built into our understanding of what it means for a machine to exhibit intelligence. Turing argued that when a computer program could consistently fool people into believing it was an intelligent human, we would know that it actually was intelligent. I would argue that that threshold was passed long ago, before the invention of writing, and that we know nothing of the kind. Divination machines convinced human interrogators that there was a thinking spirit behind them thousands of years ago.
It may sound as if I am coming down harshly on AI, saying it is nothing more than a sham, merely unscientific nonsense. My intention is rather in the opposite direction: to say that meaning in AI systems comes from the same root as meaning in many of the most important areas of our lives. Like the rules we agree to when we sit down to play a game, and like language, money, law or culture, the meaning in artificially created utterances or artwork only exists to the extent that we as a society agree to behave as if it does. When we do, it can be just as real to us as those very real institutions. It can affect the world, for good or for ill, by the meaning we take it to have.
When we speak a language, the sounds we make don’t really have any significance of themselves. It is only because we all pretend that a particular series of sounds stands for a particular idea that the system of language works. If we lost faith in it, the whole system would fall apart like in the story of the tower of Babel. It’s a game, and because we all know and play by the same rules, it’s a fantastically useful one. The monetary system is the same way. “Let’s pretend,” we say, “that these pieces of paper are worth something.” And because we all play along, the difference between playing and reality fades away. But when we lose faith in the game, when society realizes that other players have been cheating, the monetary system collapses. Artificial creativity seems much the same. If our society acts like the creative productions of a machine have artistic value, then they will have value. Value is an aspect of the socially constructed part of our reality.
In the future, more sophisticated AI systems will be better able to deal with the meaning of words, whether or not this meaning is grounded in actual conscious perception.[5] For many human purposes, though, how well an AI works is irrelevant. The way we relate to a system is largely unchanged by its accuracy or by the humanness of its thought. For those who want to design creative machines, this is both a blessing and a danger. We will need to think very carefully about how we design and train machines that may, someday, be better at getting their own way than we are. Norbert Wiener, the founder of cybernetics, warned about the potential of learning machines that seem able to grant our every wish:
"The final wish is that this ghost should go away.
In all these stories the point is that the agencies of magic are literal-minded; and if we ask for a boon from them, we must ask for what we really want and not for what we think we want. The new and real agencies of the learning machine are also literal-minded. If we program a machine for winning a war, we must think well what we mean by winning. A learning machine must be programmed with experience… If we are to use this experience as a guide for our procedure in a real emergency, the values of winning which we have employed in the programming games must be the same values which we hold at heart in the actual outcome of a war. We can fail in this only at our immediate, utter, and irretrievable peril. We cannot expect the machine to follow us in those prejudices and emotional compromises by which we enable ourselves to call destruction by the name of victory.
If we ask for victory and do not know what we mean by it, we shall find the ghost knocking at our door."




[1] For a careful examination of many of the cognitive issues which surround divination, see Anders Lisdorf, The Dissemination of Divination in Roman Republican Times: A Cognitive Approach, 2007 (PhD dissertation, University of Copenhagen).
The connection between AI and divination has been explored often in science fiction literature. The Postman by David Brin, for example, explores how belief shapes AI, divination, and social structures.
[2] Kuhlmeier, Bloom, and Wynn, “Do 5-month-old infants see humans as material objects?”, Cognition, Issue 1, November 2004, pp. 95-103
[3] Joshua Madara, Of Magic and Machine, 2008 (web page) The Crowley quote is also found in this essay.
[4] Binsbergen, ibid.
[5] Perceptual consciousness and the grounding of meaning are discussed in Chapter 5. 

Friday, September 6, 2013

Excerpt 9: Divination, Mathematics and Ontology

Divination and Mathematics
These games and divination systems are remarkably old. Consider the die used in most games of chance: the reason it has pips instead of numbers on its faces is that the die settled into its present form before the invention of Arabic numerals.
Divination drove the development of mathematics: much of Mayan, Egyptian, and Babylonian mathematics was used for astrological purposes. For example, our measurements of time and angles come from Babylonian astrologers’ division of the heavens in their base-60 system. The most advanced mechanical computers from Greek and Arab inventors in the ancient world were complex representations of the heavens, used for navigation and astrology. The Antikythera mechanism (often called the first mechanical computer) is the best known of these, as few others have been preserved. Found in a shipwreck and dating from around 200 BC, it showed the positions of all the known planets, the sun, and the moon, requiring over 30 gears to do so. Modern scientists, who find such a device fascinating for the level of mechanical sophistication it displays, seem reluctant to admit that the only practical use such a device could have had was casting horoscopes and determining auspicious days. Watching how the planets move back and forth around the wheel of the zodiac on a recreation of this device, it is not hard to see how such an irregular motion would give the impression of an intelligent and willful plan being acted out. Early attempts by archaeologists to understand the device focused on the words inscribed on it, and were unsuccessful. It was only when an attempt was made to understand the gearing system that the meaning of the device was recovered.
Later, it was the analysis of games of chance that led to the development of probability theory and statistics, which are key components of most modern AI systems, since absolute reasoning is often too brittle to deal with real-world situations.
The combinatoric principles of the I Ching and of geomantic divination (introduced at the beginning of this chapter) inspired the 17th-century philosopher Gottfried Leibniz to develop binary notation. Similar binary codes are found in other divination systems around the world, such as the African Ifa and Sikidy systems. In recent years, the fields of “ethno-mathematics” and “ethno-computation” have begun studying these cultural artifacts to explore the mathematical ideas of non-Western cultures.
Elements of recursion play a large role in these games and divination systems, where the state resulting from a series of actions becomes the starting point for the same series of actions, performed again and again. In Mancala, for example, a move is made by choosing one pit, scooping up all the seeds from that pit, and planting one in each of the following pits. The object of the game is to be the first to get all of one’s seeds into the final pit. One strategy is to find a pattern that persists over time, so that the seeds in multiple pits move together in a train. These patterns were discovered and used by experienced players across Africa. In the field of cellular automata, such a pattern is known as a “glider”: a form that maintains itself as it moves through a space divided into discrete cells. Gliders are an important component in the study of these computational systems, a study which only began in the 1940s as computers were invented.
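Here is a minimal sketch of the sowing step just described, as a toy model rather than any particular African variant (the board size, starting layout, and move-selection rule are my own simplifying assumptions): each move scoops the seeds from one pit and drops one into each pit that follows, so the board state produced by one move becomes the input to the next, which is the recursive structure noted above.

def sow(board, pit):
    """Scoop all seeds from `pit` and drop one into each following pit, wrapping around."""
    board = list(board)                  # work on a copy of the board state
    seeds, board[pit] = board[pit], 0
    pos = pit
    while seeds > 0:
        pos = (pos + 1) % len(board)     # advance to the next pit, wrapping around
        board[pos] += 1
        seeds -= 1
    return board

# A small 6-pit board; repeatedly sowing from the first non-empty pit
# shows how the result of one move becomes the starting point of the next.
state = [0, 3, 0, 0, 0, 0]
for _ in range(4):
    print(state)
    state = sow(state, next(i for i, s in enumerate(state) if s > 0))
print(state)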
These connections to mathematics are a natural extension of the representational nature of the tokens and spaces used in board games and divination. As systems simpler than the real world, they provided fertile ground for the development of mathematical ideas.

Divination and Ontology
The systems of classical elements (Earth, Air, Fire, and Water in Western cultures or Wood, Fire, Earth, Metal, and Water in China) used in divination rituals were attempts to find symmetries and order underlying reality, to find general systematic laws that applied to all aspects of nature and human life. These systems had appealing symmetries and provided a theoretical framework in which physics, anatomy, psychology, or any of dozens of other sciences could be understood. Most of the connections made were illusory, forced by overzealous application of symmetry, but the overall attempt to find such connections and symmetries is similar to much of modern science.
In his study of African divination methods, Wim van Binsbergen identified three features of geomantic divination:

  • a physical apparatus serving as a random generator
  • a set of rules which allow for the translation or coding of the numerical outcome of the random generator in terms of culturally agreed specific values with a divinatory meaning
  • an interpretative catalogue listing such divinatory meanings and accessing them through the assigned codes

Using an assortment of pre-created elements, rules to combine them, and a randomizer, these divination systems pioneered a way of getting seemingly original creations from a machine. Are machines necessarily limited to this kind of recombination of pre-created ideas, or is it possible for them to create new works of art, new ideas which we would judge as creative if they came from a person? This is a question we will return to periodically throughout the book, as other inventors and artists used these same methods to try to build creative machines.
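As a concrete illustration of those three components, here is a minimal sketch of such a pipeline (the four-bit coding and the catalogue entries are a tiny invented example, not van Binsbergen’s actual material): a random generator produces a raw outcome, a coding rule translates it into one of a fixed set of figures, and an interpretative catalogue maps each figure to its divinatory meaning.

import random

# 1. A physical apparatus serving as a random generator:
#    here, four random bits, like four casts of marked sticks or seeds.
def cast():
    return [random.randint(0, 1) for _ in range(4)]

# 2. Rules for coding the numerical outcome as a culturally agreed value:
#    read the four bits as a binary number naming one of 16 possible figures.
def encode(bits):
    return sum(bit << i for i, bit in enumerate(bits))

# 3. An interpretative catalogue of divinatory meanings, accessed by the code.
#    (Only a few invented entries are shown; a real catalogue covers every figure.)
CATALOGUE = {
    0: "loss and delay",
    5: "a journey begun in haste",
    10: "good fortune in small matters",
    15: "great gain",
}

bits = cast()
code = encode(bits)
print(bits, "->", code, "->", CATALOGUE.get(code, "consult the diviner"))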

Cicero on Divination
The Roman scholar and philosopher Cicero examined divination critically in 45 BC in the book On Divination. It’s hard to say exactly what Cicero believed about divination because he was careful to examine all the different possibilities in his work. One of the ideas he explored, however, was that divination might be accurate, even if it isn’t guided by the gods:
"For the presages which we deduce from an examination of a victim's entrails, from thunder and lightning, from prodigies, and from the stars, are founded on the accurate observation of many centuries. Now it is certain, that a long course of careful observation, thus carefully conducted for a series of ages, usually brings with it an incredible accuracy of knowledge; and this can exist even without the inspiration of the Gods, when it has been once ascertained by constant observation what follows after each omen, and what is indicated by each prodigy."
This is remarkably similar to how digital neural networks (a form of machine learning meant to imitate the structure of the brain) are trained. At first, the correlation between input and output is completely random, but as observations are made, the associations are strengthened or weakened until the system comes to reflect reality accurately in some way. Cicero imagined a simple process that would lead a system of divination to evolve over time into something that could give intelligent and predictive answers without any hidden agent providing those answers. The serious question here (one Cicero himself raises) is whether the observations were accurate enough, the correlations strong enough, and the period of adjustment long enough that the system had developed to a point where it could be useful.
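To make the analogy concrete, here is a minimal sketch of learning by accumulated observation, in the spirit of Cicero’s constant observation of “what follows after each omen” (the omens, outcomes, and simple counting scheme are my own illustrative assumptions, not a real training procedure): every observed pairing strengthens an association, and a prediction simply follows the strongest association recorded so far.

from collections import defaultdict

# Association strengths between omens and outcomes, all starting at zero.
associations = defaultdict(lambda: defaultdict(int))

def observe(omen, outcome):
    """Strengthen the association between an omen and what followed it."""
    associations[omen][outcome] += 1

def predict(omen):
    """Predict the outcome most strongly associated with the omen so far."""
    outcomes = associations[omen]
    if not outcomes:
        return "no observations yet"
    return max(outcomes, key=outcomes.get)

# Invented observations, purely for illustration.
observe("lightning in the east", "storm")
observe("lightning in the east", "storm")
observe("lightning in the east", "clear skies")
observe("birds flying south", "early winter")

print(predict("lightning in the east"))   # storm
print(predict("birds flying south"))      # early winter

Whether such a system ever becomes useful depends, just as Cicero suggests, on whether the observed correlations are real and whether there have been enough observations to find them.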