
A Piece of... Cake? Feb 22, 2013

by Gabriel


In 1955, Noam Chomsky proposed a sentence that, despite having a perfectly grammatical syntactic structure, was without meaning and therefore had likely never occurred before. See what you can make of it: "Colorless green ideas sleep furiously." He used this sentence (in contrast with another sentence deemed equally meaningless, but also ungrammatical) to make the point that humans do not use simple statistical methods to process language, but instead rely on some deeper, inherent syntactic structure to determine valid sentences in a language. And yet, the simple statistical models that Chomsky loves to hate have become omnipresent in Natural Language Processing, the subfield of Computer Science that busies itself with making computers better at "understanding" human languages. These models are now known as N-gram Language Models (LMs) and represent the likelihood that a word will occur given the prior N-1 words. For example, given the sequence "New York", the probability of the next word being "City" is much higher than the probability of the next word being "jungle" (about 73,000x higher, in 2000, according to the Google N-gram viewer). While the models may or may not reflect how humans process language, they are important and effective tools for many of the language technologies that are commonplace today, including web search, speech recognition, and machine translation. Annie Dorsen's latest play makes use of this and other types of statistical language modeling to present an algorithmic mash-up of Hamlet, and, in doing so, gives us a lens through which to examine computational "intelligence."
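
To make the idea concrete, here is a minimal sketch of such a model in Python: a bigram model (N=2) built from raw counts over a tiny invented corpus. The corpus and the probabilities it yields are purely illustrative; real systems, like the one behind the Google N-gram viewer, are trained on billions of words and use smoothing to handle word pairs they have never seen.

```python
# A minimal sketch of an N-gram language model (here a bigram model, N=2).
# The tiny corpus below is invented for illustration; real models are trained
# on enormous corpora and smoothed so unseen pairs don't get probability zero.
from collections import defaultdict, Counter

corpus = "new york city is a concrete jungle but new york city never sleeps".split()

# Count how often each word follows each one-word history.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev, word):
    """Estimate P(word | prev) from raw counts."""
    total = sum(follows[prev].values())
    return follows[prev][word] / total if total else 0.0

print(next_word_probability("york", "city"))    # 1.0 -- "city" always follows "york" here
print(next_word_probability("york", "jungle"))  # 0.0 -- never observed in this toy corpus
```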

The piece opens with a single actor (Scott Shepherd--the only human on stage during the performance) and the most iconic words in theater, "To be, or not to be." Then the set of algorithms that determines the script for the rest of the play takes over, and the speech quickly becomes difficult to follow. Incomplete sentences, unparseable phrases, and series of conjunctions longer than your average sentence fill the rest of the monologue. I wondered whether perhaps this Hamlet had descended into true madness, far beyond the mere existential struggle of the original. And yet, in the collage of meaningful partial phrases, as well as the actor's voice and expressions, I found myself following along. While the exact meaning of the words exiting Shepherd's mouth eluded me, the emotion of the speech held my attention. As the play continued with only the voices of text-to-speech systems to represent the actors, even the human speech characteristics by which I had found a way to understand the monologue were taken away.

Still, the voice of the algorithm, now heard without the human filter, successfully imparted some of the "meaning" of each successive scene (which is not to say the scenes were in the original order...). Or did it? Perhaps I only imbued the monologue with my own meaning, as we humans, nature's ultimate pattern-finding machines, tend to do. I would love to hear the impressions of someone who had never heard of Hamlet; I highly doubt they would have been able to follow the play at all.

Herein lies a central question that the play raises: to what extent can a stochastic process truly produce something meaningful, even given that it is derived from Hamlet (which at the very least contains hundreds, if not thousands, of Ph.D. theses' worth of meaning)?

It seems that this is a similar question to the one Chomsky has grappled with since the 50's. If you take a closer look at Chomsky's nonsensical sentence, you may notice that each adjacent pair of words contains an internal contradiction, making it nearly the height of nonsense (excluding, of course, the other Washington). However, if you look up that sentence on Wikipedia, you'll find that people have devised contexts that make it almost profoundly meaningful. Here's the example from the Wikipedia article:

It can only be the thought of verdure to come, which prompts us in the autumn to buy these dormant white lumps of vegetable matter covered by a brown papery skin, and lovingly to plant them and care for them. It is a marvel to me that under this cover they are labouring unseen at such a rate within to give us the sudden awesome beauty of spring flowering bulbs. While winter reigns the earth reposes but these colourless green ideas sleep furiously.
 - C.M. Street

The difference is only in the context, and "A Piece of Work" plays with the context in which the chewed-up lines from Hamlet are given in a multitude of ways. The most obvious, and perhaps most important, context is the viewer themselves. When one aspect of the communication of the play is stochastic, more of the understanding is biased by the viewer's own knowledge and perception. Blah, blah, all art is in the eye of the beholder, yes, yes, Duchamp gave us that (or did he?)1. However, in "A Piece of Work," the viewer is not left alone with an unending sequence of stochastically chosen words. The context given to the audience is strategically varied throughout the performance, in a way that exposes both the pragmatics of staging a performance and the features of language, beyond the specific words, that contribute to how one understands a communication.

For instance, in one scene, the lighting of the set changes between shades of red and blue in conjunction with the affect of the words (the emotion they convey) being output by a text-to-speech voice2. In another, the amount of context used by the stochastic algorithm that chose the lines was deliberately varied over the course of the scene, with each iteration of the same chunk of the scene using less context (in the sense of N-gram model context, which determines how many prior words affect the next word predicted; more context produces a more familiar, more grammatical result). As each iteration becomes less and less understandable, there are still words recognizable from the previous iteration, but now occurring without the surrounding sentence that made them intelligible before.
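
As a rough illustration of that second effect, the sketch below builds N-gram models from a snippet of the "To be, or not to be" speech and generates from them with progressively less context. The text, the N values, and the generation scheme are stand-ins of my own; they are not the production's actual algorithm or data.

```python
# Sketch: generating from N-gram models with shrinking context. As N drops,
# each choice depends on fewer prior words and the output grows less grammatical.
import random
from collections import defaultdict

def build_model(words, n):
    """Map each (n-1)-word history to the words observed to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - n + 1):
        history, nxt = tuple(words[i:i + n - 1]), words[i + n - 1]
        model[history].append(nxt)
    return model

def generate(words, n, length=12, seed=0):
    random.seed(seed)
    model = build_model(words, n)
    out = list(random.choice(list(model.keys())))  # random starting history
    while len(out) < length:
        history = tuple(out[-(n - 1):])
        choices = model.get(history)
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

text = ("to be or not to be that is the question whether tis nobler "
        "in the mind to suffer the slings and arrows of outrageous fortune").split()

for n in (4, 3, 2):  # each pass uses less context than the one before
    print(f"N={n}: {generate(text, n)}")
```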

All of the decisions made in the production that are required for the performance to be fully realized can be viewed as a bias placed on the decision-making capabilities of the algorithm. However, the often random-seeming behavior exposed something of how I think many people feel when interacting with statistical software. If you've used any speech recognition tools (like Apple's Siri, or Google's voice search), I'm sure you've experienced some amount of confusion and frustration at how the computer sometimes just can't understand you.3 It's become a common experience. An oft-cited example is the high-frequency trading algorithms used in the financial industry, which are notoriously successful and yet notoriously unstable. "A Piece of Work" exposes not only the seemingly random behavior that these types of algorithms can have, but also the biases and human influence inherent in building any such program. In doing so, it leads one to consider the trade-off in control inherent in using statistical algorithms. In some sense, one gives up control to gain computational power, on the assumption that the combination of good data and a good model can accurately reveal patterns in the world that we could not find ourselves. The models used in the financial industry are a great deal more complex than an N-gram language model, and incorporate much more contextual information, but they represent the same transfer of decision-making. I don't think that this piece necessarily has anything to say about the value of this transfer, mostly because its statistical models are used in a fundamentally different way4 than most of the statistical applications that affect us. I do hope, however, that it gets people thinking about the kind of processing that goes on underneath these applications.

I think there's much more to explore in the space of algorithmic theater. Think of an algorithm that trains itself on audience feedback to the performance (perhaps a type of reinforcement learning5), or a performance where the computer interacts directly with the actors (perhaps using technology like that which a group of computer scientists at USC has developed6).
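
As a purely hypothetical sketch of that first idea (nothing here comes from the production; the variant names, the epsilon value, and the "applause" reward signal are all invented), an epsilon-greedy bandit could pick among scene variants and gradually favor whichever draws the strongest audience response:

```python
# Hypothetical sketch: an epsilon-greedy bandit that learns which scene variant
# draws the strongest audience response. All names and values are invented.
import random

class SceneBandit:
    def __init__(self, variants, epsilon=0.1):
        self.variants = variants
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.values = {v: 0.0 for v in variants}  # running mean reward per variant

    def choose(self):
        # Mostly exploit the best-known variant, occasionally explore another.
        if random.random() < self.epsilon:
            return random.choice(self.variants)
        return max(self.variants, key=lambda v: self.values[v])

    def update(self, variant, reward):
        # Incremental update of the mean reward with the observed feedback.
        self.counts[variant] += 1
        self.values[variant] += (reward - self.values[variant]) / self.counts[variant]

# Example: the reward might be measured applause volume after a performance.
bandit = SceneBandit(["variant_a", "variant_b", "variant_c"])
chosen = bandit.choose()
bandit.update(chosen, reward=0.7)  # invented feedback value
```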

Finally, at the end of the play, there is only context: music, stage directions displayed as text and read aloud by a voice, and the prior scenes that have set up the finale. All of the actual content of the play has been removed. And yet, through the combination of these features, plus my own experience and knowledge of the play, I felt the same sadness and weight that the original delivers. I suppose it was probably in my head the entire time.

Footnotes

1. The idea that semantic interpretation involves the audience dates back to the 1860's, when C.S. Peirce, a logician, proposed that the structure of communication included not only a sign and a signified object, but also an interpretant. The implication is that understanding, and therefore intelligence, is not only a function of what is being said, but of how it is being interpreted. In this case, where there is an intentional lack of intent in the structure of the language being communicated to the audience, the signifiers to which we usually attend are muddled. The interpretation, then, the emotion that watching the play raises, is more strongly dependent on the interpreter's own context, and on the clues that are dropped through aspects of the presentation outside of the text.
2. Likely determined using a system similar to this: EmoLib
3. Note: most speech recognition systems use the same types of models that the play does.
4. Generative vs. discriminative models

5. Reinforcement Learning
6. Gunslinger
