Journal

Algorithmic Theater -- By and For the Non-programmer
Feb 18, 2013

by Heidi

 

I’m most excited about A Piece of Work because, as soon as I started thinking about and researching this performance, and especially after listening to Andrew Russell interview Annie Dorsen, my safe little view of the world and how I fit into it began unraveling. The more I try to wrap my head around the programming aspect of Annie’s upcoming piece, the more it dawns on me that computer programming is a language I don’t understand, and that while I can navigate our increasingly digitized world, I cannot create any of it. I have a new awareness of how algorithms have become infused in day-to-day life (here’s a TED talk about algorithms shaping things): from micro-trading on Wall Street to the movies Netflix recommends to the frequently poor decision making of Pandora, algorithms are infiltrating everyday life as silent guides and invisible participants.

Annie has become quite aware of all of this as she works on her concept of an algorithmic theater. During an interview with members of Nature Theater of Oklahoma (the On the Boards alumni who interviewed Annie for an online project of theirs called OK Radio), Annie mentioned she was interested in “the dramaturgy of an algorithm.” I was intrigued by this, and it made me want very badly, at least in a conceptual/soft/metaphor-laden way, to understand how the programming of Annie’s piece functions. Therefore, this essay is a brief history of language modeling algorithms and my understanding of the Markov models that Annie is going to use in A Piece of Work: by the layman, for the layman.

For Annie’s purposes, programming has become a medium of expression, and it seems that dissecting the functioning, implications, and history of the type of programming Annie is using will make her piece more interesting. In my attempts to understand how algorithms could be read dramaturgically, the first things I asked were: how is programming being used in Annie’s pieces? What meaning is it generating? What material and history does this type of programming address?

Annie’s algorithmic performances seem concerned with the nature of human intelligence and language, which she then recycles through the types of processing computers are currently able to perform. The programs she builds to run her shows all deal somehow with language processing, while the subjects of her pieces deal with human nature. In Hello, Hi There (2010), she wanted to make a performance piece using a famous debate held in 1971 between Noam Chomsky and Michel Foucault that addressed whether there is such a thing as innate human nature or whether we are shaped by experiences and the power of the cultural and social institutions around us. She decided to have two chatbots be the performers. These performers, embodied in two computers, sat onstage and held a conversation about the debate (using language processing algorithms) as the debate played on an old television. In A Piece of Work, Annie is creating a programmed version of Hamlet because, as Annie explains, “it is in a sense the ultimate text for theatre, and the most celebrated disquisition on a certain kind of humanist discourse, in which the pure consciousness of man wrestles with the inevitability of death.” Annie is taking famous debates and canonical pieces of art dealing with human intelligence/nature, and coding them into a type of computational intelligence.

Where did this human intellect vs. machine intelligence dichotomy begin? Many attribute the “beginning” to Alan Turing and his seminal 1950 paper, Computing Machinery and Intelligence, in which he proposed what became known as the Turing test. In his test there are three parties: a computer, a person with a computer, and a human judge with a computer, all isolated from one another. The computer and the person both message the judge, and the judge’s job is to try to distinguish which is the computer and which is the human based on their conversation. If they are indistinguishable, the computer passes the test and is considered an intelligent system. What the Turing test really demonstrates is not that machines are intelligent, but that in order for a system to pass as intelligent, it only needs to fool a human.

Moving forward from Turing, the quest for machine intelligence has taken researchers through many algorithmic models of natural language patterning. They started with simple chains of commands, or decision trees, where if a program received an input containing the word “mother,” for example, it would perhaps respond with “tell me more about your family.” This was a fairly inflexible system where responses generally made sense but were fairly canned. In the 1970s, programmers started writing “conceptual ontologies,” in which they would code the relationships between words. An ontology, in information science, is a “shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations” (Wikipedia). In other words, programmers realized that to build more flexible models, they needed to generate large webs of ordered information to pull responses from.
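To make the “canned response” idea concrete, here is a tiny sketch of the sort of keyword rule an early chatbot might use. It’s my own toy illustration in Python, not code from any actual chatbot or from Annie’s piece:

# A toy rule-based chatbot: a fixed table of keyword -> canned response
# pairs, with a fallback when nothing matches. Purely illustrative.

RULES = {
    "mother": "Tell me more about your family.",
    "dream": "What do you think that dream means?",
}

def respond(user_input):
    lowered = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:      # the first matching keyword wins
            return reply
    return "Please, go on."         # canned fallback

print(respond("I argued with my mother"))    # -> Tell me more about your family.
print(respond("The weather is nice today"))  # -> Please, go on.

However inflexible, a handful of rules like these is enough to keep a surface-level conversation going, which is exactly the quality the Turing test rewards.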

The most modern approaches to natural language processing are models based on probability, called Markov models or hidden Markov models. These are a probabilistic way of guessing what word or phrase might come next, based on training a program on a dataset. Training is just what it sounds like: for example, one such model many of us have trained is in our cell phones. You train your phone to suggest certain words as you type with T9, based on how frequently you use them. It’s a system of guessing based on probability that gradually gains intelligence over time through repetition.
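Here is that kind of frequency-based “training” boiled down to a few lines of Python (a hypothetical toy, nowhere near what a phone actually does): count how often each word has been typed before, then suggest the most frequent word that matches what you’ve started typing.

from collections import Counter

# Toy "training" data: words the user has typed in the past.
history = ["tonight", "tomorrow", "tonight", "theater", "tonight"]

def suggest(prefix, past_words=history):
    # Count only the past words that start with what is being typed now.
    candidates = Counter(w for w in past_words if w.startswith(prefix))
    # Suggest the most frequently used match, if there is one.
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("to"))  # -> 'tonight', because it has been typed most often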

Markov models are based on Markov chains, a mathematical principle that is fairly straightforward. I will spout the definition of a Markov chain, even though reading it might feel kind of like being washed over by a wave: a Markov chain is a sequence of stochastic events (based on probabilities instead of certainties) in which the probability of the next state depends only on the current (present) state, not on the sequence of states that came before it. (Wikipedia)

Here’s my watery understanding of Markov chains: we start at S1, stage one, or how things are right now. From S1, using a matrix of probabilities, one can calculate how much will change and how much will stay the same in a given time frame. Once the changes take place, we have reached S2, stage two. Then the same matrix of probabilities is applied to the second stage to produce S3 . . . and onwards until stage N. The easiest example of a Markov chain I’ve ever heard is that it’s like taking a “random walk.” On this walk, at every intersection you flip a coin to decide whether you will turn left or right (there is a 50% chance you will turn left and a 50% chance you will turn right). And this is how you would walk. It’s without destination, and therefore inefficient if you have a specific place to go . . . but you could explore an area using this method, and every path outward from S1 would be unique.
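The coin-flip walk is simple enough to write down in a few lines. Here is a small Python sketch of it (my own illustration, not part of the piece): at every step, the only thing that matters is where we are right now.

import random

def random_walk(steps, start=0):
    position = start
    path = [position]
    for _ in range(steps):
        # Flip a coin: 50% chance of stepping left, 50% chance of stepping right.
        # The choice depends only on the present state, never on the past.
        position += random.choice([-1, 1])
        path.append(position)
    return path

print(random_walk(10))  # a different path every run, e.g. [0, 1, 0, -1, -2, ...]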

If you think about this very basic model in relation to A Piece of Work, you can see that every night will be a different show. There will always be a logic for what is happening, but it will be a logic driven by probability, meaning the piece will be a two-dimensional, memoryless, non-narrative piece; still, it will explore the area of the data field, which is the text of Hamlet. (There is a hand-drawn diagram below of how a Markov model might be applied to Hamlet; it includes decisions like lights and sound as well.) We might be returned to questions posed by the Turing test: how we listen, what we will believe as an audience, and what kind of intelligence we will attribute to the system. It also seems to ask: what does it mean to choose such a canonical text, one that saturates our culture, and how will the algorithmic reading add to, take from, or expose our relationship to the text?
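To give a flavor of how a random walk might wander through a text, here is one last little Python sketch of a word-level Markov chain, trained on a single famous line (again, my own toy illustration, not the actual program behind A Piece of Work):

import random
from collections import defaultdict

text = ("to be or not to be that is the question "
        "whether tis nobler in the mind to suffer")

# "Training": record, for each word, every word that follows it in the text.
transitions = defaultdict(list)
words = text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    # Walk the chain: each next word is chosen only from the followers of
    # the current word, with probabilities matching their observed frequency.
    word = start
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: nothing ever followed this word
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate("to"))  # e.g. "to be that is the question whether tis nobler"

Scaled up from one line to the whole play, plus decisions about lights and sound, this is roughly the territory the diagram below maps out.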

I will leave you with a little excerpt about Hamlet from The Haunted Stage: The Theatre as Memory Machine that I think is great food for thought. To me it seems a random walk through Hamlet might exorcise its ephemeral, textual, cultural hauntings:

"As both Bert States and Herbert Blau have noted, Hamlet is not only the central dramatic piece in Western cultural consciousness, but it is a play that is particularly concerned with ghosts and with haunting. In addition to the profound ways in which these two major theorists have demonstrated how the image of haunting appears within this complex and provocative drama, however, Hamlet is involved with haunting in quite another dimension: the temporal movement of the work and its accompanying theory and performance through history. Our language is haunted by Shakespeare in general and Hamlet in particular, so much so that anyone reading the play for the first time is invariably struck by how many of the play’s lines are already known to her. Even more experienced readers (or viewers) can hardly escape the impression that the play is really a tissue of quotations. Our iconic memories are haunted by Hamlet."

