Suppose we apply the structure of composition → rehearsal → performance to computer programming? This has been a topic of conversation recently among Murphy Stein, Vi Hart and me.
Currently, computer programming is generally thought of as a strictly compositional activity. A programmer typically creates a program in relative isolation, over a period of time, iterating until the program’s behavior is ready to be experienced by others. The programming languages and environments that have evolved to support this activity strongly privilege this way of working.
But what if the semantic constructs of programming — variable assignment, iteration, conditional execution, defining procedures, building hierarchies of objects — were part of some sort of live performance for an audience?
Suppose we were to design a form of programming from the ground up, specifically to be a performative medium? It wouldn’t replace programming as composition, but rather would complement it. Much of the program would still be pre-written, as a compositional stage. In such a paradigm, one would also expect a rehearsal stage.
This splitting of computer programming into two such different modalities is not as odd as it might sound. After all, this is what happens with plays and music. While a playwright shares the medium of words — the semantic level — with the actor, their modes of expression are wildly different. The playwright expresses through typed or written words on paper, whereas the actor uses voice, facial expression and body movement. What they share is the underlying meaning.
Similarly, a musical composer writes down notes on a staff, whereas a musician plays a physical instrument. What they have in common is, again, the underlying meaning.
A kind of performative programming is already done in the avant-garde music community. Performers manipulate programs written in Max/MSP (or its open-source counterpart PD) to create live variations in the procedure controlling an algorithmically generated or modified composition.
Personally, I’ve never found the experience of attending such concerts to be entirely satisfying, since Max/MSP was never written for this purpose. Max/MSP programming on stage usually remains relatively opaque to the audience, even if audience members can see the computer screen.
It would be fun to design a programming language and environment specifically to be a medium for performance in front of an audience — with the understanding that much prior composition has already been done to scaffold what the audience is seeing.
A programming modality that privileges performance would be useful not just for music, but also for dance or theatre. One could imagine an evolving performance acted out by robot actors or dancers. As the performance progresses, the human performer modifies the procedural behavior of the actors. The audience experience consists not only of observing the actions of the robot actors or dancers, but also of understanding the rules those “performers” are following.
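To make that last idea a little more concrete, here is a minimal sketch in Python of the kind of thing such a performance might involve: a "robot actor" repeatedly follows a small behavior rule, and the human performer swaps in a different rule partway through. Every name here (Actor, sway, spiral_out, perform) is hypothetical, invented purely for illustration; a language designed for performance would presumably also make the rule, and the act of changing it, legible to the audience.

```python
# Hypothetical sketch: a robot actor follows a behavior rule each "beat",
# and the performer can replace that rule while the piece is running.

import time


class Actor:
    """A stand-in for a robot dancer: it just tracks a position on a line."""

    def __init__(self, name, position=0):
        self.name = name
        self.position = position

    def move(self, delta):
        self.position += delta
        print(f"{self.name} moves to {self.position}")


def sway(actor, beat):
    """Pre-composed rule: drift back and forth with the beat."""
    actor.move(1 if beat % 2 == 0 else -1)


def spiral_out(actor, beat):
    """A rule the performer might introduce live: move ever farther out."""
    actor.move(beat)


def perform(actor, rule, beats=4, tempo=0.25):
    """Run `rule` once per beat; the rule itself is the performed 'score'."""
    for beat in range(beats):
        rule(actor, beat)
        time.sleep(tempo)


if __name__ == "__main__":
    dancer = Actor("robot-1")

    # Composed section: the rehearsed rule runs as written.
    perform(dancer, sway)

    # Performed section: the performer swaps in a new rule mid-show,
    # and the audience sees both the motion and the rule that drives it.
    perform(dancer, spiral_out)
```

In this toy version the "swap" is just calling perform again with a different function; in a genuinely performative medium, composing, rehearsing and replacing such rules on stage would be the expressive act itself.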