On a great big lake

Today I arrived in Chicago for the first time in many years. I am here for a professional meeting, with an extremely compressed schedule, so I suspect that I will not have time to see much of the city this time around.

Yet from the moment I got off the Blue Line and started to walk through the streets, crossing the Chicago River on one of those beautiful old footbridges the color of a Richard Serra sculpture, I could feel, all around me, the unique atmosphere of this place. The high, thrusting buildings of brick, steel and glass are invigorating, assertive, with a kind of brisk take-no-prisoners feeling that makes me understand why Carl Sandburg fell in love with this place.

Yet there was something more, something just outside of the reach of my conscious mind. After walking a few blocks, I suddenly realized what it was: From the moment I had gotten out of the subway, I’d been nursing an earworm, way in the back of my mind.

And in one moment of revelation I realized what it was: the lyrics to a song from Pal Joey, a 1940 Rodgers and Hart stage musical that I had performed in as a teenager and had not thought of in many years. It turns out that I had remembered all the words.

Here they are, for your enjoyment and cultural edification:

There’s a great big town
On a great big lake
Called Chicago

When the sun goes down
It is wide awake
Take your ma and your pa
Go to Chicago

Boston is England
N’Orleans is France
New York is anyone’s
For ten cents a dance

But this great big town
On a great big lake
Is America’s first
And Americans make

Deliberate mismatch

When we go to the movies, we know immediately when we are seeing a romantic comedy, or a western, or a farce, or a horror film, or a police procedural, or a political drama. There is a certain texture to every genre, and that texture appears pretty much on the very first frame.

I am wondering what would be the effect of deliberately subverting such expectations. Suppose we were to apply the texture of one genre to the substance of another. Say we were to release a horror film with the shot-by-shot ambience of a RomCom, or vice versa.

Would we end up creating a work with validity? Could it provide new insights, or perhaps be just crazy enough to be entertaining in some new and exciting way?

Or would we just end up confusing the audience?

Here’s looking at you

In the movies, a character on screen is either looking at nobody in the audience or else is looking at everybody in the audience. So when somebody breaks the fourth wall, it has a huge impact.

As it happens, nobody in film history ever truly mastered the art of breaking the fourth wall other than Groucho Marx. Although Woody Allen had his moments.

In the theater, it is literally impossible for an actor to look at everyone in the audience simultaneously. Real-life sight lines just don’t work that way.

We can all be acutely aware that an actor is staring intently at someone in the third row, but that is not at all the same as being stared at ourselves. And maybe that is good. Unlike cinema, theater allows every audience member to remain unique.

But what about live performances in virtual reality? If we are all wearing VR headsets and watching a performance together, then we will be able to combine the advantages of both film and live theater: We can all be standing around seeing an actor play Hamlet, and each of us can see that actor from our own unique perspective. In that sense, shared VR is much like theater.

But when the actor looks up after gazing at the skull of poor Yorick, he will then be able to look every one of us square in the face. In that moment, we will all simultaneously feel that chill down our spines, that feeling of being singled out and recognized.

This is an effect that can be achieved neither in cinema nor theater. It will be something new in the world, part of an emerging visual vocabulary that has never before existed.
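For the technically curious, the trick above can be sketched in a few lines. The idea is that each viewer’s headset renders its own copy of the shared scene, so at the moment of direct address, every client can aim the actor’s gaze at its own camera. Everything here, the names and the coordinates, is hypothetical:

```python
import math

def look_at_direction(actor_pos, viewer_pos):
    """Unit vector from the actor's head toward one viewer's camera."""
    dx, dy, dz = (viewer_pos[i] - actor_pos[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / length, dy / length, dz / length)

# Each client renders the same shared performance, but when the
# script calls for breaking the fourth wall, every client points
# the actor's gaze at its *own* camera. Every viewer is looked at.
actor_head = (0.0, 1.7, 0.0)
viewers = {
    "alice": (2.0, 1.6, 3.0),
    "bob": (-1.5, 1.6, 2.0),
}
per_viewer_gaze = {
    name: look_at_direction(actor_head, cam)
    for name, cam in viewers.items()
}
```

In film, that gaze direction is baked into the one recorded image; in a shared virtual space, it can be computed per viewer, which is exactly why everyone gets the chill at once.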

Bodies and pianos

Anybody can move their body around and create a sort of dance. And anybody can sit down at a piano keyboard and start banging out a crude sort of music.

But there are also people we recognize as expert dancers, and people we recognize as expert pianists. I am intrigued by the parallels.

Is it possible that the path from “look, I am moving my body around” to serious dance has formal parallels to the path from unschooled noodling on a keyboard to concert level musicianship? Despite the fact that these two media are vastly different, is it possible that their respective learning curves possess a similar structure?

What are the intermediate steps along the way from naive performance to superb mastery? Do all students travel a similar path? Is there always some recognizable halfway point along any such journey?

By comparing very different performance media, and seeing how people progress from beginner to expert in each one, we may gain insights into the process of learning itself — insights that may generalize to future forms of expression yet to be discovered.

I now pronounce you…

Today I showed Princess Bruschetta to a number of colleagues at NYU. And a surprisingly fierce debate flared up over just how to pronounce her name.

There are people (mainly Americans) who say “brushetta”, and others (mainly Europeans) who say “brusketta”. Of course if you are speaking Italian, it is definitely the latter. But in what circumstance is the former also valid?

I think I can come up with at least one such circumstance: Princess Bruschetta is, if nothing else, an arriviste. She fancies herself sophisticated in the grand European manner, yet that air of sophistication is all a pose, a construct, a singular creation of her own fevered imagination.

She would never say “brusketta”, because such cultural precision would imply a familiarity with original sources that goes against the very essence of her being. In the final analysis, she is most definitely a “brushetta” kind of gal.

After all, as a delirious marriage of sublime self-possession and pure delusion, Princess Bruschetta must hold to a standard all her own.

What’s cooking?

I don’t usually cook. Instead I program.

That might not make much sense to you, but to me it makes perfect sense. The sort of experimentation, iteration, trying different things out, the energy it takes to learn how to cook a good meal, I generally put into creating software.

But recently I’ve become a bit obsessed with perfecting a particular recipe. I’ve been trying variants on it, spending time in my kitchen changing proportions and cooking times, adding and taking away ingredients, varying the order of things.

I recognize this as the same process I use for developing software. Some of that process consists of building tools, support code if you will, and some of it consists essentially of creating a space of parameters, and then tuning those parameters until they are just right.
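If I were to write that process down, it might look something like this: define a space of parameters, then search it for the combination that tastes best. The parameters and the scoring function below are, of course, entirely made up:

```python
import itertools

# A toy "recipe" parameter space, explored the same way one might
# sweep parameters in a graphics program until the image looks right.
param_space = {
    "salt_tsp": [0.5, 1.0, 1.5],
    "bake_minutes": [25, 30, 35],
    "oven_f": [350, 375],
}

def taste_score(salt_tsp, bake_minutes, oven_f):
    # Stand-in for the real test, which is tasting the result.
    # Here we just penalize distance from an arbitrary "ideal".
    return (-abs(salt_tsp - 1.0)
            - abs(bake_minutes - 30) / 10
            - abs(oven_f - 375) / 50)

names = list(param_space)
best = max(
    itertools.product(*param_space.values()),
    key=lambda combo: taste_score(**dict(zip(names, combo))),
)
print(dict(zip(names, best)))
```

The difference, as noted below, is in the cost of each evaluation: a renderer will happily score thousands of combinations, while my stomach imposes a much stricter budget.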

Of course there is an essential difference in the nature of the code / test iteration cycle. When I am working on a computer graphics project, I can conduct dozens of experiments in an hour. Cooking doesn’t quite work that way, because it involves a different set of senses.

After all, my eyes can take in a vast number of different images in the course of a day. But during that same day, my stomach will only let me eat so many meals.

Alas, there is no Moore’s Law for food. Unlike computer graphics, cooking is hardware limited.

Princess Bruschetta

Today I decided to create a dancing character. She will be performed by a live actor using performance capture technology, and the audience will witness her performance in immersive virtual reality, as the interstitial act of a VR theatrical revue.

Once I got the basic idea of the character, her personality became clear, and therefore her appearance: She knows she is beautiful, a graceful swan among ordinary mortals. She may be vain, but she is proud to share her art with the world, a vision of form and movement.

Nobody knows her real name, for she has long gone by a stage name of her own choosing. She is unsure of its meaning, but she loves its intriguingly European sound: She is the Princess Bruschetta.

Personal principles

I was in a meeting recently at which we were deciding which projects to fund. Each of the proposed projects, according to the strict definition of the call for proposals, was worthwhile.

One of those projects offended my ethical principles. I couldn’t in good conscience vote for it, so I didn’t. And so another project, which I did not find to be ethically objectionable, was funded instead.

But here’s the thing: I didn’t tell the other people in the room why I wasn’t voting for that project. I was certainly under no obligation to do so. Still, I could have.

But then they would have had the opportunity to object, to say they didn’t share my principles, and on that basis to come to the defense of the project in question. And so by telling them more than I needed to, I might have helped fund a project that I found ethically objectionable.

I realize that everyone has their own personal principles. I may never agree with yours, and you may never agree with mine. We are all different.

So in that moment I was faced with my own ethical crisis: Should I attempt, within a few minutes, to influence a group of people to agree with my view of what is ethical, or should I instead assert my ethics directly on the world itself?

I chose the latter course, and I still don’t know whether it was the right decision. But on balance, I am glad about the outcome.


I am heartsick at the horrific murders in Brussels. It is hard to put together a coherent set of thoughts in the face of so much cruelty and contempt for the sanctity of human life.

I hope that the United States will have the sense this coming November, in the face of such terrible monstrosity, to elect a sane, level headed and competent grown-up as our next President.

Momentary Utopias

I received a phone call today from a colleague who is exploring the relationship between new technologies and ideas of Utopia. It was a wide-ranging and fun conversation.

The conversation had been prompted by my colleague’s interest in that immediate rush people felt when they tried out our Holojam system, and realized that they were able to enter a virtual world where they could draw in the air together. She said that this might be a feeling of encountering a kind of Utopia.

At some point I told her my view (which I mentioned in a blog post some years back) that you can’t live in the future for more than five minutes. In other words, we experience a feeling of awe and excitement when something is new, but that feeling goes away once we become used to the new way of things.

For example, we don’t stare in astonishment and wonder when the ceiling light goes on after we flip a light switch, even though the underlying technology of modern electrical power distribution is, in fact, pretty amazing. We don’t even stare in wonder when somebody stands on a street corner in NYC holding a conversation with a friend in California, even though mobile phone technology is even more amazing.

In short, my view is that you cannot actually live in techno-Utopia — you can only feel it during brief moments of technological transition. Utopia can never be a place you are living, but only a doorway you are walking through.