Bodies and pianos

Anybody can move their body around and create a sort of dance. And anybody can sit down at a piano keyboard and start banging out a crude sort of music.

But there are also people we recognize as expert dancers, and people we recognize as expert pianists. I am intrigued by the parallels.

Is it possible that the path from “look, I am moving my body around” to serious dance has formal parallels to the path from unschooled noodling on a keyboard to concert-level musicianship? Although these two media are vastly different, is it possible that their respective learning curves possess a similar structure?

What are the intermediate steps along the way from naive performance to superb mastery? Do all students travel a similar path? Is there always some recognizable halfway point along any such journey?

By comparing very different performance media, and seeing how people progress from beginner to expert in each one, we may gain insights into the process of learning itself — insights that may generalize to future forms of expression yet to be discovered.

I now pronounce you…

Today I showed Princess Bruschetta to a number of colleagues at NYU. And a surprisingly fierce debate flared up over just how to pronounce her name.

There are people (mainly Americans) who say “brushetta”, and others (mainly Europeans) who say “brusketta”. Of course if you are speaking Italian, it is definitely the latter. But in what circumstance is the former also valid?

I think I can come up with at least one such circumstance: Princess Bruschetta is, if nothing else, an arriviste. She fancies herself sophisticated in the grand European manner, yet that air of sophistication is all a pose, a construct, a singular creation of her own fevered imagination.

She would never say “brusketta”, because such cultural precision would imply a familiarity with original sources that goes against the very essence of her being. In the final analysis, she is most definitely a “brushetta” kind of gal.

After all, as a delirious marriage of sublime self-possession and pure delusion, Princess Bruschetta must hold to a standard all her own.

What’s cooking?

I don’t usually cook. Instead I program.

That might not make much sense to you, but to me it makes perfect sense. The experimentation, the iteration, the trying out of different things, the energy it takes to learn how to cook a good meal: all of that I generally put into creating software.

But recently I’ve become a bit obsessed with perfecting a particular recipe. I’ve been trying variants on it, spending time in my kitchen changing proportions and cooking times, adding and taking away ingredients, varying the order of things.

I recognize this as the same process I use for developing software. Some of that process consists of building tools, support code if you will, and some of it consists essentially of creating a space of parameters, and then tuning those parameters until they are just right.
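In software, that tuning stage can be as simple as a parameter sweep. Here is a minimal sketch; the render and score functions are hypothetical stand-ins for whatever is actually being tuned:

```python
import itertools

# Hypothetical stand-in for the thing being tuned -- in a graphics
# project this might turn a few parameters into a rendered image.
def render(frequency, amplitude, octaves):
    return frequency * amplitude / octaves

# Hypothetical stand-in for judging a result: higher is better.
def score(result):
    return -abs(result - 1.0)

# Sweep the parameter space and keep whatever scores best.
best_score, best_params = float("-inf"), None
for params in itertools.product([1, 2, 4], [0.25, 0.5, 1.0], [1, 2, 3]):
    s = score(render(*params))
    if s > best_score:
        best_score, best_params = s, params

print("best parameters:", best_params)
```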

Of course there is an essential difference in the nature of the code / test iteration cycle. When I am working on a computer graphics project, I can conduct dozens of experiments in an hour. Cooking doesn’t quite work that way, because it involves a different set of senses.

After all, my eyes can take in a vast number of different images in the course of a day. But during that same day, my stomach will only let me eat so many meals.

Alas, there is no Moore’s Law for food. Unlike computer graphics, cooking is hardware limited.

Princess Bruschetta

Today I decided to create a dancing character. She will be performed by a live actor using performance capture technology, and the audience will witness her performance in immersive virtual reality, as the interstitial act of a VR theatrical revue.

Once I got the basic idea of the character, her personality became clear, and therefore her appearance: She knows she is beautiful, a graceful swan among ordinary mortals. She may be vain, but she is proud to share her art with the world, a vision of form and movement.

Nobody knows her real name, for she has long gone by a stage name of her own choosing. She is unsure of its meaning, but she loves its intriguingly European sound: She is the Princess Bruschetta.

Personal principles

I was in a meeting recently at which we were deciding which projects to fund. Each of the proposed projects, according to the strict definition of the call for proposals, was worthwhile.

One of those projects offended my ethical principles. I couldn’t in good conscience vote for it, so I didn’t. And so another project, which I did not find to be ethically objectionable, was funded instead.

But here’s the thing: I didn’t tell the other people in the room why I wasn’t voting for that project. I was certainly under no obligation to do so. Still, I could have.

But then they would have had the opportunity to object, to say they didn’t share my principles, and on that basis to come to the defense of the project in question. And so by telling them more than I needed to, I might have helped to fund a project that I objected to on ethical grounds.

I realize that everyone has their own personal principles. I may never agree with yours, and you may never agree with mine. We are all different.

So in that moment I was faced with my own ethical crisis: Should I attempt, within a few minutes, to influence a group of people to agree with my view of what is ethical, or should I instead assert my ethics directly on the world itself?

I chose the latter course, and I still don’t know whether it was the right decision. But on balance, I am glad about the outcome.

Heartsick

I am heartsick at the horrific murders in Brussels. It is hard to put together a coherent set of thoughts in the face of so much cruelty and contempt for the sanctity of human life.

I hope that the United States will have the sense this coming November, in the face of such terrible monstrosity, to elect a sane, level headed and competent grown-up as our next President.

Momentary Utopias

I received a phone call today from a colleague who is exploring the relationship between new technologies and ideas of Utopia. It was a wide-ranging and fun conversation.

The conversation had been prompted by my colleague’s interest in that immediate rush people felt when they tried out our Holojam system, and realized that they were able to enter a virtual world where they could draw in the air together. She said that this might be a feeling of encountering a kind of Utopia.

At some point I told her my view (which I mentioned in a blog post some years back) that you can’t live in the future for more than five minutes. In other words, we experience a feeling of awe and excitement when something is new, but that feeling goes away once we become used to the new way of things.

For example, we don’t stare in astonishment and wonder when the ceiling light goes on after we flip a light switch, even though the underlying technology of modern electrical power distribution is, in fact, pretty amazing. We don’t even stare in wonder when somebody stands on a street corner in NYC holding a conversation with a friend in California, even though mobile phone technology is even more amazing.

In short, my view is that you cannot actually live in techno-Utopia — you can only feel it during brief moments of technological transition. Utopia can never be a place you are living, but only a doorway you are walking through.

Some things never change

I am watching Halt and Catch Fire on Netflix, a series about computer entrepreneurs in the 1980s. I very much appreciate the fact that the heroes are mostly computer programmers or hardware hackers.

The technology is all absolutely spot-on. Every detail, no matter how arcane or nerdy, is completely correct and chronologically accurate. Clearly somebody on the writing or advisory team was actually there.

But what really intrigues me is that feeling of heady possibility, of creating an astonishing future that you know is just around the corner. It’s exactly what being in computer graphics felt like to me when I was just starting out.

And it’s exactly what it feels like now.

General knowledge

I participated yesterday in a workshop filled with extremely smart and exceptional people. In general the entire experience was wonderful and inspiring, and I learned a lot from everyone. But there was one odd moment.

One of the talks, you see, involved a bit of back and forth. From time to time the speaker would show something on the screen and solicit a response from the room. At one point he showed an image of the Mona Lisa sporting a mustache. Next to this he showed a photograph of a man’s face. He then asked “Who is the man in the photograph?”

I shouted out the obvious answer, expecting that a chorus of us would join in: “Marcel Duchamp!” Yet in that entire room, only one other person spoke up. I realized then that nobody else knew about Duchamp’s iconic work L.H.O.O.Q. Either that, or they had suddenly all become strangely shy.

I’m certainly no art historian, and my knowledge of 20th Century art has huge gaps. But it seems to me that some things, like iconic works by pioneering artists, should be part of the general knowledge base of our populace. Yet clearly they are not, which tells me that something is screwy with the way education works in this country.

OK, maybe this isn’t the most important problem with our education system. After all, our high schools also manage to carefully avoid teaching mathematics, or even letting kids know how amazingly creative and fun math is. Instead they mostly teach a sequence of rote exercises and formulae that they mislabel as “mathematics”. Believe it or not, in most parts of this country you can get all the way through high school without ever learning the beauty of Euclid’s proof that there are infinitely many prime numbers.
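For anyone who never got to see it, the entire proof fits in a few lines. Here is one standard way to sketch it:

```latex
% A sketch of Euclid's proof that there are infinitely many primes.
Suppose $p_1, p_2, \ldots, p_n$ were a complete list of the primes,
and let $N = p_1 p_2 \cdots p_n + 1$. Dividing $N$ by any $p_i$
leaves remainder $1$, so no $p_i$ divides $N$. But every integer
greater than $1$ has at least one prime factor, so $N$ must have a
prime factor that is not on the list. No finite list of primes can
be complete, and therefore there are infinitely many primes.
```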

So maybe in a way our education system is indeed teaching absurdism to our children. Except instead of painting a silly mustache on a Leonardo da Vinci painting, they are painting a silly mustache on rational thought itself. I wonder if many kids get the joke.

Silly Putty and a knife

I saw a wonderful talk today about machine learning. Most of the time when people talk about machine learning they deal in abstractions. They write down some math, they wave their hands, they mutter vaguely about neural networks, and in general they say things that are completely mysterious to most of the populace.

But the talk today, by Chris Olah, was anything but mysterious. He pretty much laid it out for us, in terms that anybody could understand.

Essentially, machine learning algorithms are like Silly Putty. They take the space of all of the variables that go into whatever an algorithm is trying to recognize, and they stretch and distort that space in all sorts of interesting ways.

After all that distortion, whatever it is the algorithm is supposed to recognize ends up on one side of some plane (strictly speaking a hyperplane, since the space has many dimensions), and everything else ends up on the other side. For example, if the machine learning algorithm is trying to recognize pictures with dogs in them, then after all the Silly Putty distortion, all the pictures containing dogs will end up on one side of the plane, and all of the pictures without dogs will end up on the other side.

Then it’s just a matter of using a mathematical knife to cut along that plane. On one side will be all the dog pictures, on the other side will be the non-dog pictures.
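Here is a minimal sketch of that picture in code, using scikit-learn as a convenient stand-in (an assumption on my part; any small neural network library would do). The hidden layers play the role of the Silly Putty, and the final linear layer is the knife:

```python
# Two concentric rings of points: no straight line can separate
# them in the original two-dimensional space.
from sklearn.datasets import make_circles
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# The hidden layers stretch and fold the space (the Silly Putty);
# the single output unit then makes one flat linear cut (the knife).
net = MLPClassifier(hidden_layer_sizes=(16, 16), activation="tanh",
                    max_iter=5000, random_state=0)
net.fit(X, y)

print("training accuracy:", net.score(X, y))
```

In the warped version of the space the two rings pull apart, and the single flat cut, which was impossible in the original space, suddenly becomes easy.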

And that, my friends, in a nutshell, is what machine learning is all about. I had no idea, until today, that Silly Putty could be so useful.