The danger

A number of years back I was giving a demo of a technology we had come up with in our lab — a new and improved form of autostereoscopic display. That’s a kind of display that lets you see in 3D stereo without the glasses.

Unlike previous ways to do this, our display let you be any distance away from the screen, and showed things with very high quality, without the visual artifacts that usually accompany autostereo displays. We were very proud of it.

I was just at the point in the demo where I was explaining how a sufficiently advanced autostereoscopic display might obviate the need to travel to conferences. “Just think,” I said, “people won’t need to deal with the bother and exhaustion of getting on airplanes and traveling long distances, just to have a high-quality face-to-face interaction.”

But as it turned out, I was wrong. And I only know this for the following reason.

One of the people in the room was Ben Shneiderman, a pioneer in the field of human/computer interfaces. When I got to this point in the demo, he spoke up.

“Ken, people don’t get on those airplanes and travel thousands of miles to conferences just so they can have a face-to-face conversation.”

“Then why,” I asked, “do they do it?”

“They do it,” he explained, putting his hand on my shoulder, “because of the danger that they might touch each other.”

Anthills in the sun

Most people reading this are, by comparison with the typical human of any earlier era in history, cyborgs. We have vast and constantly updating information literally at our fingertips. We initiate casual face-to-face chats, at a moment’s notice, across vast distances. We collectively create complex webs of social networks and tribal allegiances, all supported by immense engines of computation and connectivity.

Yet we remain, at our core, human. It is true that to be an individual in a human society is a shifting target, buffeted by ever evolving technological capability. Yet this has always been so, and in some essential way our humanness remains unaffected. We love, we laugh, we share the day with those we care about, we build our little anthills in the sun.

Which is why I suspect, as we creep ever nearer to a seamless merging of the physical and the virtual, that nothing essential will change. Some day soon the very ground under our feet will seem to be one with future abstractions floating in the air, abstractions that will become part of our shared language.

We will continue to upgrade, incorporating cyborg affordances unimaginable to previous generations. Yet we will still understand, at a deep level, what our species has always understood: That such changes are all just part of being human.

The best special effects

Today’s post is more or less the opposite of the previous two.

This evening I had the privilege of hearing Bonnie MacBird read an excerpt from her new Sherlock Holmes pastiche Art in the Blood. It was a simple drawing room scene, in which our famous detective and his loyal sidekick Dr. Watson meet with a prospective client, a beautiful and highly intelligent French chanteuse.

Yet out of this apparently simple introductory conversation there rapidly developed a breathtaking war of wits between Holmes and the lady in question. The precision of verbal strike and counter-strike between the two quick-witted opponents was a wonder to behold, a thing of pure beauty.

I remember thinking, listening to this intricate and brilliant dance of words and ideas, that it would be difficult for any mere movie adaptation to do justice to such a moment.

And then I remembered something my mother once told me, while reminiscing about her childhood. “The best special effects I ever saw,” she said, “were on the radio.”

First person perspectives

We are about to enter the era of consumer Virtual Reality games. The VIVE, the Oculus and the Morpheus should all be hitting the market sometime within the next half year.

This will create an opportunity for a kind of game experience that is merely interesting when you are looking at a screen, but could be far more compelling when you are visually immersed in another world: The ability to become many different sorts of creatures.

Imagine a VR game played from the perspective of an ant, or a planet, or a paramecium. As a balloon or a skylark, a trapeze artist or a blue whale.

Sure, these are interesting first person perspectives when you are just looking at a rectangle sitting on your desk. But when you find yourself completely immersed in another reality, they might bring about a fundamental shift in perception.

As this medium starts to take off commercially, many more artists will be creating new kinds of experiences for it. We cannot yet know just how far-ranging those experiences will be. I suspect some of them are going to be pretty darned awesome.

Perfect

I saw The Walk today — in gorgeous stereo IMAX. The movie straddles a tightrope between reporting a real historical event and taking the viewer on a journey into pure wish-fulfillment fantasy.

And it manages that delicate high-wire act perfectly. I was aware at every moment that I was watching a Robert Zemeckis fantasia, the kind of astonishingly nimble effects-driven opera de cinema that is his stock-in-trade.

Yet I was also aware of being invited into a very personal story, that strange and unpredictable place where human frailty somehow transforms into godlike grace.

To my mind it was a perfect film.

As in a dream

The weather has suddenly turned cold in Manhattan. Today, for the first time in quite a while, I dressed in layers to go out. And it was great.

Don’t get me wrong, I love those warm summer days. But when the first chill of fall arrives, something about the autumn air seems distinctly, perfectly New York.

The overcast sky creates a soft and mysterious light. People walk quietly down the street together, almost as in a dream.

Winter will come soon enough, and with it the real cold, the biting kind. Until then, we are all out and about, enjoying autumn in New York.

The happy medium

We all know that when you’re having fun, time seems to go faster. And conversely, when you are bored or having a really bad time, time can seem to drag on and on.

You could, if you wished, achieve the experience of living a longer life by always being bored. In practice, this isn’t a very good strategy, because what’s the point of life if you can’t enjoy it?

But maybe there is the germ of a good idea there. What if your level of happiness and your subjective rate of time passing don’t change in lockstep? In particular, maybe we can maximize the following product, added up over all the moments of your life:

Subjective Enjoyment (SE) =
      (subjective duration of each moment) ×
      (level of happiness at that moment)

For instance, let’s say that at some level of happiness A, time seems to go at a rapid rate TA, and at some lower level of happiness B, time seems to go at a slower rate TB.

Consider the level of happiness halfway between A and B. What if time at that level of happiness seems to go by more slowly than the average of TA and TB?

In that case, you can achieve a greater total lifetime SE score by hovering near this average state.

In fact, if we could measure both subjective happiness and subjective rate of time passing on a linear scale, we might be able to compute an optimal state of happiness, to best make you feel as though you’ve lived a long and satisfying life.
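As a toy illustration, here is a minimal sketch in Python. It assumes a purely hypothetical rate curve — subjective time passing at rate 1 + h² for happiness level h, so each clock moment feels 1 / (1 + h²) long — which is my invention, not anything measured. Under that assumption, a simple search finds the SE-maximizing level of happiness:

```python
# Toy model: find the happiness level h that maximizes the SE product,
# assuming (hypothetically) that subjective time passes at rate 1 + h^2,
# so each clock moment feels 1 / (1 + h^2) long.
def subjective_enjoyment(h):
    duration = 1.0 / (1.0 + h * h)   # subjective duration of a moment
    return duration * h              # SE = duration x happiness

# Search a grid of happiness levels for the best trade-off.
levels = [i / 100.0 for i in range(1, 301)]
best = max(levels, key=subjective_enjoyment)
print(best)  # peaks at h = 1.0 under this assumed rate curve
```

Under this invented curve the optimum lands at a middling happiness level rather than at either extreme; a different assumed rate curve would move the peak, which is exactly the point of the argument above.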

But don’t get too excited about this. If you get too happy, you might end up with a shorter subjective lifetime. 😉

Looping for poets

When I was an undergrad at Harvard there was a one semester course that gave a broad survey of physics to non-majors. The instructors knew their students generally didn’t have a high level of mathematical preparation, and they also knew they needed to make things fun, because students were generally taking this as a requirement, not out of love for the subject.

The course was nicknamed, somewhat derisively, “Physics for Poets”, since it was essentially trying to teach a highly mathematical field without actually using any math. So the question arises: are all such courses doomed to superficiality and irrelevance, or is there something good there?

Today some of us faculty at NYU were discussing something vaguely similar: Is there a good way to introduce principles of computer science to non-majors who have no prior background in CS?

We came to the conclusion that there isn’t one way, but there might be many ways. Computer science contains quite a few key concepts. To name just a few: looping, conditionals, variables, procedures, inheritance, computational complexity, recursion. The list goes on.

And college students have many and varied interests. To name just a few: Art, photography, music, sports, cinema, politics, literature, dance, poetry, economics, theater, journalism. The list goes on.

For any given interest a student might have, there is a way to teach a corresponding concept in computer science. Consider, for example, looping. Post-processing of photographs requires looping through pixels, music, poetry and dance require looping through rhythmic patterns, and so on. If you understand a field well enough, you can generally find a motivation for the use of any computer science topic in that field.
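The photo case might be sketched like this — a made-up two-row grayscale “image”, with nested loops inverting every pixel (the data and the operation are illustrative, not from any actual course):

```python
# A tiny grayscale "photo": each number is a pixel brightness, 0-255.
image = [
    [  0,  64, 128],
    [192, 255,  32],
]

# Post-processing by looping through pixels: invert every brightness.
for row in image:
    for x in range(len(row)):
        row[x] = 255 - row[x]

print(image)  # prints [[255, 191, 127], [63, 0, 223]]
```

The same double-loop shape reappears whether the student is stepping through pixels, beats in a measure, or syllables in a line of verse.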

Of course to turn this insight into proper course design for a given student interest is far from easy. It requires real work and preparation on the part of an educator who loves both that subject and computer science.

But hey, isn’t that why we are here?

Every fifty years

This last summer I saw a wonderful project by some graduate students in the Interactive Digital Media program at Trinity College. Noting that this year is the 150th anniversary of Alice in Wonderland, and the 100th anniversary of Einstein’s theory of general relativity, they created an original work of interactive art.

Much of the power and delight of Lewis Carroll’s classic comes from the way it warps time and space. Everything is relative, and notions of reality as a static frame of reference go out the window.

As the students observed, this is one of the fundamental predictions of Einstein’s theory. It was general relativity that definitively moved our view of reality itself away from the rigid framework of Newtonian mechanics, with its fixed notions of the nature of time and space.

The students created a clever assortment of interactive techno/art experiences that riffed on the connections twixt red shifts and Red Queens, manifolds and Mad Hatters, gravity and Gryphons. Much of it was quite delightful.

But why just those two points in time? What about 50 years ago? I got to wondering what might have happened in 1965 to shake up our fixed notions of time and space.

So I went on Wikipedia and started snooping around. Quite a few notable things happened that year, in music, politics, science, literature and many other domains of human interest.

But one event in particular jumped out at me: On April 19, 1965, Gordon Moore published a paper laying out the principle that came to be known as “Moore’s Law” — that computation would become exponentially less expensive with each passing year.

In hindsight, that paper was the shot across the bow. It effectively predicted that our nation’s economic engine was going to shift radically from industrial to informational, from an ecology of fixed resources to one of exponentially increasing resources.

The vision of the future that Moore predicted 50 years ago — the world that we live in today — is indeed in the spirit of Carroll and Einstein. For it is a world in which the meaning of time and space, how we move through them, how we use them to communicate with one another, seems to change with every successive doubling of computational power.

Silly rabbit

I was telling a friend over dinner this evening about Joe Harris, who came up with the immortal line “Silly rabbit, Trix are for kids!” — as well as the artwork and entire idea for the spot.

In an interview years later, Harris lamented how General Mills had misunderstood his beloved character. The entire spot is built on the rabbit not getting the breakfast cereal. The way he tells it, one year GM decided to let the rabbit get some Trix. The result? Sales of Trix went down — because now there was no drama.

I wonder how many other examples there are like this in popular culture: Producers give the audience what they think it wants, and the audience rebels. Because sometimes, not getting what it wants is exactly what an audience really wants.