Tomorrow’s nostalgia

There is a powerful tendency at the end of the year to sum up. It is a time filled with rituals, from 10 Best Lists to glib summaries of the year in news, to overly hopeful resolutions for the coming year.

Oh, how tightly we cling to this little floating island of time. Already, we have begun to eulogize these particular 365 consecutive days of our respective lives, knowing that they will never come again. Today’s lived experience is tomorrow’s nostalgia.

If it was a good year, we already know it will live on inside us, in a golden haze of remembrance. If a bad year, we have already begun, in our minds, to formulate the war stories, our own particular tale of The Year I Survived The You-Know-What.

In any case, if December 31 shows us anything, it shows that time, in these odd little human minds of ours, is far from linear. Time, in one’s mind, exists in moments, in particular events and encounters that define the ordinary space around them. We dwell in these singular moments, poised to leap headlong into the next such moment.

I have had many such moments in this past year, both good and bad. Perhaps too many — certainly enough to fill five ordinary years. Not that I am complaining. Yet I’m ready, starting tomorrow morning, to clear the slate, to sweep the dishes off the table and begin again.

Vertical text

Let’s say it’s twenty years in the future, and everyone, to borrow a phrase from Vernor Vinge, is wearing. That is, we all have our cyber-contact lenses, and we take for granted that we will all have augmented reality floating in the air between us.

Suppose you and I want to discuss some text. If the text document just floats in the air between us, then either (1) we each see the document the right way forward, or (2) one of us sees it backwards. The problem with the first scenario is that if I point to (or look at) some part of the document, the place where I’m looking won’t correspond to what you see there.

In the early 1990s Hiroshi Ishii dealt with a similar issue very cleverly in his “ClearBoard” interface. People interacted face to face through a video screen. The video flipped each person’s image left/right, so that you always saw the other person in mirror reverse. This meant that both people could look at the same document floating between them, and everything worked out — text was forward for both of them, and their gaze directions always matched.

But you can’t do that if you’re physically face to face with somebody. One possibility is that people will just learn to read backwards, but somehow I doubt that this will catch on — from a social perspective, the situation is just too asymmetric.

Another possibility is that augmented reality will use a convention that text runs vertically, rather than horizontally. We can already read vertical text just fine, so this won’t require any new skills or training. The left/right reversal will take place within each character. For example, we will both see the letter “E” rather than one of us seeing its mirror image “Ǝ”.

In this arrangement, one of us might find ourselves reading the vertical columns of a document from right to left, rather than from left to right. But that doesn’t seem like a real obstacle to comprehension. To make it clear whether you’re reading left to right or right to left, the text in each column could be either left justified or right justified. Below is an example of the same text, as seen by two people who are face to face with each other:
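
For the curious, here is also a minimal Java Swing sketch of the convention (a hypothetical illustration, not taken from any real AR system): it draws the same word in two vertical columns, the left one with the characters as you would see them, the right one with each character mirrored, as the person facing you would see them.

```java
// VerticalTextSketch.java: a hypothetical sketch of the convention described above.
// Text runs vertically, and the left/right mirroring happens within each character,
// so one viewer sees normal glyphs and the other sees mirrored ones.
import java.awt.Font;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.geom.AffineTransform;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class VerticalTextSketch extends JPanel {
    static final String TEXT = "HELLO";

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setFont(new Font(Font.SANS_SERIF, Font.PLAIN, 32));
        int lineHeight = g2.getFontMetrics().getHeight();

        drawColumn(g2,  60, false, lineHeight);  // what you see: normal glyphs
        drawColumn(g2, 180, true,  lineHeight);  // what the person facing you sees: mirrored glyphs
    }

    private void drawColumn(Graphics2D g2, int x, boolean mirrored, int lineHeight) {
        for (int i = 0; i < TEXT.length(); i++) {
            AffineTransform saved = g2.getTransform();
            g2.translate(x, 60 + i * lineHeight);
            if (mirrored) g2.scale(-1, 1);       // flip each glyph about its own position
            g2.drawString(String.valueOf(TEXT.charAt(i)), 0, 0);
            g2.setTransform(saved);
        }
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Vertical text, two views");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new VerticalTextSketch());
        frame.setSize(320, 320);
        frame.setVisible(true);
    }
}
```

Either way, the column still reads top to bottom for both viewers; only the individual glyphs flip.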


Emails from the present

I got some really helpful emails from people after they watched yesterday’s video. It’s sort of an odd process showing demos of things that do not (and cannot) yet exist. The entire enterprise is balanced on the interstices between science, education, human/computer interface, and stage magic.

My general hope is that the parts of the demonstration that are real (that is, where the demo is actually doing what the audience believes it is doing) can expand over time, while the parts that are stage magic gradually fade away as we learn how to replace them with the real thing.

This approach — create an illusion of the way you would like things to be, and then gradually replace the illusion with reality, as you learn how to do that — is related to the “Wizard of Oz” research experiments. Except in this case we are hoping to eventually get rid of the Wizard of Oz aspects.

Visit to the future

I was a bit nervous before my recent talk in Hong Kong, mainly because my pre-talk preparation time — to calibrate the demo and get everything tracking properly — was cancelled at the last moment, due to a conference scheduling SNAFU. But the show must go on, particularly when there are several hundred people in the audience.

Also, they couldn’t get a camera set up until about fifteen minutes into the talk, so you miss the parts where I talk about Will Wright, Gordon Moore, Lance Williams, my dad, C.P. Snow, Myron Krueger, Hiroshi Ishii, Marco Tempest, Arthur C. Clarke, J.K. Rowling, Babak Parviz, Vernor Vinge, George Lucas, and the parietal lobe. BTW, the fact that there even is a recording reflects great work by some very hard-working tech-support people.

All part of the thrill of a live show! SIGGRAPH Asia has kindly put the video up on YouTube, so you can see, for yourself, my little visit to the future.

Making things move, part 6

Yesterday we showed what happens if you don’t shape the noise signal — you get a zombie character.

Today we will apply the high gain filter I talked about two days ago, so that iGor’s movement will be more purposeful. I’m still applying a noise signal to his left/right rotation as well as to his up/down rotation, but now I’m shaping each of those movements with a high gain filter. You can see the result by clicking on the image below:

Now iGor appears to be aware of, and interested in, his surroundings.

If I were simulating an actual eyeball, I would move it quite differently. An eyeball generally saccades to successive fixation points in about 20-30 milliseconds. That’s why a real human eye, filmed at 30 frames per second, appears to jump suddenly, in a single frame, from one fixation point to the next. Because iGor is a character whom the audience thinks of as a hybrid between a head and an eyeball, I needed to slow him down a bit.
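
For readers who want to poke at this outside the applet, here is a rough sketch in plain Java of how the two shaped channels might be wired up. It is a hypothetical reconstruction rather than the actual demo code: it borrows the noise1() and gain() helpers from the NoiseGainSketch example in part 4 further down the page, and the angle ranges, the 0.9 gain value, and the time scaling are all just illustrative choices.

```java
// IgorGaze.java: a hypothetical sketch of driving iGor's two rotation channels
// with gain-shaped noise (reuses noise1() and gain() from NoiseGainSketch, part 4).
public class IgorGaze {
    // Independent channels: offsetting the noise input keeps yaw and pitch uncorrelated.
    static double yawDegrees(double time) {
        double n = NoiseGainSketch.gain(0.9, NoiseGainSketch.noise1(time * 0.5));
        return (n - 0.5) * 120;   // map [0,1] to roughly +/-60 degrees of left/right rotation
    }

    static double pitchDegrees(double time) {
        double n = NoiseGainSketch.gain(0.9, NoiseGainSketch.noise1(time * 0.5 + 100.0));
        return (n - 0.5) * 60;    // a smaller +/-30 degree up/down range
    }

    public static void main(String[] args) {
        // The time * 0.5 factor above slows the gaze down, since iGor reads as a
        // head/eyeball hybrid rather than a fast-saccading eye.
        for (double t = 0; t < 4; t += 0.1) {
            System.out.printf("t=%4.1f  yaw=%7.2f  pitch=%7.2f%n",
                              t, yawDegrees(t), pitchDegrees(t));
        }
    }
}
```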

Making things move, part 5

If we apply procedural animation to iGor naively, by just adding time-varying noise to the rotation of the eyeball, the result is a sort of zombie eyeball, or perhaps an eyeball doped out on some serious medication. Its gaze appears to wander aimlessly, as though there is no focus or intentionality at work. Click on the image below to see what this looks like:

Of course the prospect of a zombie eyeball raises all sorts of interesting metaphysical questions. For one thing, does a zombie eyeball feed upon the brains of other eyeballs? Alas, such questions, while very important, are arguably beyond the scope of the current discussion.
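
Metaphysics aside, here is roughly what the naive version looks like in code. It is again a hypothetical sketch, reusing the noise1() helper from the NoiseGainSketch example in part 4 further down the page; compared with the purposeful version in part 6, the only difference is that nothing reshapes the raw noise.

```java
// ZombieGaze.java: a hypothetical sketch of the naive version. Raw noise is mapped
// straight to rotation angles, with no gain shaping, so the gaze wanders aimlessly.
// (Reuses noise1() from NoiseGainSketch, part 4.)
public class ZombieGaze {
    public static void main(String[] args) {
        for (double t = 0; t < 4; t += 0.1) {
            double yaw   = (NoiseGainSketch.noise1(t * 0.5)         - 0.5) * 120;
            double pitch = (NoiseGainSketch.noise1(t * 0.5 + 100.0) - 0.5) * 60;
            System.out.printf("t=%4.1f  yaw=%7.2f  pitch=%7.2f%n", t, yaw, pitch);
        }
    }
}
```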

Making things move, part 4

In a way, synthesizing procedural animation is a lot like synthesizing music: you create signals, and then you run those signals through filters to give them character. This is how we will animate our little eyeball friend — I guess we might as well call him iGor. 🙂

To make iGor look around, we need to synthesize a signal that will create a sort of unpredictable yet purposeful movement — as though he is looking at various things around him. To do this, we will need two tools: noise and gain.

The noise signal is the same noise I created to make procedural textures, except that we will vary this signal just over one dimension (time), rather than three dimensions (space). Noise by itself is rather flavorless — it just creates a signal that goes up and down over time smoothly but unpredictably:

But you can then shape this flavorless signal in different ways to get what you want. In our case, we want iGor to appear purposeful, so we will add gain to the noise to make it move more decisively: When the noise signal goes up, it will go up faster, and when it goes down, it will also go down faster. The more gain we add to a character’s movement, the more decisive that movement will seem.

You can see how this works by clicking on the image below to run a Java applet:

As you play with the applet, try varying the value of gain. You will see that after the gain filter is applied, the range of values stays the same. But the high gain noise signal spends more time near its lowest and highest values, and less time near the boring middle values.
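
If you would rather read code than run the applet, here is a small self-contained Java sketch of the same two tools. It is my own reconstruction, not the applet’s source: the noise is a simple one-dimensional value noise standing in for the real thing, the gain function follows the classic bias/gain formulation, and the names (noise1, bias, gain) and constants are chosen just for illustration.

```java
// NoiseGainSketch.java: a minimal, self-contained sketch (not the original applet).
// It builds a smooth 1D "value noise" signal and reshapes it with the classic
// bias/gain curves, so high gain pushes samples toward the extremes.
import java.util.Random;

public class NoiseGainSketch {
    static final int N = 256;                        // lattice size (power of two for cheap wrapping)
    static final double[] lattice = new double[N];
    static {
        Random rng = new Random(42);                 // fixed seed: repeatable "random" lattice
        for (int i = 0; i < N; i++) lattice[i] = rng.nextDouble();
    }

    // Smooth 1D value noise in [0,1]: interpolate lattice values with a fade curve.
    static double noise1(double t) {
        int i0 = (int) Math.floor(t);
        double f = t - i0;
        double fade = f * f * (3 - 2 * f);           // smoothstep easing between lattice points
        double a = lattice[i0 & (N - 1)];
        double b = lattice[(i0 + 1) & (N - 1)];
        return a + fade * (b - a);
    }

    // Classic bias/gain shaping functions (domain and range are both [0,1]).
    static double bias(double b, double t) {
        return Math.pow(t, Math.log(b) / Math.log(0.5));
    }
    static double gain(double g, double t) {
        return t < 0.5 ? bias(1 - g, 2 * t) / 2
                       : 1 - bias(1 - g, 2 - 2 * t) / 2;
    }

    public static void main(String[] args) {
        // Compare the raw signal with a high-gain version of the same signal.
        for (double t = 0; t < 8; t += 0.25) {
            double raw = noise1(t);
            double shaped = gain(0.9, raw);          // g near 1 => more decisive movement
            System.out.printf("t=%5.2f  raw=%.3f  highGain=%.3f%n", t, raw, shaped);
        }
    }
}
```

Change the 0.9 in main() to 0.5 and the shaping disappears; push it toward 1 and, just as in the applet, the range stays the same but the signal spends more of its time near the lowest and highest values.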

Making things move, part 3

Procedural animation is a form of art more than it is a form of technology. You assemble some powerful tools, but the way you use those tools is more like playing an instrument than it is like assembling a machine. The entire purpose of the tools is to provide lots of little knobs and buttons so that you can use your own aesthetic judgement, and understanding of human behavior/perception, to create a compelling illusion.

In the case of an “Eyeball with personality”, our goal will be to create the convincing illusion that there is a mind controlling the movement of the eyeball. Two things make this task easier than it might otherwise be: (1) We only need to move the eyeball along two axes (rotate in longitude, and rotate in latitude), and (2) We don’t need people to know what the eyeball is thinking — we need only convince them that it is thinking.

This last point is crucial, and it’s one of the things that makes procedural animation work. If you create a convincing illusion that there is a personality at work, people want to believe, and so they will suspend their disbelief. It’s the same thing that makes us care about Elizabeth Bennet and Mr. Darcy, even though we know full well that they exist only as words on paper.

In particular, we never quite know what Elizabeth Bennet will do next — Jane Austen builds a nice sense of unpredictability into this headstrong character. But it’s a controlled unpredictability — the character’s actions may be unexpected, but they take place within a set of constraints.

Going from Austen to Eyeball, tomorrow we will start with controlled randomness.

Making things move, part 2

With an eye to following the Frankensteinian theme suggested by some readers of yesterday’s post, I will use an appropriately creepy subject for talking about procedural animation.

Clicking on the image below will take you to a Java applet showing a 3D model of a human eye. We’re going to bring this little critter to life over the next few days. Meanwhile, interacting with the Java applet will let you see how the model was put together.

Nothing in the 3D model was actually measured from life. As I will do in each step of this little project, I just eyeballed it. 🙂

Making things move, part 1

I first started developing procedural textures as a way to make things look more natural in computer graphics. These days, much of the stuff of fantasy you see in movies — the marble, fire, smoke, clouds, stone, water, and all sorts of other things — is built with such methods.

I remember how exciting it was for me back when I first realized I could apply these same ideas of procedural and noise-based textures not just to appearance, but to animation. I’ve been trying to think of a very user-friendly way to describe how procedural textures can be used in computer graphics for making things move and come to life in a natural way — in a way that, as the master Walt Disney animators used to say, conveys the illusion of life.

More tomorrow.