Note to self

Sometimes, when something interesting happens, I’ll jot some little two-word phrase or other on a scrap of paper, so I can remember to write about it in the blog. This morning when I woke up, my very first thought was that I had written one of those little reminders. Trying to remember where, I realized I had written it down in one of my dreams.

Now, my general experience with dreams has been that the experience of doing things while dreaming is illusory. I didn’t actually read that book, or give that speech, or write that symphony. So, you can well imagine, I was not very hopeful that I’d remember this little note.

But I decided to try anyway. And as I lay there, it came back to me. I had been experiencing a particularly funny dream, and in the middle of the dream I had thought to write down the two words “DREAM SCENE”, precisely so I could later write about the dream in this blog. And I knew, with complete certainty, that those were the words, because that phrase was in fact one of the most vivid parts of the dream.

Which leads to the following question: Did I remember those two words simply because they were short and iconic, or did they constitute a truly operational note to self? And if the latter, is it possible — with proper training, presumably — for us to jot other things down in our dreams, in a way that would let us remember them when we awaken?

Standing on two legs

I had a somewhat overdue conversation today with a good friend. It was one of those delicate situations where you know that sometime soon you are going to have an important conversation with somebody you care about, and you even know what the conversation will be about, and you suspect it might be a difficult conversation, and neither of you has yet felt ready to have the conversation.

And then today we had the conversation, and it was like that great feeling you get after you’ve had a tooth pulled. “What was the problem?” you find yourself saying, “I feel so much better now!”

We’ve all been there. You suddenly find yourself standing with both legs on the ground, and you realize that for too long you had been trying to balance on one leg — without even knowing you were standing on one leg.

Standing on two legs is so much better. Much less chance of falling down. I’m not sure, but I think 孔子 said that. 🙂

Vertalic font

In a recent post I floated future display technologies in which text might appear to hover in the air between people. And I discussed a potential problem: if both people want to read that text from left-to-right (or from right-to-left, should they live in Tel Aviv), then their respective views of corresponding parts of a document will not line up.

Sharon pointed out that it might be too much to ask people to adapt to vertical text, and then Xiao commented that it would probably be a lot easier for China or Japan to adopt the idea of vertical text, since traditional Hanzi and Kanji are already written vertically.

Not wanting our Western culture to fall hopelessly behind in future literacy, I played around with various fonts, and I discovered that an italic font works quite well for vertical text, since “slanted forward” in a horizontal orientation can also be read as “slanted backward” in a vertical orientation. The result is far easier to read than it is for non-italic fonts:



Each of the two conversants would see a different view of any individual letter, with every letter slanting in that person’s respective rightward direction. But the location of each letter in space would be the same for both conversants. Perhaps this combination of vertical orientation (and consequently, vertical kerning) with italic letters should be called a vertalic font.

Freud versus Jung

I had a great conversation on New Year’s Day with two psychologists who had seen the recent film “A Dangerous Method”, about the fraught relationship between Sigmund Freud and Carl Jung. I was fascinated to learn about the fundamental differences in philosophy between Freud and Jung. It seemed that these differences stemmed largely from contrasting views of the collective unconscious.

After listening for a while, I suggested the following analogy: To Freud, the collective unconscious is like a giant power line. When the connection is compromised between a person’s conscious mind and the collective unconscious, then the conscious mind can’t draw enough power to function properly. Therapy essentially repairs this connection to the power grid.

To Jung, the collective unconscious is not like a power line, but rather like the Internet. Down there in the collective unconscious, we’re all sending each other internet packets. Your conscious mind is your local computer, and therapy improves your bandwidth to the Internet.

Yes, the psychologists replied, that’s pretty much it.

Tomorrow’s nostalgia

There is a powerful tendency at the end of the year to sum up. It is a time filled with rituals, from 10 Best Lists to glib summaries of the year in news, to overly hopeful resolutions for the coming year.

Oh, how tightly we cling to this little floating island of time. Already, we have begun to eulogize these particular 365 consecutive days of our respective lives, knowing that they will never come again. Today’s lived experience is tomorrow’s nostalgia.

If it was a good year, we already know it will live on inside us, in a golden haze of remembrance. If a bad year, we have already begun, in our minds, to formulate the war stories, our own particular tale of The Year I Survived The You-Know-What.

In any case, if December 31 shows us anything, it shows that time, in these odd little human minds of ours, is far from linear. Time, in one’s mind, exists in moments, in particular events and encounters that define the ordinary space around them. We dwell in these singular moments, poised to leap headlong into the next such moment.

I have had many such moments in this past year, both good and bad. Perhaps too many — certainly enough to fill five ordinary years. Not that I am complaining. Yet I’m ready, starting tomorrow morning, to clear the slate, to sweep the dishes off the table and begin again.

Vertical text

Let’s say it’s twenty years in the future, and everyone, to borrow a phrase from Vernor Vinge, is wearing. That is, we all have our cyber-contact lenses, and we take for granted that we will all have augmented reality floating in the air between us.

Suppose you and I want to discuss some text. If the text document just floats in the air between us, then either (1) we each see the document the right way forward, or (2) one of us will see it backwards. The problem with the first scenario is that if I point to (or look at) some part of the document, the place where I’m looking won’t correspond to what you see there.

In 1989 Hiroshi Ishii dealt with a similar issue very cleverly in his “ClearBoard” interface. People interacted face to face through a video screen. The video flipped everybody’s image left/right, so that you always saw the other person in mirror reverse. This meant that we could both look at the same document floating between us, and everything worked out — text was forward for both of us, and our gaze directions always matched.

But you can’t do that if you’re physically face to face with somebody. One possibility is that people will just learn to read backwards, but somehow I doubt that this will catch on — from a social perspective, the situation is just too asymmetric.

Another possibility is that augmented reality will use a convention that text runs vertically, rather than horizontally. We can already read vertical text just fine, so this won’t require any new skills or training. The left/right reversal will take place within each character. For example, we will both see the letter “E” rather than one of us seeing the mirrored letter “Ǝ”.

In this arrangement, one of us might find ourselves reading the vertical columns of a document from right to left, rather than from left to right. But that doesn’t seem like a real obstacle to comprehension. To make it clear whether you’re reading left to right or right to left, the text in each column could be either left justified or right justified. Below is an example of the same text, as seen by two people who are face to face with each other:
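The two facing views can be sketched in code. Here is a minimal illustration (plain ASCII only, so the per-character mirroring is omitted) of how the same physical columns read left-to-right for one person and right-to-left for the person facing them:

```python
def vertical_columns(words, facing=False):
    """Lay words out as vertical columns of characters.

    One viewer reads the columns left to right; the person facing
    them sees the same physical layout with the column order
    reversed. (Per-character mirroring is omitted here.)
    """
    cols = list(reversed(words)) if facing else list(words)
    height = max(len(w) for w in cols)
    rows = []
    for i in range(height):
        # Pad short columns with a space so rows stay aligned.
        rows.append("  ".join(w[i] if i < len(w) else " " for w in cols))
    return "\n".join(rows)

print(vertical_columns(["HELLO", "WORLD"]))
print()
print(vertical_columns(["HELLO", "WORLD"], facing=True))
```

The first call shows one conversant’s view, the second the facing conversant’s view of the same columns.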


Emails from the present

I got some really helpful emails from people after they watched yesterday’s video. It’s sort of an odd process showing demos of things that do not (and cannot) yet exist. The entire enterprise is balanced on the interstices between science, education, human/computer interface, and stage magic.

My general hope is that the parts of the demonstration that are real (that is, the demo is actually doing what the audience believes it is doing), can expand over time, while the part that is stage magic can gradually fade away as we learn how to replace it by the real thing.

This approach — create an illusion of the way you would like things to be, and then gradually replace the illusion by reality, as you learn how to do that — is related to the “Wizard of Oz” research experiments. Except in this case we are hoping to eventually get rid of the Wizard of Oz aspects.

Visit to the future

I was a bit nervous before my recent talk in Hong Kong, mainly because my pre-talk preparation time — to calibrate the demo and get everything tracking properly — was cancelled at the last moment, due to a conference scheduling SNAFU. But the show must go on, particularly when there are several hundred people in the audience.

Also, they couldn’t get a camera set up until about fifteen minutes into the talk, so you miss the parts where I talk about Will Wright, Gordon Moore, Lance Williams, my dad, C.P. Snow, Myron Krueger, Hiroshi Ishii, Marco Tempest, Arthur C. Clarke, J.K. Rowling, Babak Parviz, Vernor Vinge, George Lucas, and the parietal lobe. BTW, the fact that there even is a recording reflects great work by some very hard-working tech-support people.

All part of the thrill of a live show! SIGGRAPH Asia has kindly put the video up on YouTube, so you can see, for yourself, my little visit to the future.

Making things move, part 6

Yesterday we showed what happens if you don’t shape the noise signal — you get a zombie character.

Today we will apply the high gain filter I talked about two days ago, so that iGor’s movement will be more purposeful. I’m still applying a noise signal to his left/right rotation as well as to his up/down rotation, but now I’m shaping each of those movements with a high gain filter. You can see the result by clicking on the image below:
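As a rough sketch of that shaping step (not the actual demo code, and using a cheap sine-sum stand-in for the noise signal), here is the classic bias/gain formulation applied to a rotation channel. With gain near 1, the shaped value snaps toward the extremes and lingers there, which reads as deliberate rather than drifting motion:

```python
import math

def bias(b, t):
    """Bias curve: remaps t in [0,1], pushing values toward 0
    (b < 0.5) or toward 1 (b > 0.5); bias(0.5, t) is the identity."""
    return math.pow(t, math.log(b) / math.log(0.5))

def gain(g, t):
    """Gain curve: g near 1 pushes t toward 0 or 1, so the signal
    spends most of its time near the extremes -- the 'high gain
    filter' effect of purposeful rather than aimless movement."""
    if t < 0.5:
        return 0.5 * bias(1.0 - g, 2.0 * t)
    return 1.0 - 0.5 * bias(1.0 - g, 2.0 - 2.0 * t)

def noise1d(t):
    """Cheap smooth pseudo-noise in [0, 1], just for illustration
    (a stand-in for real gradient noise)."""
    return 0.5 + 0.5 * (0.6 * math.sin(t) + 0.4 * math.sin(2.3 * t + 1.7))

def head_rotation(t, g=0.9, max_degrees=40.0):
    """Left/right rotation at time t: raw noise, shaped by a high
    gain, remapped to [-max_degrees, +max_degrees]."""
    return (gain(g, noise1d(t)) - 0.5) * 2.0 * max_degrees
```

The same shaping would be applied independently to the up/down rotation channel; the `g` and `max_degrees` values here are illustrative, not taken from the demo.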

Now iGor appears to be aware of, and interested in, his surroundings.

If I were simulating an actual eyeball, I would move it quite differently. An eyeball generally saccades to successive fixation points in about 20-30 milliseconds. That’s why a real human eye, filmed at 30 frames per second, appears to jump suddenly, in a single frame, from one fixation point to the next. Because iGor is a character whom the audience thinks of as a hybrid between a head and an eyeball, I needed to slow him down a bit.
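The frame-rate arithmetic behind that observation, as a quick sanity check (the 20-30 millisecond figure and the 30 frames per second are both from the description above):

```python
fps = 30
frame_ms = 1000.0 / fps   # about 33.3 ms between successive frames
saccade_ms = 30           # upper end of a typical saccade duration

# A saccade that completes in 20-30 ms fits inside a single ~33 ms
# frame, which is why a filmed eye seems to jump instantly from one
# fixation point to the next.
fits_in_one_frame = saccade_ms <= frame_ms
print(fits_in_one_frame)
```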