Second Second Life

Today I had a chat with Philip Rosedale, who is planning a sequel to his “Second Life” shared virtual world. We agreed on a lot of things, but on one point we ended up having a little bit of an intellectual tug of war.

Philip feels that because people really want to communicate with each other, it is extremely important to convey the nuance and subtlety of things like facial expression and head movement. On some level this makes a lot of sense to me.

Yet I feel that on another level it may not be the best goal. The more you try to recreate reality, the higher you raise people’s expectations, since people are experts at experiencing reality itself. The resulting dissonance is often referred to as the “uncanny valley”, which I think is largely a result of unmet expectations.

We have no problem relating to Bugs Bunny, but most people had quite a bit of difficulty relating to the far more literally realistic characters in “Polar Express” or the 2007 animated “Beowulf”. The more we work to make something look/act “real”, the greater the disappointment when those efforts fail.

There is another reason I would like to see a shared world with less focus on realism: it would be far easier to include something I think such social on-line worlds should have (and which Second Life did not): cool non-player characters. For one thing, things get a little more interesting in a virtual world if you are never quite sure who is real and who isn’t.

Google Glass is the new Palm Pilot

When people look back at Google Glass, what will they think of it, I mean as an historical artifact?

It’s possible that it will go the way of the Nintendo Virtual Boy, a bold venture into uncharted user interface territory that went down in ignominious defeat.

The odds are very small that Glass itself will be embraced by millions of users. It is simply too soon — the technology is not quite there yet, and the requisite killer apps have not yet been developed (or even conceived).

My guess is that Google is expecting, and has been planning for, a third outcome: That Glass itself will remain something of a curiosity, but it will push the agenda of wearables forward, by getting it on everyone’s mind.

I think this is why the design is so conservative. There is no attempt to create a true augmented reality device, or to register graphics with objects in the scene. Rather, Glass mostly just supports networked audio reception, image capture, and a kind of visual annotation off in the corner of your field of vision.

Google is not trying to create an Apple Newton — a daring attempt to rethink the future in one fell swoop. Rather, Google is aiming more for the PalmPilot: Something simple with a basic feature set that introduces a new form factor in a very basic way.

The Newton was defeated by its own ambition, trying to do things (like true handwriting recognition) that were not yet supported by available technology. In contrast, the PalmPilot kept expectations low, opting for relatively low resolution/cost, a few well chosen features, and a very solid, if clunky, input method.

Almost exactly ten years later the Apple iPhone came out — a great example of what Bill Buxton calls the Long Nose of Innovation.

So somewhere around 2023, partly thanks to a timely and prescient seeding of the space by Google, we might see a wearable device that will seem not only right, but inevitable.

The eye of Sauron

Today I was over at a friend’s house, and he showed me the cool drone model airplane he’s been building. Then he demo’d the FPV (first person view) goggles he uses with it.

The idea is that you wear the FPV goggles while flying the plane, upon which is mounted a video camera. As your drone flies through the air, you see everything from the plane’s point of view.

To demo the FPV, my friend had me wear the goggles, while he held the camera and wandered around the various rooms of the house. Just standing in one place, I had the eerie feeling that I was roaming through his house. It was fascinating, but it also felt a little surreptitious, as though somehow I was snooping around where I shouldn’t be.

Not being familiar with his house, I didn’t realize that my friend had circled back and reentered the room through another door until the moment I saw the back of my own head, and realized he was directly behind me. Suddenly I was seeing myself as a character in a third person shooter!

In that moment I realized that advancing technologies — wirelessly networked cameras everywhere, in smartphones, in Google Glass and its progeny, in flying drones, and who knows where else — are going to end up interacting with the coming generations of wearable displays in unprecedented ways.

At some point, we will all be able to untether our visual points of view from our physical bodies. We will be able to fly overhead, or jump into each other’s viewpoints at will. We will become a communally roving eye of Sauron.

To me this prospect seems very eerie. But I am sure that generations to come will find it all perfectly natural.

Tragic ironies

I have a friend who loves to create cool optical contraptions with mirrors (and he is very good at it). Today I was telling him how watching him work gave me an idea for a story:

A man who loves to invent things with mirrors becomes sad that one day he will die. It isn’t that he fears death, but that he feels sad about all the great mirror inventions he will not get to make. One day he is visited by a supernatural being who offers him unending life. The man happily accepts the gift, only to find that he has been turned into a vampire (and we all know about vampires and mirrors).

It occurs to me that there could be an entire genre of “tragically ironic story ideas”. If you like, you could go ahead and write the actual story (or play, or movie, or poem, or opera, or country song), but the idea itself could be considered a work all on its own.

Comfort music

I’m about to go on a five hour evening drive. I’ve rented the cool convertible, stocked up on water, charted my route, and prepared myself for some pleasant alone time on the open road, aiming to arrive at my friends’ house at just around midnight.

But there was one last thing to take care of. Being an old-fashioned sort, I decided not to leave my musical experience up to Spotify or some similar service. Instead, I went out to the nearest retail emporium and looked for some music to listen to as I wend my way.

And in the course of doing so, I realized that I really like comfort music. Not exactly music that I already own, but things I already know I like. I’m willing to be a little bit adventurous. For example, I got myself a copy of the new Daft Punk album. I may end up not liking it, but from what I’ve heard, I am fully expecting to get lucky.

I wonder in what ways the situation — for example, a long evening drive in a convertible, as opposed to a marathon work session — influences the choice of music.

Are we really accurate, when left to our own devices, in choosing the right music for each occasion? I am sure there are algorithms out there which map mood and situation to music, selecting the “best” music for each situation.

In practice, I wonder which performs better — the listener who relies on his/her own instincts, or the algorithm that aims to know your varying musical tastes even better than you do?
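As a toy illustration of the kind of situation-to-music algorithm speculated about above (the features, track names, and numbers here are all invented for the sketch; real services use far richer signals), one could describe each track and each situation with the same hand-picked features and pick the closest match:

```python
# A toy sketch of situation-to-music matching. All features, tracks,
# and values are invented for illustration.
# Each track and each situation is described by the same three
# hand-picked features: (energy, tempo, familiarity), each in [0, 1].
tracks = {
    "late-night highway album":  (0.4, 0.5, 0.9),
    "marathon work session mix": (0.2, 0.3, 0.6),
    "party playlist":            (0.9, 0.9, 0.7),
}

def score(track_features, situation_features):
    # Squared Euclidean distance: lower means a better fit.
    return sum((t - s) ** 2 for t, s in zip(track_features, situation_features))

def best_track(situation_features):
    # Return the track whose features are closest to the situation.
    return min(tracks, key=lambda name: score(tracks[name], situation_features))

# A long evening drive calls for mellow, familiar "comfort music".
evening_drive = (0.45, 0.5, 0.85)
print(best_track(evening_drive))  # prints "late-night highway album"
```

Whether such a distance-in-feature-space heuristic ever beats a listener’s own instincts is, of course, exactly the open question.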


There is something magical about objects that can transform, taking on multiple functional identities at different times.

The idea goes back to antiquity, but it has taken on a new kind of resonance in our modern technology-obsessed world. In 1948, four-year-old Bernadette Castro starred in commercials for her father’s convertible couches, incidentally becoming the most televised child in America.

I remember as a child having a toy that was themed from some then-popular TV spy show, which at the press of a button would magically transform between a camera and a gun. It didn’t actually function as either, which was probably a good thing, but I didn’t care. Just the idea of a magical transformation of form and function made me happy.

There are so many examples of this power in popular culture, from Q’s gadgets to George Jetson’s flying saucer / suitcase, and of course the man himself, Inspector Gadget.

Children today obsess over a certain eponymous toy/mega-movie franchise, but I don’t really like the whole rhetoric of “they can do this because they are space aliens”. I don’t want magically transformable objects just in my space aliens. I want them in my real life!!!

Is that asking too much?


There were several fascinating papers at the SIGGRAPH conference about using 3D printers, micro-scale textures and special inks to fabricate perfect replicas of real objects — including subtle yet important visual cues like slight transparency (as in milk or soap) and anisotropic reflection (as in cloth or satin).

The work was impressive, and the results were stunning. Yet they also called to mind Philip K. Dick’s wonderful novel “Do Androids Dream of Electric Sheep?”. In the fictional future of that book, it has become easy to replicate all sorts of things, including animals. In fact, some people keep android sheep as pets (hence the title).

As a consequence of nuclear war fallout radiation, actual animals have become scarce and very precious, and it is a sign of wealth and high social status to be able to have a real animal, as opposed to one of the plentiful and cheap perfect copies that technology has enabled.

I wonder whether we are heading down some sort of analogous path. As perfect replicas of more and more things become cheap and plentiful, having an original of anything may become a rare privilege.


One of the papers at the SIGGRAPH conference showed how you can replace the fancy expensive compound lens in a digital camera with a really cheap lens. Of course when you do this you get all sorts of optical aberrations — chromatic aberration, spherical aberration, field curvature, and so on.

But if you have a powerful enough computer, and you know exactly what sorts of errors your cheap lens is introducing into the image, then you can post-process the captured image to get an impressively good result.
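The core idea can be sketched in a few lines. This is not the paper’s actual method — just a minimal Wiener deconvolution demo on a synthetic 1D signal, assuming the lens’s blur (its point spread function) is known exactly:

```python
# Minimal sketch of the idea above: if you know exactly what blur a
# cheap lens introduces, software can undo much of it. This toy uses
# Wiener deconvolution on a synthetic 1D "image"; real camera
# pipelines are far more sophisticated.
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Recover a signal from a known blur via Wiener filtering."""
    H = np.fft.fft(psf, n=len(blurred))  # transfer function of the "lens"
    B = np.fft.fft(blurred)
    # Inverse filter, regularized so noise is not amplified where H is small.
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(W * B))

# A sharp synthetic signal: a step edge.
sharp = np.zeros(64)
sharp[32:] = 1.0

# A known "cheap lens" blur: a 5-sample box point spread function.
psf = np.zeros(64)
psf[:5] = 1.0 / 5.0

# Blur the signal (circular convolution via FFT), then restore it.
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(psf)))
restored = wiener_deconvolve(blurred, psf)
```

The restored edge lands much closer to the original than the blurred one does; the residual error comes from the regularization term, which is the price paid for robustness to noise.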

I thought this paper was a great example of the progressive virtualization of our physical environment. More and more of the things we think of as being part of the physically built world around us are being augmented — and in some cases replaced — by virtual components.

From the ringing of your phone (simulating the sound of a long-gone technology) to the electric motor that drives your steering wheel (haptically simulating the direct mechanical linkage of an earlier era), to similar innovations too numerous to mention, every year we make our physical environment just a little bit more virtual.


Oddly enough, the very moment I finished writing the above, an old friend came over to say hello. He has a company called PaintScaping. They specialize in projecting digital make-believe content onto real world walls and other surfaces, matching the lighting, shadows, and 3D relief so perfectly that the resulting images seem like they are part of the physical world itself.

Maybe it’s a sign.

Name brand

Like a number of people in my field, I have developed techniques that ended up being named for me (through no fault of my own).

The other night, just as people were gathering for the SIGGRAPH conference, I was introduced by friends to somebody who decided to make an impromptu joke of the occasion. “Your parents must have had a great sense of humor,” he said, “to name you after a well known technique in computer graphics.”

It could have been an awkward moment, but life is too short for awkward moments. “That’s nothing,” I replied, “You should meet my brother Fourier.”

As it happened, I had dinner with my Mom last night. I wanted to tell her the story, but as I began I realized she would have no idea who “Fourier” was.

So I adapted the tale to the audience. Everything was the same until my reply at the end, which had now become: “That’s nothing. You should meet my brother Kleenex.”

I am happy to report that my Mom found the entire episode very funny.

Wondrous and magical things

Today at the SIGGRAPH conference I surprised an older colleague, by suddenly recalling an old memory that involved him, from when I was about eleven years old.

I had just joined the Boy Scouts. One Saturday our troop volunteered to help with paper recycling, which mainly involved loading many bundles of discarded newspapers and magazines into a big dumpster. Amid all of the trash, I happened upon a discarded issue of a magazine for electrical engineers (which of course I had never heard of). Curious, I started leafing through it.

In it I found the coolest article: an ingenious way of making computer graphics look fully three dimensional, as though objects were floating in space. No 3D glasses required.

I remember thinking, as I looked through the article, that this — using technology to make wondrous and magical things — was exactly what I wanted to do when I grew up.

I remember wanting to meet the person who did this work, but I did not meet the man himself until quite a few years later (when I was all grown up and doing computer graphics myself). By then the memory of that Saturday had receded to somewhere far in the back of my mind.

Until today, when I said hello to this colleague and suddenly the events of that long ago day came flooding back. I told him my story about finding that article, and how it was what first inspired me to want to do computer graphics.

He seemed a little taken aback, but very happy.