Happy birthday W.S.

It’s said that all the world’s a stage
Well, one man’s work has been the rage
For about four hundred thirty years
And so today some birthday cheers
For that wondrous Englishman
Who gave us Lear and Caliban
Hamlet, Portia and Ophelia
Iago, Oberon, Cordelia
Macbeth and Puck and Tatiana
Never since the Pax Romana
Has one man’s star shone quite so bright
And filled the stage with such delight
There has never been another maven
Like the man who comes from Stratford-upon-Avon

Passthrough, part 5

Eventually (although not soon), we will be able to use a combination of visual, audio and haptic feedback to create a multi-sensory experience that feels just like reality. In a sense, the challenge here is to pass something akin to the Turing test.

The test would go something like this: if I am collaborating with two people, one of whom is sitting directly across a table from me while the other is 1000 miles away, can we create an experience of presence with sufficient fidelity that I cannot tell which is which?

For example, if the person sitting directly across from me passes an object to me across the table, I should be able to see it, hear it slide across the table, and feel it as I take the object from my collaborator. I might also feel a slight resistance as the other person lets go of the object.

Can I replicate this experience with a person who is 1000 miles away by using multi-sensory passthrough? At what point does the combination of visual, audio and haptic passthrough sufficiently match the fidelity of physical co-presence so that I can no longer tell the difference?

I don’t know the answer. But I think that this would be a very worthy goal to strive for, and that research in this area would be very exciting.

Passthrough, part 4

Now that we have talked about the concept of passthrough for two human senses — vision and hearing — it is natural to ask what other senses might be amenable to this paradigm. An obvious candidate is touch.

One of the limitations of video and audio passthrough devices is the intangibility of the items they present us with. We can see and hear virtual objects, but we cannot touch them.

So perhaps we should be thinking in terms of “haptic passthrough”. In other words, by some technological means we should be able to touch virtual objects as though they were physical objects. In addition, we should be able to modify how real objects feel to the touch.

When combined with video and audio passthrough, the effects of this can be powerful. Taken together, all of these things constitute “multi-sensory passthrough”.

More tomorrow.

Passthrough, part 3

Actually, the future I described yesterday already exists for millions of people. But not for their eyes — for their ears.

Consumer audio devices such as Apple’s AirPods do something that could be called “audio passthrough”. They take reality (in the form of sound waves entering the ear), digitize it, modify that digital signal to taste, combine it with a synthetic digital signal (e.g., recorded music), and then convert the result back into sound waves for the user to hear.

This lets those audio devices do some pretty impressive things. For example, they can selectively filter out or enhance sound in the world around you, let you hear only sound in front of you but not from other directions (like when you are talking with a friend in a crowded restaurant), or block out sudden loud sounds that might damage your ears.

The key is those four steps: (1) digitizing sound, (2) modifying that digital signal to taste, (3) mixing with synthetic sound, and finally (4) turning that mixture back into sound waves. This is exactly what you would want (but cannot yet have) in visual passthrough.
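The four steps can be sketched in a few lines of code. This is a toy illustration of the pipeline only — the function name, signals, and parameters are my own inventions, not any vendor’s actual API:

```python
import numpy as np

SAMPLE_RATE = 44_100  # CD-quality sampling rate, in samples per second


def audio_passthrough(mic_samples, synth_samples, gain=0.5):
    """Toy sketch of the four passthrough steps on one buffer of audio.

    mic_samples:   real-world sound, already digitized (step 1).
    synth_samples: a synthetic signal, e.g. recorded music (input to step 3).
    """
    # Step 2: modify the digital signal "to taste" -- here a simple
    # attenuation stands in for filtering or enhancement.
    modified = gain * mic_samples

    # Step 3: mix in the synthetic signal.
    mixed = modified + synth_samples

    # Step 4: clip to the valid range before the digital-to-analog
    # converter turns the mixture back into sound waves.
    return np.clip(mixed, -1.0, 1.0)


# One second of a 440 Hz "real world" tone plus a quieter 660 Hz synthetic tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
real_world = np.sin(2 * np.pi * 440 * t)
synthetic = 0.25 * np.sin(2 * np.pi * 660 * t)

out = audio_passthrough(real_world, synthetic)
```

Real devices do all of this continuously, in small buffers, with only a few milliseconds of latency — that latency budget is where most of the engineering difficulty lives.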

So why is that capability available for audio, but not for video? It’s because of Moore’s Law.

Moore’s Law, in its popular form, implies that computers get approximately 100 times faster every decade. And it turns out that the computing power needed to interactively process an audio signal is about 100 times less than the computing power needed to interactively process a video signal.
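A rough back-of-envelope calculation shows where that factor of 100 comes from. The numbers below are illustrative assumptions (CD-quality stereo audio versus standard-definition video), not measurements:

```python
# How many raw values per second each medium demands.
audio_rate = 44_100 * 2        # CD-quality stereo: samples/sec * 2 channels
video_rate = 640 * 480 * 30    # standard-definition video: pixels/frame * frames/sec

ratio = video_rate / audio_rate
print(f"video/audio data-rate ratio: {ratio:.0f}x")  # roughly 100x
```

So if audio and video processing per sample were comparably expensive, the hardware able to process video interactively would arrive about one Moore’s Law decade after the hardware able to process audio interactively — which matches the history described above.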

I realized back in the 1980s, when I was developing the first procedural shaders for computer graphics, that some of my colleagues in the field of computer music had gotten there a decade earlier. In the 1970s, they had already been adding synthetic noise to audio signals, modifying frequencies, applying filters that turned one musical instrument into another, and much more — all in real time.

As I learned more about computer music synthesis, I gradually came to understand that I was following in their footsteps. And I think that principle is just as valid today. If you want to understand future video passthrough, study present-day audio passthrough.

More tomorrow.

Passthrough, part 2

Today’s mixed reality headsets let you put computer graphics in front of a video capture of the real world around you. The video itself is not quite up to the resolution and color richness of actual reality, but over time, as technology continues to advance, that gap will close.

Today’s headsets only let you see the real world behind synthetic computer graphics. You are not given the ability to modify your view into reality.

But in the future you will be able to edit the world around you through your glasses. You will be able to zoom in, enhance colors, highlight objects, or selectively sharpen details of things that interest you.
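Each of those edits is, at heart, just image processing applied to the captured video frame before it reaches your eyes. As a toy sketch (my own invented function on a synthetic grayscale frame — not any headset’s API), two such edits might look like this:

```python
import numpy as np


def enhance_frame(frame, contrast=1.2, sharpen_amount=0.5):
    """Toy sketch: two edits a future passthrough display might apply.

    frame: grayscale image as a float array with values in [0, 1].
    """
    # Enhance contrast by stretching values away from the midpoint.
    out = 0.5 + contrast * (frame - 0.5)

    # Unsharp masking: subtract a blurred copy to boost fine detail.
    blurred = (
        out
        + np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0)
        + np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)
    ) / 5.0
    out = out + sharpen_amount * (out - blurred)

    return np.clip(out, 0.0, 1.0)


# A synthetic 64x64 "captured" frame containing a dim square of interest.
frame = np.zeros((64, 64))
frame[24:40, 24:40] = 0.6
edited = enhance_frame(frame)
```

The hard part is not the math — it is doing this at headset resolution and frame rate, with low enough latency that the edited world still feels like the world.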

More tomorrow.

Passthrough, part 1

The video passthrough feature of the Meta Quest 3 and Apple Vision Pro may seem like a novelty today, but I think there is a very profound principle at work here.

The concept of perceptual passthrough can be generalized in many ways. I think that these first devices are just the tip of the iceberg.

More tomorrow.

Leonardo and the Two Cultures

Today, on what would have been Leonardo da Vinci’s 572nd birthday, is a good time to talk about the two times that I saw the Codex Leicester.

The Codex Leicester is a folio that explains, with beautiful illustrations, various theories that Leonardo had about the physical world around us. Not surprisingly, many of his theories were absolutely correct, such as his surmise that the discovery of seashells on mountaintops suggested that those mountaintops were once at the bottom of the ocean.

Many years ago, soon after Bill Gates purchased the Codex Leicester for over thirty million dollars, he lent it to the American Museum of Natural History in New York for a public exhibition. The codex was presented together with interactive Microsoft software that let you explore its contents. Not surprisingly, the software was for sale.

That exhibit, which was wonderful, helped the public to experience the greatness of Leonardo the scientist.

Soon after, Gates lent the Codex Leicester to the Metropolitan Museum of Art. The curators of that worthy museum put on a public exhibition in which they placed the codex in the context of many other works of art by Leonardo.

That exhibit, which was wonderful, helped the public to experience the greatness of Leonardo the artist.

I saw both exhibitions, and was struck by the implicit war at play. Two of our city’s greatest temples of culture — facing each other from opposite sides of Central Park — seemed to be fighting over the meaning of Leonardo’s life and work.

I also saw that this is far from a new battle. In the second exhibition, there was a letter written by an important personage of the day, which the museum curators helpfully translated from the Italian. He was complaining that Leonardo was wasting time in the frivolous pursuit of science, when he could have been spending more time on something actually important — making more paintings.

As C.P. Snow observed in his brilliant essay The Two Cultures, none of this should come as a surprise.