Space and motion

Yesterday I talked about the possibility of visualizing narrative, for example the plot and character arcs of a novel, as a kind of architectural space within a virtual room in VR.

But what would be the best way to use movement in such a scenario? We could of course just have a static space, and our own movement through the space would provide all of the motion needed to understand what we are seeing. That would be a kind of static VR sculpture.

But why not make use of the fact that the space itself can change in response to our movements through it? We could illuminate narrative and structure by creating a responsive virtual world that changes as we approach and move away from various places within it.
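
To make that idea concrete, here is a minimal sketch, entirely my own illustration (the names opennessAt, near, and far are invented for this example): one possible "word" in such a language might be a place in the scene that reveals itself as you approach and closes up as you move away.

```typescript
// A place's "openness" as a function of viewer distance: fully revealed
// within `near` meters, fully closed beyond `far`, blended in between.

interface Vec3 { x: number; y: number; z: number; }

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function opennessAt(viewer: Vec3, place: Vec3, near = 1, far = 10): number {
  const t = (far - distance(viewer, place)) / (far - near);
  return Math.min(1, Math.max(0, t)); // clamp to [0, 1]
}

// Each frame, the scene could drive opacity, scale, or unfolding
// geometry from this value as the viewer walks around.
const doorway: Vec3 = { x: 0, y: 1.5, z: -4 };
const viewer: Vec3 = { x: 0, y: 1.7, z: 0 };
console.log(opennessAt(viewer, doorway)); // ≈ 0.67 at about 4 meters away
```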

Eventually, we could create an entire language of interaction between our movements and mutable architectural space. I don’t think this would be easy, but it sure would be an interesting thing to explore.

Visualizing narrative

People have done many wonderful things in virtual reality, but one thing I still haven’t seen anyone do well in VR is create a really good visualization of narrative.

Think, for example, of any novel you really love, one that completely drew you in. For me, one example of this was Pride and Prejudice.

There are specific places where we are introduced to key characters, where relationships change, where important new information is revealed. There is a structure to all this, and in a great novel that structure is beautiful.

I would like to be able to walk into a virtual room in which all of that structure is arrayed all around me. And I would like to be able to learn from it, for key insights about the novel to immediately jump out at me.

I’m not sure how much of this is possible, but it seems like a great thing to work on. I might try my hand at it. If you are similarly inspired, I would love to hear how it goes!

Rewatching a favorite TV series

The first time you watch a great TV series, you don’t know where it is going. Sure, the show’s creator set up some basic arcs and relationships in the pilot episode, but you don’t yet know what the series is going to do with it all.

Yet when you rewatch the series from the beginning, you already know all that. Every planted conflict or pointed conversational barb has a purpose. And now you know just what that purpose is.

In some ways it is a much richer experience. You are being given the privilege of seeing inside the mind of a great writer.

I highly recommend it.

Precision economy

When you program computers, you are often dealing with questions of economy. Much of it comes down to how precise you want things to be.

In computer graphics you can squeeze a red, green, or blue color value into a very small space (a single byte is typical), but the x, y, and z coordinates that describe where something is located need a much larger space.
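
Here is a minimal sketch of that trade-off as it commonly appears in graphics code (the sizes shown are the typical choice, not a universal rule): color channels routinely get 8 bits each, while positions get 32-bit floats.

```typescript
// Per-vertex storage for a small mesh: positions are "expensive",
// colors are "cheap".
const vertexCount = 1000;

// 3 coordinates x 4 bytes = 12 bytes of position per vertex,
// because location needs fine-grained accuracy.
const positions = new Float32Array(vertexCount * 3);

// 3 channels x 1 byte = 3 bytes of color per vertex, because
// 256 levels of red, green, or blue is usually plenty.
const colors = new Uint8Array(vertexCount * 3);

positions.set([1.25, -0.5, 3.75], 0); // x, y, z in world units
colors.set([255, 128, 0], 0);         // orange, to the nearest 1/256
```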

It occurs to me that this is a kind of metaphor for things we deal with in everyday life. We need some things to be very precise, but other things can stay sort of fuzzy.

If you want to pick up and use a knife or fork, you had better know exactly where it is. But you don’t need to know exactly how much money you have in your bank account, unless funds are very, very tight.

We only have so much attention to pay to things, so we generally keep a lot of things fuzzy in our minds, reserving precision for just a few things that need to be absolutely correct.

In a way, we are constantly negotiating a kind of precision economy. Because our time and attention are limited, there is only so much precision to go around. A precision economy is the inevitable result of an attention economy.

People tend to be very good at this without even thinking about it. After all, if we thought about it too much, that would not be an economical use of our limited attention budget. 😉

Blurring the distinction

Here is what I think is one good goal for practical collaboration using virtual reality: Participants should not care whether or not they are in virtual reality.

While working with other people on creating things, I should be able to seamlessly go back and forth between looking at a screen and being immersed in a virtual world. When I do that, there should not be any radical change in what I’m doing — only a change in my point of view.

There are several implications to this. For example, if I can use hand gestures to create and modify things in VR, I should be able to use the same hand gestures to create and modify those same sorts of things while looking into a computer monitor.
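
As a sketch of that implication, purely my own illustration (none of these names come from the post): the same gesture stream could drive the same edits, with the display mode reduced to a detail of the view.

```typescript
// One shared gesture-to-edit layer; switching between headset and
// monitor changes only the point of view, never the work itself.

type Gesture = { kind: "pinch" | "drag" | "release"; x: number; y: number; z: number };

interface View { render(): void; }

// The editor knows nothing about how the scene is displayed.
class SceneEditor {
  apply(g: Gesture): void {
    console.log(`edit: ${g.kind} at (${g.x}, ${g.y}, ${g.z})`);
  }
}

// Stand-in for either display mode; a real one would draw the scene.
class LabeledView implements View {
  constructor(private label: string) {}
  render(): void { console.log(`[${this.label}] scene redrawn`); }
}

class Session {
  constructor(private editor: SceneEditor, private view: View) {}
  onGesture(g: Gesture): void {
    this.editor.apply(g); // same edit regardless of view
    this.view.render();
  }
  switchView(v: View): void { this.view = v; }
}

// Switching views mid-session changes nothing about the work.
const session = new Session(new SceneEditor(), new LabeledView("monitor"));
session.onGesture({ kind: "pinch", x: 0, y: 1, z: -2 });
session.switchView(new LabeledView("headset"));
session.onGesture({ kind: "drag", x: 0.5, y: 1, z: -2 });
```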

Reality and VR each give us a different — and complementary — set of powers. As we work together, we should be able to freely move back and forth between the two modes of viewing, depending on what is most useful at any given moment.

Normal tourist visit

It’s funny how phrases can suddenly enter the lexicon. A brand new entry is “normal tourist visit.”

In particular, last week U.S. Representative Andrew S. Clyde (R-Ga.) compared the January 6 storming of the U.S. Capitol building to a “normal tourist visit.” This might seem surprising, coming from somebody who spent several hours that day helping to barricade the door against those same “tourists”.

But suppose we go with this for a moment, and give the man the benefit of the doubt. What if Rep. Clyde really believes that an angry mob smashing down doors, injuring many people, and causing extreme panic and mayhem is a “normal tourist visit?”

Does that mean he would welcome such a visit into his house in Jackson County, Georgia? Is there anything at all those tourists might do within his home that he would not consider proper?

If somebody asked him that, I wonder what he would say.

Happy accidents, part 3

In 2017 our Future Reality Lab spent several days at the Future of Storytelling festival putting on a live theater event with everyone — both audience and actors — walking around wearing untethered VR headsets in the same physical room. The story was inspired by Lewis Carroll’s Alice in Wonderland.

We put a lot of work into making sure that everyone’s location in the virtual room matched their location in the physical room. We felt that correspondence between real and virtual was essential for people to feel that they had been transported together into another world.

We did a lot of technical tests between shows, to make sure everything was working before the public came in. For the most part, things worked, thanks to the hard work of our awesome grad students.

But during one of our tests, in which I was in the experience with our art director Kris Layng, something went really wrong. We were hanging out in Alice’s virtual drawing room, and the tracking failed us.

All of a sudden, Kris and I were floating upward. We soon found ourselves hanging out around the chandelier, about eight feet off the virtual ground.

Thinking back on it now, it kind of reminds me of the “I Love to Laugh” scene from Mary Poppins. Except, of course, that we weren’t watching it up on a screen — we were inside it.

And it was totally awesome. Of all of my memories of that virtual experience, that moment is the most vivid and powerful.

Sometimes the best way to create a vivid experience of an alternate reality is to break the rules of reality.

Happy accidents, part 2

In 2014 our lab started putting multiple people in free-roaming VR in the same room, and gave them shared activities like drawing in the air together in 3D. There was no equipment at the time that would support this, so we cobbled together our own technology using an OptiTrack motion capture system, GearVRs and WiiMotes (so people could press buttons on hand-held controllers).

Over the next few years we demonstrated this system at various conferences, such as SIGGRAPH 2015. One of the places we showed it was the 2016 FMX conference in Stuttgart, Germany.

We brought with us a PC, an OptiTrack system, three GearVR headsets and three WiiMotes. The idea was that three people at a time could hang out together and collaboratively draw in the air in VR. It was supposed to run for three days.

On the morning of the second day, disaster struck. One of the WiiMotes stopped working. Since we were far from home, there was no way to replace it in time.

Practically, this meant that one of the three people would have no way to draw in the air. We thought the experience would be severely compromised, and were debating whether to reduce it to only two people or else to shut it down altogether.

But in the end we decided to go on for the next two days with a system that supported three participants but only two controllers. To our surprise, the experience suddenly got a lot better.

The participants started to share their controllers back and forth. Everyone was very generous, and would help out the others, making sure everyone got enough time drawing in the air.

We had accidentally discovered generosity in VR. It was a very happy accident.

Be careful how you pronounce things

Some years back my sister had a pet dog, an adorable Bichon Frise named Susie. Everybody loved Susie. She was the sweetest and most good-natured dog you could imagine.

It happened that around that time I was invited to speak at a conference in France. During one of the conference dinners, the topic of conversation came around to pets.

I said that I didn’t have any pets, but that my sister had an adorable Bichon Frise named Susie. To my surprise, everybody started laughing.

I was confused, until somebody explained the result of my mispronunciation. Apparently, the way I had said it, I had just told everyone that my sister had a fuzzy pigeon.

For the rest of the conference, I was known as that guy with the fuzzy pigeon. In retrospect, it was pretty funny.