Future board games

In order to play a board game like Monopoly or Chess or Scrabble, you need just the right equipment. Alas, most of the time when you are hanging out with friends, you don’t have a Monopoly or Chess or Scrabble board with you.

But soon that won’t be a problem. As soon as you and your friends put on your XR specs, the board will materialize on the table in front of you.

On the one hand, this seems like a step backward. Instead of being a tangible experience, these games will become ephemeral.

Yet there is another way of looking at it. Many more people will be able to play them. And that can’t be a bad thing, right?

A room with a view

Right now the value of real estate varies tremendously with whether or not it has a good view. Alas, no matter how much money you pay, you always get the same view.

At some point, when extended reality specs become as numerous as smartphones are today, that will change. You will be able to decide what view you want out of your window on any given day, whether of the Eiffel Tower, or of the Grand Canyon, or of a lunar landscape.

I wonder what that will do to the value of real estate.

A.I. Etiquette, part 2

Our understanding that we are dealing with a fellow human is not something intellectual. It is instinctive, innate, part of our biology.

We don’t reject the humanity of chatbots because they are insufficiently capable. We reject their humanity because they are not human.

It means nothing to us if they are turned off, or duplicated, or altered in various ways, because there is nothing really at stake.

In contrast, we view each human life as inherently precious, and the loss of a human life as a tragedy. This is not intellectual. It is tribal, it is primal, and it is baked into our DNA.

The rate of A.I. development is not relevant here. In this realm, there are larger forces at work.

A.I. Etiquette, part 1

Today I wanted to confirm whether a bill of mine was already due for payment, so I called the number on the statement. Not surprisingly, the call was answered by a virtual person.

“She” was very polite, and she asked me some questions to verify it was really me, guiding me through the process. At some point she said “Your bill is not due until November 30. Would you like to pay now?”

At that point, I just hung up the phone. The bill was not yet due, so I didn’t need to pay anything, and there was no point in continuing.

Had I been talking to a real person, I would have exchanged some sort of pleasantries before hanging up. Presumably I would have thanked the person for their time, wished them a good Thanksgiving holiday, and so forth. But in this case, since there was no actual person on the other end, I simply hung up.

Afterward, I found myself wondering whether A.I. will ever advance enough to change my behavior. In other words, in that same situation, given a sufficiently advanced A.I. agent, would I ever feel the need to exchange pleasantries with that agent first, rather than simply hanging up the phone?

I suspect that the answer is no, and I think the reasons are profound and important. More tomorrow.

Sometimes four there are

When you and I are having a face-to-face conversation, the spatial dynamics are fairly simple. Both of us are facing directly toward the other, and at any point in the conversation it’s clear where the focus of attention is.

If we were to use advanced extended reality technology to visually place virtual objects into the scene, we could pretty much always place them halfway between you and me, and everything would make perfect sense.

But when four people are sitting around a square table, things are a little more complex. The two people at one corner might be engaged in their own conversation, or three people might be sharing a conversation while the fourth is checking their notes.

So where should those virtual objects go? Does the system need to actively interpret what is going on within our conversation — perhaps who is paying attention to whom — and then make dynamic decisions based on that?

And what if you are talking to me, but I am ignoring you because I am listening to somebody else around the table? What are the best visuals for that situation?
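Just as a purely hypothetical sketch of one way such a system might decide where to put a shared virtual object (not a description of any actual product), suppose each headset could report its wearer's head position and gaze direction. The object could then sit at a weighted centroid of the group, pulled toward whoever is currently attending to the shared conversation. The function names and weighting scheme below are illustrative only:

```python
# Hypothetical sketch: place a shared virtual object at the weighted centroid
# of the participants who are currently attending to the group conversation.
# Head positions and gaze directions are assumed inputs from head tracking.

import numpy as np

def attention_weight(position, gaze_dir, target):
    """How strongly a participant's gaze points toward a target point (0..1)."""
    to_target = target - position
    to_target /= np.linalg.norm(to_target)
    return max(0.0, float(np.dot(gaze_dir, to_target)))

def placement_point(positions, gaze_dirs):
    """Weighted centroid: participants looking toward the group center count more."""
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    weights = np.array([
        attention_weight(p, g / np.linalg.norm(g), center)
        for p, g in zip(positions, np.asarray(gaze_dirs, dtype=float))
    ])
    if weights.sum() < 1e-6:          # nobody is attending: fall back to the center
        return center
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

# Four people around a square table; the fourth is looking down at their notes,
# so the object drifts toward the three who are engaged with each other.
positions = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
gaze_dirs = [[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [0, -1, -1]]
print(placement_point(positions, gaze_dirs))
```

Of course, a real system would need to smooth these decisions over time and handle side conversations; this only gestures at the kind of interpretation involved.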

I suspect that much of this will become clear some time in the future, when multi-participant extended reality has been seamlessly integrated into our everyday conversations.

Always two there are

Soon after October 10, the launch day for the Meta Quest 3, I took to carrying one around with me when traveling. If I got on an airplane, my handy Quest 3 would go with me.

Previously I had been toting around a Quest Pro, but those are big and clunky. In happy contrast, the Quest 3 takes up hardly any room at all.

But now I no longer carry around a Quest 3 when I travel. Instead, I always carry two Quest 3s.

One of them is for me, and the other is for whatever colleague I am meeting with. I put on one, hand the other to my colleague, and say “try this.”

This lets me properly road test what it feels like for two people to have a face-to-face conversation in mixed reality. Maybe it will feel like the future.

Immersion

There has been much debate in recent years about the relative merits of optical see-through XR (as in the HoloLens or Magic Leap) versus video passthrough XR (as in the Vision Pro or Quest Pro). I have a very strong preference.

It comes down to the question of immersion. Do I feel that the XR objects that I am interacting with are immersed within my world?

In the case of optical see-through, objects are visually clipped to a somewhat small area in front of me. If I turn my head to the side by even a moderate amount, the object that I was just looking at will disappear.

That is not the case with video passthrough. A virtual object in video passthrough is visible throughout your entire field of view. However you turn your head, you can see that it is still there off to the side.

On a visceral level, these are fundamentally different experiences. Objects in optical see-through feel transient and ephemeral. In contrast, objects in video passthrough are persistent, the way real objects are — you get the feeling that they are still there even when you are not looking at them.

To me this makes all the difference.

Happy feet

Today is Savion Glover’s 50th birthday. Thinking of that great dancer/choreographer always reminds me of one of my odder cinematic experiences.

In 2006 I went out to the movies with some friends to see Happy Feet, an animated film about dancing penguins (although not the first — Mary Poppins got there first). I enjoyed the movie well enough, but what I really loved was the tap dancing.

Now here is the weird part. Every time the lead penguin danced, it was obvious to me that I was watching the tap dancing of Savion Glover. It was as though I was watching the man himself — visualized as a penguin, but still inimitably himself.

Sure enough, his name was listed in small print buried somewhere in the end credits. Not only had he choreographed the dances, but the filmmakers had motion-captured the man himself.

I enjoyed it, but there was something odd about it. The one thing that delighted me about that movie — the dancing — was credited only in a kind of “blink and you’ll miss it” way.

And don’t even get me started on Being John Malkovich and Phil Huber…

Technological nostalgia

When a new technology replaces an old technology, there is a transition period during which people who remember the old technology may miss it. For example, I am sure there were people at the advent of talkies who missed the very different kind of expressive power of silent movies. But that generation is long gone.

I wonder which of today’s technologies will experience that temporary burst of nostalgia for a transitional generation. There might, for example, in our own lifetimes, be a group of people who will still remember when people drove cars, as opposed to cars driving themselves. Later generations will wonder why anybody would ever want such a thing, and there will be no easy way to explain it to them.