Seeing eyes

We learn a lot about a person by looking at their eyes. There is so much subtlety of emotion in the space just around the eyes.

There are seven distinct groups of muscles in the area around the eyes. And even very slight and subtle uses of those muscles can convey a tremendous amount of meaning — whether intended or not.

When somebody is wearing sunglasses, much of that information is hidden. We simply can’t “read” people as well when they are sporting shades.

In the future, extended reality eyewear will likely become commonplace, and it will likely look a lot like sunglasses. Which means that for much of the time, we won’t be able to read people’s faces as well to catch subtleties of emotion and intent.

I wonder whether taking off your XR glasses will end up becoming a sort of social signal of emotional intimacy. Perhaps you will only show your actual eyes to people you trust, and the ability to hide your naked eyes from strangers will become a fundamental right of privacy, perhaps one enshrined in law.

Zoning out in future extended reality

When I am on a particularly boring Zoom meeting, sometimes I multitask. I mute my microphone so that nobody can hear the telltale typing, and I get other stuff done.

Admit it, you’ve done the same thing. Some meetings just have far too high a time-to-interest ratio, and we all need to get stuff done.

In the future, when people are wearing extended reality glasses, I wonder whether this trend will continue. A person at the front of a meeting might see a sea of ostensibly interested faces, looking up attentively.

But in actuality, many of those people will be zoning out by looking at the displays built into their eyeglasses. Maybe they will be programming, or surfing YouTube videos, or shopping on-line. There won’t really be any way to know.

This is probably not a good thing. The alienation of people looking at their phones at least comes with a certain amount of social signaling. We can tell when someone is surfing their smartphone rather than paying attention in a meeting.

But when the screens move into our eyeglasses, that will no longer be the case. We may not have any idea who in the room is mentally present, and who is — for all practical purposes — somewhere else.

Math coding versus art coding

Last week in my class I showed the students how to implement some fairly mathematical algorithms. The funny thing about algorithms is that you need to get them exactly right. One false move and the algorithm breaks.

So my live coding had a certain quality of being a high-wire act. Everything had to be exactly correct in order for me to “teach by coding”.

But this week, even though I was live coding, I was doing it to build up an animated figure. This wasn’t so much math as it was art — even though I was still using the medium of computer programming.
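To make the difference concrete, here is a toy sketch of my own (not code from either class). In math coding, a single flipped sign silently breaks everything; in art coding, the “right” values are simply whatever looks good:

    import math

    # Math coding: the algorithm is either exactly right or it is broken.
    def rotate(x, y, angle):
        c, s = math.cos(angle), math.sin(angle)
        return x * c - y * s, x * s + y * c   # flip that minus sign and every answer is wrong

    # Art coding: the parameters are knobs to play with, not constraints to satisfy.
    def bounce(t, speed=2.0, height=1.0):
        return height * abs(math.sin(speed * t))   # tweak to taste until it feels alive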

The tone of the class was much lighter. I could play around, try different things, take suggestions from the class. There was a lot more room for us to explore and experiment.

There is no real answer to the question of which is better — the two experiences are incredibly different, even though they are using the same language of coding.

But I can tell you there is one big difference: After the “making art by coding” class is over, I feel much less anxiety. 🙂

Demos from the future

One of the useful things about science fiction is that it creates a vehicle for presenting demos from the future to a broad audience. This matters because there are always restrictions on what we can build in any given year, yet we still want to be able to talk sensibly about the future.

In the case of computers, we have Moore’s Law, which suggests that computer capability grows exponentially over time. So we may not be able to achieve something in any given year, but we can predict with rough accuracy what we will be able to achieve a decade hence.

This principle can be applied to create plausible demos from the future. For example, Moore’s Law tells us that computer capability grows by roughly a factor of 100 in a decade.
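That factor of 100 falls out of simple arithmetic, assuming the common reading of Moore’s Law as one doubling roughly every 18 months:

    # One doubling every ~18 months over a ten-year span:
    doublings_per_decade = 10 / 1.5           # about 6.7 doublings
    growth_factor = 2 ** doublings_per_decade
    print(round(growth_factor))               # about 102, i.e. roughly a factor of 100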

If we were to take what we now know about machine learning, computer vision, computer graphics, and the other technologies that support computer-enabled human communication, and multiply the speed of every component by 100, we could start designing a prototype for what a personal communication device might look like in 2031.

One of the best ways to deliver a sense of the capabilities of such a prototype into the public consciousness is through the medium of science fiction. Sounds to me like a fun project!

Future cool shades

There are many reasons to assume that future extended reality eyewear will look like sunglasses rather than contact lenses. For one thing, on a ten-year technology horizon, it will be far easier to engineer the former than the latter. For another, there will always be many people who feel uncomfortable wearing contact lenses.

I wonder whether this will have implications for fashion. After The Matrix came out, a lot of people started to wear cool shades to look like Neo and Trinity. The same thing happened in earlier eras, after Peter Fonda rocked a cool pair of shades in Easy Rider in 1969, and of course even before that with the rise of the Beats in 1958.

These forthcoming devices will likely be socially empowering, conferring status in whatever comes next in social media. Given that, I suspect we will soon be heading for a similar fashion era of “cool shades”.

It may look like a walnut

WandaVision is by far the best thing on television right now. I am very sad that next week will be the final episode.

One great thing about WandaVision is its precise and knowledgeable references to classic moments in television culture going back well over half a century. Last night’s penultimate episode was no exception.

One of the things it referenced was my favorite episode of The Dick Van Dyke Show: Season 2, Episode 20 (first aired February 6, 1963). I hadn’t seen this episode since I was a kid, but today — thanks to the wonders of streaming — I sat down and watched it again, and fully appreciated for the first time what a great parody it is of The Twilight Zone.

It can be difficult to explain now the particular hold that The Twilight Zone had on America back then, and how deeply the creative tone of paranoia crafted by Rod Serling tapped into our nation’s Cold War fears. It’s even more difficult to explain how formally innovative it was at the time for a light-hearted sitcom to parody The Twilight Zone.

So now I realize how meta the reference to that particular episode is. When you crack open a walnut, you expect to find a walnut — but sometimes you find something completely different.

Episode 20 in Season 2 of The Dick Van Dyke Show moved the ball boldly forward on what can be done within a television show. It may look like a walnut, but it’s so much more.

Just like WandaVision.

Fractal presentations

Today we are hosting the annual visiting day for PhD students accepted into our computer science department at NYU. It’s the first time this event is being held on-line (last year’s visiting day took place just before the pandemic turned everything upside down).

Much of the event consists of a faculty member describing, in about 15 minutes, a large body of research in some broad area, such as machine learning, compiler optimization, or natural language processing. Each such area has an entire group of faculty working on it, and involves collaborations with lots of students and other labs around the world.

I am impressed, as I watch these presentations, at how each presenter is able to describe what is really an enormous amount of research in a clear and economical way. There is something beautiful about seeing a large topic summarized briefly in a way that still captures the excitement of doing research in that topic.

It would be great to see these short presentations expanded out as a kind of fractal. When a student is interested in learning a bit more about any sub-topic, they should be able to go down one or more levels, and then either continue to descend into further details or pop back up and move on to other sub-topics as desired.
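Here is a minimal sketch of what the underlying structure might look like (all names are hypothetical, just to illustrate the idea): every topic is a short summary plus sub-topics of exactly the same shape, and the viewer chooses how deep to go.

    from dataclasses import dataclass, field

    @dataclass
    class Slide:
        # One node in a fractal presentation: a summary plus optional detail.
        title: str
        summary: str
        children: list["Slide"] = field(default_factory=list)

    def present(slide: Slide, max_depth: int, depth: int = 0) -> None:
        # Show this level's summary; descend only as far as the viewer asks.
        print("  " * depth + f"{slide.title}: {slide.summary}")
        if depth < max_depth:
            for child in slide.children:
                present(child, max_depth, depth + 1)

    overview = Slide("Machine learning", "15-minute overview", [
        Slide("Vision", "recognition, segmentation, synthesis"),
        Slide("NLP", "language models, translation"),
    ])
    present(overview, max_depth=1)   # raise max_depth to drill down further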

Maybe we should be building tools to support the creation of fractal presentations. That might be a good next step in the evolution of on-line learning.

Apps between us in reality

Now take everything I said in yesterday’s post, and apply it to a future reality in which we can all wear really good, high-quality mixed reality glasses. In that future, we will be able to see and hear and converse with each other face to face — whether or not we happen to be in the same physical room.

But also we will be able to gesture to create and manipulate objects in the space between us. Semantically, this would be a logical extension of the kind of apps that might be available in future “open” versions of something like Zoom.

This suggests a possible path toward building that shared enhanced reality. If we start the ecosystem now, while we are still all looking into screens, we might learn a lot about what works, and what people find engaging and useful.

Then when the really good extended reality glasses finally show up, we will all have a better idea how to use them properly. Why not start now?

Apps between us in Zoom

Yesterday I talked about potential future variants of Zoom. The idea would be to have something more open in terms of the stuff that could be seen and manipulated on the screen between you and me.

In particular, I’m thinking there could be a sort of App Store, in which vendors would create plug-ins that enhance functionality and interaction in various ways. One vendor might add a slide creation tool or shared text editor that would be superimposed right over the videos of participants. Another vendor might add a 3D character animation tool. A third might add some sort of cooperative video editing suite, another a customizable appearance filter.
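As a sketch of how the plumbing might work, imagine each plug-in implementing one small interface, with the host compositing every installed overlay onto the shared view (all of these names are hypothetical, just to illustrate the model):

    from typing import Protocol

    class MeetingPlugin(Protocol):
        # The contract a hypothetical vendor plug-in would implement.
        name: str
        def render_overlay(self, frame: bytes) -> bytes:
            # Return the video frame with this plug-in's UI drawn on top.
            ...

    REGISTRY: dict[str, MeetingPlugin] = {}

    def install(plugin: MeetingPlugin) -> None:
        REGISTRY[plugin.name] = plugin

    def composite(frame: bytes) -> bytes:
        # Each installed plug-in, in turn, draws over the shared meeting view.
        for plugin in REGISTRY.values():
            frame = plugin.render_overlay(frame)
        return frame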

Not every app would be successful. Some apps would be wildly popular, while others would languish. But that’s the beauty of the app store model — if there is an audience for any given app, that app will eventually find it.

Stuff between us in Zoom

Zoom allows you to choose any background, and also to scribble on the screen. So there is a rudimentary notion of “there are objects between us.”

But it doesn’t get much fancier than that. There isn’t really a shared sense of a world of meaningful tangible objects between you and me — just a kind of blackboard.

There is an opportunity to do much more. With the right software, we can create entire little interactive worlds to share between us.

What little worlds would you want to create and share?