Puppetry by music

I was watching Jaron Lanier play jazz piano with a small ensemble recently, and it occurred to me that in his freeform, shambling, yet artful way, he was radiating the same energy that suffuses his talks about technology: A kind of extreme casual intelligence, seemingly spur of the moment but actually the product of years of thought and contemplation.

And suddenly I decided that when I give talks about the future, I need to be improvising on a piano keyboard. The words I speak about virtual reality, cyber-connections, neural implants, should be complemented by freeform improvised jazz.

The hybrid format I’m contemplating probably breaks at least twenty different rules that keep CP Snow’s two cultures safely apart. Which means I will probably piss off quite a few people. On the other hand, I think Bob Dylan was totally on target in Newport in 1965 (you could look it up).

So today I tweaked my Chalktalk program, the one I’ve been using for all my recent teaching and presentations. Normally, when I use Chalktalk, I sketch as I talk, and those sketches then turn into animated ideas and creatures, which act out whatever topic I’m talking about.

But today I added a new feature: The animated creatures can appear in response to certain chords I play on my (midi) piano keyboard. Now, in addition to puppetry by drawing, puppetry by music.
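Mapping chords to creatures could work something like this little sketch (pure Python, no MIDI library; the chord templates, function names, and the idea of matching held notes against pitch-class patterns are my own guesses at how such a trigger might work, not Chalktalk's actual code):

```python
# A tiny sketch of how held MIDI notes might trigger a creature.
# MIDI represents each key as a note number (middle C = 60), so a
# chord is just the set of numbers currently held down.

# Pitch-class templates, as intervals above the chord's root.
CHORD_TEMPLATES = {
    "major": {0, 4, 7},
    "minor": {0, 3, 7},
    "dominant7": {0, 4, 7, 10},
}

def identify_chord(held_notes):
    """Given the MIDI note numbers currently held down,
    return (root_pitch_class, chord_name), or None if the
    notes don't match any template."""
    pitch_classes = {n % 12 for n in held_notes}
    for root in range(12):
        # Transpose the held notes so 'root' becomes 0.
        relative = {(pc - root) % 12 for pc in pitch_classes}
        for name, template in CHORD_TEMPLATES.items():
            if relative == template:
                return root, name
    return None

# E.g. a C major triad (C4, E4, G4) might summon one creature:
print(identify_chord({60, 64, 67}))  # → (0, 'major')
```

A dispatch table could then map each recognized (root, chord) pair to a particular animated creature.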

So I guess I’m already working on that talk Jaron inspired. And if I do it right, the visual ideas that show up and move about on screen will appear to flow naturally not just from the words, but also from the music.

Which, in a way, is exactly right.

Spice

This week a friend told me that somebody she knows is writing a book about the history of cinnamon. That seems like a great idea for a book, because it creates an opportunity to talk about so many interesting topics, from cuisine to culture to capitalism to colonialism.

But then I got to thinking. What if — just maybe — there was some sort of miscommunication?

Publisher:

We’d like you to write a history of Cinema. We’re offering a $100K advance.

Author:

Hmm, that’s an interesting topic. Are you sure people will want to read about something so common?

Publisher:

Oh yes, it’s part of people’s everyday lives, isn’t it?

Author:

Yes … I guess so. Well, ok, I’m not going to turn down such a generous cash advance. I’ll see what I can do.

some months later

Publisher:

How’s the book coming?

Author:

You were right — this is a fascinating topic.

Publisher:

So you’re managing to cover fresh ground?

Author:

Well, not necessarily fresh ground, but definitely spice.

Publisher:

Spice is good. Readers like that. But make sure it’s in good taste.

Author:

Oh, very good taste. If you add the right ingredients to the mix.

Publisher:

Sounds wonderful! When can we expect a draft?

The author was so heartened by his publisher’s unexpected enthusiasm that he started work on a sequel, completely on spec: A Brief History of Thyme.

Making pictures with math

Today, for a virtual reality project we are doing, a student asked me if there was a good way to arrange dots around a sphere in a nice random way. This student isn’t a math or computer science student, but he has been taking my computer graphics class, where we focus on how math can describe visual things.

So I felt confident that I could just describe to him, in a few words, a good approach: Instead of picking dots on a sphere, pick dots inside the cube that surrounds the sphere. This is easier, because you can just pick the dot’s x, y and z coordinates independently.

Then if any dot you pick falls outside the sphere, just throw it out and try again. So now you’ve got a collection of random dots that happen to all be inside the sphere. Now all you have to do is push all those points out to the sphere’s surface, and you’re done.

The cool thing was that this was all I needed to tell him. He totally understood it, got why it worked, and started coding it then and there. The other eight or so students around the table also got it, and none of them were math or computer science majors.

And they all seemed very interested when I said that the technique I’d just described was a well known technique in math called the “Monte Carlo method”. It’s called that because you basically keep rolling the dice like you’re in a gambling casino, but then you get to decide which rolls of the dice you want to keep.
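The whole recipe fits in a few lines of Python (a minimal illustration; the function name and the dot count are mine, not from any particular codebase):

```python
import random

def random_point_on_sphere():
    """Pick a point uniformly on the unit sphere by rejection
    sampling: choose x, y, z independently inside the cube that
    surrounds the sphere, throw out any point that lands outside
    the sphere, then push the survivor out to the surface."""
    while True:
        x = random.uniform(-1, 1)
        y = random.uniform(-1, 1)
        z = random.uniform(-1, 1)
        r2 = x * x + y * y + z * z
        # Keep only points inside the sphere (and not so close to
        # the center that dividing by the radius becomes unstable).
        if 1e-12 < r2 <= 1.0:
            r = r2 ** 0.5
            return (x / r, y / r, z / r)

# Scatter a handful of dots for a VR scene.
dots = [random_point_on_sphere() for _ in range(100)]
```

Rejecting points outside the sphere before normalizing is what keeps the result uniform; normalizing raw cube samples directly would bunch the dots toward the cube's corners.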

I love the fact that a group of students who think of themselves as artists, animators and designers are comfortable with this way of thinking about visual things. “Making pictures with math” may just be catching on.

Shared language

I was having a delightful discussion today with one of my Ph.D. students. The topic was some exciting new mathematical ideas we are working on in our computer graphics research.

Part of me was completely immersed in the conversation. But another part of me was listening to the rhythm and flow of it, as a sort of fly on the wall. And I realized that my student and I are so absorbed in this research, we describe things to each other in a way that a third person would have a very difficult time following.

It’s not that the concepts are so radically difficult. It’s more that we have developed our own shared language for describing the mathematical pictures in our minds, and discussing ways that we might play with those ideas and try out different possibilities.

Any other reasonably thoughtful person could indeed be brought up to speed on what we were saying. But they would probably need to learn some version of this shared language. And that all by itself would take time and effort.

So much of shared understanding can pivot on shared language. If you can’t properly express to each other the thoughts and images in your mind, then you can’t explore those ideas together — you can’t go on exciting journeys with each other.

Of course a lot of this comes down to motivation. My student and I are both passionate about what we consider to be beautiful ideas in mathematics and computer graphics. I am sure that something similar transpires between two jazz musicians using a verbal shorthand that they have developed over time to discuss cool musical ideas.

It’s not that a third person couldn’t learn their jazz language. It’s more that the person might not want to. Practically speaking, the ability to learn a particular shared language of ideas requires an inherent love of those ideas.

And so we develop languages that bind us to our respective tribes, whether they be tribes of sports, medicine, war, politics, music or computer graphics. We recognize the people who share a tribe because those people have put in the effort to learn our tribal language.

And like us, they did it for love.

Real life

I’ve been noticing that a lot of the shared “virtual” reality experiences in my own research are highly asymmetric. Our team seems to be going for maximum disruption of received notions of shared space.

In our experiments, one avatar might be tiny and another huge, one appearing as a realistic human and another as a glowing ball of light. We are creating social experiences between people that place them in radically different vantage points.

One would think that trying to keep things as symmetric as possible would be the name of the game. But somehow that seems like a cop out, a reduction to the obvious. Distance lends value to proximity, and difference gives power to connection.

After all, isn’t our “common viewpoint” merely a well-learned illusion? No two human beings have ever had the same literal experience of reality, and nobody, other than you, has seen the unique sequence of images that form your particular visual life experience.

Yet you accept, without hesitation, that you share the same reality with other people who have never seen the images that have flowed into your eyes and your brain.

So why shouldn’t we explore radically different subjective experience? After all, isn’t that an apt metaphor for our actual experience of so-called “real life”?

Back to the Future Reality

HILL VALLEY, 21 October, 2015 — Today Marty McFly arrived in that alternate future reality where the Cubs win the World Series and hoverboards work over any surface (except water, of course). Back to the Future Part II came out in 1989, the same year that Tim Berners-Lee began sketching out his vision for what we now know as the World Wide Web.

Four years later we had Mosaic, the first really practical web browser. For the first time, a significant number of people were exposed to the Web as a reality. Many were wondering what exactly it was good for, and whether it was going to catch on.

Of course it was harder in 1993 to see the Web as the globe-spanning medium we now take for granted. Such a radical level of transformation required people to build Web-based content and software, and other people to use that content and software. That kind of ecosystem takes time to develop and grow.

The Web grew quite steadily for the next 14 years. Then in 2007 came a major disruptive leap: Apple launched the iPhone. For the first time, consumers could put the Web in their pocket and take it with them everywhere. We now take this reality for granted.

I think we are now, in 2015, about to enter the Mosaic stage of Virtual Reality. The technology itself has existed for many years, but in spring of 2016 it will for the first time become widely available to consumers (via competing platforms from Facebook, Valve, SONY and others). Its close cousin, see-through 3D Augmented Reality, will launch soon thereafter, from Microsoft, Google and others.

Many people are asking what VR will be good for. Assuming a rate of evolution analogous to the one from Mosaic to iPhone, the answer to that will become very clear over the next 14 years, more or less. Between now and then applications for VR and AR will continue to grow and develop. People will come to rely on personal and professional applications of VR/AR that nobody today has even thought of.

Then sometime around 2030, VR will get its iPhone: People will just pop in their cyber-contact lenses (which will also double as cameras). Immersive Virtual Reality and see-through Augmented Reality will become one and the same technology, and people will begin to take that reality for granted.

And that reality will come to seem quite normal. Until around 2040, when neural implants get their Mosaic. I can’t even imagine what reality will be like after around 2055, when neural implants get their iPhone, a full century after lightning struck the courthouse clock in Hill Valley.

Awesome evening

This evening I am seated directly beneath the blue whale at the American Museum of Natural History. This is perhaps, to my inner seven-year-old, the most exalted of all Manhattan locations. The evening is a celebration of science, of progress, of the inexorable power of curiosity.

But then it all goes weird, and that’s good.

The first after dinner speaker is Brian Greene, the string theorist and popularizer of string theory. He is here to tear down the edifice of CP Snow’s two cultures, although, oddly, he never once mentions CP Snow by name.

I’ve met Brian a few times through the years, and he’s seemed like a nice and level-headed fellow. But not tonight. Tonight Brian is on fire. His talk is an interpretive dance, a poem acted out with both body and words. The length of his pauses after every preposition would make Christopher Walken weep with jealousy.

I have never seen anything quite like this performance. In his impassioned defense of science, he becomes a living work of art, his body thrusting this way and that, his hands passionately sculpting the empty air.

I find myself getting lost in the rhythm of his movements, the words merely adornment to this great string theoretic dance. It is the dance of the universe, and Brian Greene is its prophet.

And then something completely different.

The closing speaker is Alan Alda. Many of you know Alan from his days as Hawkeye Pierce in the television adaptation of M*A*S*H. But this is not that Alan Alda.

This isn’t even the next Alan Alda after that, the one who became the go-to spokesperson for science around 10 or 15 years ago, like a sort of Carl Sagan without the Ph.D.

No, this is Alan Alda your grandfather, the old guy with a million stories to tell, and all the time in the world to tell them. Like the one about that time, all those years ago, when your grandmother and I — this would have been before the war — were having lunch in Sammy’s Deli, back when it was really kosher (not like these kids today), and they ran out of pickles. Pickles! Would you believe it? Stop me if you’ve heard this one…

This goes on for several hours (or about 30 minutes if you’re going strictly by clock time), and it is utterly charming. Alan Alda is the granddad you never had, the old geezer with those endless stories that still have the power to make your parents roll their eyes.

I’m not really sure which of the two speakers I like better. They are both utterly strange and delightful, mostly because they somehow manage to completely subvert the program. Not because of anything they say, but by virtue of the sheer charisma of their insane styles of presentation.

All in all, it’s an awesome evening.

Beyond VR games

Today I tried a demo on the SONY Morpheus VR game system. It was thrilling and fun and completely absorbing. I was playing a computer game, and I was also completely inside the game, surrounded on all sides by its exciting and fast paced world.

Yet there was something about it that was not at all different from my prior experience playing computer games. Fundamentally, it wasn’t about VR as something new, but rather VR as an extension of something completely familiar.

When you are playing a computer game and you find yourself, say, racing down a highway with bad guys all around, your mind pretty much goes with it. You give yourself over to the game’s world.

This is the “Magic Circle” contract: The actor on stage is Hamlet, the words on paper are Lizzie Bennet, the 30 foot tall face projected on a flat surface is Indiana Jones. We know none of it is real, but we agree to suspend our disbelief while we are inside the magic circle. We leave reality behind for the duration.

As I was playing a thrilling chase game with the SONY Morpheus (which really is a fabulous system), I was still sitting in a chair, having essentially the same kind of experience I’d had in other computer games. Just in a more immersive way.

This is quite different from our research, and the research of our collaborators, in which participants wearing VR headsets literally walk around in physical space, voting with their feet as they experience another world. I suspect that this difference — the fact that your physical body is literally committing to the choices you make — creates an experience beyond VR itself.

The SONY Morpheus does a fantastic job of creating a vivid and fully realized game world. I think what we are after is something different: A way to create a vivid and fully realized new reality.

Essentially human

Computers are able to do more and more things that in former times were thought to be the sole province of human minds. Deep Blue beat Garry Kasparov way back in 1997, and since then the trend has continued, as the rising tide of Moore’s law claims an ever wider swath of computational challenges.

It looks as though driving cars is going to be taken over by computers pretty soon, and computers are still getting faster. And that leads to an interesting question: What sorts of things remain essentially human, in the sense that they remain definitively outside the grasp of artificial computation?

Is some list being compiled somewhere of firmly inimitable human mental capabilities? I would guess that such a list is being compiled by somebody somewhere, but I am not sure where to begin to look.

Any suggestions how to search for it? Or maybe that’s a job for an AI. 🙂

Something new

Today I am trying something completely new. I am writing this blog post entirely by dictating it into my phone.

I know this is not all that profound, but it does raise some interesting questions in my mind. For example, am I the same person, exactly, when I type and when I speak?

And if I am not precisely the same person, in the sense that there is some other part of my mind being represented, will there be an even more radical change in who that person is, in that future when people will express themselves through direct brain interfaces?

Anyway, something to think about, through thoughts which may be coming out of my mind, but which may never have come out of my fingertips.