Virtua vérité

Cinéma vérité often uses a shaky hand-held camera to emphasize the fact that one is seeing something being filmed. Of course, audience members are still sitting in their seats, and all this apparent shakiness is taking place within a rock-steady rectangular frame — the screen itself.

You can’t quite do the same thing in virtual reality, because there is no frame. In the content itself, you need to keep the horizon line steady. Once the visual horizon line becomes shaky, people quickly become nauseous, and sometimes they fall over.

Film doesn’t have this problem because the screen border itself creates a physiological horizon line, whatever the on-screen content. Which leads to the question: What other fundamental differences are there in the way we perceive “reality” in a film and in virtual reality?

If we take the frame of the screen as a metaphor, perhaps there are other ways that removing the cinematic frame can alter the experience. For example, what about the flow of time itself?

A filmmaker asks only that you face the screen, offering an implicit promise: As long as you are looking in the proper direction, the film itself will do all the work of directing your attention. Cuts, camera movements, changes of scene, these are all done for you.

But this may not be the case in virtual reality, where there isn’t necessarily a “proper” direction. In a sense, our narrative horizon line might not be there. Which means we may need to create that narrative horizon line some other way.

It’s not yet entirely clear how best to do that.

The gaze of the puppet

I was watching a puppet show this evening, and it struck me how uniquely powerful the gaze of the puppet can be.

Humans are burdened by our literalness. We have human faces and bodies, we have real lives of our own, we are flesh. This limits our ability to embody abstraction, to focus down our essence to a single powerful idea.

As Scott McCloud pointed out in Understanding Comics, on some deep level we identify a photo-realistic representation of a person as “the other”, whereas we identify a simplified representation of a person as “the self”.

This transference, augmented by an uncanny stillness, operates when we watch a puppet on stage. We are not seeing the puppet the way we see an actor. We see the puppet as ourself. When the puppet looks at something, we feel that it is we who are looking, through the puppet’s eyes.

And it’s what happens next that makes it all exciting: We find ourselves questioning why we are looking, how it makes us feel, what it all really means. We project our emotions onto the puppet, and through that projection we are able to see more deeply into our own soul.

A bit optimistic

I was talking with my friend Oliver about my “Bit” post from the other day. I described to him my possibly dystopian vision of the time-traveling superpower of being able to send a bit of information to the past. It was dystopian because if you screw up, you annihilate your entire time line. Definitely not fun.

Oliver pointed out that there is another way of looking at all this: If you screw up, then you’ve only annihilated the time line of everything that happened after you received the bit of information from the future. So the “early” version of you that gets info from the future is just fine. It’s only the future version of you, the one that has screwed up, which becomes disappeared.

Which means that you actually never get it wrong. The only version of future you which ends up existing is the one that never makes any mistakes.

Oliver’s version of all this is much friendlier than mine: Everything always works out by definition.

I must admit, this is a much nicer way to think about time travel.

Mirror mirror

This evening I went to a little gathering in honor of Marvin. I had lovely conversations with his family, close friends, people who were connected to each other through the very fortunate circumstance of having known this marvelous person.

To my surprise a friend of mine — somebody I’ve known for decades — told me that she reads my blog faithfully. It’s “Ken redux”, she said, a kind of boiled down snapshot of me, offered up in little posts.

I had never thought of it that way. For me, this blog isn’t about me. Rather, it’s about everything outside of myself — a chronicle of the wonder of all the things going on in the world.

But of course to anybody else who is not me, it’s the other way around. You are seeing me looking out, and what you see is my act of looking outward — which is, of course, a kind of representation of me.

This whole “I / thou” duality is so ingrained, so taken for granted, that we rarely even think about it. So it was odd, yet strangely fascinating, to see my little mirror onto the world held up to me by a friend, and to realize that what it was reflecting was my own face.

Bit

Here’s an odd twist on the time travel genre: Suppose you could send a single bit of information back to yourself in the past every so often — say, once a month. This is just enough information to tell your past self the answer to true/false questions.

But even this little bit would be an enormous superpower. The you that is in the past could frame questions such as “Will Google stock go up this month?”, and instantly learn the correct answer. To make this all work, the you that is in the future would simply need to make sure to send the proper “true” or “false” answer back in time.
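The protocol above can be sketched as a toy simulation. Everything here is invented for illustration: a made-up monthly price series stands in for the stock, and the “future self” is just a function that peeks one month ahead and sends back a single true/false bit.

```python
# Toy simulation of the one-bit-per-month oracle. The "future self"
# looks at next month's (invented) price and sends back one bit
# answering the question "will the stock go up this month?".

prices = [100, 104, 101, 108, 112, 109]  # made-up monthly prices

def bit_from_future(month):
    # The single bit the future self sends: True if the price rises.
    return prices[month + 1] > prices[month]

# The past self acts on each month's bit.
decisions = ["buy" if bit_from_future(m) else "hold"
             for m in range(len(prices) - 1)]
print(decisions)  # → ['buy', 'hold', 'buy', 'buy', 'hold']
```

Of course, the simulation sidesteps the paradox: here the future is just a list we can read, with no possibility of sending the wrong bit.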

But here’s the kicker: If you screw up even once — if you ever send the wrong bit back, or fail to send any answer at all — then your particular time line would cease to exist, a victim of paradox.

There might still, perhaps, be an infinite number of other versions of you off in parallel universes. But the one with your unique memories and life experiences would vanish. We’re talking existential retirement with extreme prejudice.

Would you want such a superpower? Or would it be too nerve wracking? I’m a bit conflicted myself.

Marvin

Marvin Minsky, who sadly passed away earlier this week, wasn’t just the smartest person I knew. I think he might have been the smartest person that anybody knew. He was in a whole different category.

But I’m not really talking here about intelligence on something as obvious as a linear scale. It’s more that Marvin’s mind was extraordinarily free. He could fly through idea space the way a swallow flies through the air. He would see connections where nobody else even thought to look, leap effortlessly from one concept to an entire new category of concepts, and in general make anybody he was talking with feel ten times smarter.

And all of this without an ounce of hubris. Marvin didn’t care who you were, or whether you were the “right” sort of intellectual. I’ve seen him ignore a room filled with Nobel prize winners, to focus on conversation with a single high school student, just because that student was seriously interested in discussing ideas. He was a true democrat, who believed in the power and the potential of each individual human mind.

I will miss him deeply. The world is a poorer place for having lost him.

Unjargon

A friend pointed out to me that my “Train of Thought” post the other day was incomprehensible to her. And I realized that it might be incomprehensible to a lot of people.

The problem is that I spend much of my time in a milieu where terms like “Turing test” and “Big Data” are understood by everyone in the room. But that doesn’t help once you take the discussion out of that room, and those phrases just sound like jargon.

“Turing test” is shorthand for Alan Turing’s famous thought experiment, which he called the “imitation game”. The idea is that you test a computer in the following way: The computer holds a conversation with a person (over a teletype, so they can’t actually see each other), and the person then tries to guess whether they’ve been conversing with a real person or with a computer.

This contest, the basic set-up for the recent film Ex Machina, as well as many other works of speculative fiction, raises all sorts of interesting questions. For example, if a computer consistently passes this test, can it be said to think? And if so, is it a kind of person? Should it be granted civil rights under the law?

“Big Data”, on the other hand, is the idea that if you feed enormous amounts of data to a computer program that is good only at classifying things into “more like this” or “less like that”, then the program can start to make good decisions when new data is fed to it, even though the program has absolutely no idea what’s going on.

This is what Machine Learning is all about, and it’s the reason that Google Translate is so good. GT doesn’t actually know anything about translating — it’s just very good at imitation. Because Google has fed it an enormous amount of translation data, it can now translate pretty well.

But Google Translate doesn’t really know anything about language, or people, or relationships, or the world. It’s just really good at making correlations between things if you give it enough examples.
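The “more like this or less like that” idea can be shown in a few lines of code. This is a toy nearest-neighbor classifier, with invented data and labels: notice that the program never understands what “cat” or “dog” means — it only measures which stored examples a new point resembles.

```python
import math

def distance(a, b):
    # Plain Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(examples, point):
    # examples: list of (feature_vector, label) pairs.
    # Return the label of the most similar stored example --
    # "more like this" wins, with no understanding of the labels.
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Made-up training data: two clusters with labels that are
# just strings to the program.
examples = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

print(classify(examples, (1.1, 1.0)))  # → cat
print(classify(examples, (5.1, 4.9)))  # → dog
```

Feed such a program enough examples and it starts making good guesses about new data — while still having absolutely no idea what’s going on.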

So my question was this: If you use the Big Data approach to imitate human behavior, are there some human behaviors that can never be imitated this way, no matter how much data you feed the program?

Let’s put it another way: If you fed all the romance novels ever written into a Machine Learning algorithm, and had it crunch away for long enough, would it ever be able to sustain an intimate emotional relationship in a way that is satisfying to its human partner? Even though the computer actually has no idea what is going on?

My guess is no. On the other hand, there are probably more than a few human relationships that work on exactly this basis. 🙂

Old fashioned selfie

Today was all about the snow. The snow meant different things to different people. For some it was a terrible inconvenience, for others a day off from work. But for those of us who live on Washington Square Park, it was sheer heaven.

In a sort of magical transformation, the moment anyone set foot in the park today, whatever their age, they turned into little children. People were laughing and running about, throwing snowballs, having a grand old time. Because no cars were on the streets, it occurred to me that this was pretty much the same experience people would have had in this park a century ago.

Here is a photo I took amid the revelry in the park, looking through the Arch toward Fifth Avenue. In the distance you would normally see the Empire State Building. But not today — it is hidden behind a swirling mass of falling snow:

Of course, not everything is the same as it was 100 years ago. For one thing, back in 1916 people weren’t taking so many selfies.

I decided I would record my snow day in the park the old fashioned way, by making a snow angel. It’s something I learned as a kid. The first good snowfall of winter you would go out in the backyard, lie on your back and flap your arms. When you stood up again, it would be as though an angel had been lying in the snow.

It’s hard to see the result from the image below, taken today in Washington Square Park, as the late afternoon winter light fades everything to ghostly Maxfield Parrish hints of yellow and blue. But it is indeed the angel version of yours truly, in a very old-fashioned sort of selfie:

Train of thought

As I was looking at yesterday’s post, I started thinking about a sort of Turing Test for fonts: Would it be easy or hard to design a randomized font — in the style of the one I showed yesterday — so people would not be able to tell that the randomness was machine generated?

And then I realized that it would be quite easy, because this is exactly the kind of problem that is amenable to a “big data” machine learning approach: First analyze a lot of samples of actual human writing, then use those samples to train a machine learning algorithm, and finally use that algorithm to generate new writing samples.
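Here is a minimal sketch of that idea, with everything invented for illustration: instead of real glyph data, we “learn” the natural variation in a single made-up parameter (stroke slant, in degrees) from human samples, then generate new glyphs by sampling from the fitted distribution rather than from uniform noise.

```python
import random
import statistics

# Hypothetical measurements of stroke slant (in degrees) taken
# from human handwriting samples -- all numbers invented.
human_slants = [3.1, 4.7, 2.8, 5.2, 3.9, 4.4, 3.3, 4.9]

# "Training" here is just fitting a normal distribution.
mu = statistics.mean(human_slants)
sigma = statistics.stdev(human_slants)

def generate_slant():
    # Sample a slant from the learned distribution, so the
    # randomness stays within the range humans actually produce
    # and reads as handwriting rather than machine noise.
    return random.gauss(mu, sigma)

print([round(generate_slant(), 1) for _ in range(5)])
```

A real randomized font would fit many such parameters per glyph (and their correlations), but the principle is the same: match the statistics of human variation, and the output stops looking machine generated.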

But then I started thinking, could we start to arrange all human abilities on a scale from “easily faked by big data” to “not at all fake-able by big data”?

Some things, like generating randomized fonts, are on the easy end of the spectrum. Other things, like maintaining a long term intimate relationship, are probably way off on the difficult end of the spectrum (or at least, I’d like to think so).

But what about everything in between? Driving a car has turned out to be more tractable than people had once thought, as have chess and rudimentary translation between natural languages.

I wonder, is there some litmus test we can apply, to get a rough sense of how easy or difficult it would be to emulate any human task via machine learning, given sufficient data showing humans themselves doing it?