Halloween in Greenwich Village

Yes, the parade is crazy and wonderful and good loud fun, but I am far more impressed by the immense variety of costumes worn by ordinary New Yorkers I saw walking around Greenwich Village this evening.

In addition to the usual ghosts and goblins and grinning gruesomes from beyond the grave, and an entire army of refugees from Tim Burton films past, present and future, you can find every variety of personage from any story that you have ever heard of — and from a few that you haven’t.

I saw a man wearing the world’s largest cowboy hat pass an entire family of ketchup bottles, and on the very next block Glinda the good witch was parading arm in arm with a Flying Monkey who was so alarmingly believable that the mere sight of him made me want to run off and warn Dorothy.

I particularly like it when people go beyond the expected and create their own mash-ups. I saw everything from an electric Spider-Man Ninja to a sort of behorned Viking version of Clint Eastwood’s “Man with No Name”. And some costumes just defy any attempt at explanation: One man was walking along the street gleefully dragging a vacuum cleaner behind him on a rope.

I saw a cop call out to a guy in a mad scientist outfit: “Hey, you look like Elton John!” The mad scientist shouted back “I’m Dr. Farnsworth.” “You look like Elton John to me,” the cop insisted. But the mad scientist was sticking to his guns. “No,” he said, “I am Dr. Farnsworth!”

When I passed by the good doctor, I said to him “You invented the TV!” The mad scientist stared at me blankly for a moment. Then his face lit up in a huge grin. “I invented the TV!” he announced proudly, before walking off into the night.

Eccescopy, part 5

Perhaps the most widely embraced recent pop cultural offering to the effect that “we can see whatever we want wherever we want” is the 1999 film The Matrix. In that movie, the solution is simple — and rather crazy. One’s entire physical world is replaced by a simulated world.

In a sense, the conceit behind The Matrix is like putting Neal Stephenson’s Metaverse and Star Trek’s Holodeck on steroids. Where those two fictional ideas suggest a 3D immersive cyberspace merely as a place to visit (the virtual reality world in Caprica has a similar conceit), The Matrix suggests that life itself be lived entirely within cyberspace — generally without even the awareness that one is inhabiting an artificial construct.

Of course that ups the ante considerably. The artificial world must be perfect, because it serves, for all intents and purposes, as reality. Once this level of fidelity can be achieved, then anything is possible. People can have super powers, and objects can change form or even disappear instantaneously, just as they can in any computer graphic scene.

But is such an all-encompassing direct brain interface possible? There isn’t much indication at this point that it is. Or at least, nobody yet seems to have a real clue how to go about it. The problem isn’t the physical connection of electrode arrays to brains. That’s difficult, but not impossible. Science has already advanced considerably beyond the version shown below. And in the next twenty years direct brain/computer interface technology is likely to advance far beyond what we can do today.


No, the basic problem is that your perception of reality is already a construct — one maintained by your brain. For example, you don’t literally see things the way a camera does. At any moment in time, your eyes see only a tiny window into reality, from which your brain then constructs a plausible model. It is really this constructed model that you “see”.

We don’t know very much about how this construction process works, which means we can’t hack into it with any effectiveness. And even if we could, a direct brain interface like the one in The Matrix would need to replace the considerable amount of image processing already done in our retinas and along the visual pathway. We might also need to simulate the saccades and other movements our eyeballs make as our brain continually refocuses its attention.
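
To make that last idea a bit more concrete, here is a toy sketch in Python. It is not a model of any actual neuroscience; the grid size, the size of the “fovea” and the random saccade pattern are all invented for illustration. It just shows how a tiny high-resolution window, moved around often enough, gets stitched into a persistent internal picture of a scene:

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up "scene" the eye is looking at: a 100x100 grid of brightness values.
scene = rng.random((100, 100))

# The brain's internal model starts out as a uniform guess.
model = np.full_like(scene, scene.mean())

FOVEA = 10          # the high-resolution window is tiny compared to the whole scene
NUM_SACCADES = 50   # an arbitrary number of eye movements for this toy run

for _ in range(NUM_SACCADES):
    # Each saccade points the fovea at a new spot...
    y = rng.integers(0, scene.shape[0] - FOVEA)
    x = rng.integers(0, scene.shape[1] - FOVEA)
    # ...and only that small patch of the model is updated with real detail.
    model[y:y + FOVEA, x:x + FOVEA] = scene[y:y + FOVEA, x:x + FOVEA]

# Wherever the model exactly matches the scene, a foveal patch has landed;
# everywhere else, the "picture" is still just the model's guess.
coverage = np.mean(model == scene)
print(f"After {NUM_SACCADES} saccades, {coverage:.0%} of the scene has been seen directly.")
```

The point of the toy is that most of what the model “knows” at any instant is reconstruction rather than fresh input, and that reconstruction is exactly the part we don’t yet understand well enough to hack.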

Most likely the best way to transmit visual information to our brains is the old-fashioned way: by sending photons of visible light into our eyes.

Dating advice

Just to break things up a bit, I’m going to post something non-eccescopic today.

I attended a talk yesterday by Tim Johnson — the director of many DreamWorks films. He pointed out that back when he directed Antz, there was a scene where Sharon Stone’s character asks Woody Allen’s character to dance (which in movie terms means she likes him, and that eventually she will sleep with him).

The script called for Allen simply to say “Yes”. But, Mr. Johnson revealed, Woody Allen ad-libbed, replacing the simple “Yes” with the much more effective line “Absolutely!”.

Fast forward ten years, from 1998 to 2008. In the David Fincher film The Curious Case of Benjamin Button, the character of Daisy, played by Cate Blanchett, at one point says to the eponymous character, played by Brad Pitt, “Sleep with me.”

In what turned out to be my favorite line of the movie, he replies “Absolutely!”.

It occurred to me, in that moment, that the writer Eric Roth probably lifted the line from Woody Allen’s ad lib of ten years earlier.

Which might have made it the first time in history that Woody Allen gave dating advice to Brad Pitt.

Eccescopy, part 4

If you’re going to create virtual objects that appear to float in the air between people, one way to do it is to actually put a virtual object in the air between people. This seems to be the principle of the Holodeck from Star Trek: The Next Generation. Such approaches have the disadvantage that you need some kind of site-specific projection device, so they are most likely not going to scale up to inhabit the entire shared world around us.

As I mentioned in an earlier post, an early fictional version of this projection-based approach is the one developed by the Krell in the 1956 film Forbidden Planet, clearly a direct inspiration for what George Lucas put on the screen two decades later:



Art imitates art

Around 2002, Jeff Han and I collaborated on a project to try to make something like this for real, which we called Holodust.

Our basic approach was to draw the virtual object with a laser beam directly onto a cloud of dust. Of course you don’t know the exact position of each particle in a cloud of dust, which is why our plan was to use two lasers: An infrared laser would sweep through the cloud. Whenever it happened to hit a dust particle that it would be useful to illuminate, a second — visible — laser would flash, thereby lighting up just that one dust particle.
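
To make that selection step concrete, here is a minimal sketch in Python (rather than the Java of the original applets) of the decision the visible laser has to make. Everything in it is invented for illustration: the particle count, the tolerance, and the choice of a simple spherical shell as the virtual object. And unlike the real rig, this simulation already knows where every particle is, whereas Holodust would only discover particle positions as the infrared beam happened to hit them:

```python
import numpy as np

# Illustrative parameters only; a real Holodust rig would get these from its optics.
NUM_PARTICLES = 50_000   # dust particles drifting in the display volume
SHELL_RADIUS = 0.5       # radius of the virtual object (here, a sphere), in meters
TOLERANCE = 0.005        # how close a particle must be to the surface to be worth lighting

rng = np.random.default_rng(0)

# A snapshot of the dust cloud: random positions in a 1-meter cube centered at the origin.
particles = rng.uniform(-0.5, 0.5, size=(NUM_PARTICLES, 3))

# For each particle the infrared sweep happens to reveal, decide whether it lies close
# enough to the virtual surface that the visible laser should flash it.
distance_to_surface = np.abs(np.linalg.norm(particles, axis=1) - SHELL_RADIUS)
flash = distance_to_surface < TOLERANCE

lit_points = particles[flash]
print(f"{len(lit_points)} of {NUM_PARTICLES:,} particles would be flashed by the visible laser.")
```

Presumably the sweep and the flashes would need to repeat quickly enough that, to the eye, the lit particles fuse into a single persistent shape.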

And here is a link to some Java applets simulating a Holodust display:



Here is a visualization of what a Holodust display might look like, together with a photo from an experiment that Jeff built to test the principle:



Note that there is a big distinction between such displays and, say, the Heliodisplay by IO2 Technology, which projects a flat image seemingly in thin air (actually into a thin sheet of water mist). The Heliodisplay is not eccescopic, since you don’t see a different image when you walk around it.

In contrast, the wonderful 360° Lightfield Display at USC is indeed more eccescopic than Holodust, because even the shading of the virtual object can change as you look at it from different directions. Unfortunately, it relies on a slanted metal mirror rotating at very high speed, so if you tried to touch it you would most likely destroy both the display and your hand.

Sometimes art imitates life. When I see the little sparkly dustlike particles in the air within the floating display of a virtual brain from Joss Whedon’s recent TV series Dollhouse, I definitely get the impression that it’s supposed to operate through some kind of Holodust:



Eccescopy, part 3

While creating visions from a computer that appear directly in the world around us is a wonderful thing, it’s not the be-all and end-all of eccescopy. It was arguably George Lucas who whetted our appetite for the real killer app: Using that information as a way for people to communicate with each other — without disrupting the sense of shared physical space.

The original Star Wars film provided many distinct examples of this, of which here are just three:


Princess Leia in a beam of light

Playing Dejarik aboard the Millennium Falcon (“Let the Wookiee win”)

Rebels watching a visualization of the Death Star circling Endor

What’s most striking to me about these scenes, and others like them, is the clear sense they convey that in the world of Star Wars there is no need for computer screens, since a technology exists that allows information to simply appear in mid-air.

And yet the world of Star Wars is filled with computer displays. Perhaps George Lucas hadn’t really thought this through. But more likely, he realized that a sci-fi world without traditional computer screens would be completely incomprehensible to contemporary audiences.

Eccescopy, part 2

Setting aside for the moment how we would technically realize computer information simply coexisting with our physical world, it’s fun to see the fantasy versions of this vision that people have created — many of which are available on YouTube and Vimeo.

One that is particularly nice from a technical/aesthetic perspective is a video entitled “What Matters to Me”, in which Christopher Harrell describes his ideas by pulling them out of the air and arranging them in front of him in space:



I especially like the way some of those ideas perch atop his fingers, until he is ready to wave them away.

Another work that seems to get at some of these ideas (although it isn’t nearly as elegant) is the augmented office scene from the recent computer game “Heavy Rain”:



Then there is the lovely video by Bruce Branit, in which a young man constructs an entire world out of the air, using only his hands, for the woman he loves:



Closer to current technical possibility — and a great example of street theatre — is the delightful demonstration system in which Marco Tempest turns a piece of cardboard into a magical interactive space:



Mr. Tempest only manages to turn that one piece of cardboard eccescopic, but this is clearly a step in the right direction.

What these visions all have in common is the idea that there is no “computer” — there is only us. Information appears not on some disembodied screen, but rather right here in the physical world we share.

But how can we do this for real — not just on pieces of cardboard, but everywhere?

Eccescopy, part 1

Some years ago I was visiting Will Wright at Maxis, while they were still working on “The Sims 2”. He showed me a box, exactly the size of a computer game CD box, with nice artwork, text, system requirements, everything you’d expect. Except that it was labeled “The Sims 4”, and the release date was sometime around 2012. I looked more closely at the system requirements, and they were far beyond anything available at the time.

Will explained to me that this was always the way he and his colleagues planned new game releases. Right up front they design the box, the artwork, the characters, the nice little blurb that goes on the back of the box. Then they set about making it possible for you to open the box (which might take a few years). In Will’s own words: “The game is in there. You just can’t open the box yet.”

And so I’ve decided to expand on yesterday’s post with a series of descriptions of the emerging field of “eccescopy”. My techno-geek side likes to think that “ecce” stands for “eye centered computed environment”. An eccescope is simply a device to let everyone see an alternate world created within the computer cloud, thereby allowing that world to appear before our eyes, right alongside our own physical world. It’s the ultimate extension of what is currently called “augmented reality”.

I chose that word because neither “eccescope” nor “eccescopy” appears even once in a Google search (although after today’s post, that will presumably change). I also chose “ecce” in order to rescue that perfectly respectable Latin word, which means “behold”, from its ignominious association with a certain unfortunate fable involving an ancient Roman prefect of Judaea. 🙂

When you put “ecce” together with the Greek-derived “-scope” (from “skopein”, which means “to look at”), you get the idea — in order to see, all you need to do is look. Well, that’s the basic idea anyway. In follow-on posts I will describe what an eccescopic future might be like, and how we might get there from here.

Beyond computer screens

One day, in not so many years, we will no longer need computer screens. One way or another, it will become cheap and easy for people to see objects that are not there, superimposed onto the physical world around us. Not only that, but each of us will be able to see our own personal view of this augmented reality, customized for where we happen to be at the moment.

Only a few years ago I thought that the level of technology required for this would not arrive for perhaps half a century, but now I’ve come to see that it will probably be here well within the next decade. Which means it’s time to think seriously about how to make the most of things.

Will it be a good thing or a bad thing when virtual objects inhabit the physical space between us all — when the collective ideas of humans burst forth from mere books and screens, free to roam the world?

Will this new way of seeing information alter our fundamental relationship with our physical selves? Or will it have the opposite effect — freeing us once and for all from the harsh limitations of a screen-bound information world, so that we can return once more to the world of mind joined to body, for which our evolution has so well prepared us?

The inverse law of New York City

For most of my life it was generally received wisdom within the United States of America that New York City was a terrible place to live. That is, of course, unless you lived in New York City. New Yorkers have always loved their city with a fierce pride, a zealous passion akin to that which possessed the ancient citizens of Imperial Rome.

But if you didn’t actually live in the Big Apple, you knew it only from movies and TV shows, which invariably showed it to be a crazy scary brutal place where catching a cab is hard, but getting shot by a drug dealer is easy. Or if you did visit, you probably did what all tourists did, and headed straight for ridiculous places like Times Square (a part of the city actual New Yorkers try to avoid).

But then, nine years ago, New York City got attacked. Yes, I know — technically everyone in the U.S. got attacked — but believe me, it was different if you were there, and people you were actually connected to died, and for months after you had to breathe in that sick smell of death and rubble. For us it was a violation more personal than political. And of course it was difficult to stay calm and gracious when so many well-meaning visitors wanted to “check out Ground Zero and then head up to see Les Miz”.

A number of my friends found it all too depressing, and moved away from NY during those years. But the flip side was that people all around the United States suddenly liked New Yorkers. We were warmly embraced by our fellow Americans, supported, even honored, and all of that snide anti-New York attitude seemed to fade away.

Until recently. Lately I’ve noticed that the anti-New York sentiment seems to be coming back. Somehow we have once more come to represent, in some circles, everything that is wrong with this country. Yet as it happens, a recent poll has shown that New Yorkers are quite happy — that we are in fact more content and satisfied and enthused about our city than we have been in years.

And so it seems there is some law of conservation at work: The more depressed New Yorkers get, the more they are embraced and celebrated by their fellow Americans. Whereas the happier people are to live in New York, the more convinced the rest of the country is that it’s a terrible place to live.

I wonder, why is that?

Song in my head

This evening, returning from a lovely dinner out with some friends, I realized — at the very moment I reached my door — that I had been replaying the same song over and over in my head for the previous twenty minutes. During those twenty minutes I had not been conscious of doing any such thing, but as soon as I caught myself, awareness and memory came flooding back all at once.

I’ve found myself doing this sort of autopilot song-playing many times, with a surprisingly large variety of musical genres. This time it was Neil Diamond’s “Song Sung Blue”. One recent evening I realized my mind had been endlessly replaying a version of Leonard Cohen’s “Hallelujah” (in particular, the Sad Kermit version of the Jeff Buckley cover).

My unconscious mind does not seem to favor any particular type of music. At various times I have found myself endlessly replaying everything from Beethoven’s “Ode to Joy” to Lady Gaga’s “Bad Romance”. Apparently deep down my mind has highly catholic tastes.

Several years ago some well-meaning friends pitched in and bought me an iPod Touch. Dutifully I tried carrying it around the city with a pair of earphones, listening to my favorite songs. But it never really worked. No matter how much I liked any given song, hearing it through earphones seemed vaguely annoying, as though some intruder were trying to barge in on my brain uninvited.

I realize only now what the problem was: The song in my ears was most likely interfering with whatever song was already playing in my head.