Eccescopy, part 8

There was a time, not too long ago, when putting an electronic auditory enhancement device in your ear was something you did surreptitiously. A hearing aid was something you tried to hide — ideally you didn’t want anyone to know that you needed one. For example, here is an ad for a hearing aid designed to be as invisible as possible:




 

This is consistent with the principle that people generally try, whenever possible, to appear “more normal”. Since auditory impairment is seen as “less normal”, a hearing aid is viewed as something to hide.

But there has been a fascinating recent trend in the other direction. When a hearing device on one’s ear is seen as a source of empowerment, as in the case of Bluetooth hands-free cellphone headsets, people don’t try to hide these devices. Rather, they try to show them off.

The ultimate current expression of this is the Aliph “Jawbone” headset:




 

Suddenly it’s cool and sexy to have a piece of hi-tech equipment attached to your ear. I think that the key distinction here is between “I am trying to fix a problem” and “I am giving myself a superpower”. The former makes you socially vulnerable, whereas the latter makes you socially powerful.

This is something to consider when designing an eccescopic display device.

Dinner party

This evening I went to a dinner party.

Nothing spectacular happened, other than a perfect evening. The host and hostess had prepared a feast, various guests brought wine and desserts, and the stage was set for the simple magic of people sitting around a table talking and getting to know one another.

I had not met most of the guests, but I was not surprised to find how much I liked them. This is the old fashioned kind of social network — the kind that existed for millennia before Facebook. People whom you really like invite you to dinner, along with other people whom they really like, and soon you find yourself immersed in hours of fascinating conversations with newfound friends.

Afterward, I was amazed to discover that the evening had lasted nearly six hours — so easily did the time fly by. It’s heartening to realize that in this modern age of hi-tech computer games and 3D movies with three hundred million dollar budgets, the most fun is still to be had through the simple pleasure of a group of friends sitting around a table, enjoying each other’s company.

Eccescopy, part 7

I’ve mentioned before, in another context, Charles Darwin’s observation that every genotype requires a viable phenotype. That is, no mutation can persist unless the organism carrying it can survive and produce viable offspring. Technology is like biological evolution in that it can’t just magically jump far ahead. Every step along the path to innovation needs to correspond to a set of needs, or it will die in the marketplace before enabling the next step.

For example, I don’t think we will achieve widespread eccescopy through surgery. Yes, technically we could give everyone an artificial lens implant, but the problem is that until there is a good reason for such a drastic intervention, people won’t do it.

It’s not even that invasive eye surgery is so exotic. You probably know many people who have had cataracts, and are walking around today with an acrylic lens implant — or maybe two. You don’t know who they are, because it’s not something people generally talk about. The operation itself is relatively simple and safe, requiring only local anaesthetic, and no stay in a hospital.

But it’s only done because it avoids blindness — a very different value proposition than, say, implanting an artificial lens so you can do Google searches within your eyeball. Most people won’t opt for invasive surgery unless it helps them to be more “normal”, however that word is currently defined in their culture.

In other words, even if you accept the hypothesis that artificially enhanced eyes, surgically upgraded at infancy, are the long term future of humanity, you can’t get there from here — at least not directly. First there would need to be a non-invasive technology, easy to adopt, to allow the underlying ecosystem to evolve, the layers of application code to be written, and the world around us to become populated by well designed and compelling cybernetic ghosts.

Something you can wear, rather than implant. Next time I’ll talk about some candidate technologies.

At six

When I was six years old, I developed a theory that the Universe was divided into two worlds: The world that was inside my head, and the world that was outside my head.

I remember very clearly reasoning about this, and trying to work out how I could test my theory. My six year old brain quickly realized there was probably no good way to evaluate the relative “reality” of these two worlds, since they stood for incompatible things: On the one hand, the first world was the only one I had direct access to. On the other hand, everyone I cared about was in the second world.

Much later — when I was far older than six years of age — I learned that many philosophers throughout history had studied this very question. For none of us can ever truly get outside of our own heads, since we only have direct access to our own brains, not the brains of others. Yet to adopt Solipsism as a philosophy would be emotionally devastating, so we continually search for ways to reconcile the two worlds.

I remember that when the film “The Truman Show” came out, its premise seemed very familiar, since it brought me back to all of those early childhood musings. In my elementary school philosophical explorations, I had often looked around and asked myself “How do I know these people around me are real? I mean, how do I really know?”

Eventually, when I got older, resolution came to me in the form of Occam’s Razor: The idea that everyone was expending so much effort in a mere pretense of reality was far too complex an explanation. Besides, it suffered from the “Turtles all the way down” problem: If this reality was merely a facade, then how would I explain the reality that lay beyond the facade? Wouldn’t that be just a cover for yet another reality beyond, and so on ad infinitum?

Now when I think back on how all those thoughts were crowding into my six year old brain, it makes me suspect that such thoughts about existence have occupied other six year old brains as well. If we actually ever thought to talk with six year olds about these things, we might very well be surprised at what they could tell us.

Eccescopy, part 6

One could argue that coining a new word for what I’ve been describing is redundant, when we already have the perfectly good phrase “augmented reality”. But we’re not really talking about something like Layar, which provides a limited kind of augmented reality through your cell phone screen. We’d need to add some more words and phrases to “augmented reality” to give the whole story — like “social”, “unobtrusive”, “phoneless”, and “eye-relative”. We could, for example, say we’re discussing a Social Unobtrusive Phoneless Eye-Relative Augmented Reality. In other words, SUPER-AR!

Hmm. Maybe that sounds a little pompous. OK, for now I’m just going to stick with “eccescopy”. 🙂

Before delving too much more into the nuts and bolts of the underlying technology, it might be useful to think a bit about possible unexpected outcomes of having seamless augmented reality in our daily lives. Or in other words, “be careful what you wish for”.

Think back to that distant time before the advent of cell phones, if you can. Hard as it is to believe, nobody missed them. If you wanted to meet someone, you made a plan and you stuck to it. Now, of course, such a world seems inconceivable. After all, what happens if your plans change suddenly? As it turns out, the causality in the previous sentence actually goes both ways: Often your plans change suddenly because you have a cell phone. Or more precisely, plans change suddenly because you know it won’t be a social disaster to suddenly change your plans.

Looking around the web, I find far more thought given to these issues by designers than by technologists — which is not all that surprising. For example, this video presents a delightfully dystopian vision by Keiichi Matsuda of what might really happen if you got into the habit of relying on your computer for everything — even something as simple as preparing a cup of tea:




Halloween in Greenwich Village

Yes, the parade is crazy and wonderful and good loud fun, but I am far more impressed by the immense variety of costumes worn by ordinary New Yorkers I saw walking around Greenwich Village this evening.

In addition to the usual ghosts and goblins and grinning gruesomes from beyond the grave, and an entire army of refugees from Tim Burton films past, present and future, you can find every variety of personage from any story that you have ever heard of — and from a few that you haven’t.

I saw a man wearing the world’s largest cowboy hat pass an entire family of ketchup bottles, and on the very next block Glinda the good witch was parading arm in arm with a Flying Monkey who was so alarmingly believable that the mere sight of him made me want to run off and warn Dorothy.

I particularly like when people go beyond the expected and create their own mash-ups. I saw everything from an electric Spiderman Ninja to a sort of behorned Viking version of Clint Eastwood’s “Man with no name”. And some costumes just defy any attempt at explanation: One man was walking along the street gleefully dragging a vacuum cleaner behind him on a rope.

I saw a cop call out to a guy in a mad scientist outfit: “Hey, you look like Elton John!” The mad scientist shouted back “I’m Dr. Farnsworth.” “You look like Elton John to me,” the cop insisted. But the mad scientist was sticking to his guns. “No,” he said, “I am Dr. Farnsworth!”

When I passed by the good doctor, I said to him “You invented the TV!” The mad scientist stared at me blankly for a moment. Then his face lit up in a huge grin. “I invented the TV!” he announced proudly, before walking off into the night.

Eccescopy, part 5

Perhaps the most widely embraced recent pop cultural offering to the effect that “we can see whatever we want wherever we want” is the 1999 film The Matrix. In that movie, the solution is simple — and rather crazy. One’s entire physical world is replaced by a simulated world.



 

In a sense, the conceit behind The Matrix is like putting Neal Stephenson’s Metaverse and Star Trek’s Holodeck on steroids. Where those two fictional ideas suggest a 3D immersive cyberspace merely as a place to visit (the virtual reality world in Caprica has a similar conceit), The Matrix suggests that life itself be lived entirely within cyberspace — generally without even the awareness that one is inhabiting an artificial construct.

Of course that change ups the ante considerably. The artificial world must be perfect, because it serves, for all intents and purposes, as reality. Once this level of fidelity can be achieved, then anything is possible. People can have super powers, and objects can change form or even disappear instantaneously, just as they can in any computer graphic scene.

But is such an all-encompassing direct brain interface possible? There isn’t much indication at this point that it is. Or at least, nobody yet seems to have a real clue how to go about it. The problem isn’t the physical connection of electrode arrays to brains. That’s difficult, but not impossible. Science has already advanced considerably beyond the version shown below. And in the next twenty years direct brain/computer interface technology is likely to advance far beyond what we can do today.



 

No, the basic problem is that your perception of reality is already a construct — one maintained by your brain. For example, you don’t literally see things the way a camera does. At any moment in time, your eyes see only a tiny window into reality, from which your brain then constructs a plausible model. It is really this constructed model that you “see”.
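To get a feel for just how tiny that window is, here is a back-of-envelope estimate. The figures it uses are rough textbook approximations I am supplying for illustration, not numbers from this post: the sharp, high-resolution fovea spans only about 2° of visual angle, while the total visual field is on the order of a hemisphere (roughly 180° across).

```python
import math

# Rough estimate of how small the sharp part of the visual field is.
# Assumptions (approximate, for illustration only): the fovea spans
# about 2 degrees of visual angle; the whole visual field is taken
# to be a hemisphere (~180 degrees across).
fovea_half_angle = math.radians(1.0)  # half of the ~2 degree foveal cone

# Solid angle of a cone with that half-angle, compared to a hemisphere.
fovea_solid_angle = 2 * math.pi * (1 - math.cos(fovea_half_angle))
hemisphere_solid_angle = 2 * math.pi

fraction = fovea_solid_angle / hemisphere_solid_angle
print(f"Sharp foveal vision covers about {fraction:.4%} of the visual field")
```

The answer comes out to a few hundredths of one percent — everything else you think you “see” at any instant is being filled in by your brain’s constructed model.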

We don’t know very much about how this construction process works, which means we can’t hack into it with any effectiveness. And even if we could, a direct brain interface like the one in The Matrix would need to replace the considerable amount of image processing already done by our retina and optic nerve. We might also need to simulate the saccades and other movements made by our eyeballs as our brain continually refocuses its attention.

Most likely the best way to transmit visual information to our brains is the old fashioned way: by sending photons of visible light into our eyes.

Dating advice

Just to break things up a bit, I’m going to post something non-eccescopic today.

I attended a talk yesterday by Tim Johnson — the writer of many Dreamworks films. He pointed out that back when he directed Antz, there was a scene where Sharon Stone’s character asks Woody Allen’s character to dance (which in movie terms means she likes him, and that eventually she will sleep with him).

The script calls for Allen just to say “Yes”. But, Mr. Johnson revealed, Woody Allen ad libbed, and replaced the simple “Yes” with the much more effective line “Absolutely!”.

Fast forward ten years, from 1998 to 2008. In the David Fincher film The Curious Case of Benjamin Button, the character of Daisy, played by Cate Blanchett, at one point says to the eponymous character, played by Brad Pitt, “Sleep with me.”

In what turned out to be my favorite line of the movie, he replies “Absolutely!”.

It occurred to me, in that moment, that the writer Eric Roth probably lifted the line from Woody Allen’s ad lib of ten years earlier.

Which might have made it the first time in history that Woody Allen gave dating advice to Brad Pitt.

Eccescopy, part 4

If you’re going to create virtual objects that appear to float in the air between people, one way to do it is to actually put a virtual object in the air between people. This seems to be the principle of the Holodeck from Star Trek: The Next Generation. Such approaches have the disadvantage that you need some kind of site-specific projection device, so they are most likely not going to scale up to inhabit the entire shared world around us.

As I mentioned in an earlier post, an early fictional version of this projection-based approach is the one developed by the Krell in the 1956 film Forbidden Planet, clearly a direct inspiration for what George Lucas put on the screen twenty years later:



Art imitates art

Around 2002 Jeff Han and I collaborated on a project to try to make something for real, which we called Holodust.

Our basic approach was to draw the virtual object with a laser beam directly onto a cloud of dust. Of course you don’t know the exact position of each particle in a cloud of dust, which is why our plan was to use two lasers: An infrared laser would sweep through the cloud. Whenever it happened to hit a dust particle that would be useful to illuminate, a second — visible — laser would flash, thereby lighting up just that one dust particle.
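The two-laser idea can be sketched as a toy simulation. This is not the actual Holodust code — every name and parameter below is invented for illustration. Dust particles sit at unknown random positions; the virtual object is a floating circle; and we “flash” exactly those particles that the sweep happens to find close enough to the object:

```python
import math
import random

def holodust_frame(num_particles=10000, num_targets=200,
                   tolerance=0.05, seed=1):
    """Toy simulation of one frame of a Holodust-style display.

    (Illustrative sketch only — names and parameters are invented,
    not taken from the real Holodust project.)
    """
    random.seed(seed)
    # Dust particles at unknown random positions in a unit cube.
    cloud = [(random.random(), random.random(), random.random())
             for _ in range(num_particles)]
    # The virtual object: a circle of radius 0.3 floating at mid-height.
    targets = [(0.5 + 0.3 * math.cos(2 * math.pi * i / num_targets),
                0.5 + 0.3 * math.sin(2 * math.pi * i / num_targets),
                0.5)
               for i in range(num_targets)]
    lit = []
    for p in cloud:  # the infrared sweep "discovers" each particle in turn
        for t in targets:
            if sum((a - b) ** 2 for a, b in zip(p, t)) < tolerance ** 2:
                lit.append(p)  # flash the visible laser at this particle
                break
    return lit

points = holodust_frame()
print(len(points), "dust particles lit this frame")
```

Only a small fraction of the cloud gets lit on any given frame, which is the whole trick: the display never needs to know where the dust is in advance, only which particles it happens to catch near the virtual surface.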

And here is a link to some Java applets simulating a Holodust display:



Here is a visualization of what a Holodust display might look like, together with a photo from an experiment that Jeff built to test the principle:



Note that there is a big distinction between such displays and, say, the Heliodisplay by IO2 Technology, which projects a flat image seemingly in thin air (actually into a thin sheet of water mist). The Heliodisplay is not eccescopic, since you don’t see a different image when you walk around it.

In contrast, the wonderful 360° Light Field Display at USC is indeed more eccescopic than Holodust, because even the shading of the virtual object can change as you look at it from different directions. Unfortunately, it relies on a slanted metal mirror rotating at very high speed, so if you tried to touch it you would most likely destroy both the display and your hand.

Sometimes art imitates life. When I see the little sparkly dustlike particles in the air within the floating display of a virtual brain from Joss Whedon’s recent TV series Dollhouse, I definitely get the impression that it’s supposed to operate through some kind of Holodust:



Eccescopy, part 3

While creating visions from a computer that appear directly in the world around us is a wonderful thing, it’s not the be-all and end-all of eccescopy. It was arguably George Lucas who whetted our appetite for the real killer app: Using that information as a way for people to communicate with each other — without disrupting the sense of shared physical space.

The Star Wars films provide many distinct examples of this, of which here are just three:



Princess Leia in a beam of light



Playing Dejarik aboard the Millennium Falcon (“Let the Wookiee win”)


Rebels watching a visualization of the Death Star circling Endor

 

What’s most striking to me about these scenes, and others like them, is the clear sense they convey that in the world of Star Wars there is no need for computer screens, since a technology exists that allows information to simply appear in mid-air.

And yet the world of Star Wars is filled with computer displays. Perhaps George Lucas hadn’t really thought this through. But more likely, he probably realized that a sci-fi world without traditional computer screens would be completely incomprehensible to contemporary audiences.