Eccescopy, part 9

How would things look different in an eccescopic world? I’ve been having great conversations about this with some really thoughtful people, and I’ve begun to realize that the difference between a world of computer screens (even the little screens on SmartPhones) and a world where information is truly in the air around you is at least as large as the difference between, say, books on paper and the Web.

There are at least two distinct reasons for this: First, eccescopic interfaces will allow us to interact with other people directly, without any screens getting in the way. Second, they will allow us to “paint” and otherwise annotate the physical world around us in ways that are visible only to some people and not to others.

Let’s take the first point. Suppose you and I are having a conversation about American history, and a question comes up, such as: “What was the name of Thomas Jefferson’s wife?” In today’s world, at least one of us would need to break eye contact with the other, type a query into a computer (let’s say a SmartPhone), and then reestablish visual contact with our conversant.

Meanwhile, the other person is probably also visually disengaging — since it is impossible to maintain eye contact with a person who is staring down at a SmartPhone screen.

But if we knew that the entire search transaction — both query and response — were accessible wherever we already happened to be looking, then there would be no need to break eye contact.

Furthermore, such a scenario would encourage wide adoption of ways of entering text into a computer that do not require you to take your eyes away from the person you are talking to. There have indeed been solutions for this, such as the “Twiddler” pocket-sized chording keyboard that Thad Starner uses for his research, but these have not come into general use — because the situations in which they are useful have been socially marginal.

In an eccescopic world, such “eyes-free” methods of entering text might become not only socially acceptable, but socially necessary.

Next time I’ll talk about the other point: annotating the world around us, in ways that are able to appear different to different people.

Writer/Director

This evening, out and about in Manhattan looking for a Saturday night film, my friend and I passed by the Village East Cinema at 2nd Avenue and 12th Street. There was a man outside with a big sign, hawking the new independent film Cherry.

Instinctively distrustful of all men with big signs, my friend and I moved on, first to a theater on 3rd Avenue, and then to one on Broadway. As it happened, all of the films at those theaters looked like they would be the sort of low-aiming movie Hollywood makes as part of a perversely concerted effort to melt down our collective brain cells into sad little puddles of rotted neural tissue.

We didn’t know what this Cherry film was, but given the competition, it was starting to sound intriguing. Circling back to the Village East Cinema, we discovered that the man with the giant placard was Matthew Fine, the producer. His brother Jeff Fine, the writer/director, was going to do a Q&A after the film.

I was completely charmed. I have gotten so used to the idea of a film as a vast operation that consumes at least one full Iraq War month of national resources, some kind of James Cameron extravaganza with a billion-dollar market cap, that to see the producer and writer/director standing on a street corner, hawking their little film like it was lemonade, made something in my soul sing for joy.

We bought tickets, and man, Cherry is one phenomenally good film. People who read this blog with any regularity know that I am not one to praise a movie lightly. As it happens, a recent review by Mike Hale in the New York Times completely missed the point of the film. Mr. Hale seemed emotionally unequipped to review a movie that actually takes its characters seriously. As far as I can tell, he was looking for cheap titillation, and was disappointed not to find it.

In fact, this is a brilliantly written, perfectly directed and edited, masterfully cast and acted, compelling coming-of-age story. It has all of the elements of the classic hero’s journey (Joseph Campbell would have been proud), and what it does with those elements is insightful, deeply affecting and highly original.

If you happen to be in New York, it’s playing for only another five days at the Village East Cinema (through Thursday, November 11). Trust me, you should see it if you can.

Eccescopy, part 8

There was a time, not too long ago, when putting an electronic auditory enhancement device in your ear was something you did surreptitiously. A hearing aid was something you tried to hide — ideally you didn’t want anyone to know that you needed one. For example, here is an ad for a hearing aid designed to be as invisible as possible:

[image: advertisement for a hearing aid designed to be nearly invisible]

This is consistent with the principle that people generally try, whenever possible, to appear “more normal”. Since auditory impairment is seen as “less normal”, a hearing aid is viewed as something to hide.

But there has been a fascinating recent trend in the other direction. When a hearing device on one’s ear is seen as a source of empowerment, as in the case of hands-free Bluetooth headsets for cellphones, people don’t try to hide these devices. Rather, they try to show them off.

The ultimate current expression of this is the Aliph “Jawbone” headset:

[image: Aliph “Jawbone” Bluetooth headset]

Suddenly it’s cool and sexy to have a piece of hi-tech equipment attached to your ear. I think that the key distinction here is between “I am trying to fix a problem” and “I am giving myself a superpower”. The former makes you socially vulnerable, whereas the latter makes you socially powerful.

This is something to consider when designing an eccescopic display device.

Dinner party

This evening I went to a dinner party.

Nothing spectacular happened, other than a perfect evening. The host and hostess had prepared a feast, various guests brought wine and desserts, and the stage was set for the simple magic of people sitting around a table talking and getting to know one another.

I had not met most of the guests, but I was not surprised to find how much I liked them. This is the old-fashioned kind of social network — the kind that existed for millennia before Facebook. People whom you really like invite you to dinner, along with other people whom they really like, and soon you find yourself immersed in hours of fascinating conversations with newfound friends.

Afterward, I was amazed to discover that the evening had lasted nearly six hours — so easily did the time fly by. It’s heartening to realize that in this modern age of hi-tech computer games and 3D movies with three hundred million dollar budgets, the most fun is still to be had through the simple pleasure of a group of friends sitting around a table, enjoying each other’s company.

Eccescopy, part 7

I’ve mentioned before, in another context, Charles Darwin’s observation that every genotype requires a viable phenotype. That is, no mutation can survive unless it can produce viable offspring. Technology is like biological evolution in that it can’t just magically jump far ahead. Every step along the path to innovation needs to correspond to a set of needs, or it will die in the marketplace before enabling the next step.

For example, I don’t think we will achieve widespread eccescopy through surgery. Yes, technically we could give everyone an artificial lens implant, but the problem is that until there is a good reason for such a drastic intervention, people won’t do it.

It’s not even that invasive eye surgery is so exotic. You probably know many people who have had cataracts, and are walking around today with an acrylic lens implant — or maybe two. You don’t know who they are, because it’s not something people generally talk about. The operation itself is relatively simple and safe, requiring only local anaesthetic, and no stay in a hospital.

But it’s only done because it avoids blindness — a very different value proposition than, say, implanting an artificial lens so you can do Google searches within your eyeball. Most people won’t opt for invasive surgery unless it helps them to be more “normal”, however that word is currently defined in their culture.

In other words, even if you accept the hypothesis that artificially enhanced eyes, surgically upgraded at infancy, are the long term future of humanity, you can’t get there from here — at least not directly. First there would need to be a non-invasive technology, easy to adopt, to allow the underlying ecosystem to evolve, the layers of application code to be written, and the world around us to become populated by well-designed and compelling cybernetic ghosts.

Something you can wear, rather than implant. Next time I’ll talk about some candidate technologies.

At six

When I was six years old, I developed a theory that the Universe was divided into two worlds: The world that was inside my head, and the world that was outside my head.

I remember very clearly reasoning about this, and trying to work out how I could test my theory. My six-year-old brain quickly realized there was probably no good way to evaluate the relative “reality” of these two worlds, since they stood for incompatible things: On the one hand, the first world was the only one I had direct access to. On the other hand, everyone I cared about was in the second world.

Much later — when I was far older than six years of age — I learned that many philosophers throughout history had studied this very question. For none of us can ever truly get outside of our own heads, since we only have direct access to our own brains, not the brains of others. Yet to adopt solipsism as a philosophy would be emotionally devastating, so we continually search for ways to reconcile the two worlds.

I remember that when the film “The Truman Show” came out, its premise seemed very familiar, since it brought me back to all of those early childhood musings. In my elementary school philosophical explorations, I had often looked around and asked myself “How do I know these people around me are real? I mean, how do I really know?”

Eventually, when I got older, resolution came to me in the form of Occam’s Razor: The idea that everyone was expending so much effort in a mere pretense of reality was far too complex an explanation. Besides, it suffered from the “Turtles all the way down” problem: If this reality was merely a facade, then how would I explain the reality that lay beyond the facade? Wouldn’t that be just a cover for yet another reality beyond, and so on ad infinitum?

Now when I think back on how all those thoughts were crowding into my six-year-old brain, it makes me suspect that such thoughts about existence have occupied other six-year-old brains as well. If we actually ever thought to talk with six-year-olds about these things, we might very well be surprised at what they could tell us.

Eccescopy, part 6

One could argue that coining a new word for what I’ve been describing is redundant, when we already have the perfectly good phrase “augmented reality”. But we’re not really talking about something like Layar, which provides a limited kind of augmented reality through your cell phone screen. We’d need to add some more words and phrases to “augmented reality” to give the whole story — like “social”, “unobtrusive”, “phoneless”, and “eye-relative”. We could, for example, say we’re discussing a Social Unobtrusive Phoneless Eye-Relative Augmented Reality. In other words, SUPER-AR!

Hmm. Maybe that sounds a little pompous. OK, for now I’m just going to stick with “eccescopy”. 🙂

Before delving too much more into the nuts and bolts of the underlying technology, it might be useful to think a bit about possible unexpected outcomes of having seamless augmented reality in our daily lives. Or in other words, “be careful what you wish for”.

Think back to that distant time before the advent of cell phones, if you can. Hard as it is to believe, nobody missed them. If you wanted to meet someone, you made a plan and you stuck to it. Now, of course, such a world seems inconceivable. After all, what happens if your plans change suddenly? As it turns out, the causality in the previous sentence actually goes both ways: Often your plans change suddenly because you have a cell phone. Or more precisely, plans change suddenly because you know it won’t be a social disaster to suddenly change your plans.

Looking around the web, I find far more thought given to these issues by designers than by technologists — which is not all that surprising. For example, this video presents a delightfully dystopian vision by Keiichi Matsuda of what might really happen if you got into the habit of relying on your computer for everything — even something as simple as preparing a cup of tea:

[embedded video by Keiichi Matsuda]

Halloween in Greenwich Village

Yes, the parade is crazy and wonderful and good loud fun, but I am far more impressed by the immense variety of costumes worn by ordinary New Yorkers I saw walking around Greenwich Village this evening.

In addition to the usual ghosts and goblins and grinning gruesomes from beyond the grave, and an entire army of refugees from Tim Burton films past, present and future, you can find every variety of personage from any story that you have ever heard of — and from a few that you haven’t.

I saw a man wearing the world’s largest cowboy hat pass an entire family of ketchup bottles, and on the very next block Glinda the Good Witch was parading arm in arm with a Flying Monkey who was so alarmingly believable that the mere sight of him made me want to run off and warn Dorothy.

I particularly like when people go beyond the expected and create their own mash-ups. I saw everything from an electric Spiderman Ninja to a sort of behorned Viking version of Clint Eastwood’s “Man with no name”. And some costumes just defy any attempt at explanation: One man was walking along the street gleefully dragging a vacuum cleaner behind him on a rope.

I saw a cop call out to a guy in a mad scientist outfit: “Hey, you look like Elton John!” The mad scientist shouted back “I’m Dr. Farnsworth.” “You look like Elton John to me,” the cop insisted. But the mad scientist was sticking to his guns. “No,” he said, “I am Dr. Farnsworth!”

When I passed by the good doctor, I said to him “You invented the TV!” The mad scientist stared at me blankly for a moment. Then his face lit up in a huge grin. “I invented the TV!” he announced proudly, before walking off into the night.

Eccescopy, part 5

Perhaps the most widely embraced recent pop cultural offering to the effect that “we can see whatever we want wherever we want” is the 1999 film The Matrix. In that movie, the solution is simple — and rather crazy. One’s entire physical world is replaced by a simulated world.

[image: still from The Matrix]

In a sense, the conceit behind The Matrix is like putting Neal Stephenson’s Metaverse and Star Trek’s Holodeck on steroids. Where those two fictional ideas suggest a 3D immersive cyberspace merely as a place to visit (the virtual reality world in Caprica has a similar conceit), The Matrix suggests that life itself be lived entirely within cyberspace — generally without even the awareness that one is inhabiting an artificial construct.

Of course that ups the ante considerably. The artificial world must be perfect, because it serves, for all intents and purposes, as reality. Once this level of fidelity can be achieved, then anything is possible. People can have superpowers, and objects can change form or even disappear instantaneously, just as they can in any computer graphics scene.

But is such an all-encompassing direct brain interface possible? There isn’t much indication at this point that it is. Or at least, nobody yet seems to have a real clue how to go about it. The problem isn’t the physical connection of electrode arrays to brains. That’s difficult, but not impossible. Science has already advanced considerably beyond the version shown below. And in the next twenty years direct brain/computer interface technology is likely to advance far beyond what we can do today.

[image: electrode-array brain/computer interface]

No, the basic problem is that your perception of reality is already a construct — one maintained by your brain. For example, you don’t literally see things the way a camera does. At any moment in time, your eyes see only a tiny window into reality, from which your brain then constructs a plausible model. It is really this constructed model that you “see”.

We don’t know very much about how this construction process works, which means we can’t hack into it with any effectiveness. And even if we could, a direct brain interface like the one in The Matrix would need to replace the considerable amount of image processing already done by our retina and optic nerve. We might also need to simulate the saccades and other movements made by our eyeballs as our brain continually refocuses its attention.

Most likely the best way to transmit visual information to our brains is the old-fashioned way: by sending photons of visible light into our eyes.

Dating advice

Just to break things up a bit, I’m going to post something non-eccescopic today.

I attended a talk yesterday by Tim Johnson, the director of many DreamWorks films. He pointed out that back when he directed Antz, there was a scene where Sharon Stone’s character asks Woody Allen’s character to dance (which in movie terms means she likes him, and that eventually she will sleep with him).

The script called for Allen just to say “Yes”. But, Mr. Johnson revealed, Woody Allen ad-libbed, and replaced the simple “Yes” with the much more effective line “Absolutely!”.

Fast forward ten years, from 1998 to 2008. In the David Fincher film Benjamin Button, the character of Daisy, played by Cate Blanchett, at one point says to the eponymous character, played by Brad Pitt, “Sleep with me.”

In what turned out to be my favorite line of the movie, he replies “Absolutely!”.

It occurred to me, in that moment, that the writer Eric Roth probably lifted the line from Woody Allen’s ad lib of ten years earlier.

Which might have made it the first time in history that Woody Allen gave dating advice to Brad Pitt.