When you live in New York City

Once upon a time my brother lived in downtown Manhattan. I recall that he enjoyed it a lot. Like any self-respecting young man taking a break from earning his M.D. at a top medical school, he spent his time in Manhattan fronting a rock band and picking up a Ph.D. in mathematics.

Eventually my brother left NY to go back and finish medical school, and he now lives in a different city. But every now and again he and his family still come to visit New York, and they always have a great time.

I still recall, all these years later, how my brother described his mixed feelings as he was about to leave NY for Chicago to finish up his M.D. I believe his exact words to me were: “When you live in New York City, you realize that everything is convenient, and nothing is easy.”

As a confirmed New Yorker, I think I can say, with considerable pride, that he was right on the money.

Eccescopy, part 10

Head-mounted displays are usually clunky, because they are designed as research instruments. Here, for example, is a perfectly functional virtual reality device that you would probably not want to use at home:

It’s pretty clear that an eccescopic populace would need something a lot less bulky and intrusive. The following device by Vuzix is a lot closer:

But it’s still not quite there, for at least two reasons: (1) the device doesn’t let you see the actual reality around you, and (2) when you are wearing it, people can’t establish eye contact with you.

The first problem could be tackled by installing little outward-looking video cameras (in fact Vuzix makes just such a product), but that not only degrades one’s view of the actual world but also makes the second problem worse — with the cameras attached, the user ends up looking like some sort of scary cyber-martian:

There are head-mounted devices that let you look through them, so you can see the real world while also looking at cyber-enhancements. One of these is the Nomad, from Virtual Realities, Inc.:

It’s a very impressive machine, but it’s not going to be winning consumer fashion contests any time soon. Much closer to the mark, in terms of something one might actually wear, is the Brother AiRScouter:

How far off is this form factor from what is required to enable an eccescopic world? That’s a topic for next time.

Meta-art

It occurred to me today that there is a certain class of person who might be called a “meta-artist”. Most people, when they set out to, say, make music, will find the instrument that best suits them, and will proceed to master that instrument.

But then there are people whose love of music inspires them to look at instruments and ask the question “what could I do to make this instrument better?” People like Les Paul, Robert Moog and Laurie Spiegel. Such artists don’t just want to give the world music. They want to give the world a better way to make music.

The same thing happens in all of the arts. But it happens in a unique way in the computer arts.

When I was sixteen years old I saw Walt Disney’s Fantasia for the first time. From that moment forth I knew I wanted to do that with my life — to create visions of the worlds that we can see in our dreams.

As I set about doing this, I quickly found myself proceeding in a meta-artistic way. I didn’t end up drawing pictures (although I could draw pictures reasonably well). Rather, I started to write computer programs that would simulate the worlds I wanted to explore, that might create the visions I wanted to see.

I spent a lot of time doing math and creating new algorithms. Yet I wasn’t particularly interested in the math or the algorithms. Or rather, I was just as interested in them as, say, an architect is interested in a screwdriver. Math and computer software were merely the tools that could lend me greater power to explore the sorts of wonders that I had seen that day, sprung from the minds of such visionaries as Bill Tytla and Oskar Fischinger.

Now that an ever greater number of kids are becoming versed in the ways of computers, the intersection is growing between kids who yearn to create art and kids who learn to wield the awesome power of programming. We might very well be entering the age of the meta-artist. A brave new world indeed!

Eccescopy, part 9

How would things look different in an eccescopic world? I’ve been having great conversations about this with some really thoughtful people, and I’ve begun to realize that the difference between a world of computer screens (even the little screens on SmartPhones) and a world where information is truly in the air around you, is at least as large as the difference between, say, books on paper and the Web.

There are at least two distinct reasons for this: First, eccescopic interfaces will allow us to interact with other people directly, without any screens getting in the way. Second, they will allow us to “paint” and otherwise annotate the physical world around us in ways that are visible only to some people and not to others.

Let’s take the first point. Suppose you and I are having a conversation about American history, and a question comes up, such as: “What was the name of Thomas Jefferson’s wife?” In today’s world, there would be a need for at least one of us to break eye contact with the other, type a query onto a computer screen (let’s say a SmartPhone), and then reestablish visual contact with our conversant.

Meanwhile, the other person is probably also visually disengaging — since it is impossible to maintain eye contact with a person who is staring down at a SmartPhone screen.

But if we knew that the entire search transaction — both query and response — were accessible wherever we already happen to be looking, then there would be no need to break eye contact.

Furthermore, such a scenario will encourage wide adoption of ways of entering text into a computer that do not require you to take your eyes away from the person you are talking to. There have indeed been solutions for this, such as the “Twiddler” pocket-sized keyboard that Thad Starner uses for his research, but these have not come into general use — because the situations in which they are useful have been socially marginal.

In an eccescopic world, such “eyes-free” methods of entering text might become not only socially acceptable, but socially necessary.
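
To give a flavor of how chorded, eyes-free text entry works in principle: each character is produced by pressing a small combination of buttons at once, so the fingers never have to hunt for individual keys by sight. Here is a minimal sketch in Python; the chord table below is invented purely for illustration, and is not the actual Twiddler layout.

    # Toy illustration of chorded text entry, the general idea behind devices
    # like the Twiddler. The chord-to-character table is made up for this
    # example; it is not the real Twiddler layout.

    CHORD_TABLE = {
        frozenset(["index"]): "e",
        frozenset(["middle"]): "t",
        frozenset(["ring"]): "a",
        frozenset(["index", "middle"]): "o",
        frozenset(["index", "ring"]): "n",
        frozenset(["middle", "ring"]): "s",
        frozenset(["index", "middle", "ring"]): " ",
    }

    def decode(chords):
        """Turn a sequence of chords (buttons pressed together) into text."""
        return "".join(CHORD_TABLE.get(frozenset(chord), "?") for chord in chords)

    # Three chords, typed without ever looking down:
    print(decode([["index"], ["ring"], ["middle"]]))  # prints "eat"

The particular mapping doesn’t matter; what matters is that a practiced hand can type this way while the eyes stay on the other person.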

Next time I’ll talk about the other point: annotating the world around us, in ways that are able to appear different to different people.
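
For now, just to make that idea a little more concrete, here is a rough sketch in Python (all of the names are hypothetical) of how an annotation “painted” onto the world might carry its own list of who is allowed to see it:

    # Hypothetical sketch: an annotation attached to a place in the world,
    # visible only to the people named in its visibility set.

    from dataclasses import dataclass, field

    @dataclass
    class Annotation:
        text: str                                      # what the annotation says
        position: tuple                                # where it sits (x, y, z)
        visible_to: set = field(default_factory=set)   # empty set means public

        def is_visible(self, viewer):
            return not self.visible_to or viewer in self.visible_to

    public_note  = Annotation("Great coffee inside", (10.0, 2.0, 0.0))
    private_note = Annotation("We met here in 2003", (10.0, 2.0, 1.5), {"alice", "bob"})

    for viewer in ["alice", "carol"]:
        seen = [a.text for a in (public_note, private_note) if a.is_visible(viewer)]
        print(viewer, "sees:", seen)

Everyone standing in the same spot would see the first note, while only alice and bob would see the second.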

Writer/Director

This evening, out and about in Manhattan looking for a Saturday night film, my friend and I passed by the Village East Cinema at 2nd Avenue and 12th Street. There was a man outside with a big sign, hawking the new independent film Cherry.

Instinctively distrustful of all men with big signs, my friend and I moved on, first to a theater on 3rd Avenue, and then to one on Broadway. As it happened, all of the films at those theaters looked like they would be the sort of low-aiming movie Hollywood makes as part of some sort of perversely concerted effort to melt down our collective brain cells into sad little puddles of rotted neural tissue.

We didn’t know what this Cherry film was, but given the competition, it was starting to sound intriguing. Circling back to the Village East Cinema, we discovered that the man with the giant placard was Matthew Fine, the producer. His brother Jeff Fine, the writer/director, was going to do a Q&A after the film.

I was completely charmed. I have gotten so used to the idea of a film being a vast operation that takes at least one full Iraqi War month of national resources, some kind of James Cameron extravaganza with a billion-dollar market cap, that to see the producer and writer/director standing on a street corner, hawking their little film like it was lemonade, made something in my soul sing for joy.

We bought tickets, and man, Cherry is one phenomenally good film. People who read this blog with any regularity know that I am not one to praise a movie lightly. As it happens, a recent review by Mike Hale in the New York Times completely missed the point of the film. Mr. Hale seemed emotionally unequipped to review a movie that actually takes its characters seriously. As far as I can tell, he was looking for cheap titillation, and was disappointed not to find it.

In fact, this is a brilliantly written, perfectly directed and edited, masterfully cast and acted telling of a compelling coming-of-age story. It has all of the elements of the classic hero’s journey (Joseph Campbell would have been proud), and what it does with those elements is insightful, deeply affecting and highly original.

If you happen to be in New York, it’s playing for only another five days at the Village East Cinema (through Thursday, November 11). Trust me, you should see it if you can.

Eccescopy, part 8

There was a time, not too long ago, when putting an electronic auditory enhancement device in your ear was something you did surreptitiously. A hearing aid was something you tried to hide — ideally you didn’t want anyone to know that you needed one. For example, here is an ad for a hearing aid designed to be as invisible as possible:

This is consistent with the principle that people generally try, whenever possible, to appear “more normal”. Since auditory impairment is seen as “less normal”, a hearing aid is viewed as something to hide.

But there has been a fascinating recent trend in the other direction. When a hearing device on one’s ear is seen as a source of empowerment, as in the case of Bluetooth hands-free cellphone headsets, people don’t try to hide these devices. Rather, they try to show them off.

The ultimate current expression of this is the Aliph “Jawbone” headset:

Suddenly it’s cool and sexy to have a piece of hi-tech equipment attached to your ear. I think that the key distinction here is between “I am trying to fix a problem” and “I am giving myself a superpower”. The former makes you socially vulnerable, whereas the latter makes you socially powerful.

This is something to consider when designing an eccescopic display device.

Dinner party

This evening I went to a dinner party.

Nothing spectacular happened, other than a perfect evening. The host and hostess had prepared a feast, various guests brought wine and desserts, and the stage was set for the simple magic of people sitting around a table talking and getting to know one another.

I had not met most of the guests, but I was not surprised to find how much I liked them. This is the old fashioned kind of social network — the kind that existed for millennia before Facebook. People whom you really like invite you to dinner, along with other people whom they really like, and soon you find yourself immersed in hours of fascinating conversations with newfound friends.

Afterward, I was amazed to discover that the evening had lasted nearly six hours — so easily did the time fly by. It’s heartening to realize that in this modern age of hi-tech computer games and 3D movies with three hundred million dollar budgets, the most fun is still to be had through the simple pleasure of a group of friends sitting around a table, enjoying each other’s company.

Eccescopy, part 7

I’ve mentioned before, in another context, Charles Darwin’s observation that every genotype requires a viable phenotype. That is, no mutation can survive unless it can produce viable offspring. Technology is like biological evolution in that it can’t just magically jump far ahead. Every step along the path to innovation must answer to some real need, or it will die in the marketplace before it can enable the next step.

For example, I don’t think we will achieve widespread eccescopy through surgery. Yes, technically we could give everyone an artificial lens implant, but the problem is that until there is a good reason for such a drastic intervention, people won’t do it.

It’s not even that invasive eye surgery is so exotic. You probably know many people who have had cataracts, and are walking around today with an acrylic lens implant — or maybe two. You don’t know who they are, because it’s not something people generally talk about. The operation itself is relatively simple and safe, requiring only local anaesthetic, and no stay in a hospital.

But it’s only done because it avoids blindness — a very different value proposition than, say, implanting an artificial lens so you can do Google searches within your eyeball. Most people won’t opt for invasive surgery unless it helps them to be more “normal”, however that word is currently defined in their culture.

In other words, even if you accept the hypothesis that artificially enhanced eyes, surgically upgraded at infancy, are the long-term future of humanity, you can’t get there from here — at least not directly. First there would need to be a non-invasive technology, easy to adopt, that allows the underlying ecosystem to evolve, the layers of application code to be written, and the world around us to become populated by well-designed and compelling cybernetic ghosts.

Something you can wear, rather than implant. Next time I’ll talk about some candidate technologies.

At six

When I was six years old, I developed a theory that the Universe was divided into two worlds: The world that was inside my head, and the world that was outside my head.

I remember very clearly reasoning about this, and trying to work out how I could test my theory. My six-year-old brain quickly realized there was probably no good way to evaluate the relative “reality” of these two worlds, since they stood for incompatible things: On the one hand, the first world was the only one I had direct access to. On the other hand, everyone I cared about was in the second world.

Much later — when I was far older than six years of age — I learned that many philosophers throughout history had studied this very question. For none of us can ever truly get outside of our own heads, since we only have direct access to our own brains, not the brains of others. Yet to adopt solipsism as a philosophy would be emotionally devastating, so we continually search for ways to reconcile the two worlds.

I remember that when the film “The Truman Show” came out, its premise seemed very familiar, since it brought me back to all of those early childhood musings. In my elementary school philosophical explorations, I had often looked around and asked myself “How do I know these people around me are real? I mean, how do I really know?”

Eventually, when I got older, resolution came to me in the form of Occam’s Razor: The idea that everyone was expending so much effort in a mere pretense of reality was far too complex an explanation. Besides, it suffered from the “Turtles all the way down” problem: If this reality was merely a facade, then how would I explain the reality that lay beyond the facade? Wouldn’t that be just a cover for yet another reality beyond, and so on ad infinitum?

Now when I think back on how all those thoughts were crowding into my six-year-old brain, it makes me suspect that such thoughts about existence have occupied other six-year-old brains as well. If we ever actually thought to talk with six-year-olds about these things, we might very well be surprised at what they could tell us.

Eccescopy, part 6

One could argue that coining a new word for what I’ve been describing is redundant, when we already have the perfectly good phrase “augmented reality”. But we’re not really talking about something like Layar, which provides a limited kind of augmented reality through your cell phone screen. We’d need to add some more words and phrases to “augmented reality” to give the whole story — like “social”, “unobtrusive”, “phoneless”, and “eye-relative”. We could, for example, say we’re discussing a Social Unobtrusive Phoneless Eye-Relative Augmented Reality. In other words, SUPER-AR!

Hmm. Maybe that sounds a little pompous. OK, for now I’m just going to stick with “eccescopy”. 🙂

Before delving too much more into the nuts and bolts of the underlying technology, it might be useful to think a bit about possible unexpected outcomes of having seamless augmented reality in our daily lives. Or in other words, “be careful what you wish for”.

Think back to that distant time before the advent of cell phones, if you can. Hard as it is to believe, nobody missed them. If you wanted to meet someone, you made a plan and you stuck to it. Now, of course, such a world seems inconceivable. After all, what happens if your plans change suddenly? As it turns out, the causality in the previous sentence actually goes both ways: Often your plans change suddenly because you have a cell phone. Or more precisely, plans change suddenly because you know it won’t be a social disaster to suddenly change your plans.

Looking around the web, I find far more thought given to these issues by designers than by technologists — which is not all that surprising. For example, this video presents a delightfully dystopian vision by Keiichi Matsuda of what might really happen if you got into the habit of relying on your computer for everything — even something as simple as preparing a cup of tea: