Eccescopy, part 14

This week I visited the MIT Media Lab, where I talked to several of my friends on the faculty about eccescopy. I was surprised and rather delighted at the positive response. For example, Ramesh Raskar is fascinated by the possible form factors, and the ways they could be achieved.

Hiroshi Ishii feels that it is important to look at the entire Interface Ecology — how individuals and social networks communicate, and how embodied, face-to-face, cyber-enhanced communication fits together with ideas about community.

And Pattie Maes has pretty much been doing this kind of thing anyway with her students, through things like smart light bulbs (which contain cameras and projectors), and position-tracked portable projectors that you carry around with you to visually augment physical objects in your environment (an idea that has been explored by Pierre Wellner, Hiroshi Ishii and Ramesh Raskar).

We all agreed that Will Wright’s description of The SIMS 5 (“The game is already in the box … you just can’t open it yet”) is a useful way to frame things: while we are developing the physical support layer, we should be building applications as though that support layer were already available.

Today I was also told by a student about a book I am now going to try to read soon — the 2006 novel Rainbows End by Vernor Vinge. Apparently it depicts a near-future world in which everyone wears a portable display that lets them see a cyber-enhanced world superimposed upon the real world.

Well, almost everyone wears a portable display. In the book there are some anti-technology rebels who insist on using good old-fashioned computer screens. On some days I know just how they feel.

Eccescopy, part 13

It would be appealing to think in terms of the eccescope in a form factor of a contact lens. There actually has been some interesting work in putting electronics on contact lenses, with the goal of eventually building a contact lens display. Babak Parviz and his former student Harvey Ho at the University of Washington have demonstrated that a contact lens containing electronic components can be worn by a rabbit for twenty minutes with no ill effects. The contact lens is shown below:

[image: the contact lens with embedded electronics]

It has not been reported whether the rabbit thought this was a good idea, but that is a subject for another post.

The hope is that this will lead to eventual development of a kind of “hololens”, in which the contact lens somehow creates an image upon the eye’s retina:

[image: hololens]

Unfortunately, this is more difficult than it might seem. The problem is that the hololens is at an awkward place within the optical system. An image that originates within a contact lens — pressed against the eye’s cornea — is far too near to be imaged by the eye’s own lens via conventional optics. Either collimated light would need to originate inside the hololens (in other words, a tiny laser would need to be embedded inside the hololens), or else the lens would need to incorporate a fine array of micro-scale LEDs, each with its own tiny collimating lens, and each on the order of a few microns in width.
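
A quick back-of-the-envelope way to see the difficulty, using the simple thin-lens relation (a rough model of the eye’s actual optics):

    1/f = 1/d_o + 1/d_i

Here d_o is the distance from the lens to the light source and d_i is the distance from the lens to the image (the retina). With a source sitting essentially at the cornea, d_o ≈ 0, so 1/d_o grows without bound, and no focal length the eye can achieve will bring the light to a focus on the retina. Collimated light is the opposite extreme, d_o = ∞: then d_i = f, which is exactly the situation of a relaxed eye focusing on its retina. That is why the light leaving the hololens needs to be collimated.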

This is almost certainly possible in the long term, but it is going to take a formidably large amount of engineering to get all the components in place. So while this may well be the eventual future, I wouldn’t bet on this approach winning out for the first generation of eccescopic displays.

Dundee

Having just returned from a delightful and all too short trip to Dundee, Scotland (a beautiful small town just ’round the coast and across the Tay from St. Andrews), I find my mind still pleasantly reverberating from just how wonderful the people are. There is an amazing warmth to them, a genuine sense of acceptance, that is a lovely thing to experience.

I thoroughly enjoy living in the big city (and it doesn’t get much bigger than New York), but for a change it was a delight to be bathed in the magic of a small town, a place where everyone, it seems, knows everyone — and absolutely everybody has a deliciously sardonic sense of humor.

And there’s nothing quite like standing by the Tay Bridge whilst reading William McGonagall’s infamous poem “The Tay Bridge Disaster”, which the residents of Dundee are proud to explain is possibly the worst poem ever written.

Eccescopy, part 12

An eccescope, like any eye-centric augmented reality device, doesn’t merely need to display an image into your eye. It also needs to make sure that this image lines up properly with the physical world around you. That requires tracking the position and orientation of your head with high speed and accuracy. Otherwise, the displayed image will appear to visually drift and swim within the world, rather than seeming like a part of your environment.

Fortunately, there are technologies today that are up to the job, although they have not yet been integrated into one package. For high-speed, reliable measurements, there are devices like the Epson Toyocom Inertial Measurement Unit (IMU). This device uses tiny electronic gyroscopes and accelerometers to measure rotation and movement quickly and accurately. And it’s only 10 mm long (about 0.4 inches), which means it can fit unobtrusively on an earpiece:

[image: the Epson Toyocom IMU]

But that’s not quite enough. Inertial sensors respond fast to head movements, but over time the measured angle will drift. The Epson Toyocom IMU has a drift of about six degrees per hour, which is very impressive, but still not good enough — an eccescope needs to have no significant angular drift over time.
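
To put that drift in concrete terms: six degrees per hour is 0.1 degrees per minute. If a virtual object sits at a comfortable working distance (say 60 cm away, a number chosen purely for illustration), then a drift of 0.1 degrees shifts its apparent position by roughly

    0.6 m × tan(0.1°) ≈ 1 mm

every minute. Within a few minutes of use, an object that is supposed to be pinned to your desk would visibly crawl across it.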

Fortunately there are other technologies that give absolute measurements, even though they are not as fast. For example, a small video camera can look out into the environment and track features of the room around you (such as corners, where the edges of things meet). As you move your head, the camera tracks the apparent movement of these features.

The two types of information — fast but drift-prone inertial tracking and slow but drift-free video feature tracking — can be combined to give a very good answer to the question: “What is the exact position and orientation of my head right now?” And with good engineering, all of the required components can fit in an earpiece using today’s technology.
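
Here is a minimal sketch of the idea, as a toy single-axis simulation. The numbers and the simple blending rule (a so-called complementary filter) are my own illustrative assumptions, not a description of any particular product:

    import random

    DT = 0.001      # one gyro sample per millisecond (1 kHz)
    ALPHA = 0.98    # blending: trust the gyro short-term, the camera long-term

    true_heading = 30.0   # degrees; in this simulation the head is held still
    estimate = 0.0        # we start out not knowing where the head points
    gyro_bias = 0.5       # deg/s of sensor bias -- the source of slow drift

    for step in range(10_000):                 # ten simulated seconds
        # Fast path: integrate the gyro reading. On its own, the bias
        # would make this estimate wander half a degree every second.
        gyro_reading = 0.0 + gyro_bias         # true rate is zero (head is still)
        estimate += gyro_reading * DT

        # Slow path: about 30 times a second the camera's feature tracker
        # reports an absolute (if noisy) heading, pulling the drifting
        # estimate back toward the truth.
        if step % 33 == 0:
            camera_fix = true_heading + random.gauss(0.0, 0.5)
            estimate = ALPHA * estimate + (1.0 - ALPHA) * camera_fix

    print(f"estimate after ten seconds: {estimate:.1f} deg (truth: {true_heading})")

The gyro alone would never discover the initial heading, and would drift another five degrees over those ten seconds; with the camera fixes blended in, the estimate settles within about a degree of the truth. A real tracker does this in all three rotational axes, and typically with something more sophisticated such as a Kalman filter, but the division of labor is the same.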

Once the computer that is generating the image knows exactly where your head is, it can place the virtual image in exactly the right place. To you, it will look as though the virtual object is residing in the physical world around you. If the data from the IMU is used properly, the position of this object will not seem to drift as you move your head.
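
And once the head pose is known, placing the virtual object is just a change of coordinates. Here is a toy two-dimensional sketch; real systems use full 3-D transforms including pitch and roll, and the coordinate conventions below are merely illustrative:

    import math

    def world_to_head(point, head_pos, head_yaw_deg):
        """Express a world-fixed 2-D point in head-relative coordinates."""
        # Step 1: translate so that the head is at the origin.
        dx = point[0] - head_pos[0]
        dz = point[1] - head_pos[1]
        # Step 2: rotate by the inverse of the head's yaw.
        yaw = math.radians(head_yaw_deg)
        x = dx * math.cos(-yaw) - dz * math.sin(-yaw)
        z = dx * math.sin(-yaw) + dz * math.cos(-yaw)
        return (x, z)

    # A virtual lamp sits at a fixed spot in the room. As the tracked head
    # turns, the lamp's head-relative coordinates change in exactly the way
    # that keeps its rendered image pinned to the same spot in the room.
    lamp = (2.0, 5.0)
    print(world_to_head(lamp, head_pos=(0.0, 0.0), head_yaw_deg=0.0))   # (2.0, 5.0)
    print(world_to_head(lamp, head_pos=(0.0, 0.0), head_yaw_deg=90.0))  # about (5.0, -2.0)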

There are other things we want the eccescope to know about in the world around us, such as the positions of our own hands and fingers. We’ll return to that later.

BBC

Today, at the NEoN Festival in Dundee, Scotland, I was interviewed by the BBC. The interviewer asked me a question about how the games industry can grow.

I didn’t think I was prepared for such a question, but answering words seemed to tumble out of my mouth (for better or worse). I told him that the game industry needs to be ready for the emerging internet market, and that this market is increasingly going to mobile devices. I then said that mobile devices are rapidly going global. For example, many kids in Africa, who have no access to computers, nonetheless have cell phones.

I told him that when those third world cell phones become interestingly powerful — as they soon will, thanks to Moore’s Law — the market for games will grow from mere millions to billions, and whoever is ready to seize that market is going to win big.

It would be nice if that turns out to be right. Especially if you think games can be effective for education.

Eccescopy, part 11

One of the key elements of any eccescope is its display module — the part that actually directs images into your eye. Light doesn’t bend around corners by itself, so the question remains of how to direct those light rays into your pupil. One possibility is to take the concepts suggested by the Brother Airscouter, and push them to their limit. Here is my visualization of how such a display module might appear:

[image: visualization of the eccescope display module]

What is going on in this visualization is that components within the earpiece (not seen in this image, because they are blocked by the head) are generating a tiny collimated image that is directed to that little optical deflector you can see positioned in front of the eye. Different parts of this generated image are deflected by different parts of the optical deflector into the user’s pupil. The result is that collimated light rays enter the user’s pupil from different directions.

Like the Airscouter, this is a form of “retinal display”, since the user’s eye is left to do the work of converting those light rays from different directions into an image. In a retinal display, there is no physical display screen, just a virtual image that appears to be floating out in space, similar to the virtual image that you see when you peer into the eyepiece of a microscope or telescope.
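
The geometry is worth spelling out: a collimated bundle entering the pupil at an angle θ away from the line of sight comes to a focus on the retina at a distance of roughly

    x ≈ f_eye × tan θ

from the center of vision, where f_eye is the eye’s effective focal length (about 17 mm in the standard reduced-eye model). Direction in, position out: the deflector only needs to control the directions of the incoming rays, and the eye’s own optics assemble them into a sharp image.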

With the proper engineering, a display module with this form factor is perfectly achievable. In fact, it might be possible to create a display module that is smaller still. But there is more to an eccescope than its display module. It also needs to track — with high speed and accuracy — the user’s head position and orientation. Also, it is useful for the eccescope to gather information about what the user is looking at.

More to come.

When you live in New York City

Once upon a time my brother lived in downtown Manhattan. I recall that he enjoyed it a lot. Like any self-respecting young man taking a break from earning his M.D. at a top medical school, he spent his time in Manhattan fronting a rock band and picking up a Ph.D. in mathematics.

Eventually my brother left NY to go back and finish medical school, and he now lives in a different city. But every now and again he and his family still come to visit New York, and they always have a great time.

I still recall, all these years later, how my brother described his mixed feelings as he was about to leave NY for Chicago to finish up his M.D. I believe his exact words to me were: “When you live in New York City, you realize that everything is convenient, and nothing is easy.”

As a confirmed New Yorker, I think I can say, with considerable pride, that he was right on the money.

Eccescopy, part 10

Head-mounted displays are usually clunky, because they are designed as research instruments. Here, for example, is a perfectly functional virtual reality device that you would probably not want to use at home:

[image: a bulky virtual reality head-mounted display]

It’s pretty clear that an eccescopic populace would need something a lot less bulky and intrusive. The following device by Vuzix is a lot closer:

[image: the Vuzix display]

But it’s still not quite there, for at least two reasons: (1) The device doesn’t let you see the actual reality around you, and (2) When you are wearing it, people can’t establish eye contact with you.

The first problem could be tackled by installing little outward-looking video cameras (in fact, Vuzix makes just such a product), but that not only degrades one’s view of the actual world but also makes the second problem worse — with the cameras attached, the user ends up looking like some sort of scary cyber-martian:

[image: a head-mounted display with outward-looking cameras attached]

There are head-mounted devices that let you look through them, so you can see the real world while also looking at cyber-enhancements. One of these is the Nomad, from Virtual Realities, Inc.:

[image: the Nomad see-through display]

It’s a very impressive machine, but it’s not going to be winning consumer fashion contests any time soon. Much closer to the mark, in terms of something one might actually wear, is the Brother Airscouter:

[image: the Brother Airscouter]

How far off is this form factor from what is required to enable an eccescopic world? That’s a topic for next time.

Meta-art

It occurred to me today that there is a certain class of person who might be called a “meta-artist”. Most people, when they set out to, say, make music, will find the instrument that best suits them, and will proceed to master that instrument.

But then there are people whose love of music inspires them to look at instruments and ask the question “what could I do to make this instrument better?” People like Les Paul, Robert Moog and Laurie Spiegel. Such artists don’t just want to give the world music. They want to give the world a better way to make music.

The same thing happens in all of the arts. But it happens in a unique way in the computer arts.

When I was sixteen years old I saw Walt Disney’s Fantasia for the first time. From that moment forth I knew I wanted to do that with my life — to create visions of the worlds that we can see in our dreams.

As I set about doing this, I quickly found myself proceeding in a meta-artistic way. I didn’t end up drawing pictures (although I could draw pictures reasonably well). Rather, I started to write computer programs that would simulate the worlds I wanted to explore, that might create the visions I wanted to see.

I spent a lot of time doing math and creating new algorithms. Yet I wasn’t particularly interested in the math or the algorithms. Or rather, I was just as interested in them as, say, an architect is interested in a screwdriver. Math and computer software were merely the tools that could lend me greater power to explore the sorts of wonders I had seen that day, sprung from the minds of such visionaries as Bill Tytla and Oskar Fischinger.

Now that an ever greater number of kids are becoming versed in the ways of computers, the intersection is growing between kids who yearn to create art and kids who learn to wield the awesome power of programming. We might very well be entering the age of the meta-artist. A brave new world indeed!