Cycle of broken culture

I was having a dinnertime conversation this evening, and to everyone’s surprise the “liberals” and “conservatives” were in agreement about the nature of the problem afflicting kids in our inner-city schools, and about why it is such a difficult problem to tackle.

The key was not to use loaded phrases like “cycle of poverty”. In the United States you don’t need money to succeed. Money helps, but it’s not the essential differentiator. Rather, you need a kind of inner fire and enough of a belief in the system that you’ll learn what you need to learn, and then apply those skills. Yes, this is more difficult during a recession, but the relative situation stays the same. The kid with the motivation and focus, who builds skills over time, is going to be more likely to succeed as he or she grows up.

But there is another phrase that we all could agree on — “cycle of broken culture”. If the parents have no faith in the system, and see no point in their child entering that system, then the child is far more likely to reject the value system of skill building and achievement that the schools are trying to offer.

So what to do? You can’t tell a parent “You have bad parenting skills, and we are going to take your child away and make sure what is broken in you does not become broken in your child.” Yet to solve the problem of alienated parents producing alienated children who grow up to repeat the cycle, you must give children some exposure to other ways of being that their parents might not be offering them.

I think there are ways to frame practical approaches to this problem that will make equal sense to both liberals and conservatives. More later.

Eccescopy, part 15

One question to ask when talking about a new way of looking at the world (literally) is “How do we get from here to there?” In particular, how do we develop applications for a technology that we have not yet finished creating?

One approach is to fake it. Or, to put it in a somewhat more dignified way, to create a functional prototype. In other words, we don’t need to actually build a device to be able to use it — we just need to build some other device that behaves the same way, albeit under controlled conditions.

For example, we can start to work out what a face-to-face eccescopic conversation between two people might be like through the use of head trackers and transparent projection screens. In particular, we can use some present-day technology, such as an OptiTrack optical tracker, to measure with high accuracy the positions and orientations of the heads of two people, 200 times per second.

Both people will need to wear some sort of passive tracking markers, perhaps attached to a headband, but that’s ok — such markers won’t interfere with eye contact, and will serve just fine for a prototype.

In addition, we can use one of several types of transparent projection screens, such as the “CristalLine” rear projection screen from Woehburk, or the Da-Lite Holo Screen. Two people can look at each other through such a screen, while two projectors project onto the screen from opposite sides (so that each person sees only one of the rear-projected images).

Then we can use the tracked positions of the two people’s heads to, for example, create the illusion of objects floating between the two participants, continually correcting the apparent position and orientation of those objects as each participant moves his/her head.
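
To make that concrete, here is a minimal sketch in Python of the underlying geometry (my own illustration, not the actual prototype code): given a tracked eye position and the fixed plane of the transparent screen, we can compute where on the screen to draw a virtual point so that it appears to float at a chosen spot in space. Run against the tracker data 200 times per second, this is what keeps the illusion stable as heads move:

    import numpy as np

    # Assumed setup: the screen is the plane z = 0; positions are in meters,
    # with the two participants on opposite sides of the screen.
    def project_to_screen(eye, virtual_point):
        """Find where the eye-to-object ray crosses the screen plane z = 0."""
        eye = np.asarray(eye, dtype=float)
        obj = np.asarray(virtual_point, dtype=float)
        direction = obj - eye
        t = -eye[2] / direction[2]        # ray parameter where the ray hits z = 0
        return eye + t * direction        # (x, y, 0): where to draw the point

    # Example: an eye 60 cm in front of the screen, and a virtual object
    # floating 30 cm behind it (from that eye's point of view).
    print(project_to_screen(eye=[0.1, 0.0, 0.6], virtual_point=[0.0, 0.0, -0.3]))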

We can then use a gesture recognition system such as Microsoft Kinect to track free-hand gestures by the two participants. Eventually we would like miniaturized Kinect-like tracking devices to be built directly and unobtrusively into eccescopic headgear.

Of course this is not perfect. Not only do the two participants need to stay generally in one location, but they also cannot reach out and put their hands through the screen. Yet for prototyping what an eccescopic experience might feel like — and then implementing and testing prototype applications — this isn’t such a bad place to begin.

By the way, does anybody think that the word “ambiscope” (which translates roughly into “device to look around”) is better than “eccescope”?

Volcanoes

Most of the time when I look at people I see relatively little emotion on the surface. I suppose a society could not function if everyone were walking around on the verge of exploding. When we see people on the street or on the subway who look like they are likely to detonate at any moment, we tend to steer clear — for good reason.

Yet if you spend time with anyone, especially if you spend time with them during periods of stress or great loss, you come to realize that everyone, somewhere inside, has a bubbling cauldron of rage, fear and anxiety, of dark emotions lurking just below that apparently placid surface.

I suppose this is why people respond so powerfully to art that exposes the dark underside, such as Pinter’s The Homecoming, Dostoevsky’s Notes from Underground, Shirley Jackson’s The Lottery, Harlan Ellison’s I Have No Mouth, and I Must Scream, or just about anything by Kafka.

We seem to derive a peculiar pleasure from watching literary characters we care about and identify with, characters we understand on some level as being us, as they approach an emotional abyss and proceed to fall off the edge, descending in unchecked flight to one existential hell or another.

I’m not sure what it is that fuels these volcanoes in our souls. Perhaps it is some remnant of the terrible and fearsome three-year-old within us, that unchecked raging infantile id we all embodied before we ever learned to layer a mask of social agreeability over our raw desires.

Or perhaps it is simply the knowledge, always lurking around the corner, that our own existence is finite — that for all our struggles and sometime successes, death inevitably awaits.

In any case, I am glad when I see these glimpses of naked truth, when a spark of anger flashes unchecked, or some hidden despair surfaces and reveals itself — even for a moment. I am glad that we all must bear witness, from time to time, to the raging and often ugly sight of other souls in all their ungainly struggle.

Because those are the only times when we know for certain that we are not alone.

Eccescopy, part 14

This week I visited the MIT Media Lab, where I talked to several of my friends on the faculty about eccescopy. I was surprised and rather delighted at the positive response. For example, Ramesh Raskar is fascinated by the possible form factors, and the ways they could be achieved.

Hiroshi Ishii feels that it is important to look at the entire Interface Ecology — how individuals and social networks communicate, and how an embodied face-to-face cyber-enhanced communication fits together with ideas about community.

And Pattie Maes has pretty much been doing this kind of thing anyway with her students, through things like smart light bulbs (which contain cameras and projectors), and position-tracked portable projectors that you carry around with you to visually augment physical objects in your environment (an idea that has been explored by Pierre Wellner, Hiroshi Ishii and Ramesh Raskar).

We all agreed that Will Wright’s description of The SIMS 5 (“The game is already in the box … you just can’t open it yet”) is a useful way to frame things: while we are developing the physical support layer, we should be building applications as though that support layer were already available.

Today I was also told by a student about a book I am now going to try to read soon — the 2006 novel Rainbows End by Vernor Vinge. Apparently it depicts a near-future world in which everyone wears a portable display that lets them see a cyber-enhanced world superimposed upon the real world.

Well, almost everyone wears a portable display. In the book there are some anti-technology rebels who insist on using good old fashioned computer screens. On some days I know just how they feel.

Eccescopy, part 13

It would be appealing to think of the eccescope in the form factor of a contact lens. There has actually been some interesting work on putting electronics onto contact lenses, with the goal of eventually building a contact lens display. Babak Parviz and his former student Harvey Ho at the University of Washington have demonstrated that a contact lens containing electronic components can be worn by a rabbit for twenty minutes with no ill effects. The contact lens is shown below:

[image: the contact lens with embedded electronics]

It has not been reported whether the rabbit thought this was a good idea, but that is a subject for another post.

The hope is that this will lead to the eventual development of a kind of “hololens”, in which the contact lens somehow creates an image upon the eye’s retina:

[image: artist’s conception of the hololens]

Unfortunately, this is more difficult than it might seem. The problem is that the hololens sits at an awkward place within the optical system. An image that originates within a contact lens — pressed against the eye’s cornea — is too near to be imaged by the eye’s own lens via conventional optics. Either collimated light would need to originate inside the hololens (in other words, a tiny laser would need to be embedded inside it), or else the lens would need to incorporate a fine array of micro-scale LEDs, each with its own tiny collimating lens, and each on the order of a few microns in width.
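
To see just how awkward that place is, here is a back-of-the-envelope check in Python (my own numbers, using the thin-lens equation and a typical lens-to-retina distance of about 17 mm, not figures from any published design):

    RETINA_MM = 17.0   # approximate distance from the eye's lens to the retina

    def focal_length_needed_mm(object_distance_mm):
        """Thin-lens equation: focal length the eye would need to focus
        an object at the given distance onto the retina."""
        return 1.0 / (1.0 / object_distance_mm + 1.0 / RETINA_MM)

    print(focal_length_needed_mm(250.0))  # near point, 25 cm: ~15.9 mm, feasible
    print(focal_length_needed_mm(0.5))    # source at the cornea: ~0.49 mm, far
                                          # beyond what any eye can accommodate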

This is almost certainly possible in the long term, but it is going to take a formidably large amount of engineering to get all the components in place. So while this may well be the eventual future, I wouldn’t bet on this approach winning out for the first generation of eccescopic displays.

Dundee

Having just returned from a delightful and all too short trip to Dundee, Scotland (a beautiful small town just ’round the coast and across the Tay from St. Andrews), I find my mind still pleasantly reverberating from just how wonderful the people are. There is an amazing warmth to the people who live there, a genuine sense of acceptance, that is a lovely thing to experience.

I thoroughly enjoy living in the big city (and it doesn’t get much bigger than New York), but for a change it was a delight to be bathed in the magic of a small town, a place where everyone, it seems, knows everyone — and absolutely everybody has a deliciously sardonic sense of humor.

And there’s nothing quite like standing by the Tay Bridge whilst reading William McGonagall’s infamous poem “The Tay Bridge Disaster”, which the residents of Dundee are proud to explain is possibly the worst poem ever written.

Eccescopy, part 12

An eccescope, like any eye-centric augmented reality device, doesn’t merely need to display an image into your eye. It also needs to make sure that this image lines up properly with the physical world around you. That requires tracking the position and orientation of your head with high speed and accuracy. Otherwise, the displayed image will appear to visually drift and swim within the world, rather than seeming like a part of your environment.

Fortunately, there are technologies today that are up to the job, although they have not yet been integrated into one package. For high-speed responsiveness there are devices like the Epson Toyocom Inertial Measurement Unit (IMU). This device uses tiny electronic gyroscopes and accelerometers to measure rotation and movement quickly and accurately. And it’s only 10 mm long (about 0.4 inches), which means it can fit unobtrusively on an earpiece:

[image: the Epson Toyocom IMU]

But that’s not quite enough. Inertial sensors respond fast to head movements, but over time the measured angle will drift. The Epson Toyocom IMU has a drift of about six degrees per hour, which is very impressive, but still not good enough — an eccescope needs to have no significant angular drift over time.

Fortunately there are other technologies that give absolute measurements, even though they are not as fast. For example, a small video camera can look out into the environment and track features of the room around you (such as corners, where the edges of things meet). As you move your head, the camera tracks the apparent movement of these features.
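
Here is a minimal sketch of that camera side, assuming the OpenCV library (my own illustration, not production tracking code): detect corner features in one frame, then measure where they have moved in the next:

    import cv2

    cap = cv2.VideoCapture(0)             # the small outward-looking camera
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Detect strong corners (places where edges meet) to serve as landmarks.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=10)

    ok, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track where each corner moved between frames; the apparent motion of
    # these fixed features reveals how the head turned, with no long-term drift.
    new_corners, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                        corners, None)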

The two types of information — fast but drift-prone inertial tracking and slow but drift-free video feature tracking — can be combined to give a very good answer to the question: “What is the exact position and orientation of my head right now?” And with good engineering, all of the required components can fit in an earpiece using today’s technology.
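
One simple way to do this combining is a complementary filter. Here is a toy one-dimensional version in Python (the constants are illustrative, not taken from any shipping device):

    GYRO_HZ = 200.0     # inertial update rate (assumed)
    ALPHA = 0.98        # how much to trust the gyro between camera fixes

    def fuse(yaw_deg, gyro_rate_dps, camera_yaw_deg=None, dt=1.0 / GYRO_HZ):
        """One filter step: integrate the fast gyro, then nudge the estimate
        toward the slower, drift-free camera measurement when one arrives."""
        yaw_deg += gyro_rate_dps * dt        # fast and responsive, but drifts
        if camera_yaw_deg is not None:       # an absolute fix from the camera
            yaw_deg = ALPHA * yaw_deg + (1.0 - ALPHA) * camera_yaw_deg
        return yaw_deg

Because the gyro term dominates moment to moment, the estimate stays responsive; because the camera term never stops pulling it back, the drift never accumulates.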

Once the computer that is generating the image knows exactly where your head is, it can place the virtual image in exactly the right place. To you, it will look as though the virtual object is residing in the physical world around you. If the data from the IMU is used properly, the position of this object will not seem to drift as you move your head.

There are other things we want the eccescope to know about in the world around us, such as the positions of our own hands and fingers. We’ll return to that later.

BBC

Today, at the NEoN Festival in Dundee, Scotland, I was interviewed by the BBC. The interviewer asked me a question about how the games industry can grow.

I didn’t think I was prepared for such a question, but words seemed to tumble out of my mouth anyway (for better or worse). I told him that the games industry needs to be ready for the emerging internet market, and that this market is increasingly moving to mobile devices. I then said that mobile devices are rapidly going global. For example, many kids in Africa who have no access to computers nonetheless have cell phones.

I told him that when those third-world cell phones become interestingly powerful — as they soon will, thanks to Moore’s Law — the market for games will grow from mere millions to billions, and whoever is ready to seize that market is going to win big.

It would be nice if that turns out to be right. Especially if you think games can be effective for education.

Eccescopy, part 11

One of the key elements of any eccescope is its display module — the part that actually directs images into your eye. Light doesn’t bend around corners by itself, so the question remains of how to direct those light rays into your pupil. One possibility is to take the concepts suggested by the Brother Airscouter, and push them to their limit. Here is my visualization of how such a display module might appear:

[image: visualization of a possible display module]

In this visualization, components within the earpiece (not visible in this image, because they are blocked by the head) generate a tiny collimated image that is directed to the little optical deflector positioned in front of the eye. Different parts of this generated image are deflected by different parts of the optical deflector into the user’s pupil. The result is that collimated light rays enter the user’s pupil from different directions.

Like the Airscouter, this is a form of “retinal display”, since the user’s eye is left to do the work of converting those light rays from different directions into an image. In a retinal display, there is no physical display screen, just a virtual image that appears to be floating out in space, similar to the virtual image that you see when you peer into the eyepiece of a microscope or telescope.
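
Here is a rough sketch of that geometry in Python (my own illustration): because every ray is collimated, the eye maps each ray’s direction to a position on the retina, so the deflector only has to control ray angles, not distances:

    import math

    EYE_FOCAL_MM = 17.0   # approximate focal length of a relaxed eye

    def retinal_offset_mm(ray_angle_deg):
        """Where a collimated ray entering at this angle lands on the retina."""
        return EYE_FOCAL_MM * math.tan(math.radians(ray_angle_deg))

    # Rays steered over +/- 15 degrees paint a patch roughly 9 mm wide on the
    # retina -- enough for a usable virtual image.
    print(2 * retinal_offset_mm(15.0))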

With the proper engineering, a display module with this form factor is perfectly achievable. In fact, it might be possible to create one that is smaller still. But there is more to an eccescope than its display module. It also needs to track — with high speed and accuracy — the user’s head position and orientation. It is also useful for the eccescope to gather information about what the user is looking at.

More to come.