Inventing reality

We tend to forget that there is nothing “natural” about clothing, or chairs, or books, or the many other age-old technologies that we rely upon. We often need to be reminded that these are highly evolved technologies, precisely because successful technologies become invisible. In fact, a good indication of the success of a technology is how invisible it has become.

You never “access your clothing”, or “interface with a chair”, or “activate a book”. You get dressed, sit down and read.

I am conscious, as my colleagues and I develop new ways for humans to interact with information, that the best innovations, the ones that have a shot at being of use to future generations, are not going to be the flashiest or the most clever. Rather, they will be the ones that succeed in being so useful that they become invisible as they fade gracefully into the fabric of our daily lives, until they seem to be reality itself.

The Way Things Work

When I was just a kid, there was a book called “The Way Things Work” that I used to pore over with complete delight. It was a 1967 translation into English of a 1963 German book called “Wie funktioniert das?” In 581 pages, the book explained how several hundred disparate technologies work — everything from the centrifuge to the television to the electron microscope, from electric motors to jet engines to gyroscopes to door locks to how plexiglass is made.

Each topic got two pages: first a page to explain things in words, then a facing page filled with beautiful two-color illustrations. Some topics were strung together in order. For example, you could learn about principles of light refraction and reflection, then lenses and mirrors, then microscopes, telescopes and binoculars, then all sorts of topics around cameras and photography, with each little bite-sized lesson preparing you for the one that followed.

I am quite sure that having this book by my side not only taught me about many ingenious technologies (oh my gosh, the Eidophor projector!!!) but also shaped the way I look at invention in general, bolstering my confidence, at an early age, to go forth and invent.

I have no idea where that actual book from my childhood is now. Fortunately, it’s still possible to get your hands on a copy of this long out-of-print masterpiece. I ordered a used copy recently on Amazon — it is sitting beside me as I type this. You might want to consider getting one for your favorite intrepid ten-year-old — or perhaps for the intrepid ten-year-old in you.

The poetry of projects

I’m currently hard at work with my students on a project with a deadline. We’re all busily writing code, creating virtual objects and creatures, testing ideas and theories, and conducting experiments of one sort or another.

When people come to the lab we excitedly show them what we’re working on, and as soon as those people have left, we all dive back in and keep working — often well into the night.

When you’re caught up in such a scene, you can fail to realize, in the moment, just how much fun you’re having. After all, this is hard work, and sometimes — when things simply refuse to work for hours on end — it can get frustrating.

Yet when I think back over my life, and the times I remember with greatest fondness, many of those times were situations just like this — when there was some hard and challenging work to do, and a team of hardy souls came together to get it done.

William Wordsworth once said that the origin of poetry is “emotion recollected in tranquility”. Maybe that is true of experience in general. In the moment, during the peak times of our lives, we rarely realize just how much joy we are experiencing.

Until, perhaps, some time later, maybe long after the project is done, when time has turned memory into poetry.

The mountain road

I find that one of the hardest things as a teacher is to remember how hard things are for people who don’t already know something you know well. The problem is that I become so used to doing something, day in and day out (like programming in Java) that I lose track of how odd and exotic its concepts and particulars are to someone who doesn’t already know them.

Which makes teaching a class partly a process of feeling my way along, like driving up a twisting mountain road on a foggy night. On one side is the mountain — the things they don’t know. I can drive straight up the mountain face if I like, but they can’t follow me there. As soon as I steer too much that way, I can see the light go out in their eyes, replaced by a look of panic.

But if I err the other way, neglecting to drive uphill at all, focusing too much on the stuff they already know, then we never actually go up the mountain — we end up just driving around in circles, and everybody gets bored (including me).

After a while you develop a feel for when to drive fast, what the best slope is, where the nasty curves are along the way. But every class is different, and you really need to gauge how this particular class is dealing with each new concept.

The most important thing is to pay attention. No matter how many times you’ve driven up that mountain, or how well you think you know each twist and turn of the road ahead, you still should not be driving with your eyes closed.

A game of games

Having finished reading George R. R. Martin’s “A Game of Thrones” (finally!) I started thinking about all the many, many people who are spending vast amounts of time reading just this one series of fantasy novels — let alone all of the time that has been collectively spent reading fiction by Stephen King, Georges Simenon, J.K. Rowling and Harold Robbins — or Leo Tolstoy for that matter.

And I find myself wondering, why is there so much worry about people spending hours and hours playing computer games?

After all, both activities are pleasurable immersions into fictional worlds. What makes one sort of escapist activity inherently more valid than the other?

Ironically named day

I’ve always thought that today, Labor Day here in the U.S. — the day marking the official end of summer — is a bit of a contradiction. Although it heralds the beginning of autumn’s labors, of vacations drawing to an end, of kids going back to school, Labor Day itself is pointedly a day of no work. This three-day weekend is the last hurrah, a time for family, for barbecues and hanging out, for all sorts of leisure-time activities.

I think it’s a nice tradition, one I firmly support. Yet I confess that I do not practice what I preach. Even as I write this, I and all of the students in our research group are gathered at the lab, busy at our computers. Because, you see, with nobody around, today we can actually get some work done!

Of course we are then all going to have a barbecue†.

† Don’t even ask what kind of barbecue — you already know what I’ll say. 🙂

Ugly humans

There is a concept, dating back many decades, of the Ugly American — the idea that Americans in their encounters with other cultures are boorish, self-absorbed and uncouth, either as tourists visiting other countries, or as companies doing business with the rest of the world.

At least some part of this concept was a reaction to the immense political and economic advantages conferred on the U.S. in the era that began after WWII. When your country is a rich superpower, whatever you do is going to be judged harshly. Your misdeeds will be amplified, and your good deeds underplayed. What’s interesting about the term “Ugly American”, and its usage in the 1950s, is how much of this critical self-examination came from Americans themselves.

It’s curious, though, to see the exact trope of the Ugly American replayed recently in three different science fiction films, but with “American” replaced by “Human”. I’m speaking of Avatar, District 9 and Rise of the Planet of the Apes. All three films are very well made in their way, all three were very popular and critically well received, and yet they all had one more thing in common — the way they looked upon humans as the bad guys.

And not just any humans — specifically the technologically advanced, modern product of the European enlightenment. In other words, our familiar industrialized, capitalist “Western Civilization” itself. And in each case, another non-human civilization is shown to be capable of an inherent decency that our own race lacks, whether that “other” is represented by Na’vi, alien “prawns”, or mutant apes.

Some Americans in the 1950s took to looking at themselves critically, leading to the agonizing self-examination exemplified by the term “Ugly American”. We seem to be reaching an analogous cultural moment. At any rate, some sort of self-questioning is clearly in the air.

Future hands

Just because our human bodies have these particular hands, it doesn’t necessarily follow that they are the only kinds of hands our brains could ever be good at using.

While it is true that our brain’s parietal lobe provides massive computational machinery for assisting in manipulating objects with our hands and fingers, it is not necessarily true that the parietal lobe evolved only for these particular human hands, with their five jointed digits and single opposable thumb.

Evolution is parsimonious, since it proceeds via a kind of haphazard hill-climbing algorithm. It’s much more economical (and reachable through the random walk of evolutionary steps) for a brain to encode a set of general procedures for individual learning, than to hard-wire into our neurons all the particulars of specific grasps and gestures.

We see something similar in spoken language. Human children have evolved to learn any language that follows a common set of procedural rules, not any one specific language such as Japanese or Serbo-Croatian.

This loose coupling suggests an intriguing possibility: As our technology continues to advance to the point where we will be able to have the sensation of physical manipulation — as well as gestural communication with each other — using whatever bodies we choose, perhaps we will evolve those virtual bodies in various ways.

It might be more useful to have tentacular fingers, or two thumbs on each hand, or something even more radically different. In a sense this question comes down to understanding the functional set of learnable procedures encoded in our parietal lobe, since any modification to our virtual bodies that is not supported by our brain’s hardware will not gain wide acceptance.

There is certainly precedent for looking at such things. In particular, something analogous has happened throughout human history in the evolution of musical instruments. The variety of extant musical instruments is vast, yet no instrument will survive from one generation to the next unless our brains can control our hands when playing that instrument. In a way, the corpus of popular musical instruments serves as a kind of functional roadmap of our brain’s parietal lobe.

If, after mastering the underlying technology, we manage to “physically” evolve the virtual bodies with which we will communicate in cyberspace, that will lead to all sorts of fascinating questions. For example, I wonder what it will mean for the future evolution of musical instruments.

Artists create audiences

Today Vi Hart told me of a thought by the great, recently deceased anthropologist Ted Carpenter, which she paraphrased as “Artists don’t address audiences, they create audiences”. I spent some time this evening tracking down the original statement itself:

“Artists don’t address themselves to audiences; they create audiences. The artist talks to himself out loud. If what he has to say is significant, others hear & are affected.” — Edmund Carpenter, in his foreword to ‘They Became What They Beheld’

When I think about Vi’s work, or the works of Picasso, or Woolf, or Schoenberg, or Louise Bourgeois, or Jackson Pollock, or other true originals, I realize that this is a nice way to describe the difference between art and entertainment. Entertainment attracts an audience by making people comfortable — by showing people what they were already expecting to see.

Art creates an audience by making people uncomfortable — by teaching a new way of seeing.

Family drama

I saw a film last night on DVD, one I had been meaning to see for a while.

The basic plot is simple: A middle-aged couple, married for twenty years, have been raising their two wonderful teenagers. During a time when the parents are going through an emotional crisis in their relationship, a charming interloper enters their lives, acts very friendly, forms an emotional bond with their kids, and then starts a clandestine affair with one of the parents.

In the end, the family survives. The couple realize they need to get past the betrayal of the affair (and the crisis between the two of them that had made such a thing possible), and focus instead on treasuring the many years of love and hard work they’ve put into their marriage and into raising their two wonderful children. The interloper is firmly told to leave.

Seems simple, right? The immense amount of loving effort that goes into two people raising their kids and building a family is vastly more valuable than the empty promise of escape offered by an amour fou. I mean, who would ever think otherwise?

Yet as I look at reviews of this film on the internet, I’ve seen a shocking amount of vituperation leveled at this movie. People who watched the film seem angry that the couple do not break up. Some reviewers are furious that the stranger who casually sailed into their lives, who had put no work into building a family or into providing for and caring for those two children all those years, is cast out. I’ve rarely seen so many angry reviews of a film on the internet.

I suppose at this point I should confess that there’s something I’m not telling you. Something that in a sane world shouldn’t make any difference. But then, we don’t live in a sane world.