Forensic predictive cultural longevity

I spent today at an all-day research symposium. Many great conversations, but one in particular got me thinking.

We all know in retrospect that Shakespeare has lasted while many of his contemporaries have not. Likewise we know that the work of Goethe, Austen, Archimedes, da Vinci, Mozart, and an entire pantheon of geniuses has remained relevant down through the ages, even as the work of their contemporaries has faded to oblivion.

Among recent voices, we may strongly suspect that the work of The Beatles, Will Wright or David Foster Wallace might last through the centuries, while the work of, say, Madonna may not. But we can’t be sure.

Moreover, it would seem that this is impossible to know. Yet that might not be the case.

After all, we have all of history to sift through, when looking for patterns that lead to sustained cultural longevity. Perhaps there is a quality in Mozart’s music, as compared with his contemporaries, or a quality that distinguished Shakespeare from other playwrights of his day — other than “he was a genius” — that we can spot, if we sift through the massive data available to us from down through the centuries.

It would be interesting to take such studies seriously as a science — a science of forensic predictive cultural longevity. And in the course of looking for such patterns, of developing a systematic way of looking at these things, perhaps we might gain insight into the creative process itself.
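
Just to make the idea concrete, here is a minimal sketch, in Python, of how such a study might be framed as a prediction problem. Every feature name and number below is an invented placeholder; the hard part, of course, would be assembling honest historical data.

```python
# A minimal sketch of "forensic predictive cultural longevity" framed as a
# supervised learning problem. All feature names and values are invented
# placeholders; real work would require carefully assembled historical data.

from sklearn.linear_model import LogisticRegression

# Hypothetical features per work: [reuse by later artists, breadth of themes, formal novelty]
# Label: 1 if the work is still widely read/performed centuries later, 0 if it faded.
works = [
    ([0.9, 0.8, 0.7], 1),
    ([0.2, 0.3, 0.6], 0),
    ([0.8, 0.9, 0.4], 1),
    ([0.1, 0.2, 0.3], 0),
    ([0.7, 0.6, 0.9], 1),
    ([0.3, 0.1, 0.5], 0),
]

X = [features for features, _ in works]
y = [label for _, label in works]

model = LogisticRegression().fit(X, y)

# Which of the hypothetical qualities carries the most predictive weight?
for name, weight in zip(["reuse", "breadth", "novelty"], model.coef_[0]):
    print(f"{name:>8}: {weight:+.2f}")
```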

Spam filters ate my friends

I’ve just discovered that certain friends who had previously been posting comments on this blog have found themselves locked out. I haven’t even been getting emails saying “comment pending”. Rather, the voracious and overeager spam filter Akismet, which I recently began to use, has been knocking their comments out before any such email could ever reach me.

I had been wondering why it has been so peaceful and quiet here in blog-land. Here I have sat, floating atop my lotus, posting away in eerie cybernetic silence, while marveling at the strange and haunting stillness that has surrounded me of late.

And all the time it was just that pesky spam filter destroying my friends’ comments, stuffing up my ears with cotton, stifling the free voice of the citizen reader, cutting me off from those who love me and would help me be the very best blogger I can be.

Ah, modern technology. So powerful, yet so fickle. Our faithful little electronic servants, who defeat us at every turn.

Only Kinect

It is now commonly said that the ubiquitous QWERTY keyboard layout was designed to slow typists down (early mechanical typewriters were prone to jamming when neighboring keys were struck in quick succession). What is not generally acknowledged is that this seemingly contradictory approach to design is not the exception, but the rule.

First, let’s be honest about ourselves — about humans. We are magnificent. These astounding brains, coupled with these eyes, ears, hands, language, sense of proprioception, facial expressiveness (I could go on), create an astonishingly rich package. There is something almost dizzyingly wonderful about the ways that humans communicate with each other, using our minds to control our muscles and interpret our perceptions with a degree of subtlety and effortlessness that we too often take for granted.

Yet as engineers, we are limited. We cannot create anything as wonderful as ourselves, and so we compromise. Our vehicles lack the supreme holonomic grace, balance and flexibility of our own natural movement, so to compensate we make them fast. Our networks of computers are incapable of true thought, reason or judgement, so we compensate by giving them vast powers to sift through data and find patterns by brute force.

We tend not to notice the limitations of our own tools, because our fantastically protean brains and bodies adapt to any tool so quickly that its shortcomings become invisible to us. Take, for example, the standard Graphical User Interface — buttons, sliders, icons, pull-down menus, all those things you control with mouse or touch screen.

Like the QWERTY keyboard before it, the GUI is, quite literally, designed to cripple us. It deliberately makes us slow, inefficient, clumsy, awkward. You probably think I’m being facetious, but I’m not. After all, from the point of view of any major software company designing interface tools for office workers, the last thing you want is an optimal interface.

No, you want to slow people down, to force them to do exactly one thing at a time. While a software user is pressing a button or changing a slider, they cannot do anything else. This is a very useful design quality in extremely large software systems targeted for purchase by financial institutions and other major corporate clients employing thousands of office workers. The more you can control and restrict the possible things a user can do at once, the more reliably you can guarantee a measurable and repeatable level of productivity.

Apple and its competitors are currently playing an analogous game in the arena of multitouch tablets. As we all learn the pinch gesture, the two finger swipe, the tap and drag, we gradually come to believe that this is what good tactile expression is. But of course exactly the opposite is true. We are in fact being trained to think of a crippling level of inexpressiveness as acceptable.

To see that this is true, think of all the expressiveness and subtlety that your hands and fingers are actually capable of, when playing a guitar or violin, when sculpting in clay, even when simply turning the pages of a book. We are being taught that to access the world of software and shared information, we must abandon all of that power within our own bodies and minds.

The only recent interface I’ve seen that bucks this trend is Microsoft’s Kinect. Of the crop of human/computer interface products out there, only Kinect seems to have the potential, over time, to evolve in a way that does justice to the vast power of human expressiveness.

And so I am encouraged that one day soon we will get over our foolishness. We will embrace our birthright, and design computational interfaces that make full use of gesture, touch, hearing, vision, facial expression, body language, line of sight, proprioception, rhythm, balance, language, and the many other things we use to communicate with each other.

And then things will start to get interesting.

Mars attacks

The president of Venezuela may be to blame for the lack of intelligent life on the planet Earth, said President Blyzto Glaxxpod of planet Mars, on Tuesday.

“I have always said, heard, that it would not be strange that there had been any actual civilization on Earth, but maybe this human Hugo Chavez arrived there, socialism arrived and finished off the planet,” Glaxxpod said in a speech to mark World Ammonia Day.

Glaxxpod, who also holds socialism responsible for many of the red planet’s own problems, warned that ammonia supplies on Mars were drying up.

“Careful! Here on planet Mars where hundreds of years ago or less there were great canals, now there are deserts. Where there were nitric seas, there are deserts,” Glaxxpod said, sipping from a glass of ammonia.

He went on to decry the robot spies Earth recently landed on Mars, adding that the Earth’s attacks on its own Moon were about green cheese reserves.

Reality filter

One of the things most of us seem to take for granted is that reality is consistent. People of our acquaintance tend to respond to things in relatively predictable ways, and things around us seem to work more or less the way we expect them to.

Over time, we build a model in our heads of how things work. Barring major disasters, reality generally fits this model, falling into recognizable patterns.

But what if this is an illusion? Maybe people around us do not behave predictably, maybe things are not working according to plan. Perhaps, in reality, the people in our lives are sending us a continuous jumble of conflicting signals, and we just don’t see it.

It may be the case that the human mind, when it is functioning “normally”, imposes a kind of filter on the signals it receives, forcing our perceptions into recognizable patterns, even if those patterns do not quite fit the data.

For example, someone you know might vary widely in mood — being cheerful one moment, and morose the next — and you simply might not notice the variation, because one of those moods corresponds to your conception of this person, while the other is tossed out by your mind as anomalous data.
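
If you wanted to caricature this filter in a few lines of code, it might look something like the following toy Python sketch, in which the numbers and the single stored expectation are all made up for illustration: signals that fit a stored pattern get through, and everything else is quietly thrown away.

```python
# A toy caricature of the conjectured "reality filter": each incoming signal
# is matched against stored expectations, and anything too far from every
# expectation is discarded as anomalous data. All numbers are invented.

expectations = {"cheerful": 1.0}   # our internal model of this person
tolerance = 0.6                    # how far a signal may stray before the mind drops it

incoming_moods = [0.9, 1.1, -0.8, 0.95, -1.0]   # what they actually broadcast

perceived = []
for signal in incoming_moods:
    label, expected = min(expectations.items(), key=lambda kv: abs(kv[1] - signal))
    if abs(expected - signal) <= tolerance:
        perceived.append(label)    # fits the pattern, so we notice it
    # otherwise the morose moment never makes it into our picture of them

print(perceived)   # -> ['cheerful', 'cheerful', 'cheerful']
```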

I’m not sure there is any real way to test this theory. I strongly suspect this filter exists, but it’s hard to tell how strong it is, since we can only evaluate things through the lens of our own perception.

In any case, if this is the way perception actually works, I’m pretty satisfied with this arrangement, since I find most of the people I know to be quite nice to be around.

But maybe that’s just me. 🙂

Watson

It’s funny how certain names seem to gravitate toward particular meanings. Take the name “Watson”, a name that was in the news lately because of the Jeopardy-playing IBM computer.

But of course there was a reason this computer was given its particular name. Twas named for Thomas J. Watson, who led the company now known as International Business Machines from 1914 until shortly before his death in 1956, building it into one of the largest and most successful corporations in history. The name IBM became so synonymous with computers that it inspired the name of the most famous computer villain in history (as well as, arguably, the most likable). Know who I mean? 🙂

Yet the name Watson also shows up in the person of James D. Watson, one of the two men credited with discovering that DNA forms a double helix — a structure which is key to the mechanism of genetic replication. Notice that I said one of two men. Often left out of the story is the woman, Rosalind Franklin, who should have shared the Nobel prize (you see, boys, the hydrophilic molecular backbone goes on the outside).

And then of course there is John H. Watson, M.D., friend and chronicler extraordinaire of one Sherlock Holmes. He was the very model of the scientifically minded Victorian gentleman, but also not one to shy away from a fight, should the situation call for it. Evidently an excellent doctor and surgeon, Watson was ever in thrall to Holmes’ scientific approach to criminal investigation.

So here we have three notable figures, all named Watson, all representing a peculiarly western vision of science — science as a kind of vigorous boys club, decidedly masculine and pointedly brash.

Of course every rule has its exception. The best version I’ve ever seen of Sherlock Holmes’ friend and confidant Dr. Watson was played by a woman. Extra points if you can identify the actress and the film.

Just weird enough

Today I was having a conversation at a whiteboard with a colleague — one of those brainstorming sessions where you say “what if we did this”, and scribble something, and then the other person says “yes, but suppose we did that”, and before you know it you’ve figured out something really cool together.

As soon as we realized we had come up with something interesting and useful, one of the first thoughts we had was “surely, somebody has thought of this before”. That question is important, because it’s the difference between doing something fun just as an exercise for yourself, and doing something that can really make a contribution to the community.

At first I was convinced we must be following in the path of others. But the more I thought about it, the less certain I was. Even though our result is useful, the place we started from was a little weird — we hadn’t framed the problem the way this sort of thing is usually framed. Maybe, just maybe, nobody had ever asked this particular question before.

And then I realized that maybe one key to making original contributions is to ask questions that are just a little bit weird. Not too weird, mind you — just weird enough.

The last picture show

There was a time when you had to go to a movie theater to see a film. The projectionist would mount a big reel of 35mm film, the house lights would go down, and the flickering magic would begin. Then along came television — a more convenient and decidedly less magical alternative.

The progression continues apace, as movies migrate to our portable devices. We seem to continually trade away magic for ever more convenience. I wonder whether there is some sort of universal constant, some formula like M = 1/C, a kind of inverse law between convenience and magic.
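
Spelled out as an inverse law (with $k$ standing in for whatever constant the universe has assigned to moving pictures, purely as a whimsical guess), the conjecture would read:

\[
M \cdot C = k \qquad\Longleftrightarrow\qquad M = \frac{k}{C}.
\]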

Soon there may come a time when the last movie theater is boarded up, relic of a bygone age, like the corset, or the family doctor who makes house calls. We will all stream our movies in real-time through our cellular networks, onto some portable device or other.

But what will happen after the electromagnetic pulse that wipes out all electronic systems, signaling the start of World War III? What will happen when our civilization has collapsed, as civilizations inevitably do, and we find ourselves groping in the dark, our once vaunted cellphones now useless hunks of plastic and coltan derivatives?

There will no longer be any physical record of the movies that once were, merely the memories in people’s minds of films they still remember from childhood, magic images that used to flicker on the screens of the cellular devices of yesteryear. Even these memories will fade each year like a painting in the desert sun, until all that is left is legend, words that escape meaning, like Rhett and Scarlett, Chaplin, Kubrick, Dorothy and the Tin Man.

Of course in time, over the span of centuries, civilization will rebuild itself, as civilizations always do. Perhaps our distant progeny will learn from their forebears’ sad cultural collapse, or perhaps such cultural tragedies are destined to ever repeat themselves, like a Library of Alexandria burning through all eternity.

Ten thousand years from now, when someone finally green-lights a movie about that fabled thing of dreams and myth called “Hollywood”, that dimly remembered Shangri La of beautiful people who never grow old, perhaps they will tell the sad tale of how cellphone streaming spelled the death of our film legacy.

When they do, I have a great title for them, although I doubt the filmmakers of the future will understand. I think they should call it “Lost Verizon”.

Clean

Today I cleaned.

Went through papers, mound by mound, wrestled my apartment to the ground. Threw out all those boxes of stuff, couldn’t get rid of it fast enough. Stacks of newspapers starting to mold, some answering machines maybe ten years old, piles and piles of useless crap, then stopped a while and took a nap.

Woke up refreshed and kept on at it (after drowning in all this stuff I’d had it). Pulled the good stuff from the bad, went through everything I had. Several hours later on, with all the useless garbage gone, my place looked sparkling, wondrous — new! I swear my soul felt cleaner too.

I’m sure you know just what I mean. Sometimes it’s just good to clean.

Interdisciplinary map

I am working on an interdisciplinary proposal with a really interesting and broad range of colleagues. In the mix are people who do research in computer graphics, user interfaces, child language development, data collection, parallel computation, interactive storytelling and other things besides. All of the pieces fit together, but no one participant knows enough about all of these fields to be able to see the entire picture down to its details.

For example, I may understand perfectly well why parallel computation is useful to me, yet I might not be able to understand the details of a technical paper by one of my colleagues that explains his research results in that field. And that’s the key: I don’t need to understand how his field works to be able to work with my colleague, but I do need to understand what his field accomplishes.

This kind of thing comes up all the time. The people who work together on a movie — actors, director, producer, gaffer, best boy, and so on — don’t all know how the others do their jobs. But they understand enough of the results of that expertise that they can successfully make a movie together. There are similar principles at work in many collaborative endeavors, from building a house to putting out a magazine to running a country.

I think the reason such interdisciplinary links are more elusive in research is that there is no pre-defined driving problem, no movie to make, house to build or magazine to publish. It’s easy to do exploratory research in your own field, but much harder once that research starts to cut across fields with vastly different areas of expertise.

So it occurs to me that it might be interesting to build an interdisciplinary map, on which a field is “located” not according to how it works, but according to what other fields make use of it, and vice versa. Such a map would make it much easier to work out a kind of geography of research, an entire world of potentially fruitful collaborations across disciplines. Deciding where to live in this world (or just to drop by and visit) wouldn’t be so much about what your intellectual neighbors do within their own homes, but about the quality of the discussions you can have with them over the fence.
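
As a first, very rough sketch of what such a map might look like in software, one could represent it as a directed graph whose edges mean “this field makes use of that one”. The fields and links below are just placeholders lifted from the proposal described above, not an actual survey.

```python
# A rough sketch of an interdisciplinary map as a directed graph: a field is
# "located" by which other fields make use of its results, and vice versa.
# The fields and links here are illustrative placeholders, not real survey data.

from collections import defaultdict

# (consumer, producer): the consumer field makes use of the producer field's results.
links = [
    ("computer graphics", "parallel computation"),
    ("interactive storytelling", "computer graphics"),
    ("interactive storytelling", "user interfaces"),
    ("child language development", "data collection"),
    ("user interfaces", "data collection"),
]

uses = defaultdict(set)      # field -> fields whose results it draws on
used_by = defaultdict(set)   # field -> fields that draw on its results

for consumer, producer in links:
    uses[consumer].add(producer)
    used_by[producer].add(consumer)

def neighbors(field):
    """The 'over the fence' neighbors: fields this one uses, plus fields that use it."""
    return uses[field] | used_by[field]

# Whom should a computer graphics researcher be chatting with over the fence?
print(sorted(neighbors("computer graphics")))
# -> ['interactive storytelling', 'parallel computation']
```

In a sketch like this, a field’s “location” is nothing more than its set of over-the-fence neighbors, which is exactly the geography of potential collaborations described above.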