In the dark of the night, when the world is asleep
And there’s no sound at all, just the thoughts that you keep
Does your mind ever wander, your thoughts ever stray
To days long ago, to a time far away?
Where the shifting sands wait, in shadows of blue
To lure the unwary, it is waiting for you
The past is illusion, a place of your dreams
Where nothing is ever the way that it seems
For the world of tomorrow will fade in a sigh
When you let yourself dwell where the memories lie
Take care, dreaming traveller, watch where you go
The more you remember, the less that you know
Your world of tomorrow will fade in a sigh
Leaving nothing but sand, where the memories lie
Inflection point
Today somebody was wearing a tee shirt that said “The only winning move is not to play.” He told me he was disappointed that most people didn’t recognize the quote. Being appropriately geeky (as, I suspect, are many of you reading this) I recognized it right away as the key line from the 1983 John Badham film “WarGames”. The complete snippet of dialog is between Dr. Falken and his supercomputer Joshua, who has just gone through the exercise of evaluating the outcome of every possible permutation of thermonuclear war:
| | |
|---|---|
| Joshua: | Greetings, Professor Falken. |
| Falken: | Hello, Joshua. |
| Joshua: | A strange game. The only winning move is not to play. How about a nice game of chess? |
What fascinates me most about this film is that it represents a precise inflection point in the popular culture – the moment when the programmer became the cool guy who got the girl. Certainly TRON had come out a full year earlier, but Bruce Boxleitner played him as pointedly nerdy – almost the antithesis of cool.
In contrast, Matthew Broderick, three years before he reached his apotheosis as Ferris Bueller, was identifiably cool and sexy, the teen rebel beginning to discover that he is a natural leader – witnessed just as he is coming into his considerable powers. This is essentially the same archetype that appears over and over again in literature. He is Prince Hal in “Henry IV, Part One”, James Dean in “Rebel Without a Cause”, Simba in “The Lion King” and Josh Hartnett in “The Faculty”.
The reason I find this change significant is that it pinpoints 1983 as the year the United States first experienced a massive shift in the perception of its own power. Historically the power brokers in America had been those men (and it was pretty much always men) who wielded control of industrial production – John D. Rockefeller with his vast holdings in petroleum, Andrew Carnegie with his steel empire and financiers like J.P. Morgan who consolidated it, followed somewhat later by a succession of powerful leaders of the automobile industry from Henry Ford to Lee Iacocca.
America was seen as mighty because of its industrial and manufacturing base, and this continued to be true after WWII and throughout the Cold War. Even the space race was a display of industrial brawn, the ultimate athletic feat of a nation that had worshiped the sheer physicality of transportation ever since the Wright Brothers first flew in 1903 and Ford’s Model T changed everything in 1908.
But of course now things are different. We are well into an era that worships a newer variety of Alpha leader, and a different kind of throne awaits Prince Hal. This is the era of Bill Gates, of Steve Jobs, of Larry and Sergey – of the supremacy of information over physical power.
Even our recent presidential election has been a triumph of the thinking man over the warrior – an outcome that would have been inconceivable in 1952 or 1956, when the intellectual Adlai Stevenson was practically laughed off the national stage when he attempted to go mano a mano with Eisenhower the war hero. But this time the election was fought and won on the internet. Obama the cool thinker – descendant of Matthew Broderick’s David Lightman – handily beat out McCain’s attempted channeling of John Wayne.
I would argue that the release of Badham’s cautionary film was the moment when this power shift first entered the national zeitgeist. The popular embrace of the internet era in which we all now live, where information is power, where teenagers view the cyber-creations of Will Wright with the same sense of reverent awe that a long-ago generation reserved for the physical feats of Harry Houdini, can be said to have begun a quarter of a century ago, with the release of “WarGames”.
Kindling
It’s clear that Amazon’s Kindle is at the forefront of something big. Maybe this particular device is not going to catch on with everybody, but it’s certainly an important foot in the door to rethinking how we interact with books. The combination of a fairly reasonable form factor, the use of electronic ink (very easy on the eyes, even in bright sunlight, and not at all a battery hog), and – most important – the backing of the mighty Amazon, means that this device is getting quite a few people to sit up and take notice, in a way that didn’t happen two years ago with Sony’s ebook reader (when was the last time you bought a book from Sony?).
That said, I’m transfixed by the name. It seems almost an oxymoron for a company called “Amazon” to make a device called “Kindle”. Amazon’s name always gave me the warm fuzzies. Books are substantial, solid, old-fashioned, like the rainforest. Something we want to preserve so that the world can be a good place to live. The Amazon rainforest is a source of endless biodiversity, healthy atmosphere, medicinal treasures and ethnic traditions. I’ve always felt protective toward its bountiful presence, in somewhat the same way I’ve come to feel protective toward books, with their rich history, textured beauty and rugged physicality, in this transient age of the internet.
But to kindle means to start a fire, to burn – not a concept you want to throw around lightly when you’re talking about books. To me book burning is the very bane of civilization, bringing to mind Nazi rallies, as well as movements in our own country to ban “The Catcher in the Rye”.
Is Amazon suggesting that these electronic readers will eventually lead to the disappearance of the physical book? Certainly that would be convenient for a company like Amazon. They are, after all, in the business of licensing intellectual property. Ultimately it is not so much a physical book that they are selling to each buyer, but rather a license to possess a single instance of a copyrighted work. If they can streamline that point of sale, reducing overhead and moving each transaction toward an ideal of pure profit, perhaps that would serve their larger interests.
So in a sense, perhaps we are witnessing the start of the biggest book burning in history. One day such phrases as “between the covers” and “a real page turner” may be as euphemistic as “telephone dialing” or “rewind” – alluding nostalgically back to a Victorian reality that is long gone.
Some day soon, alas, as we all pick up our electronic readers, we may once and for all close the book on books. As our children, and their children after them, run their fingers over magic screens to summon up “The Adventures of Sherlock Holmes”, “Jane Eyre” and “Ivanhoe”, they may catch the sadly bemused looks on the faces of their elders. Perhaps they will even ask us what’s wrong. But I suspect that we will never, try as we might, be able to convey to them just what has been lost.
Missionary cheese
Some years ago I had the good fortune to apartment-sit for some Parisian friends. They had a beautiful duplex, just two blocks from the Seine, across from the Académie des Beaux-Arts. For me the entire experience was a slice of heaven. Every day I would wander out and purchase a fresh bread and some new and exotic cheese, and sometimes a lovely but not too expensive wine, and then I would venture forth into Paris, on my way to explore some new museum or other interesting cultural landmark.
During that trip I developed a taste for really really stinky cheese. I don’t eat cheese these days, but back then I liked to make a point of seeking out the most alarmingly aromatic cheese I could find – the kind that you could never bring back to the U.S. In those days this sort of cheese was illegal Stateside, presumably because the sheer exquisite headiness of its aroma would cause mass panic and terror in the hearts of American dairy farmers. These farmers knew, to their shame, that their tepid local product could never compete with such pungent magnificence.
Over the course of my stay in Paris I developed little pet names for all aspects of my experience. For example, I began to refer to the very stinkiest cheese – the kind that would spread its intense aroma relentlessly to fill any space – as “missionary cheese”. Later, when I was back in the U.S., I would mention this pet name to people, and they would raise their eyebrows in a most suggestive way. The mere mention of the phrase “missionary cheese” seemed to raise all sorts of lurid images in their minds.
My friends would ask me, not really sure that they wanted to know the answer: “Why did you call it missionary cheese?”
“Because,” I would explain truthfully, “whenever I brought a truly stinky cheese back to my Paris apartment, sooner or later it would convert all the other cheeses.”
Closing the loop
I learned today about “two-way learning”. This is a technique whereby you have somebody learn something while you monitor that person’s brain activity. As the person is learning about something, the computer is simultaneously “learning” the patterns of the person’s brain activity.
The hope here is that we can train a computer to recognize particular patterns of brain activity, and use those patterns to determine something about what a person is thinking. The primary application for this now is for people who are paralyzed. If a computer can recognize a paralyzed person’s brain patterns, then eventually that person could simply think a particular thought in order to trigger the computer to perform a particular action.
When this technique was described to me, I had a completely wacky idea: Why not make a loop out of it – have the person and the computer watch each other? The person watches and tries to learn the pattern of the computer that is “learning” the person’s brain pattern. So, for example, when I have a particular thought, the computer monitors my brain activity and shows the results as some kind of image on a display screen. Meanwhile, I watch that screen and try to learn and recognize these images.
This is an interesting scenario because we humans have an extraordinary ability to recognize patterns in what we see. If I see a visual representation of my own thoughts, eventually I might start to be able to recognize what particular thoughts “look like”. Eventually I might learn to modulate my own thoughts in order to make various types of patterns appear on the computer screen. Essentially I am training my mind to train the computer.
By involving the person whose brain is being tracked as an active participant in the process, we might be able to create a rich and powerful learning feedback loop. By making use of the human mind’s amazing ability to recognize patterns, perhaps we can give people the power to modulate their own thoughts at will. Those modulated thoughts could then be used to exert truly precise control – purely through thought – of computers, and thereby of the world around us.
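The loop described above can be caricatured in code. Here is a toy simulation (entirely my own illustration, with invented numbers and labels, nothing like a real brain/computer interface): a “decoder” learns running estimates of the signal for two imagined thoughts, while the simulated “user”, encouraged by correct feedback, gradually tightens control of the signal. Both sides adapt to each other, and accuracy climbs.

```python
import random

random.seed(0)

# Decoder's running estimate of the signal centroid for each thought.
centroids = {"yes": 0.0, "no": 0.0}
# User's signal noise; it shrinks as feedback helps them modulate cleanly.
spread = 2.0

def brain_signal(thought):
    """A noisy one-dimensional stand-in for a measured brain pattern."""
    base = {"yes": +1.0, "no": -1.0}[thought]
    return base + random.gauss(0, spread)

correct = 0
for trial in range(200):
    thought = random.choice(["yes", "no"])
    x = brain_signal(thought)
    # The decoder guesses the nearest centroid, then learns from the truth.
    guess = min(centroids, key=lambda k: abs(x - centroids[k]))
    centroids[thought] += 0.1 * (x - centroids[thought])
    # Feedback loop: each success lets the user modulate a little better.
    if guess == thought:
        correct += 1
        spread = max(0.3, spread * 0.98)

print("accuracy:", correct / 200)
```

In this sketch the interesting part is that neither side could reach high accuracy alone: the decoder needs separated signals to classify, and the user only separates their signals because the decoder’s feedback rewards it.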
It’s worth trying anyway.
Washington and garlic
Today I flew from Seattle to St. Louis, because tomorrow I’m giving a talk at Washington University. Imagine that – going from one Washington to another in the same day, and neither one our nation’s capital.
My friend and host Caitlin cooked me a delicious dinner (pasta + sun dried tomatoes + garlic + olives + chickpeas + various subtle spices), and that reminded me of the very first time I tried to cook for my parents, after I was out of college and I’d gotten my own apartment and a real job.
I figured that it was an important step – showing my independence by doing something nurturing for my parents – taking care of them for a change. I assiduously followed the cookbook, got all the right ingredients, preheated things, chopped other things, and timed it all out so that my meal would be ready to serve by the time my parents arrived.
I only made one mistake.
I think it was an understandable mistake. After all, I’d never really cooked before. It’s not like I could be expected to know all of the technical terms right out of the gate.
To be more specific, it turns out that this:
[photo: a whole head of garlic]
is not a clove of garlic. Those of you who, like me in my tender youth, have naively thought that the above thingie is a garlic clove are in for a rude awakening. In fact, it’s something called a “bulb”. When you open up a bulb you get about ten little slices, like pieces of an orange. Each of those little slices, like the thing below to the right, is a garlic clove:
[photo: a single garlic clove]
Why is this important? Well, when a recipe calls for two cloves of garlic, and your parents are coming over, and you have prepared them a dinner into which you have actually incorporated two entire bulbs of garlic (around twenty cloves, by my reckoning), things are likely to go amiss.
Fortunately my friend Burke happened to come by some time before the arrival of my unsuspecting parents. Unlike me, Burke actually knows a thing or two about cooking. He had me sauté and sauté relentlessly, which didn’t exactly rescue the meal, but considerably reduced its near-lethal strength.
Needless to say, the entire apartment – and probably all who ate there that evening – reeked of garlic for the next week. My mom and dad were very gracious about it, and they gave an excellent impression of enjoying the meal. It’s amazing what some parents will do out of love for their children.
But there may be a silver lining to this episode: You have probably heard that vampires hate garlic. And I can say definitively that from that day to this, I have never – not even once – been attacked by a vampire. I suspect that my good fortune in this area has been entirely due to the lingering effects of that meal.
You’ve got mail
I was having a conversation with some friends recently about the immense diversity of the things we each do on a computer over the course of a day – same computer screen, wildly different mindsets. For example, we typically use a completely different set of mental constructs in order to, on the one hand, play a first-person shooter, and on the other hand, to read our email.
But why, we wondered, should these two worlds be so separate and distinct? Why not read your email the way you play a first-person shooter? After all, haven’t you ever felt a surge of pent-up hostility upon receiving that annoying spam message yet again – the one you thought you had filtered out for good? Wouldn’t it be great to have some way to channel those emotions?
In my mind I am imagining my email inbox divided into dungeon levels. On the one hand, there are the relatively gentle levels, where we might encounter most professional and personal emails. And then there are the other levels further down – the ones where we keep all the aggressive spam that we have captured, but have not yet deleted from our hard drive. When you enter these levels, and find yourself face to face with your sworn enemy, it may be best to go out with guns blazing.
What would we call our game? How about “Mailbox Assassin”? Or “Email Nemesis”? Or maybe just “Going Postal”? Can anybody think of another likely name?
The writing on the wall
Today, the forty-fifth anniversary of the commercial introduction of the push-button telephone, seems like a fine day to talk about the future of computer/human interfaces.
Let’s skip forward into the future for a moment, to the time when computer displays will be essentially free – akin to the cost of printing on paper today. Whether it will end up being embodied by some variant of E Ink, organic LEDs, or something else entirely, this is not such a farfetched scenario. Give it another fifteen years or so, and we’ll probably have some sort of fairly pervasive and low cost electronic wallpaper.
I’ve been wondering recently, who will get to decide what’s printed on that wallpaper? Let’s say you’re sitting in a restaurant with your friend or spouse. Like most public places, the restaurant wall will consist of changeable paint or wallpaper. Various locations along the wall will be able to display text, images, animations, or whatever else happens to be of interest.
Some uses are obvious: the dinner menu, including special of the day and wine list. News headlines, theatre listings, bus schedules, Op-ed pages. But other uses are not so obvious.
For example, ideally I would like the section of wall in front of me to serve my personal purposes – as though I’m looking at a Web browser on my own computer. I’d want the wall to recognize me and bring up that magazine article or movie I was in the middle of viewing earlier in the day. We will begin to see that sort of capability in just the next few years, through the use of face recognition software.
And it won’t be all that difficult to design this electronic wallpaper so as to show something different to people who are looking at it from different directions. The means to do this is already around – essentially a variant on lenticular lens displays, which have been used for over half a century to make stereo postcards and blinking Jesus pictures.
But will we really get our own customized wall space, wherever and whenever we want it? Perhaps the walls around us will be given over to narrowly targeted advertising – like the nightmare vision of personalized intrusiveness we were shown in the film “Minority Report”. Maybe the dictates of “homeland security” will require that face recognition software be used for reporting our whereabouts to some helpful government agency. In that case you might want to think twice before choosing that left-leaning Op-ed page to read over dinner.
Information utopia or dystopia – who gets to decide? Somehow I suspect that the reality may fall short of our hopes and dreams. But it would be nice to be proven wrong.
On a cold day in New York
Even on a cold day like today in New York, even when I have just spent a week in a delightful little European town, with enchanting Italian scenery, fine wine and brilliant company, it is good to be home, back in Manhattan, in all of its crazy madcap nonstop wonderfulness.
The swirling crowds that sweep you along, the fire in the eyes of people always on the go, the pervasive sense of urgency that practically bubbles up through the pavement, it all just fills my soul with joy.
I know that New York is not for everybody. But for some of us, it is heaven.
Program notes
My last few posts have been circling around my experiences this past week at the fabulous VIEW Conference in Torino, Italy. I’ve been trying to let it all settle in my brain before writing anything definitive.
The program got off to a really great start with Will Wright’s keynote lecture, in which he used a description of his game SPORE to issue several grand challenges. One of those challenges was to figure out an accessible way to let game players direct critters like the ones in SPORE while giving those players real programming power – not just the sorts of combination-through-menu-selection that a game like SPORE currently provides.
The next day I gave my talk, and I answered Will’s challenge by leading the audience in playing a computer game I’d written, which happens also to be a programming language. As you play the game, you send critters around a game board to play sequences of musical notes, somewhat like in SimTunes. The difference is that when you play this game you are actually creating loops, conditionals, setting variables, all the tools of programming. But it doesn’t feel like programming as most people currently know it; it feels like playing a game.
The audience was totally into it. They were enjoying the game, and they were also getting the point that as we were directing the game characters to roam around the board creating melodies, we were actually programming a computer.
I think they also realized that unlike traditional programming, we were engaged in an activity in which all choices lead to a kind of success. Contrast this with the standard approach to teaching programming, which seems too brittle to most learners: either you “get it” – i.e., here’s how you write a loop, this is where you need to insert a conditional – or you don’t. And if you don’t get it, then your programs don’t run, end of story.
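The general idea of programming-by-movement can be sketched in a few lines. This is emphatically not my reconstruction of the actual game from the talk (whose rules I don’t have); it is a hypothetical miniature in the same spirit, where a critter’s path across a grid of note cells is the “program”, and walking the same path again is how a loop gets expressed:

```python
# Hypothetical illustration: the cells of a tiny game board that play
# notes when a critter steps on them (layout and names invented here).
NOTES = {
    (0, 0): "C", (1, 0): "E", (2, 0): "G",
}

def play(path, repeats=2):
    """Walk the critter along `path` `repeats` times, collecting a melody.

    The player never types a `for` statement; sending the critter around
    the board again IS the loop. A cell could likewise branch the path,
    playing the role of a conditional.
    """
    melody = []
    for _ in range(repeats):          # the loop the player's path encodes
        for cell in path:
            if cell in NOTES:
                melody.append(NOTES[cell])
    return melody

print(play([(0, 0), (1, 0), (2, 0)]))  # ['C', 'E', 'G', 'C', 'E', 'G']
```

The point of the miniature is the same as the point of the talk: the player experiences melody-making, while the structure underneath is ordinary control flow.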
Brad Lewis was the last speaker on the VIEW program – he gave the closing keynote. Brad is a wonderful producer at PIXAR (e.g., Ratatouille), and a man with a long and storied career. He was the speaker I was praising the other day, for his great observation that in order to truly succeed we must also embrace our failures, that only by being ready to accept those failures can we become free to explore and try new things.
After a day or so of mulling over in my mind what I had heard, I realized that I was groping toward a synthesis of these thoughts from Will and Brad. And I see now what that synthesis is: That engagement in “play” is, at its core, the hearty and enthusiastic embrace of the possibility of failure – when we are at play, failure holds no fear.
This notion provides an underpinning for our entire enterprise of using games for learning. Games are the things that invite us into a “magic circle” – a place where our actions have no dire consequences. If failure modes on the way to learning are presented as a game – as fun paths to explore – then kids can learn without fear.
I think that might also help explain why people of certain political persuasions are mistrustful of using games for learning. Fear is a useful political weapon – if you can drill it into kids early and often, then you can prepare them to be compliant citizens, unwilling to question authority.
A child who grows up learning and thinking without the cudgel of fear is the bane of an authoritarian society. I, on the other hand, am quite happy to help create a generation of children who are comfortable saying to themselves “Yes I can.”