The value of a bad demo

I often give perfectly good demos of whatever it is I’ve been working on. Afterward, I feel good, the audience feels good, we all feel good.

But every once in a while I give a demo that doesn’t go so well. And sometimes it’s just a total disaster. Everything goes wrong, the entire thing crashes and burns, and my poor broken ego is left to pick up the pieces, both my self-confidence and my faith in the Universe badly shaken.

I’ve learned over time that these failed demos, as stressful as they inevitably are, are the best fuel to fire creativity. When I’m feeling fat and comfortable, I tend to become lazy. “Hey,” I tell myself, “everything is great!” And that’s when things tend to stagnate.

But after a true failure, my survival instincts come roaring up from wherever they usually hide. In those few days after a disappointing talk, or a demo that has gone horribly wrong, I’ve generally done my best work. Suddenly the cobwebs clear away, my mind is focused and sharp, and creativity begins to flow.

Apparently, nothing succeeds like a good failure.

Cultural subtitles

A few years ago I saw the 1953 Ethel Merman film “Call Me Madam” — appropriately enough (if you know the film) it was part of the in-flight entertainment on an international flight. There were lots of moments when one character or another would say something that was clearly meant to be funny, but that to me was simply mystifying. At some point I realized that these were in-jokes — up-to-the-minute political or cultural references that most likely, sixty years ago, seemed very witty and knowing.

In recent weeks this memory has been on my mind, and just today I realized why: I had much the same experience several weeks ago seeing Shakespeare’s “Much Ado About Nothing” on stage (which I wrote about here on February 4).

This delightful play is filled to the brim with the very latest puns and verbal twists of 1598. Alas, unless you are a Shakespearean scholar, most of these clever moments will sail right over your post-Elizabethan head. As a friend of mine pointed out, it’s a bit like listening to Abbott and Costello’s “Who’s on First” routine if you’ve never heard of baseball.

Now that everything is on DVD, with subtitle options in just about every language, why can’t they include an option for cultural subtitles? Topical jokes, political references, names of products, actors and other celebrities, mentions of “23 Skidoo” and other lexical mysteries: all of these would be explained for the uninitiated.
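To make the idea concrete, here is a toy sketch in Python of how such a track might be authored, using the standard SubRip (.srt) subtitle format that most players already understand. The timecodes and annotations are invented examples, not drawn from any real release.

    # Toy sketch of a "cultural subtitle" track, rendered in the
    # standard SubRip (.srt) format. All cues below are invented
    # examples for illustration.

    notes = [
        ("00:12:04,000", "00:12:09,000",
         "'23 skidoo': 1920s slang for making a quick exit."),
        ("00:31:40,500", "00:31:45,000",
         "Topical joke: a reference to a 1952 presidential campaign."),
    ]

    def to_srt(cues):
        # Each SRT cue is: index, "start --> end", text, blank line.
        lines = []
        for i, (start, end, text) in enumerate(cues, 1):
            lines += [str(i), f"{start} --> {end}", text, ""]
        return "\n".join(lines)

    print(to_srt(notes))

A player would then simply offer this as one more subtitle track, alongside the usual language options.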

For recent cinematic and television offerings, this should be done immediately, in the cultural moment. If nothing else, think of all those poor future literary scholars who may spend years trying to parse the meaning of “Snooki”.

The souls of departed geniuses

Yesterday the guy who invented blue screen and green screen passed away. Unless you know something about the technology of film production, this might not mean much to you.

Basically, if you’ve seen a science fiction film, if you’ve experienced any sort of fantasy world or alternate universe on screen, or if you simply possess a world view that is informed — in some deep if mysterious way — by the vision of Dick Van Dyke dancing with penguins, then this man has touched your life.

We seem to be experiencing an epidemic of such sad passings. Only weeks ago we lost the man who invented the Etch-A-Sketch. What could possibly be more beautiful, more poetic, more filled with possibilities for annihilating the gap between C.P. Snow’s two cultures, than the empowerment of young children to create art by direct manipulation of the X and Y axes?

And now these two gentlemen are both in the great beyond. What will happen now is a matter for metaphysical speculation, yet we can entertain the possibilities.

Perhaps they will meet in the afterlife, these giants of visual invention. If one thing leads to another, they will join forces, combining their respective expertise. Perhaps they might even seek out the soul of the late Fritz Fischer, realizing in the inventor of the Eidophor system a kindred spirit.

Are there startups in the afterlife? Do the souls of departed geniuses draw together, seeking to create joint ventures in the great hereafter?

If so, I wonder whether they are open to angel investors.

Making brains

I had some interesting conversations at AAAS on the topic of Artificial Intelligence, in particular around the question: “Can we replicate the functionality of the human brain?”

Everyone I ran into who does actual scientific research on the human brain just shook their heads at the idea of creating an artificial human brain. Their arguments were twofold: (1) we still, after all this time, have no idea whatsoever how to model the brain, and (2) from what we do know, the hardware complexity required to replicate just the low-level neural activity in a single brain is vastly beyond the combined power of all of the world’s CPUs, even if it turns out that what the brain does is Turing computable in any practical sense.
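To give a feel for that second point, here is a rough back-of-envelope sketch in Python. The neuron and synapse counts are commonly cited estimates, and the cost per synaptic event is purely an assumption for illustration; none of these figures come from the conversations themselves.

    # Back-of-envelope estimate of the compute needed just to track
    # low-level synaptic events in one brain. All figures are rough,
    # commonly cited estimates, assumed here for illustration only.

    NEURONS = 8.6e10                # ~86 billion neurons
    SYNAPSES_PER_NEURON = 1e4       # ~10,000 synapses per neuron
    MEAN_FIRING_RATE_HZ = 5         # average spike rate, roughly 1-10 Hz
    FLOPS_PER_SYNAPTIC_EVENT = 100  # assumed cost of one synapse update

    synapses = NEURONS * SYNAPSES_PER_NEURON
    events_per_second = synapses * MEAN_FIRING_RATE_HZ
    flops_needed = events_per_second * FLOPS_PER_SYNAPTIC_EVENT

    print(f"synapses:          {synapses:.1e}")           # ~8.6e14
    print(f"events per second: {events_per_second:.1e}")  # ~4.3e15
    print(f"FLOP/s needed:     {flops_needed:.1e}")       # ~4.3e17

Even with these charitable numbers, the answer lands more than an order of magnitude beyond the fastest supercomputer of the day (roughly 2e16 FLOP/s), and that is before modeling anything above the level of individual synaptic events.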

Furthermore, they don’t think what the brain does is Turing computable in any practical sense. And don’t even get them started on Ray Kurzweil.

On the other hand, pretty much everyone else I spoke with — people who don’t know much about the subject — seemed firmly convinced that we will have an artificial human brain within the next ten years (except for a skeptical few, who thought it might take as much as twenty years).

These non-neuroscientists, generally quite intelligent and informed people, responded to any suggestion that replicating the functionality of the human brain might be out of reach by simply rolling their eyes, while saying things like “Hey, they once thought human flight was impossible.”

Somewhere in here is an interesting story about the extreme disparity of opinion between (1) those who have spent years studying the brain and (2) everyone else.

I’m just not quite sure yet what that story is.

All that we touch

“Humans are the tool makers of the world” is a well-known trope. At the AAAS meeting yesterday, neuroscientist Miguel Nicolelis asserted that this concept doesn’t go far enough in describing the nature of humans.

Speaking of the brain’s relationship to the body, he said: “We are not just tool makers; we are tool assimilators.” Specifically, as we use our brains to make tools, those tools become extensions of our bodies. A human brain operates by continually extending its concept of “body”, mentally assimilating ever more of the world to form a more powerful virtual body.

Any tool that we craft or use becomes part of this extended body — a hammer, a piano, an automobile, a computer. As our brains create a mental map of each new tool, that tool becomes part of the brain’s ever-extending reach, like another set of hands.

Over time, whatever we can manipulate becomes absorbed into our brain’s virtual body, and all that we touch becomes us.

Maybe this isn’t such a good idea

Today at a session of the American Association for the Advancement of Science on the topic of direct brain/body interfaces, one of the speakers was a devout Christian. The entire focus of his talk concerned the moral implications “as a Christian” (his words) of everything the other speakers had been discussing. He wondered aloud whether God would approve such doings, whether advancing technology is compromising our sacred humanity, and what it all might mean for our immortal souls.

To put this in context, the other speakers had been very thoughtful about ethical questions. Not one of them had merely discussed the technology. Rather, each presentation had included carefully nuanced points about what a direct brain/body communication interface might mean for privacy, patients’ rights, interpersonal relationships, the limits of government intervention and other matters.

And yet, suddenly, God was in the room. At a conference about science, we were treated to such phrases as “God, who created us all”, and similar sentiments. I have to admit that my very first thought was “What the hell?”

It could be argued that we scientists have no right to expect a safe place to discuss evidence-based reasoning, that the special privilege of some particular religion or other is so paramount in our society that a dominant faith has free license to grandstand in the middle of any scientific discussion, trampling over the principles of logical inference and empirical evidence.

But does it go both ways? Do scientists have the right to force their way into the nearest church, perhaps in the middle of the most sacred and holy rites, and shove the priest aside in the name of science?

“Get out of the way,” I can envision them shouting, this gang of rogue empiricists with no respect for decorum, “we are here to conduct some experiments!”

As these scientists, having taken the church by force, rudely sweep the holy wine and bread of Christ onto the floor to set up their beakers and test tubes upon the sacred altar of God, could the stunned priest really be faulted for wondering, “Maybe this isn’t such a good idea”?

Race condition

Today, at the annual AAAS meeting, I attended a great talk by Nina Jablonski explaining very clearly and unambiguously why “race” (as in black, white, etc.) is a complete myth. Interestingly, she noted that in the U.S., health agencies still use the concept of race — apparently because it makes everyone feel comfy, even though scientifically it has no meaning whatsoever.

I learned that Linnaeus was apparently the first person to classify people by color. In his early work he was value-neutral, but in later editions he started associating personality with skin color.

But it seems that the real villain was Immanuel Kant. He was the first to start ranking people, based on their skin color, from inferior to superior. Because he was a well-regarded thinker, this nonsense was taken seriously.

The rest is history.

The key high-order bit of the actual science is that dark skin is highly selected for in dry equatorial climates (where people with light skin tend to die off because UV-B from sunlight attacks their folic acid, which is necessary for proper embryonic development), whereas light skin is highly selected for far away from the equator (because absorbing some UV-B is necessary for vitamin D production, without which bones cannot grow properly).

Various populations have changed from dark to light and back again quite often over the last 70,000 years (when the first humans wandered out of Africa). For example, the ancestors of many people now living in southern India went from dark to light to dark again.

During the first 130,000 years of humanity’s existence, everyone lived in Africa. Genetic diversity during that time was vast, yet of course all those genetically diverse peoples were dark skinned because of selection for protection against UV-B.

Meanwhile, the light skin of Caucasians and of East Asians evolved via completely different mutations. In both cases, some genetic mutation arose that helped guard against deficiency in vitamin D — but implemented by unrelated genetic pathways.

So in reality, it’s all a tangle of genetically diverse subpopulations. Yet in the U.S. we still indulge in the fantasy that there is something genetically meaningful about such words as “black” or “white”.

Vision for the future

An article on the front page of today’s New York Times caught my eye. It reports FDA approval, after ten years of research and development, of a working artificial retina.

Of course the tech isn’t quite like an actual retina at this stage. Resolution is extremely low, color is pretty much non-existent, and the externally worn component of the device is large and unsightly. But for people who have had essentially no vision, it is transformative.

Those of you who have been reading this blog for a while will probably be able to see where this is going: Today the quality might be low, but eventually such a device will be as good as a natural retina, and then, at some point, it will be better.

Looking forward, as the technology improves this kind of implant will no longer be seen as a prosthetic to correct a problem, but as something integral to our everyday experience of the world, like electric lights, or cars, or clothing.

And then everything will change.

Coincidence

One afternoon quite a few years ago, when “Sex and the City” was still on the air, I saw Christopher Noth — the actor who played Carrie Bradshaw’s love interest “Mr. Big” — walking with a friend near Columbus Circle in Manhattan. He was dressed in a nice suit, more or less like the one he generally wore on the TV show.

Objectively I knew that this was “Christopher Noth, the actor”, not “Mr. Big, the character”, yet as I saw him dressed like that, with the glamorous backdrop of Manhattan all around us, part of me could not help but think I was witnessing a TV character come to life.

This morning as I was walking along in Manhattan, I found myself, for the first time in years, thinking back on that moment. Perhaps it was because this is Valentine’s Day, when many New York couples dress up and act out their romantic fantasies — their own personal brand of “Sex and the City” come to life.

Then, early this evening, just after boarding an Eighth Avenue local heading uptown, I saw a man in a suit running to catch the train as the doors were closing — he barely made it through in time. It was Chris Noth himself, the embodiment of a certain fantasy of New York romance. I think I was the only passenger in that crowded subway car who recognized him.

I don’t believe in the supernatural, but seeing the man himself standing there, blessing all of those young Valentine’s couples with his presence, was way cool. Particularly on the very day I had been thinking about him.

I wonder how often that sort of thing happens.

What happens next

Recently, as I started watching a movie on video, a scene came up just before the opening credits in which a character was having a supremely happy and exciting experience. In that moment, I knew for certain that the character was doomed — in fact, would probably not survive past the opening credits.

In another recent viewing experience, I saw a supremely self-possessed character — one who had never been defeated — go confidently into battle, expecting an easy victory. Before the contest had even begun I was already cringing with dread at the horrible defeat I knew the character would suffer. The only question in my mind was how much the writers would pile on the shame and ignominy.

I don’t think it’s that I have ESP. Rather, I believe each of these scenes was designed in such a way that the audience is subliminally tipped off about what happens next.

In a sense, most commercial films are designed on rails: The audience wants to be surprised, but the filmmakers artfully ensure that on an unconscious level the audience will see the surprise coming. I believe this is thought of as good commercial filmmaking.

What if a commercial film were to offer a true surprise — without secretly telegraphing its punches? Could it still be successful?