Jumping in

Knowing that I will be going to an exciting Oscar viewing this evening, I had a very low-key day today. Sadly, I reached the last of these first thirteen episodes of House of Cards. It’s one of those shows that provides such shamelessly wicked fun, you wish it would go on forever.

Mostly because of Kevin Spacey, who seems to have morphed from a mere actor to a kind of God of charisma. Sort of the way Jeremy Irons did in a slightly earlier era, and Peter O’Toole before him. You simply cannot look away.

I am hoping to share in these pages some of our latest research as soon as it is ready for prime time, so today I spent some time cleaning it up and getting it ready. The key question revolves around how seamlessly and gracefully one can combine documents and computer programs. Why not let the reader also be a programmer?

Of course in order for such a thing to work for most people, this all needs to be made accessible — and fun. I hope, dear readers, that you will not object to jumping in and joining me in playing with these ideas in the coming days. 🙂
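To make the idea a little more concrete, here is a toy sketch in Python. It is purely an illustration of the flavor of the thing, emphatically not the system we are actually building, and every name in it (render, toy_page) is invented for this example. The document is just prose and code interleaved, and reading the page means running it.

    # A toy sketch of a "live document": prose and code are interleaved in
    # one structure, and rendering the page means running the code pieces
    # and weaving their printed output back into the text.

    import contextlib
    import io

    def render(document):
        """Render (kind, text) pieces, executing the "code" pieces as Python."""
        scope = {}      # shared namespace, so later code can build on earlier code
        pieces = []
        for kind, text in document:
            if kind == "prose":
                pieces.append(text)
            else:                       # kind == "code"
                buffer = io.StringIO()
                with contextlib.redirect_stdout(buffer):
                    exec(text, scope)   # this is the part the reader could edit
                pieces.append(buffer.getvalue().rstrip())
        return "\n".join(pieces)

    toy_page = [
        ("prose", "A circle of radius 3 has area:"),
        ("code",  "import math\nr = 3\nprint(round(math.pi * r * r, 2))"),
        ("prose", "Change the radius in the code above and the page changes too."),
    ]

    print(render(toy_page))

Run it and the little page prints itself, with the computed area appearing right where the prose promised it. Edit the radius in the embedded code and the document tells a different story. That, in miniature, is the kind of playing I have in mind.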

Banana floats

Saw “Life of Pi” yesterday evening. Beautiful computer graphics, in glorious 3D! And to use the wording of the Academy (the awards being right around the corner), they definitely “serve the film”.

As my friend Michael Wahrman has pointed out:

“Its success is our success, all of us who worked so hard to make CG work for film against the active opposition and indifference and lack of funding, etc. If there is any vindication for the sacrifices we made, it is the existence and success (in many senses of the word success) of films like Life of Pi.”

Well said, Michael!

It is also the first Hollywood film that ever led me to commit an act of science. As soon as I got home after seeing this movie, the very first thing I did was go into the kitchen, fill a bowl with water, and then drop in a banana.

If you haven’t seen the film, this might not make sense to you. If you have, I suspect you did exactly what I did. In fact, I imagine that all across the United States over these last weeks, millions of curious moviegoers have returned home after seeing “Life of Pi”, grabbed the nearest bowl and banana, and performed their own empirical studies.

We are, after all, a nation of tinkerers. I wouldn’t be at all surprised to find that the makers of this film are receiving kickbacks from the Dole and Chiquita Banana companies.

The value of a bad demo

I often give perfectly good demos of whatever it is I’ve been working on. Afterward, I feel good, the audience feels good, we all feel good.

But every once in a while I give a demo that doesn’t go so well. And sometimes it’s just a total disaster. Everything goes wrong, the entire thing crashes and burns, and my poor broken ego is left to pick up the pieces, both my self-confidence and my faith in the Universe badly shaken.

I’ve learned over time that these failed demos, as stressful as they inevitably are, are the best fuel to fire creativity. When I’m feeling fat and comfortable, I tend to become lazy. “Hey,” I tell myself, “everything is great!” And that’s when things tend to stagnate.

But after a true failure, my survival instincts come roaring up from wherever they usually hide. In those few days after a disappointing talk, or a demo that has gone horribly wrong, I’ve generally done my best work. Suddenly the cobwebs clear away, my mind is focused and sharp, and creativity begins to flow.

Apparently, nothing succeeds like a good failure.

Cultural subtitles

A few years ago I saw the 1953 Ethel Merman film “Call Me Madam” — appropriately enough (if you know the film) it was part of the in-flight entertainment on an international flight. There were lots of moments when one character or another would say something that was clearly meant to be funny, but that to me was simply mystifying. At some point I realized that these were in-jokes — up-to-the-minute political or cultural references that most likely, sixty years ago, seemed very witty and knowing.

In recent weeks I have noticed this memory coming back to mind. Just today I realized why: I had much the same experience several weeks ago seeing Shakespeare’s “Much Ado About Nothing” on stage (which I wrote about here on February 4).

This delightful play is filled to the brim with the very latest puns and verbal twists of 1598. Alas, unless you are a Shakespearean scholar, most of these clever moments will sail right over your post-Elizabethan head. As a friend of mine pointed out, it’s a bit like listening to Abbott and Costello’s “Who’s on First” routine if you’ve never heard of baseball.

Now that everything is on DVD, with subtitle options in just about every language, why can’t they include an option for cultural subtitles? Topical jokes, political references, names of products, actors and other celebrities, mentions of “23 Skidoo” and other lexical mysteries would all be explained for the uninitiated.

For recent cinematic and television offerings, this should be done immediately, in the cultural moment. If nothing else, think of all those poor future literary scholars who may spend years trying to parse the meaning of “Snooki”.
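If I were to sketch what such a track could look like (and this is nothing more than a hypothetical sketch, with invented timestamps and entries), it might be a file of timed annotations, exactly like an ordinary subtitle track except that each entry explains rather than transcribes:

    # A hypothetical sketch of a "cultural subtitle" track: timed annotations,
    # much like an ordinary subtitle file, except that each entry carries an
    # explanation rather than dialogue. Timestamps and entries are invented.

    from dataclasses import dataclass

    @dataclass
    class CulturalNote:
        start: float    # seconds into the film
        end: float      # when the note should disappear
        line: str       # the puzzling line as spoken
        note: str       # the explanation for the uninitiated

    track = [
        CulturalNote(754.0, 760.0, "23 Skidoo!",
                     "Early twentieth-century slang for making a quick exit."),
        CulturalNote(1312.5, 1318.0, "Snooki",
                     "Cast member of the reality show 'Jersey Shore' (2009-2012)."),
    ]

    def notes_at(track, t):
        """Return the explanations that should be shown at time t (in seconds)."""
        return [n.note for n in track if n.start <= t <= n.end]

    print(notes_at(track, 756.0))

A player would simply show whatever notes_at returns alongside the ordinary subtitles, and the uninitiated viewer of 2073 would be spared an hour of puzzled googling.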

The souls of departed geniuses

Yesterday the guy who invented blue screen and green screen passed away. Unless you know something about the technology of film production, this might not mean much to you.

Basically, if you’ve seen a science fiction film, if you’ve experienced any sort of fantasy world or alternate universe on screen, or if you simply possess a world view that is informed — in some deep if mysterious way — by the vision of Dick Van Dyke dancing with penguins, then this man has touched your life.

We seem to be experiencing an epidemic of such sad passings. Only weeks ago we lost the man who invented the Etch-A-Sketch. What could possibly be more beautiful, more poetic, more filled with possibilities for annihilating the gap between C.P. Snow’s two cultures, than the empowerment of young children to create art by direct manipulation of the X and Y axes?

And now these two gentlemen are both in the great beyond. What will happen now is a matter for metaphysical speculation, yet we can entertain the possibilities.

Perhaps they will meet in the afterlife, these giants of visual invention. If one thing leads to another, they might join forces, combining their respective expertise. They might even seek out the soul of the late Fritz Fischer, recognizing in the inventor of the Eidophor system a kindred spirit.

Are there startups in the afterlife? Do the souls of departed geniuses draw together, seeking to create joint ventures in the great hereafter?

If so, I wonder whether they are open to angel investors.

Making brains

I had some interesting conversations at AAAS on the topic of Artificial Intelligence, in particular around the question: “Can we replicate the functionality of the human brain?”

Everyone I ran into who does actual scientific research on the human brain just shook their heads at the idea of creating an artificial human brain. Their arguments were twofold: (1) We still, after all this time, have no idea whatsoever how to model the brain, and (2) From what we know, the hardware complexity required to replicate just the low-level neural activity in a single brain is vastly beyond the combined power of all of the world’s CPUs, even if it turns out that what the brain does is Turing computable in any practical sense.

Furthermore, they don’t think what the brain does is Turing computable in any practical sense. And don’t even get them started on Ray Kurzweil.
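To get a feel for why the neuroscientists shake their heads, here is a rough back-of-the-envelope sketch. Every number in it is either a commonly cited ballpark (on the order of 10^14 to 10^15 synapses in one brain) or an assumption I am inventing purely for illustration, most of all the cost of modeling a single synaptic event, which is precisely the thing nobody agrees on.

    # A back-of-the-envelope sketch of the scale argument. Every constant
    # below is either a commonly cited ballpark or an outright assumption.

    synapses           = 1e15   # roughly 10^14 to 10^15 synapses in one brain
    avg_firing_rate_hz = 1.0    # assumed average spike rate per neuron

    # The arithmetic cost of modeling one synaptic event depends entirely on
    # how much biological detail you believe is necessary, which is exactly
    # the point of disagreement. These figures are assumptions, not data.
    ops_per_event = {
        "bare spiking model":         1e1,
        "ion-channel level dynamics": 1e4,
        "full biochemical machinery": 1e7,
    }

    for level, ops in ops_per_event.items():
        total = synapses * avg_firing_rate_hz * ops
        print(f"{level:30s} ~{total:.0e} operations per second")

    # For comparison, a 2013-era top supercomputer delivers on the order of
    # 1e16 floating point operations per second, so only the crudest model
    # above even lands in that neighborhood.

Under these (admittedly made-up) assumptions, only the most cartoonish spiking model comes anywhere near a single top supercomputer, and the levels of detail the researchers actually consider relevant run thousands to millions of times past it.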

On the other hand, pretty much everyone else I spoke with — people who don’t know much about the subject — seemed firmly convinced that we will have an artificial human brain within the next ten years (except for a skeptical few, who thought it might take as much as twenty years).

These non-neuroscientists, generally quite intelligent and informed people, responded to any suggestion that replicating the functionality of the human brain might be out of reach by simply rolling their eyes, while saying things like “Hey, they once thought human flight was impossible.”

Somewhere in here is an interesting story about the extreme disparity of opinion between (1) those who have spent years studying the brain and (2) everyone else.

I’m just not quite sure yet what that story is.

All that we touch

“Humans are the tool makers of the world” is a well-known trope. At the AAAS meeting yesterday, neuroscientist Miguel Nicolelis asserted that this concept doesn’t go far enough in describing the nature of humans.

Speaking of the brain’s relationship to the body, he said: “We are not just tool makers, we are tool assimilators.” Specifically, as we use our brains to make tools, those tools become extensions of our bodies. A human brain operates by continually extending its concept of “body”, mentally assimilating ever more of the world to form a more powerful virtual body.

Any tool that we craft or use becomes part of this extended body — a hammer, a piano, an automobile, a computer. As our brains create a mental map of each new tool, that tool becomes part of the brain’s ever-extending reach, like another set of hands.

Over time, whatever we can manipulate becomes absorbed into our brain’s virtual body, and all that we touch becomes us.

Maybe this isn’t such a good idea

Today at a session of the American Association for the Advancement of Science on the topic of direct brain/body interfaces, one of the speakers was a devout Christian. The entire focus of his talk concerned the moral implications “as a Christian” (his words) of everything the other speakers had been discussing. He wondered aloud whether God would approve such doings, whether advancing technology is compromising our sacred humanity, and what it all might mean for our immortal souls.

To put this in context, the other speakers had been very thoughtful about ethical questions. Not one of them had merely discussed the technology. Rather, each presentation had included carefully nuanced points about what a direct brain/body communication interface might mean for privacy, patients’ rights, interpersonal relationships, the limits of government intervention and other matters.

And yet, suddenly, God was in the room. At a conference about science, we were treated to such phrases as “God, who created us all”, and similar sentiments. I have to admit that my very first thought was “What the hell?”

It could be argued that we scientists have no right to expect a safe place to discuss evidence-based reasoning, that the special privilege of some particular religion or other is so paramount in our society that a dominant faith has free license to grandstand in the middle of any scientific discussion, trampling over the principles of logical inference and empirical evidence.

But does it go both ways? Do scientists have the right to force their way into the nearest church, perhaps in the middle of the most sacred and holy rites, and shove the priest aside in the name of science?

“Get out of the way,” I can envision them shouting, this gang of rogue empiricists with no respect for decorum, “we are here to conduct some experiments!”

As these scientists, having taken the church by force, rudely sweep the holy wine and bread of Christ onto the floor to set up their beakers and test tubes upon the sacred altar of God, could the stunned priest really be faulted for wondering, “Maybe this isn’t such a good idea”?

Race condition

Today, at the annual AAAS meeting, I attended a great talk by Nina Jablonski explaining very clearly and unambiguously why “race” (as in black, white, etc.) is a complete myth. Interestingly, she noted that in the U.S., health agencies still use the concept of race — apparently because it makes everyone feel comfy, even though scientifically it has no meaning whatsoever.

I learned that Lucretius was apparently the first person to classify people by color. In his early work he was value-neutral, but about ten years later he started associating personality with skin color.

But it seems that the real villain was Immanuel Kant. He was the first to start ranking people, based on their skin color, from inferior to superior. Because he was a well-regarded thinker, this nonsense was taken seriously.

The rest is history.

The key high-order bit of the actual science is that dark skin is highly selected for in dry equatorial climates (where people with light skin tend to die off because UV-B from sunlight attacks their folic acid, which is necessary for proper embryonic development), whereas light skin is highly selected for far away from the equator (because absorbing some UV-B is necessary for vitamin D production, without which bones cannot grow properly).

Various populations have changed from dark to light and back again quite often over the last 70,000 years (since the first humans wandered out of Africa). For example, the ancestors of many people now living in southern India went from dark to light to dark again.

During the first 130,000 years of humanity’s existence, everyone lived in Africa. Genetic diversity during that time was vast, yet of course all those genetically diverse peoples were dark skinned because of selection for protection against UV-B.

Meanwhile, the light skin of Caucasians and of East Asians evolved via completely different mutations. In both cases, some genetic mutation arose that helped guard against deficiency in vitamin D — but implemented by unrelated genetic pathways.

So in reality, it’s all a tangle of genetically diverse subpopulations. Yet in the U.S. we still indulge in the fantasy that there is something genetically meaningful about such words as “black” or “white”.

Vision for the future

An article on the front page of today’s New York Times caught my eye. It marks FDA approval, after ten years of research and development, of a working artificial retina.

Of course the tech isn’t quite like an actual retina at this stage. Resolution is extremely low, color is pretty much non-existent, and the externally worn component of the device is large and unsightly. But for people who have had essentially no vision, it is transformative.

Those of you who have been reading this blog for a while will probably be able to see where this is going: Today the quality might be low, but eventually such a device will be as good as a natural retina, and then, at some point, it will be better.

Looking forward, as the technology improves this kind of implant will no longer be seen as a prosthetic to correct a problem, but as something integral to our everyday experience of the world, like electric lights, or cars, or clothing.

And then everything will change.