Phantom limbs of the soul

Last night I had the oddest thought: When person A feels a sense of longing for person B, perhaps deep down it is not actually person B who is being longed for.

Rather, somewhere inside the mind of person A, they have labeled a part of themselves as “residing within person B”. In some important emotional sense, person A has placed a portion of their sense of self into the identity of another soul, to be worn like a phantom limb. A common way for lovers to describe this feeling is “I am yours”.

If person B reciprocates this operation, the resonance can create a tremendous mutual sense of euphoria. Each lover perceives a part of their essential self within the person of the other, which creates a feeling of heightened existence for both.

Alas, either A or B might one day drift away, for whatever reason. This leaves the other person feeling that a part of their own self has been severed. The resultant loss can bring about a sense of mourning, as though there has been a death.

Yet within mere months following such a loss, these phantom limbs of the soul will, of their own accord, fade away.

Intellectual flexibility

I was having a conversation today about career choices, in which my friend and I realized that careers can be roughly ordered along a scale of what might be called “intellectual flexibility”.

The basic question here is “How much freedom do I have, in this career, to go wherever my mind takes me?”

There are some quite intellectually challenging disciplines, such as Law, where such freedom is sharply bounded. Yes, you can have freedom (and the study of law contains fascinating intellectual challenges), but only within a context in which 99% of the field is beyond your ability to change — since the law has a long-established set of precedents.

At the other extreme, a writer can pretty much write about any ideas, and a painter can paint just about any image. There are virtually no externally imposed limits.

I can safely say, based on my direct experience, that being a professor of computer science at NYU is a lot more like being a writer or a painter than like being a lawyer.

I think this is a good thing. 🙂

A touch of the future

This evening I was on a panel where the topic was the future of user interfaces. At some point I riffed on the very ideas I discussed here last week — beginning with last week’s announcement of FDA approval for artificial retinas.

I suggested that if we wind forward by another twenty-five years or so, those artificial retinas will improve to the point where every American parent will demand that their child get implants — simply so that the child can stay competitive with all of the other children.

This will lead to a world where everyone will have augmented reality — the technology will move out of those little round SmartPhone boxes and migrate into our bodies. We will eventually cease to see this development as “technology”. It will just be normal.

In this new normal, you and I will be able to perceive virtual objects anywhere, including the empty space between us.

Of course we will want to touch those objects. And that’s the point when everyone will get finger implants. After which we will simply come to see such objects as real, just like we now tell ourselves that all our other artificial objects — couches, chairs, cars — are real.

I was heartened by the fact that people completely accepted this radical vision of the future. In fact, over dinner afterward, somebody said: “Twenty five years? Do you really think it will take that long?”

Essentialism

I noticed a pattern in last night’s winners of the Academy Awards. In just about every case, the winner was essential to the film, perhaps even the decisive factor in making it a success.

“Silver Linings Playbook” is, at heart, a soppy and sentimental romantic comedy. But Jennifer Lawrence’s fierce performance lifted it, in its finest moments, into something much more. Unlike her costar Bradley Cooper, who played his nominally borderline character pretty much “cute and adorable” (the staple traits of RomCom leads), Lawrence made us believe that she was actually dangerous, that there was a dark core running through her character which at any moment might tip over into violence. To me this made the film far more watchable — even interesting — until the movie went all soft and soppy and lost its edge.

And of course I’ve already written here about Anne Hathaway in “Les Miserables”. Without her transformative portrayal of Fantine, and that one extraordinary and pivotal scene, the film would have been remembered at best as a failed experiment, and at worst as an embarrassment.

Similarly, Daniel Day-Lewis was the saving grace of “Lincoln”. An otherwise ponderous and over-inflated affair, the film would have sunk beneath its own self-important weight, were it not for its lead’s surprisingly nimble and impish portrayal of our 16th president. I strongly suspect that the Abe Lincoln portrayed by Daniel Day-Lewis would have loved Seth MacFarlane’s irreverent turn as Oscar host. Including the joke about John Wilkes Booth.

Maybe especially the joke about John Wilkes Booth.

Jumping in

Knowing that I will be going to an exciting Oscar viewing this evening, I had a very low-key day today. Sadly, I reached the last of the first thirteen episodes of “House of Cards”. It’s one of those shows that provides such shamelessly wicked fun that you wish it would go on forever.

Mostly because of Kevin Spacey, who seems to have morphed from a mere actor to a kind of God of charisma. Sort of the way Jeremy Irons did in a slightly earlier era, and Peter O’Toole before him. You simply cannot look away.

I am hoping to share in these pages some of our latest research as soon as it is ready for prime time, so today I spent some time cleaning that up and getting it ready. The key question revolves around how seamlessly and gracefully one can combine documents and computer programs. Why not let the reader also be a programmer?

Of course in order for such a thing to work for most people, this all needs to be made accessible — and fun. I hope, dear readers, that you will not object to jumping in and joining me in playing with these ideas in the coming days. 🙂
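To give a flavor of the idea (just a toy sketch in Python, my own illustration rather than the actual research), imagine a “live document” in which prose is interleaved with little editable code cells whose output is recomputed in place:

    # Toy sketch only: a "document" whose cells are either prose or editable code.
    # A real system would need an editor, sandboxing, and graceful error handling.
    document = [
        ("text", "A bouncing ball loses some of its height on every bounce."),
        ("code", "height = 100.0\n"
                 "for bounce in range(5):\n"
                 "    height *= 0.6\n"
                 "    print(round(height, 1))"),
        ("text", "Try changing the 0.6 and re-running to see how the decay changes."),
    ]

    def render(doc):
        """Show prose as-is; run each code cell and show its output inline."""
        for kind, body in doc:
            if kind == "text":
                print(body)
            else:
                print("--- code (the reader can edit this) ---")
                print(body)
                print("--- output ---")
                exec(body, {})  # a real system would sandbox this
            print()

    render(document)

The point is not this particular toy, but the reading experience: the document itself invites you to poke at it.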

Banana floats

Saw “Life of Pi” yesterday evening. Beautiful computer graphics, in glorious 3D! And to use the wording of the Academy (the awards being right around the corner), they definitely “serve the film”.

As my friend Michael Wahrman has pointed out:

“Its success is our success, all of us who worked so hard to make CG work for film against the active opposition and indifference and lack of funding, etc. If there is any vindication for the sacrifices we made, it is the existence and success (in many senses of the word success) of films like Life of Pi.”

Well said, Michael!

It is also the first Hollywood film that ever led me to commit an act of science. As soon as I got home after seeing this movie, the very first thing I did was go into the kitchen, fill a bowl with water, and then drop in a banana.

If you haven’t seen the film, this might not make sense to you. If you have, I suspect you did exactly what I did. In fact, I imagine that all across the United States over these last weeks, millions of curious moviegoers have returned home after seeing “Life of Pi”, grabbed the nearest bowl and banana, and performed their own empirical studies.

We are, after all, a nation of tinkerers. I wouldn’t be at all surprised to find that the makers of this film are receiving kickbacks from the Dole and Chiquita Banana companies.

The value of a bad demo

I often give perfectly good demos of whatever it is I’ve been working on. Afterward, I feel good, the audience feels good, we all feel good.

But every once in a while I give a demo that doesn’t go so well. And sometimes it’s just a total disaster. Everything goes wrong, the entire thing crashes and burns, and my poor broken ego is left to pick up the pieces, both my self-confidence and my faith in the Universe badly shaken.

I’ve learned over time that these failed demos, as stressful as they inevitably are, are the best fuel to fire creativity. When I’m feeling fat and comfortable, I tend to become lazy. “Hey,” I tell myself, “everything is great!” And that’s when things tend to stagnate.

But after a true failure, my survival instincts come roaring up from wherever they usually hide. In those few days after a disappointing talk, or a demo that has gone horribly wrong, I’ve generally done my best work. Suddenly the cobwebs clear away, my mind is focused and sharp, and creativity begins to flow.

Apparently, nothing succeeds like a good failure.

Cultural subtitles

A few years ago I saw the 1953 Ethel Merman film “Call Me Madam” — appropriately enough (if you know the film) it was part of the in-flight entertainment on an international flight. There were lots of moments when one character or another would say something that was clearly meant to be funny, but that to me was simply mystifying. At some point I realized that these were in-jokes — up-to-the-minute political or cultural references that most likely, sixty years ago, seemed very witty and knowing.

This memory has been on my mind in recent weeks, and just today I realized why: I had much the same experience several weeks ago seeing Shakespeare’s “Much Ado About Nothing” on stage (which I wrote about here on February 4).

This delightful play is filled to the brim with the very latest puns and verbal twists of 1598. Alas, unless you are a Shakespearean scholar, most of these clever moments will sail right over your post-Elizabethan head. As a friend of mine pointed out, it’s a bit like listening to Abbott and Costello’s “Who’s on First” routine if you’ve never heard of baseball.

Now that everything is on DVD, with subtitle options in just about every language, why can’t they include an option for cultural subtitles? Topical jokes, political references, names of products, actors and other celebrities, mentions of “23 Skidoo” and other lexical mysteries would all be explained for the uninitiated.

For recent cinematic and television offerings, this should be done immediately, in the cultural moment. If nothing else, think of all those poor future literary scholars who may spend years trying to parse the meaning of “Snooki”.

The souls of departed geniuses

Yesterday the guy who invented blue screen and green screen passed away. Unless you know something about the technology of film production, this might not mean much to you.

Basically, if you’ve seen a science fiction film, if you’ve experienced any sort of fantasy world or alternate universe on screen, or if you simply possess a world view that is informed — in some deep if mysterious way — by the vision of Dick Van Dyke dancing with penguins, then this man has touched your life.

We seem to be experiencing an epidemic of such sad passings. Only weeks ago we lost the man who invented the Etch-A-Sketch. What could possibly be more beautiful, more poetic, more filled with possibilities for annihilating the gap between C.P. Snow’s two cultures, than the empowerment of young children to create art by direct manipulation of the X and Y axes?

And now these two gentlemen are both in the great beyond. What will happen now is a matter for metaphysical speculation, yet we can entertain the possibilities.

Perhaps they will meet in the afterlife, these giants of visual invention. If one thing leads to another, they will join forces, combining their respective expertise. Perhaps they might even seek out the soul of the late Fritz Fischer, recognizing in the inventor of the Eidophor system a kindred spirit.

Are there startups in the afterlife? Do the souls of departed geniuses draw together, seeking to create joint ventures in the great hereafter?

If so, I wonder whether they are open to angel investors.

Making brains

I had some interesting conversations at AAAS on the topic of Artificial Intelligence. In particular around the question: “Can we replicate the functionality of the human brain?”

Everyone I ran into who does actual scientific research on the human brain just shook their heads at the idea of creating an artificial human brain. Their arguments were twofold: (1) We still, after all this time, have no idea whatsoever how to model the brain, and (2) From what we know, the hardware complexity required to replicate just the low-level neural activity in a single brain is vastly beyond the combined power of all of the world’s CPUs, even if it turns out that what the brain does is Turing computable in any practical sense.

Furthermore, they don’t think what the brain does is Turing computable in any practical sense. And don’t even get them started on Ray Kurzweil.

On the other hand, pretty much everyone else I spoke with — people who don’t know much about the subject — seemed firmly convinced that we will have an artificial human brain within the next ten years (except for a skeptical few, who thought it might take as much as twenty years).

These non-neuroscientists, generally quite intelligent and informed people, responded to any suggestion that replicating the functionality of the human brain might be out of reach by simply rolling their eyes, while saying things like “Hey, they once thought human flight was impossible.”

Somewhere in here is an interesting story about the extreme disparity of opinion between (1) those who have spent years studying the brain and (2) everyone else.

I’m just not quite sure yet what that story is.