Archive for September, 2015

Every fifty years

Wednesday, September 30th, 2015

This past summer I saw a wonderful project by some graduate students in the Interactive Digital Media program at Trinity College. Noting that this year is the 150th anniversary of Alice in Wonderland, and the 100th anniversary of Einstein’s theory of general relativity, they created an original work of interactive art.

Much of the power and delight of Lewis Carroll’s classic comes from the way it warps time and space. Everything is relative, and notions of reality as a static frame of reference go out the window.

As the students observed, this is one of the fundamental predictions of Einstein’s theory. It was general relativity that definitively moved our view of reality itself away from the rigid framework of Newtonian mechanics, with its fixed notions of the nature of time and space.

The students created a clever assortment of interactive techno/art experiences that riffed on the connections twixt red shifts and Red Queens, manifolds and Mad Hatters, gravity and Gryphons. Much of it was quite delightful.

But why just those two points in time? What about 50 years ago? I got to wondering what might have happened in 1965 to shake up our fixed notions of time and space.

So I went on Wikipedia and started snooping around. Quite a few notable things happened that year, in music, politics, science, literature and many other domains of human interest.

But one event in particular jumped out at me: On April 19, 1965, Gordon Moore published a paper laying out the principle that came to be known as “Moore’s Law” — that computation would become exponentially less expensive with each passing year.

In hindsight, that paper was the shot across the bow. It effectively predicted that our nation’s economic engine was going to shift radically from industrial to informational, from an ecology of fixed resources to one of exponentially increasing resources.

The vision of the future that Moore predicted 50 years ago — the world that we live in today — is indeed in the spirit of Carroll and Einstein. For it is a world in which the meaning of time and space, how we move through them, how we use them to communicate with one another, seems to change with every successive doubling of computational power.
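The arithmetic behind that last sentence is worth spelling out. As a back-of-envelope sketch (the two-year doubling period is a common later formulation, not a figure from this post), the cumulative effect of repeated doublings over 50 years looks like this:

```python
# Back-of-envelope arithmetic for repeated doublings of computational
# power. Assumption (illustrative only): cost per unit of computation
# halves roughly every two years.

def cost_reduction_factor(years, doubling_period=2):
    """How many times cheaper computation becomes after `years`."""
    return 2 ** (years / doubling_period)

# 50 years at one doubling every two years is 25 doublings:
factor = cost_reduction_factor(50)
print(f"Over 50 years: roughly {factor:,.0f}x cheaper")
```

Twenty-five doublings is a factor of over 33 million, which is why the shift Moore described felt less like progress and more like a change in the nature of reality.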

Silly rabbit

Tuesday, September 29th, 2015

I was telling a friend over dinner this evening about Joe Harris, who came up with the immortal line “Silly rabbit, Trix are for kids!” — as well as the artwork and entire idea for the spot.

In an interview years later, Harris lamented how General Mills had misunderstood his beloved character. The entire spot is built on the rabbit not getting the breakfast cereal. The way he tells it, one year GM decided to let the rabbit get some Trix. The result? Sales of Trix went down — because now there was no drama.

I wonder how many other examples there are like this in popular culture: Producers give the audience what they think it wants, and the audience rebels. Because sometimes, not getting what it wants is exactly what an audience really wants.

Memory palaces and embodied cognition

Monday, September 28th, 2015

I have rather enjoyed the version of the memory palace portrayed in the current BBC series Sherlock. The idea of memory palaces goes way back — at least to Simonides of Ceos.

Specific memories are associated with an imagined physical architecture. In the mind of the mnemonist, each fact is placed in a particular location. One then simply needs to tour the imaginary building in one’s mind to retrieve each memory from its allotted place.

There is an argument to be made that the sort of future reality I have been describing in these posts — in which you can walk around freely, using your own physical body to visit imaginary places — would be the ideal interface in which to store a memory palace. Your own muscle memory, proprioception, body sense and geographic intuition could be integrated into the process of storing and retrieving memories.

Given what we now know about place cells and grid cells and how they operate, this seems like a very fruitful avenue of research to explore. These cells (place cells deep in the hippocampus, grid cells in the neighboring entorhinal cortex) turn out to be incredibly important.

In 2014 John O’Keefe and Edvard and May-Britt Moser won the Nobel prize for the discovery of such neurons, and for thereby showing that our ability to navigate the physical space around us is a fundamental part of how our memory works.

So perhaps the secret to future computer interfaces lies deep in our hippocampus, which is, by the way, a wonderful word. It comes from the Greek for “seahorse”.

Idea market

Sunday, September 27th, 2015

Sometimes in research we come up with a cool new way of doing things, but we aren’t always fully aware of all the purposes that new technique is good for. There are well known examples, like Silly Putty or the SMART Board, of inventions that were created for one purpose or market sector, but found their true utility elsewhere.

I wonder whether it would make sense to create a sort of on-line utility market, in which anybody can participate. The market would be a sort of “invention Craigslist”.

People who have invented (and presumably will seek patent protection for) novel enablements would be able to describe those new techniques in an on-line forum. Other participants in the forum could then suggest interesting uses for those techniques, and claim ownership of those uses.

In an economic sense, such a marketplace can benefit everyone involved, since each novel utility increases the potential value of the corresponding novel enablement — or combination of novel enablements.

Multidimensional starfield

Saturday, September 26th, 2015

It is obvious that there are similarities between actors from different eras of Hollywood. Audrey Hepburn and Amanda Seyfried share an elegantly elfin innocence with Leslie Caron; George Clooney and Cary Grant are the square-jawed, grown-up leading men with a sense of self-deprecating humor; Brad Pitt, Clark Gable and Hugh Grant are the irresistible charmers with a devilishly boyish streak; Jerry Lewis and Jim Carrey are the manically nutty comics with an undercurrent of bathos, an aspiration toward romantic lead, and a dash of occasional menace.

We recognize these similarities not merely as one individual to another, but in terms of various qualities that we associate with a Hollywood star. I suspect we could lay these qualities out in a multidimensional space, along such axes as childlike ↔ grown-up, masculine ↔ feminine, serious ↔ comic, upbeat ↔ tragic, knowing ↔ innocent, and so forth.

How many dimensions would we need to effectively classify all the major stars, and how accurately could we position each one, perhaps with the assistance of Amazon’s Mechanical Turk? With the right choice of dimensions, would they fall into constellations, with the constellation containing Clooney and Grant far away, in the celestial sphere, from the constellation that contains Pitt and Gable?
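The idea above is essentially an embedding problem, and even a toy version makes it concrete. In the sketch below, the axes and every coordinate are invented purely for illustration (real values might come from crowd ratings, as suggested), and “constellations” are just clusters of small pairwise distance:

```python
import math

# Toy sketch of the "multidimensional starfield" idea. Axes (all
# hypothetical): grown-up, serious, knowing, each scored in [0, 1].
# Every number below is invented for illustration.
stars = {
    "George Clooney": (0.9, 0.6, 0.9),
    "Cary Grant":     (0.9, 0.5, 0.9),
    "Brad Pitt":      (0.6, 0.5, 0.6),
    "Clark Gable":    (0.7, 0.6, 0.6),
    "Jim Carrey":     (0.2, 0.1, 0.3),
}

def distance(a, b):
    """Euclidean distance between two stars in quality space."""
    return math.dist(stars[a], stars[b])

# Stars in the same "constellation" should sit close together:
print(distance("George Clooney", "Cary Grant"))  # small
print(distance("George Clooney", "Jim Carrey"))  # large
```

With coordinates gathered at scale, standard clustering over such a space would answer the constellation question directly, and the number of axes needed for the clusters to stabilize would answer the dimensionality one.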

Hand waving

Friday, September 25th, 2015

One part of Minority Report that many people remember very vividly is when Tom Cruise, wearing those cool black gloves, waves his hands around, arms outstretched in front of him, to move virtual objects on what appears to be a holographic screen. There is something iconic about this image, and I am sure it has influenced how many people think of the future of computer interfaces.

There has also been quite a bit of grumbling about this vision of interfacing with a computer. There is something awkward about needing to hold your arms out in front of you for extended periods of time. Even John Underkoffler, the real-life MIT researcher who designed this interface paradigm, has given up on it. Tom Cruise may look great doing all those arm movements, but they sure seem tiring.

But I think it all starts to make sense once we deconstruct Steven Spielberg’s probable intention in creating such an image. The fact that this is exactly the wrong vision for the future is precisely why it works so well.

In reality — as opposed to science fiction fantasies — the human body is supremely lazy, and that is a good thing. Our minds are incredibly good at moving our bodies, in performance of any given task, in a way that uses the smallest amount of energy.

You see this in just about every sort of human movement, from walking, to sitting down or standing up, to reaching for or throwing an object. No matter what the task, we have an uncanny ability to perform that task in an extremely energy conserving way.

This makes perfect sense from an evolutionary perspective. Food can be a scarce resource, and in day to day life there is no survival value in wasting energy on unnecessary movement.

The only situations in which such wasteful movements might be of use are where they carry a social message. Typically those situations come down to social dominance and sexual display. We deliberately move in an energy wasteful way to show that we can. By demonstrating our fitness, we prove a point to potential rivals or potential mates.

And this is precisely what Tom Cruise’s character is doing, at the level of storytelling. By having him perform these power gestures — gestures that nobody else in the film seems to be able to perform as well — Spielberg is telling us that John Anderton is the alpha male in our story. By doing that, he is usefully cluing in the audience to much that will happen later on in the film.

Ever the master visual storyteller, Steven Spielberg is not really interested in predicting what the technological future will look like. Rather, he is interested in guiding our emotions, via cinematic art, through a compelling character driven narrative.

Plates in the air

Thursday, September 24th, 2015

I was just talking to a colleague here at NYU who, like me, is insanely busy and oversubscribed. We both have lots of projects going on in parallel, and sometimes simply getting through the day seems to be one giant juggling act.

I told him my personal theory about these things. Some people — and I suspect he and I are both in this club — are plate jugglers. We work best when we are juggling lots of plates in the air at the same time.

Of course every once in a while you just can’t help yourself. You succumb to the temptation to look up and see how many plates there are in the air. And that’s when you notice that a large number of plates are hurtling down at you from above.

I told him that my usual strategy at such moments (and I suspect it is his strategy as well) is to look down, grab the nearest plate I see, and toss it in the air.

The inverse bandwidth law of linear time travel

Wednesday, September 23rd, 2015

There are various kinds of time travel story. Some, like Rian Johnson’s Looper, posit that time has many potential branches. When you travel back in time, you can change the course of history and thereby move your reality to another branch.

Others, like Robert Heinlein’s By His Bootstraps, posit that there is only a single time line. Anything you do by jumping back in time inevitably leads to the exact conditions that caused you to jump back in the first place.

I’m fascinated by the constraints imposed by this second kind of time travel story. Inevitably such stories convey a sense that free will is an illusion, since no matter what you do, you always end up in the same place.

But there is another interesting aspect to the linear time travel story: Higher bandwidth interventions lead to ever stranger realities. For example, if you could only send a single bit of information — true or false — from the future to, say, a year into the past, then it is reasonably plausible that whatever happens over the course of that year, the same bit value will always end up being sent back in time.

Even if a character in the story is actively trying to flip that bit, there are many reasonable storylines that could result in that character’s intent always being foiled. This is particularly true if we know that this is a linear time travel story, and that therefore paradoxes are not part of its fictional universe.

But every time we add more bits to the connection, things get a little nuttier. At the opposite extreme, imagine a live video feed that always shows reality as seen from one minute into the future. Whatever we try to do, our near future ends up being whatever is in that video. Clearly this scenario is vastly outside the bounds of any psychologically plausible narrative.

So the question I’m wondering about is this: What information bandwidth would be large enough to “break” a linear time travel story?

Imperfect crystals

Tuesday, September 22nd, 2015

At a meeting I attended earlier this week, a physicist who creates nano-scale materials with novel properties was explaining the intricacies of his research. Apparently, much of what he does is based around creating imperfect crystals, in which the carefully engineered imperfections impart exactly the right properties.

At one point he said: “To make the right imperfect crystal, you first need to figure out how to make a perfect crystal.” And that thought really resonated with me.

In my own work in computer graphics and animation, from texturing to 3D modeling to character animation, that’s one of the fundamental principles. You first need to ask yourself what the “perfect” version would be of, say, a marble texture, or a cloud shape, or a human walk.

Then you need to artfully add imperfections, to match the sorts of imperfections one would expect to see in the real world. In a sense, what people are really looking for is the structure through the noise.

You absolutely must have the structure, whether in an arm gesture or an ocean wave or a wisp of smoke — the perfect crystal. But you must also have just the right amount of imperfection.
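The principle can be made concrete with a tiny texturing sketch. The “perfect crystal” here is a clean sine stripe, like an idealized marble vein; the imperfection is a turbulence term that bends it. (The summed-sine turbulence below is a crude stand-in for a proper noise function such as Perlin noise, and every constant is illustrative.)

```python
import math

# Minimal sketch of "perfect structure plus controlled imperfection".
# The perfect pattern is a clean sine stripe; turbulence perturbs it.

def turbulence(x):
    """Crude stand-in for a noise function: a few incommensurate sines."""
    return sum(math.sin(x * f) / f for f in (3.1, 7.7, 13.3))

def marble(x, wobble=1.5):
    """Stripe value at x: a perfect stripe when wobble=0,
    a wandering marble-like vein otherwise."""
    return math.sin(x + wobble * turbulence(x))

perfect = [marble(x * 0.1, wobble=0.0) for x in range(10)]
imperfect = [marble(x * 0.1) for x in range(10)]
```

Turn `wobble` to zero and you get the sterile, obviously synthetic stripe; turn it up too far and the structure dissolves into mush. The interesting textures live in between.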

Symmetry and noise need each other in all things, including human relationships. The interplay between them is what tells us it’s real.

Fate steps in and sees you through

Monday, September 21st, 2015

I realize that it might not win me any coolness points to use a quote from Jiminy Cricket as the title of a blog post. Still, it’s probably the right thought to start off today’s discussion.

I’ve been thinking about fate. A general goes to war, makes some daring decisions based on inspired guesswork, and wins the war. The general is lionized, celebrated as a hero, and goes down in history as a great and inspiring personage.

Another general, in a different time and place, makes essentially the same decisions, based on the same imperfect information, but this time things don’t work out so well. The war is lost, the enemy triumphant. This general is branded as a coward and a traitor — or worse, an incompetent.

We make decisions all the time based on incomplete information, relying on our intuition to fill in the gaps. Sometimes things work out great, and sometimes not so great. Nearly always, we lay the credit or blame at our own feet.

Why do we do this? What is it about our human nature that drives us to insist that everything which happens to us was due to our own agency — either our own genius or our own damned fault?

Maybe, win or lose, life is just more interesting that way.