Between game and story

Athomas raised a good point the other day about games from the studio Naughty Dog, such as “The Last of Us” and the “Uncharted” series. Those games are indeed highly cinematic, immersive worlds, with true character arcs and fairly linear narrative structures.

And yet they remain decidedly on one side of a vast divide. The two sides of this divide could be labeled “things you watch” and “things you play”. For all its visual beauty and relatively rich characters, a Naughty Dog game still succeeds or fails on what it allows you — the consumer of the experience — to do. You shoot at enemies, solve puzzles, figure out how to get from one place to another.

It is true that along the way you are also having elements of a cinematic and narrative experience, and that is indeed innovative and exciting. Yet ultimately your satisfaction comes from using your skill and your wits to solve problems and surmount obstacles. You are engaged in the act of playing a game.

Contrast this with an immersive theatre piece like “Sleep No More”. Nothing you do in “Sleep No More” can possibly affect the outcome. You are free to roam anywhere within the ongoing theatrical world, but that world will always play out in exactly the same way.

I’ve been to “Sleep No More” twice so far (I plan to go more times) and my experience was quite different each time. Yet the only difference was my point of view. The narrative world itself (a radical interpretation of Macbeth) was exactly the same both times, and each time the experience was quite thrilling.

In a way the sort of Movies 2.0 that I’ve been talking about is a bit like the experience of seeing different performances of the same play. When I recently saw two different performances, one a few weeks after the other, of Julie Taymor’s production of “A Midsummer Night’s Dream”, I had two quite dissimilar experiences, for several reasons.

For one thing, at each performance I was seated in a very different place (stage-side versus front-center balcony). For another, the actors’ performances resonated very differently with the two different audiences. Also, of course, on my second viewing I could sit back and analyze what Taymor was up to, so I actually had a lot more fun at that performance.

I used to be convinced that there is something “in between” game and linear narrative. Now I’m not so sure. While I acknowledge that such hybrids are technically possible, and I am in awe of the brilliance of creative experiments like Versu, I’m not convinced that I will ever find, in the valley between the two lofty peaks of “Game” and “Story”, a truly compelling experience.

Pete Seeger

It took me a day or so to process the passing of Pete Seeger. Somehow I had convinced myself that he would live forever. I know that sounds absurd, but Pete Seeger was the kind of guy who inspired thoughts like that.

I first saw him in concert when I was in college. The thing you need to understand about the man, if you never had the opportunity to see him in person, is his purity. Everything he said or did seemed to come from a deep wellspring of conviction, a sense that we were all put on this earth to help each other.

The best part was that he never carried even a whiff of entitlement. His songs were always an invitation between equals, as if to say: “Come join me, let’s you and me roll up our sleeves and get to work, making the world a better place.”

While writing this post, my mind started drifting back to my favorite song from that incredible concert all those years ago. It’s a moment in time I’ve thought about often: Pete’s perfect rendition of a deeply intelligent and powerfully feminist song written by his sister Peggy.

I wasn’t sure of the exact title, but after a little searching on YouTube I managed to find it. And it was every bit as good as I had remembered.

Here he is, folks: Pete Seeger, a true hero who helped make the world a better place, singing I’M GONNA BE AN ENGINEER.

Attribution

So here’s a puzzler:

Quite often I’ll find out that a technique I’ve been using for years, one I had originally developed because I needed a solution to some problem, has subsequently been independently rediscovered and published by someone else. And now this technique has an official name.

Fair enough. The people publishing the technique are doing a service to the community that I never did: Going to the trouble of officially explaining how the technique works, and perhaps doing user studies to empirically test the technique. And it’s certainly not as if they stole it. Techniques get independently reinvented all the time.

But here’s the puzzling part: Do I need to readjust my thinking, and rename the technique within my own code, to reflect the fact that it now has an official name? Do I need to do this even if I was using the technique for years before someone else independently reinvented it?

I do think that if I publish a paper that relies on the technique, then I should use the newer term of art, and reference the other inventor’s publication. After all, that’s how the edifice of peer-reviewed science works.

But what about in my own code? I can think of at least one good argument on both sides:

On one side, maybe I should keep it as is, because it would be untruthful to rename something I had already developed long before, just to reflect events that took place only later. That would be rewriting history.

On the other side, maybe I should change it out of courtesy to other people who will build on my code later, since now there is a “standard” way to refer to this technique.

I’m not sure there is an easy answer to this one.
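
If I ever do settle on an answer, I suspect it will be some version of “both at once.” Here is a minimal sketch in Python of what that compromise might look like. The names are entirely hypothetical (“wobble smoothing” standing in for my original term, “hysteresis filtering” for the later published one; neither refers to any real technique): keep the original identifier so the code’s history stays truthful, and document and alias the newer official name as a courtesy to people who will build on the code later.

    # A hypothetical smoothing routine, under the name I would have given it originally.
    def wobble_smooth(samples, strength=0.5):
        """Blend each sample toward its predecessor to damp out jitter.

        I would have called this "wobble smoothing" when I first wrote it.
        In this hypothetical, the same idea was later independently published
        as "hysteresis filtering", so the official term is recorded here for
        anyone searching the code for it.
        """
        smoothed = []
        previous = None
        for sample in samples:
            if previous is None:
                previous = sample
            else:
                previous = strength * previous + (1.0 - strength) * sample
            smoothed.append(previous)
        return smoothed

    # Alias under the newer "official" name, as a courtesy to later readers,
    # without rewriting the history embedded in the original identifier.
    hysteresis_filter = wobble_smooth

That way both camps are served: the old name and its history survive in the code, and anyone arriving from the published literature still finds the term they expect.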

Attention versus impact

Certain events, such as the Academy Awards or the World Cup games, attract an almost insane amount of focus from the world. Even randomly weird and relatively meaningless events can captivate the attention of millions, like a suggestive dance between Miley Cyrus and a large foam rubber finger.

Yet every day, significant things happen that will have a long-term impact on all of our lives but somehow pass below our collective radar, lost in all the noise. Such influential events can take many forms: A law enacted, a disease cured, a more lethal handgun perfected.

If there were some way to spot such game-changing events early on, surely that would be a good thing.

If we look back over the years, the wisdom of hindsight can sometimes allow us to see such long-term impacts with greater clarity. For example, it didn’t seem to occur to anybody in the 1950s that the massive expansion of roadways out of New York City by Robert Moses would result in entire industries moving out of the city, leading to the city’s economic collapse by the 1970s.

Perhaps it would be interesting to chart, going back in time, what sorts of events had a particularly high ratio of “long-term impact” to “initial attention”. That might make it easier to learn what to look for while such events are occurring, rather than discovering their import only years later.

Movies 2.0

Continuing the thought from yesterday, we don’t need to wait 100 years to see a sensory evolution of the protagonist-driven linear narrative.

Technologies are already emerging that allow movies to be seen from many different angles. For example, Total Cinema 360 develops software for shooting a movie using the same “see in all directions” camera that Google uses for Google Street View. Viewers can then put on an Oculus Rift and look around to see the movie in any direction.

Some computer games are a bit like movies with a user-controllable camera. But games are usually more about making choices to affect the outcome than about conveying a traditional linear narrative. Probably because of this focus, the “acting” by non-player characters generally leaves much to be desired.

But game-related technology can be used another way. Suppose we just want to make a movie that can be wandered through — observed from any location and angle. Even today we can use motion capture and 3D graphical modeling, animation and rendering to create all the digital assets that would be needed to make such an immersive movie. Using emerging technologies like the newest version of the Microsoft Kinect, motion capture doesn’t even need to be prohibitively expensive.

But this is where we get to something that is not quite a movie as we know it: If the viewer can wander around the room and see things from any angle (as in immersive theatre pieces like “Tamara”, “Tony and Tina’s Wedding” and “Sleep No More”) then many of the traditional means of subliminal signaling used by filmmakers would no longer work.

The creators of such “immersive film worlds” cannot use many of the traditional filmmaker’s techniques for creating subjective experiences: The interplay between establishing shots, two-shots and close-ups, the choice of lens power and depth of focus, the placement of key and fill lights for a particular shot, and so forth.

New and different techniques will need to be developed, which do not rely on camera placement. Over time these new techniques will mature and evolve, and then we will truly have a new medium — Movies 2.0.

After movies

The progression from novel to movie is not really paralleled by anything in interactive media. To say that “Just as we moved from words to images, as the novel gave way to the film, now we are moving to interactivity as the film gives way to the computer game” doesn’t quite sit right.

It’s not that I think of games as a lesser medium. Quite the contrary. Computer games are glorious and exciting in their vast possibility, and they are still in their infancy. No, that’s not it.

It’s more that the progression from page to screen is within the long tradition of protagonist-driven linear narrative, and I don’t think that’s going to be replaced. Linear narrative seems to emerge from how our minds work, and it is how we have always told our stories of emotional truth.

And it’s not just novels and films that work this way. The theatre can be thought of as a kind of hybrid of novel and film. It privileges words the way a novel does, yet like cinema it also privileges the visceral quality of physical human presence.

So I am wondering what will be the future of the protagonist-driven linear narrative — a form that has existed in human history for as far back as we can see, and that shows no signs of going away. What will it be like in, say, a century from now?

Will it be some form of immersive holodeck, in which we find ourselves seemingly co-present with the characters of a compelling story — seeing what they see, hearing what they hear, touching what they touch?

Or will it be something even beyond that — a direct transposition of their most subtle and fleeting thoughts and emotions onto our own brain, as though these thoughts and feelings were our own, emerging from within the core of our being?

The garden of pure ideology

It’s interesting to think back, from a distance of thirty years, on the once-iconic quote from 1984 that I posted yesterday (I changed only one word). Obviously the people who wrote those words were deliberately echoing George Orwell, and riffing on the significance of the year 1984.

But those were more innocent times. The Web was still a good decade away, and few could have predicted that a clever ad for a personal computer — sold as a symbol of personal choice and an icon of freedom from the hegemony of Corporate America — would actually prefigure a very different future.

In the wake of the Snowden revelations, we are all reassessing that dream. In this country, conservatives tend to mistrust power in the hands of government, and liberals tend to mistrust power in the hands of corporations. But now we all have common cause — there is plenty of mistrust to go around. Somebody has our data, and we’re trying to figure out just how scary that is.

The idea that more technology is better is indeed, as that ad from thirty years ago put it, a garden of pure ideology. Alas, we don’t always get to decide what grows in the garden.

Happy birthday you-know-who

“Today, we celebrate the thirtieth glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology–where each worker may bloom, secure from the pests purveying contradictory truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death, and we will bury them with their own confusion. We shall prevail!”

Naming things

The last two posts have gotten me thinking about our need to name things.

I guess it’s logical that humans need to put a name on something before they can see it as something of value. After all, for the last several hundred thousand years our species has been developing this astonishing facility with language. Our particular way with words seems to be unique among nature’s wonders, at least as far as we know.

And yet there is a contradiction running through the very core of this way of valuing things. After all, what do we truly value, when it comes right down to it? Here are some things that come to mind: Friendship, our children, courage in the face of danger, our attraction to our chosen partner.

None of these things require words. In fact, there is every reason to believe that we evolved these emotional traits long before we evolved our highly developed sense of language. After all, these are traits we share with other species.

And therein lies one of the great contradictions of being human: We are creatures obsessed with naming things. In fact, we fill our lives with constant chatter. Yet the things we value most in our hearts are beyond words.

The privatization of language

The provisional trademarking of the word “Candy” for computer games might seem silly, but it’s actually deadly serious. Buried amidst its claim to exclusive use of the word for recording equipment, computer games and every conceivable type of clothing (an astonishingly exhaustive list), the King Limited trademark also lists this:

“Educational services, namely, conducting classes, seminars, workshops in the field of computers, computer games; Training in the field of computers”

Just as I did in 2010, and Vi Hart did before that, many people have used the iconography of candy in on-line learning experiences that get kids interested in math and computation. If this trademark goes through, such socially positive uses of the word would no longer be free.

But the implications are far larger than one word. Such a trademark would create legal precedent for a lexical land grab. Corporations could trademark equally broad usage of any word — mom, dad, friendship, love — effectively turning these words into private property.

Among the many reasons that’s a bad idea, here’s one: Once the line is crossed to prevent the free use of common positive words for educational purposes, it is the beginning of the end of what people like Vi and me do — helping to make learning more fun and enjoyable. Think about that the next time you play Candy Crush Saga.

Amazingly, the right to use our own language is being taken away from us. And it’s all happening so easily.

Like taking candy from a baby.