Dublin books

Today, at the very end of my stay in Dublin, I decided to honor the long and illustrious literary history of that great city by doing a mini-tour of its bookstores. Not surprisingly, there are quite a few options for the avid bibliophile, from purveyors of rare and used books to Hodges Figgis, just across the street from Trinity College, where it seems you can purchase just about any volume in print.

You see, I’ve always loved books. There is something about the physical book that sets it apart from any electronic equivalent. Yes, I know the book is impractical compared with its more modern competitors. It’s heavy and wasteful of resources, it takes up far too much space, and you can’t take your library with you when you travel.

But ah, the sensory experience! The feeling of opening the cover, riffling through the pages, the heft of a book in your hands, the wondrous physicality of black ink on textured paper, the very smell of it. All of these things contribute to a powerful sense of connection.

Some might say that there’s nothing even remotely rational about this view of books. After all, the act of reading is, by its very nature, a renunciation of the physical world in favor of a symbolic realm of pure information. Yet there it is.

But what to buy? Some neglected work by a great Irish poet? A play by Shaw perhaps? Maybe something written in Irish Gaelic, just for the sheer beauty of the words on paper, even though I wouldn’t begin to know how to read it.

In the end I chose a collection of short stories by Philip K. Dick. Yes, I know, it’s not Beckett. But I think it still counts.† 🙂


† Not surprisingly, PKD cited Beckett as an influence (see The Selected Letters of Philip K. Dick, 1938–1971. Grass Valley: Underwood Books, 1996, p. 56).

Lighthouses

Today, being in Dublin, I went on a little day trip to the charming seaside town of Dun Laoghaire (when you say it out loud, it sounds like “Dun Leary” — don’t ask). While there, I had a splendid time exploring the National Maritime Museum of Ireland.

I learned all sorts of things today about sailing, lighthouses, engine technology, nautical charts, shipwrecks, lifeboats, sextants, trade routes, and many other fascinating topics, far too numerous to list here.

But one fact in particular really stuck with me: In their heyday, every lighthouse flashed at a unique rate. That is, the motor that spun the light around was set to a different rate of rotation for every lighthouse.

And the reason for this was simple and ingenious: If your ship was lost out at sea, and all you had to navigate by was the faint pulse of a distant lighthouse, you could time that pulse and you would know where you were.
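To make the scheme concrete, here is a minimal sketch, in Python, of the lookup a navigator would effectively be performing in his head. The station names below are real Irish lights, but the flash periods are invented for illustration; they are not actual light characteristics.

```python
# A minimal sketch of identifying a lighthouse from the timed
# interval between its flashes. The periods here are hypothetical.

KNOWN_LIGHTS = {
    # name: flash period in seconds (invented for this example)
    "Baily": 3.0,
    "Kish Bank": 5.5,
    "Hook Head": 8.0,
}

def identify_light(measured_period_s, tolerance_s=0.5):
    """Return the light whose known period best matches the measured
    interval, or None if nothing is within tolerance."""
    best_name, best_error = None, tolerance_s
    for name, period_s in KNOWN_LIGHTS.items():
        error = abs(period_s - measured_period_s)
        if error < best_error:
            best_name, best_error = name, error
    return best_name

# Time the flashes with a stopwatch, then look up the result:
print(identify_light(5.3))  # -> Kish Bank
```

In practice a navigator would time several flashes and average, but the principle is the same.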

This concept is also the basis for frequency-division multiplexing, the core scheme behind broadcast radio and television — a more recent innovation whereby multiple signals can be distinguished by their differing carrier frequencies.
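As a toy illustration of that idea, here is a short numpy sketch: two signals share one channel, yet a receiver can tell them apart purely by carrier frequency. The specific frequencies and amplitudes are made up for the example.

```python
# A toy sketch of distinguishing two signals by carrier frequency.
import numpy as np

rate = 1000                      # samples per second
t = np.arange(0, 1, 1 / rate)    # one second of time

# Two "stations" sharing the same channel at different carriers.
mixed = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / rate)

# Each carrier shows up as its own spectral peak, so a receiver can
# pick out one station simply by tuning to its frequency.
print(freqs[spectrum > 100])     # -> [ 50. 120.]
```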

What I love about the pulse frequency scheme for identifying lighthouses is the way it shows that people are not getting more clever over time. People were always clever. It’s just that, at various times in history, they have had access to different technologies for showing how clever they can be.

The minimal subproblem

In my work recently, I was beating my head against the wall trying to solve a particular problem. It seemed to me that solving that problem, however difficult, was a bridge I needed to cross in order to get to the place I wanted to be.

Then today, during a meeting about something else entirely, I realized that I could instead solve a far easier problem — crossing a little side bridge, as it were — and use that much easier path to find a good enough solution to the larger problem.

It occurs to me that there must be some principle at work here, some simpler and more productive way of looking at the solving of seemingly intractable problems. Perhaps it could be called the “minimal subproblem” principle.

What is the self? Part 2

Two excellent comments on yesterday’s post! The reason this all came up is that I gave a talk this week about possible directions for socially shared virtual reality, and a student asked whether replacing reality with virtual reality will change who we are.

I responded that we are already living in virtual reality. We just don’t know it, because we are subject to Alan Kay’s dictum that technology is “anything that wasn’t around when you were born.” I told the student that we generally go through our day sincerely believing that we are living in some sort of natural state, whereas in truth our perception of the world and of each other is highly mediated.

To put this in context, if you and I talk on the phone, or over Skype, you don’t think “Oh, that’s not really Ken, that’s just a technological reconstruction of Ken.” And the same goes for email or handwritten letters, both of which are technologically mediated artifacts. You don’t think “I received an avatar of Ken in the form of his handwriting.” Although, in a sense, you have. Rather, you think “I got a letter from Ken.”

The important thing is that thoughts are passing between my mind and your mind. Any medium that achieves this, for people who are used to that medium, is simply labeled as part of reality.

So yes, the self may be a complex, multifaceted and ever changing thing, but reality (if you are a human) is quite simple: It is whatever medium of communication allows my self and your self to effectively connect with each other.

What is the self?

One of the questions that keeps coming up, as I talk with people about the possible future where everybody is wearing (in the Vernor Vinge sense of the word), is the relationship, if any, between the nature of our physical interaction with the world and the definition of ‘self’.

For example, if Neo in The Matrix actually only believes that he is walking around and interacting with the physical world, whereas in fact his body is floating in a vat somewhere, does this impact in any essential way who he is? Does it mean that his ‘self’ has been altered in some fundamental way?

This is a potentially thorny question, and there are many possible directions from which to approach it. But for now I’m just going to let the thought linger. Something to chew on until tomorrow.

Best origin story

Every superhero has an origin story. Superman is an alien from planet Krypton, Thor a misplaced Norse god, the Hulk the product of an overdose of gamma rays. I love all these origin stories.

But my favorite origin story of them all is the one for Spiderman. Instead of relying on just one preposterous premise, it mashes two such premises together, and somehow that tips it all over into a kind of crazy pseudo-plausibility.

I mean, we all know that being bombarded by intense gamma rays doesn’t actually give you super powers. In fact, it kills you rather efficiently. And we all know that being bitten by a spider doesn’t give you super powers. It gives you a nasty rash, and sometimes a fever.

But if you put the two of them together, now you are tapping into the joint mysteries of radiation and genetics. I mean, who knows for sure what kinds of mutations radiation might cause in a spider? I certainly don’t, and you don’t either.

And so this origin story manages to cross the line from the patently ridiculous to “Wait a second, could that work?” At least to an open-minded eleven-year-old, and that’s all that really counts.

There’s something else I like about Spiderman’s origin story, which has more to do with the complexity of our relationship with our superheroes, and that is the element of the monstrous that lurks around the edges of their stories.

In fact, I suspect this monstrousness may be an intrinsic part of our continuing fascination with them. Batman is an emotionally damaged little boy turned vengeance machine, Superman a lonely alien orphan who flies around in his underwear. And the Hulk is, well, a monster. But for sheer borderline weirdness, Spiderman has them all beat. He’s basically an arachnid.

It doesn’t get any better than that.

Track 45 left, part 4

I really liked Weston’s comments on my previous post on this topic. A movie has always been an experience to visit for relatively brief periods of time (at least, if we are talking only about mainstream movies). Yet it seems that in recent years, filmed fictional narrative has been preparing for the advent of complete immersion — for the visual equivalent of getting lost in a novel.

We are now very much in the age of long-form filmed narrative, of The Sopranos, of Breaking Bad, House of Cards and Game of Thrones. Nearly two decades ago in the U.S., Buffy the Vampire Slayer was practically unique in having sustained multi-year narrative and character arcs. Now everyone else seems to be catching up to Joss Whedon.

If we end up losing the frame — and if, as Ridley Scott posited in Blade Runner, the camera position can shift even as you watch — will this complete the transition of cinematic art from the long short story to the full-fledged novel? Are we witnessing parallel developments of narrative form and technical enablement?

Technically, we are not quite there yet. Our cameras are now able to capture movies in full 360 surround, but the audience of those 360 movies cannot quite yet wander freely around the set. To do that, we will essentially need to create a fusion of cinema and high quality computer graphics, in which the set is stored as a high quality textured 3D model.

Of course this is the approach that has been continually evolving for years in high end computer games. Not coincidentally, such games are generally constructed as long-form narratives, in which a fictional world is meant to be experienced — and explored — over the course of days or even weeks.

Maybe that’s why Joss Whedon has taken on the Avengers movies. He might be angling sometime in the next few years to come full circle back to what he started in Buffy: writing and directing the first truly long-form, fully immersive cinematic novel. And he might just pull it off, given that he has the financial might of Walt Disney Studios Motion Pictures behind him.

John and Alicia

I just heard the tragic news that John and Alicia Nash were killed yesterday in an automobile accident. The obituary in the NY Times is already on-line.

I got to spend some quality time with the two of them at the 2012 Hamptons Film Festival, where I introduced and interviewed them as part of a screening of A Beautiful Mind. They were both among the most fascinating, yet enigmatic, people I’ve ever met.

They couldn’t have been more different from each other, and they fit together perfectly. Both were brilliant, but there the similarity stopped. John was the consummate gentleman, quietly polite, diffident and deeply thoughtful, more than a little awkward in a charming way. Alicia, on the other hand, was a real hoot, a total force of nature. She was clearly aware of the somewhat out of control media whirlwind going on around the two of them — she seemed quite protective of her husband — and you could tell that she missed nothing.

There was, in fact, a lot of odd energy at the festival. I think it was partly because when people think of John and Alicia Nash, they usually think of the actors in the Ron Howard film, and many festival attendees were seeing these two brilliant and elegant and rather private people as some sort of connection to Hollywood glamour.

Which was, of course, not quite accurate, but that is the nature of these events. At one point the three of us were discussing this strange energy between the film and the reality. I told them that one of the festival organizers had at first wanted me to interview only John, but that I had insisted on interviewing them both together. The film was, after all, essentially a love story, based on their own astonishing real life romance.

Alicia smiled at her husband, aware that people at the festival were somehow conflating him with Russell Crowe, and her with Jennifer Connelly. “Yes,” she said. “After all, I was the one who won the Academy Award.”

Track 45 left, part 3

Think for a moment about the deal between photographer and audience. A photo is a set of choices, the deliberate selection of a moment, of frame, lighting and viewpoint. All of these choices are the prerogative and responsibility of the artist. Cinema has a similar ethic. No matter how many times you see a given cut of Blade Runner, you will see the same sequence of images. The aesthetic choices have been baked in.

What Ridley Scott was up to, I think, in the famous “Enhance 224” scene, was a challenge. He was asking us to question the definition of “image”. What if an image were not merely an image, but rather a universe of possible images? This would fundamentally change the relationship between artist and audience.

This doesn’t mean that creators would cede all control. For example, a sculpture can be seen from an infinite number of viewpoints, yet sculpture is still a medium that gives enormous control to the artist.

Rather, I think Ridley Scott was hinting at a possible future for cinema itself. Suppose you could enter into the world of Blade Runner, peer around its corners, see some of Sebastian’s other creatures, maybe even visit the Off-world colonies.

In more than one sense Ridley Scott was being a visionary. Because now, more than three decades later, the capabilities hinted at in that scene are just beginning to become possible.

Track 45 left, part 2

Deckard: Enhance 224 to 176. Enhance, stop. Move in, stop. Pull out, track right, stop. Center in, pull back. Stop. Track 45 right. Stop. Center and stop. Enhance 34 to 36. Pan right and pull back. Stop. Enhance 34 to 46. Pull back. Wait a minute, go right, stop. Enhance 57 to 19. Track 45 left. Stop. Enhance 15 to 23. Give me a hard copy right there.

The above is the entire dialog of one of the greatest scenes in all of movies. In a way, it is about movies. The scene starts out with deceptive simplicity. Harrison Ford, as the classic brooding anti-hero shamus on a case, is nursing a stiff drink and peering at some sort of electronic photo enlarger. He issues commands; the machine zooms and pans in response.

Except that he doesn’t say “pan” — he says “track”. Filmmakers know that this means “move the camera”, not “turn the camera”. And it’s something you cannot do with a photo enlarger.

We don’t really notice this odd terminology at first. After all, what we are seeing is so familiar, so much like the panning across a shot that we are used to. Then the view seems to look around a corner, which is impossible.

But wait — it’s not looking around a corner, just zooming into a mirror in the photo. All perfectly reasonable. Sure, the photograph must be super-high resolution for him to be able to zoom into a reflection like that, but why not? This is a sci-fi movie after all.

And then, near the very end, Ridley Scott drops the other shoe. Deckard says “Track 45 left.” And unmistakably, astoundingly, the camera tracks to the left, as though the person taking the photo had literally moved sideways.

At that moment, my friend Josh and I, seeing the film together in the cinema, both practically jumped out of our seats. It was a moment that changed everything. And continues to change everything, even today.

More tomorrow.