Iconic theater posters

I saw a theater poster the other day — just an image with no words. Yet I knew in an instant that the play was A Streetcar Named Desire.

To me this means that the poster was successful. It managed to boil down the essence of the play into a single image.

For some plays this is easy. A poster for Hamlet pretty much just needs a guy talking to a skull he’s holding in one hand.

In fact, it can be a gal holding the skull. Hamlet is so iconic that we would know immediately that what is being advertised is a production of Hamlet with a female lead.

I wonder whether we could rank plays this way: For a given play, how amenable is it to being recognized by a single iconic image?

Let’s posit, for any given play, that we could come up with a poster consisting only of the optimal image to represent that play. Let’s further posit that we are showing that poster only to people who have already seen the play.

Could we rate every play in order from “most iconic” to “least iconic”? It would be an interesting exercise.

Infinitely better

I was having breakfast at a cafe that had exactly one vegetarian option. My dining companion said to me, “It’s too bad they have only one thing for you on the menu.”

I replied, without really thinking about it, “Yes, but it’s infinitely better than if they had zero things for me on the menu.”

Then I realized that what I had just said was mathematically correct, in a very precise way. Somehow that made the breakfast taste better.

Links to galleries

When I look up today’s date on Wikipedia, I mostly see a list of events, as well as births and deaths on this date. Each of these contains a link and a very brief summary.

If you mouse over the link, you often see a pop-up image. The image you see is the first image on the page being linked to, and it only shows up if that page contains an image.

Suppose I wanted to create an on-line gallery of all of those images. I could do it manually, cutting and pasting images one by one, but that would take a lot of work.

Or I could write a “robot” that simulates a user, hovering over each link in turn and capturing the image that results, if any.

Or I could write a script that interprets the HTML of the Wikipedia page, follows the links and looks for that first image tag.
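That last approach can be sketched in just a few lines. Here is a minimal version using only Python’s standard library; the function names are my own invention, and a real version would also fetch each linked article over the network (ideally through Wikipedia’s API, with polite rate limiting) rather than work from strings.

```python
# Sketch of the HTML-parsing approach: find the first <img> tag in each
# linked page, then assemble the results into a simple gallery page.
from html.parser import HTMLParser

class FirstImageFinder(HTMLParser):
    """Records the src attribute of the first <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.first_image = None

    def handle_starttag(self, tag, attrs):
        if tag == "img" and self.first_image is None:
            self.first_image = dict(attrs).get("src")

def first_image(html_text):
    """Return the src of the first image in an HTML document, or None."""
    finder = FirstImageFinder()
    finder.feed(html_text)
    return finder.first_image

def build_gallery(pages):
    """Given {title: html_text}, return an HTML gallery of each page's
    first image, skipping pages that contain no image at all."""
    items = []
    for title, html_text in pages.items():
        src = first_image(html_text)
        if src is not None:
            items.append(
                f'<figure><img src="{src}">'
                f"<figcaption>{title}</figcaption></figure>"
            )
    return "<div class='gallery'>" + "".join(items) + "</div>"
```

In practice you would download the HTML of each linked article, run `first_image` over it, and write the string returned by `build_gallery` out as a web page.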

But what I really want is software that would let me just say “Arrange all of the images that pop up over links into a gallery, and make a web page of that.” Or something like that.

The software that does this for me wouldn’t need to be something that understands English. It could be some sort of graphical user interface that lets me create such requests by clicking and dragging on the screen.

Is that asking too much?

Non-colocated Olympics

With the Olympics in the news, and COVID in the news as well, I’ve been starting to wonder what it would be like to have a non-colocated Olympics competition.

One could imagine, with sufficiently advanced technology, people competing in the future in ways that will feel to them as though they are in the same physical location, while they are actually located at various places around the world.

This is not quite possible in any meaningful way today, outside of the relatively non-physical domain of computer games, but it may be a worthwhile question to ask. Is a non-colocated Olympics competition possible, given sufficiently advanced and achievable technology?

I am not suggesting that we should be planning to hold the Olympics remotely, but I think it might be an interesting thought experiment. As the Olympics competitions continue to test the limits of human physical achievement, could we achieve such a thing, given the right combination of virtual reality and robotic interfaces?

Creating a pitch

I’ve been working on several different varieties of “pitch” in parallel. In one domain, I’m advising some start-ups. In another I am helping to ask the government for research funding.

The differences between these two varieties of pitch are fascinating. Both are, in essence, a process of going up to somebody and saying “give us money.” But after that, things diverge radically.

The people who fund start-ups are mainly interested in one thing — getting lots of money back in return. The people who fund academic research don’t expect to get any money out of it. But they want the future economy to grow.

So on the one hand, you’ve got people who are cannily working in their own personal self-interest. On the other hand you’ve got people who are interested in a very large-scale and future-looking form of tribal self-interest.

In the larger picture, both forms of support are essential. The academic funding of today produces innovations that will allow tech start-ups to succeed ten or twenty years from now.

Designing for non-existent hardware

One of the odd things about research is that you often find yourself designing for a world that doesn’t exist. It’s a world that you believe will exist, but it’s not here now.

The general idea is that if commercial products are already out there now that can do something, then you shouldn’t be focusing on that for academic research. There are large corporations that have that mandate. Those corporations are very well funded, and they are doing a reasonably good job of serving their customers.

But serving customers who will use technologies that will not yet exist for another ten years is not their concern. That’s where academic research comes in. We can focus on asking questions that are well beyond the commercial horizon.

To do that we need to do a kind of fakery. For example, we might run a wire from a massive computer to a small handheld device, and pretend, for the sake of research, that we are holding a future device which is capable of doing all that massive computation on its own.

There may not be a market now for such a thing, because that wire stops you from taking the device with you. But you can still do lots of useful research to explore what it would be like if that device were untethered, and if you could take it with you.

Widget Wednesdays #3

I’ve always been fascinated by optics. It was one of the fields in which Sir Isaac Newton made his many great discoveries.

When I was a kid I used to come up with all sorts of ideas for optical devices. Most of them were based on a wrong idea of how optics worked, but that didn’t stop me.

Now that I know a little more, I like the idea of teaching optics through the use of software toys. Such a toy should allow you to create and then combine optical elements at will, varying such things as surface curvature and index of refraction — things that are difficult to vary in the real world.
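At the heart of any such toy is the calculation that bends a light ray as it crosses a surface between two indices of refraction. As a minimal sketch (the function name and interface are my own, not from the widget itself), here is Snell’s law in 2D vector form, which a full lens simulator would apply at every surface a ray crosses:

```python
# Snell's law in vector form: bend a unit ray direction (dx, dy) as it
# crosses a surface with unit normal (nx, ny), going from refractive
# index n1 into index n2. The normal points toward the incoming ray.
import math

def refract(dx, dy, nx, ny, n1, n2):
    """Return the refracted unit direction, or None when the ray
    undergoes total internal reflection."""
    eta = n1 / n2
    cos_i = -(dx * nx + dy * ny)            # cosine of angle of incidence
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return (eta * dx + k * nx, eta * dy + k * ny)
```

For example, a ray hitting glass (index about 1.5) at 45 degrees bends toward the normal and continues at roughly 28 degrees, while a ray trying to leave glass at a steep enough angle is totally internally reflected — exactly the kinds of behavior the toy lets you play with.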

Ultimately we’d like those toys to exist in a kind of tangible augmented reality, but we are not quite there yet. The next best thing is to simulate optics on a computer screen.

For this week’s Widget Wednesday I am sharing with you one of my software experiments with optics. This program just focuses on letting you define your own lenses, and showing you what happens to the light rays that pass through them.

Try it. I hope you find it illuminating!

Reversal

Terminator 2: Judgment Day came out in 1991. Now, thirty-one years later, we have Peacemaker. One thing they have in common is that they both star Robert Patrick, player of bad guys par excellence.

Another thing they have in common is their use of synthetic characters. But in fascinatingly opposite ways.

In T2 Patrick was a real person playing a synthetic person. For the time, the effects that brought this about were strikingly good. I still get chills when I think back on the scenes where Patrick melted effortlessly into liquid metal.

Peacemaker, on the other hand, has a character that you are supposed to think of as entirely real. I’m speaking of Eagly, the protagonist’s pet bird — played by an entirely synthetic actor.

Eagly is 100% CGI, but you would never know it from watching the show. Unless, that is, you took the time to think about how difficult it would be to get a real eagle to act on command in so many different situations.

So here we have a complete reversal of real and synthetic. Three decades ago you faked a character who was supposed to be synthetic by using a real actor. Nowadays, you fake a character who is supposed to be real by using a synthetic actor.

Times have changed.

Two birthdays

Benjamin Franklin, one of my favorite humans ever, was born on this day, January 17, in the year 1706. Since I was a little kid he has been one of my heroes.

When I was eight years old, I knew I wanted to be Benjamin Franklin when I grew up. Yeah sure, there’s all the political stuff. But he was my hero because the man was just the most awesome inventor a kid could ever aspire to be.

Lightning rods, bifocals, water flippers, that stove. Not to mention the glass harmonica. I just loved the glass harmonica.

Today is also the ninety-first birthday of James Earl Jones, a hero to millions. I would love to see James Earl Jones play Benjamin Franklin. Steven Spielberg, are you listening?

First you make it, then you figure it out

This last week I was faced with a problem in computer graphics, and I did something I often do. I jumped in and started hacking until I got something that worked.

The problem was, I couldn’t really figure out why or how it worked. So then the real work began.

Over the course of the next several days I simmered and stirred, transforming the code piece by piece, breaking things down into properly named methods, trying to turn it into something that would explain itself.

After a few days I finally ended up with something that not only worked, but that another programmer could pick up and read and understand. In the scheme of things, this is much more valuable than what I had originally, because now it can also be used by other people to do other things.

I can’t say whether this approach is good. It’s not clear whether I would have gotten the thing working had I approached it more methodically.

I suppose I should be grateful that the process works, as messy as it is. I wonder whether other people trying to make things have similar experiences.