Mirror mirror

This evening I went to a little gathering in honor of Marvin. I had lovely conversations with his family, close friends, people who were connected to each other through the very fortunate circumstance of having known this marvelous person.

To my surprise a friend of mine — somebody I’ve known for decades — told me that she reads my blog faithfully. It’s “Ken redux”, she said, a kind of boiled down snapshot of me, offered up in little posts.

I had never thought of it that way. For me, this blog isn’t about me. Rather, it’s about everything outside of myself — a chronicle of the wonder of all the things going on in the world.

But of course, to anybody who is not me, it’s the other way around. You are watching me look out, and what you see is my act of looking outward — which is, of course, a kind of representation of me.

This whole “I / thou” duality is so ingrained, so taken for granted, that we rarely even think about it. So it was odd, yet strangely fascinating, to see my little mirror onto the world held up to me by a friend, and to realize that what it was reflecting was my own face.

Bit

Here’s an odd twist on the time travel genre: Suppose you could send a single bit of information back to yourself in the past every so often — say, once a month. This is just enough information to tell your past self the answer to true/false questions.

But even this little bit would be an enormous superpower. The you that is in the past could frame questions such as “Will Google stock go up this month?”, and instantly learn the correct answer. To make this all work, the you that is in the future would simply need to make sure to send the proper “true” or “false” answer back in time.

But here’s the kicker: If you screw up even once — if you ever send the wrong bit back, or fail to send any answer at all — then your particular time line would cease to exist, a victim of paradox.

There might still, perhaps, be an infinite number of other versions of you off in parallel universes. But the one with your unique memories and life experiences would vanish. We’re talking existential retirement with extreme prejudice.

Would you want such a superpower? Or would it be too nerve-racking? I’m a bit conflicted myself.

Marvin

Marvin Minsky, who sadly passed away earlier this week, wasn’t just the smartest person I knew. I think he might have been the smartest person that anybody knew. He was in a whole different category.

But I’m not really talking here about intelligence on something as obvious as a linear scale. It’s more that Marvin’s mind was extraordinarily free. He could fly through idea space the way a swallow flies through the air. He would see connections where nobody else even thought to look, leap effortlessly from one concept to an entire new category of concepts, and in general make anybody he was talking with feel ten times smarter.

And all of this without an ounce of hubris. Marvin didn’t care who you were, or whether you were the “right” sort of intellectual. I’ve seen him ignore a room filled with Nobel prize winners, to focus on conversation with a single high school student, just because that student was seriously interested in discussing ideas. He was a true democrat, who believed in the power and the potential of each individual human mind.

I will miss him deeply. The world is a poorer place for having lost him.

Unjargon

A friend pointed out to me that my “Train of Thought” post the other day was incomprehensible to her. And I realized that it might be incomprehensible to a lot of people.

The problem is that I spend much of my time in a milieu where terms like “Turing test” and “Big Data” are understood by everyone in the room. But that doesn’t help once you take the discussion out of that room, and those phrases just sound like jargon.

“Turing test” is shorthand for Alan Turing’s famous thought experiment, which he called the “imitation game”. The idea is that you test a computer in the following way: The computer holds a conversation with a person (over a teletype, so they can’t actually see each other), and the person then tries to guess whether they’ve been conversing with a real person or with a computer.

This contest, the basic premise of the recent film Ex Machina and of many other works of speculative fiction, raises all sorts of interesting questions. For example, if a computer consistently passes this test, can it be said to think? And if so, is it a kind of person? Should it be granted civil rights under the law?

“Big Data”, on the other hand, is the idea that if you feed enormous amounts of data to a computer program that is good only at classifying things into “more like this” or “less like that”, then the program can start to make good decisions when new data is fed to it, even though the program has absolutely no idea what’s going on.

This is what Machine Learning is all about, and it’s the reason that Google Translate is so good. GT doesn’t actually know anything about translating — it’s just very good at imitation. Because Google has fed it an enormous amount of translation data, it can now translate pretty well.

But Google Translate doesn’t really know anything about language, or people, or relationships, or the world. It’s just really good at making correlations between things if you give it enough examples.
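That “more like this / less like that” idea can be sketched in a few lines. This is a hypothetical toy, not how Google Translate actually works: a nearest-neighbor classifier that labels new data by finding the most similar example it has already seen, with no understanding of what the numbers mean.

```python
import math

# Toy "more like this / less like that" classifier: it memorizes
# labeled examples, then labels a new point by whichever stored
# example is closest. It has no idea what the numbers actually mean.
def nearest_label(examples, point):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    closest = min(examples, key=lambda ex: distance(ex[0], point))
    return closest[1]

# Made-up examples: (height cm, weight kg), labeled "cat" or "dog".
examples = [((25, 4), "cat"), ((30, 5), "cat"),
            ((60, 25), "dog"), ((70, 30), "dog")]

print(nearest_label(examples, (28, 4.5)))   # → cat
print(nearest_label(examples, (65, 28)))    # → dog
```

Feed it enough examples and it starts making good guesses, without ever knowing what a cat or a dog is — which is precisely the point.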

So my question was this: If you use the Big Data approach to imitate human behavior, are there some human behaviors that can never be imitated this way, no matter how much data you feed the algorithm?

Let’s put it another way: If you fed all the romance novels ever written into a Machine Learning algorithm, and had it crunch away for long enough, would it ever be able to sustain an intimate emotional relationship in a way that is satisfying to its human partner? Even though the computer actually has no idea what is going on?

My guess is no. On the other hand, there are probably more than a few human relationships that work on exactly this basis. 🙂

Old fashioned selfie

Today was all about the snow. The snow meant different things to different people. For some it was a terrible inconvenience, for others a day off from work. But for those of us who live on Washington Square Park, it was sheer heaven.

In a sort of magical transformation, the moment anyone set foot in the park today, whatever their age, they turned into little children. People were laughing and running about, throwing snowballs, having a grand old time. Because no cars were on the streets, it occurred to me that this was pretty much the same experience people would have had in this park a century ago.

Here is a photo I took amid the revelry in the park, looking through the Arch toward Fifth Avenue. In the distance you would normally see the Empire State Building. But not today — it is hidden behind a swirling mass of falling snow:

Of course, not everything is the same as it was 100 years ago. For one thing, back in 1916 people weren’t taking so many selfies.

I decided I would record my snow day in the park the old fashioned way, by making a snow angel. It’s something I learned as a kid. The first good snowfall of winter you would go out in the backyard, lie on your back and flap your arms. When you stood up again, it would be as though an angel had been lying in the snow.

It’s hard to see the result from the image below, taken today in Washington Square Park, as the late afternoon winter light fades everything to ghostly Maxfield Parrish hints of yellow and blue. But it is indeed the angel version of yours truly, in a very old-fashioned sort of selfie:

Train of thought

As I was looking at yesterday’s post, I started thinking about a sort of Turing Test for fonts: Would it be easy or hard to design a randomized font — in the style of the one I showed yesterday — so people would not be able to tell that the randomness was machine generated?

And then I realized that it would be quite easy, because this is exactly the kind of problem that is amenable to a “big data” machine learning approach: first analyze a lot of samples of actual human writing, then use those samples to train a machine learning algorithm, which can then generate new writing samples of its own.
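The flavor of that approach can be sketched with a much simpler cousin of real machine learning: a character-level Markov chain. It just tallies which character follows which in real text, then generates new text with the same local statistics, understanding nothing about what it writes.

```python
import random

# Toy character-level Markov model: record which character follows
# each character in the training text, then sample new text with the
# same local statistics. A vastly simplified stand-in for real ML.
def train(text):
    model = {}
    for a, b in zip(text, text[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length, rng=random.Random(0)):
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return "".join(out)

model = train("the quick brown fox jumps over the lazy dog")
print(generate(model, "t", 20))
```

Swap the one-character memory for a deep network trained on pen strokes instead of characters, and you have the rough shape of the randomized-font idea.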

But then I started thinking, could we start to arrange all human abilities on a scale from “easily faked by big data” to “not at all fake-able by big data”?

Some things, like generating randomized fonts, are on the easy end of the spectrum. Other things, like maintaining a long term intimate relationship, are probably way off on the difficult end of the spectrum (or at least, I’d like to think so).

But what about everything in between? Driving a car has turned out to be more tractable than people had once thought, as have chess and rudimentary translation between natural languages.

I wonder, is there some litmus test we can apply, to get a rough sense of how easy or difficult it would be to emulate any human task via machine learning, given sufficient data showing humans themselves doing it?

Unfont design

The word “font” derives from the old days, when printing was done with real metal pieces that were used to press ink onto paper. A font was a complete set of such pieces that shared a particular weight, size and style.

For a given font, the letter “A” always looked the same, as did the letters “B”, “C” and so forth. And this is essentially still true today, in the computer age. When printing with a given font, a particular character always appears the same.

Consider, for example, the following word in my new line font:

In the word “chalktalk” above, the letters “a”, “l” and “k” each appear twice, in each case without any variation. This is part of the definition of a font: The appearance of any printable character is completely determined. This is in contrast to, say, handwritten text, in which characters look somewhat different every time they are written.

But sometimes I want text in my Chalktalk system to have a casual handwritten quality. Because this is a procedurally defined font, I can just add noise to make that happen:

Now any given letter, such as the “a”, “l” and “k” above, will look somewhat different every time it appears. Which means that this is no longer a font — it violates the very definition of a font.

Yet it is recognizable. The statistical average of all occurrences of any given letter converges to the original line font, even though no individual letter is actually in that font. And although this is not a font, we perceive it much the way we would perceive a font.
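The noise trick is simple to sketch. My actual font format isn’t shown here, but assuming a glyph is stored as a list of strokes, each a list of (x, y) points, a hypothetical jitter function might look like this:

```python
import random

# Hypothetical glyph: a list of strokes, each a list of (x, y) points.
# Nudge every point a little each time the glyph is drawn, so the same
# letter comes out slightly different on every appearance. Averaged
# over many drawings, the jitter cancels back out to the original.
def jitter_glyph(strokes, amount=0.05, rng=random):
    return [[(x + rng.uniform(-amount, amount),
              y + rng.uniform(-amount, amount)) for (x, y) in stroke]
            for stroke in strokes]

# A crude letter "L": one vertical stroke, one horizontal stroke.
letter_l = [[(0.0, 1.0), (0.0, 0.0)], [(0.0, 0.0), (0.6, 0.0)]]
wobbly_l = jitter_glyph(letter_l)
```

Calling `jitter_glyph` at draw time, rather than once up front, is what makes every occurrence unique.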

I guess you could call it an unfont.

Font design

The reason I was thinking about fonts yesterday was that this week I designed my own font. I needed it because I am moving my Chalktalk interactive drawing program into VR, so I need text that can be “drawn in 3D space” like any other drawing.

I couldn’t use the standard font design tools, because those tools don’t let you create characters that can be drawn as lines and curves in space. So I wrote my own font design software, which actually only took about an hour (it’s a lot easier if you’re only going to design a single font). The design of the font itself was really fun, and took a few hours, mostly because I really got into the stylistic details.
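The reason a hand-rolled line font fits so naturally into a 3D drawing program can be sketched in a few lines. These names and the glyph data are hypothetical, not my actual code: if each character is just a list of stroke polylines, then placing text “in space” is only a matter of transforming the 2D stroke points into 3D positions.

```python
# Hypothetical sketch: a line-font character is a list of strokes
# (polylines), so drawing it in 3D just means mapping each 2D point
# onto a plane in the scene, defined by an origin and two axes.
def place_in_3d(strokes, origin, right, up):
    ox, oy, oz = origin
    rx, ry, rz = right
    ux, uy, uz = up
    return [[(ox + x * rx + y * ux,
              oy + x * ry + y * uy,
              oz + x * rz + y * uz) for (x, y) in stroke]
            for stroke in strokes]

# A crude 2D "T" glyph: top bar plus vertical stem.
glyph_t = [[(0.0, 1.0), (1.0, 1.0)], [(0.5, 1.0), (0.5, 0.0)]]

# Stand the glyph upright, two units into the scene.
segments_3d = place_in_3d(glyph_t, origin=(0, 0, 2),
                          right=(1, 0, 0), up=(0, 1, 0))
```

The resulting 3D polylines can then be rendered, rotated and walked around like any other drawing, which is exactly what outline-based font formats make awkward.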

Fonts are like chess sets — all of the characters need to “feel” like they belong together. You’re basically asserting a coherent design space, and all of the characters in that space need to play well together, so that they reaffirm each other aesthetically. But some of them also need to be just a little bit cheeky and impertinent.

Below is what I have so far. I’ve already switched Chalktalk over to use this new font, and it looks a lot better than the off-the-shelf one I was using before:

And here’s something I wasn’t able to do with my off-the-shelf font — let people walk around text in virtual reality, like any other object in the 3D shared virtual world. And that’s the real payoff!